id: 2304.09487
title: An Ensemble Approach for Research Article Identification: a Case Study in Artificial Intelligence
abstract: This study presents an ensemble approach that addresses the challenges of identification and analysis of research articles in rapidly evolving fields, using the field of Artificial Intelligence (AI) as a case study. Our approach used a decision tree, SciBERT and regular-expression matching on different fields of the articles, and an SVM to merge the results from the different models. We evaluated the effectiveness of our method on a manually labeled dataset, finding that our combined approach captured around 97% of AI-related articles in the Web of Science (WoS) corpus with a precision of 0.92. This represents a 0.15 increase in F1 score compared with the existing search-term-based approach. Following this, we analyzed the publication volume trends and common research themes. We found that, compared with existing methods, our ensemble approach revealed an increased degree of interdisciplinarity and was able to identify more articles in certain subfields such as feature extraction and optimization. This study demonstrates the potential of our approach as a tool for the accurate identification of scholarly articles, one that is also capable of providing insights into the volume and content of a research area.
authors: Lie Tang, Xianke Zhou, Min Lu
published_date: 2023-04-19T08:17:10Z
link: http://arxiv.org/abs/2304.09487v3
markdown:
# A GPT-Based Approach for Scientometric Analysis: Exploring the Landscape of Artificial Intelligence Research

###### Abstract

This study presents a comprehensive approach that addresses the challenges of scientometric analysis in the rapidly evolving field of Artificial Intelligence (AI). By combining search terms related to AI with the advanced language processing capabilities of generative pre-trained transformers (GPT), we developed a highly accurate method for identifying and analyzing AI-related articles in the Web of Science (WoS) database. Our multi-step approach included filtering articles based on WoS citation topics, category, keyword screening, and GPT classification. We evaluated the effectiveness of our method through precision and recall calculations, finding that our combined approach captured around 94% of AI-related articles in the entire WoS corpus with a precision of 90%. Following this, we analyzed the publication volume trends, revealing a continuous growth pattern from 2013 to 2022 and an increasing degree of interdisciplinarity. We conducted citation analysis on the top countries and institutions and identified common research themes using keyword analysis and GPT. This study demonstrates the potential of our approach to significantly enhance the identification process of AI-related articles, while providing valuable insights into the growth, interdisciplinary nature, and key players in the field.

Keywords: Artificial intelligence, Generative pre-trained transformer, Scientometric analysis, Search strategy

## Introduction

Emerging research fields, characterized by complex boundaries and rapid evolution (WIPO 2019), present unique challenges for scientometric analysis. Despite these difficulties, scientometric research on emerging fields holds great importance in understanding the evolution and impact of new technologies and scientific domains. By examining the various aspects of these fields, researchers and policymakers can gain valuable insights into their growth, trends, and potential implications for society (Rotolo, Hicks et al. 2015). Such research plays a crucial role in uncovering future research directions and fostering interdisciplinary collaborations, ultimately contributing to the advancement of science and technology.

Artificial Intelligence (AI) has gained significant attention in recent years, becoming a popular topic across various research fields. AI encompasses a wide range of techniques and methods that allow machines to perform tasks that typically require human intelligence. These tasks include, but are not limited to, problem-solving, learning, perception, and language understanding (Russell 2010). The advancements in AI have led to a plethora of subfields, such as machine learning, natural language processing, computer vision, and robotics (WIPO 2019). As a result of the growing interest in AI, the volume of AI-related articles has grown exponentially (Benefo et al. 2022), highlighting the importance of scientometric analysis to better understand the landscape of AI research. However, accurately identifying AI-related articles within the vast corpus of scientific literature is challenging due to AI's broad, fuzzy, and rapidly changing nature (WIPO 2019).

Previous scientometric studies on AI have employed various methods to address this challenge, such as keyword-based search and citation analysis. For instance, some scholars have utilized simple search strategies like TS=("artificial intelligence") (Gao et al. 2019), while others have employed more complex strategies incorporating over 100 keywords (Liu et al. 2021). In addition, machine learning-based approaches have been explored, with researchers training SciBERT classifiers on arXiv data to infer an article's relevance to AI (Dunham et al. 2020) and applying random forest models to the SCOPUS database to identify AI-related articles (Siebert et al. 2018). While these approaches have achieved reasonable success, they still struggle with the inherent complexity of AI, and AI's lack of a clear definition makes it difficult to select a comprehensive and precise set of search terms. To address this problem, we devised a comprehensive approach that integrates search-term-based retrieval with machine learning techniques.

Large language models (LLMs) represent a category of language models known for their exceptional performance in various natural language processing (NLP) tasks. Their capacity to produce human-like language has made them an increasingly popular research focus (Fan et al. 2023). GPT is an LLM that has gained popularity recently due to its ability to generate human-like language, and it has been used in a variety of applications, including chatbots, language translation, and even creative writing (Bubeck et al. 2023). In this study, we utilized GPT's machine learning capabilities to classify articles as either AI-related or non-AI. By training the model on a subset of articles that were manually labeled as AI-related or non-AI, we were able to improve the accuracy of our search strategy and effectively identify articles related to artificial intelligence.

Our approach consists of the following steps:

1. Identification of AI-related search terms. These terms were identified through a direct search and by employing the search strategies of previous researchers. They encompass various aspects of AI, such as machine learning, natural language processing, computer vision, and robotics.
2. Identification of AI-related citation topics. WoS citation topics are clusters of citations determined through an algorithm that employs a three-tiered, hierarchical classification system at the document level (Clarivate 2021). All citation topics were manually reviewed, and those considered part of AI were added to the final results.
3. WoS category "Artificial Intelligence". This category was added to the results to further increase the recall of our approach.
4. Fine-tuned GPT model. We fine-tuned the GPT model on a subset of the retrieved articles, which were manually labeled as either AI-related or non-AI. Once the model was adequately trained, we deployed it to classify the remaining articles (articles retrieved in step 1, excluding those already added to the final result via steps 2 and 3) into AI-related and non-AI categories.

The final result was obtained by combining the outcomes of WoS citation topics, keyword screening, the WoS category and GPT classification (a minimal sketch of this merge is given below). This multi-step approach allowed us to effectively identify and analyze AI-related articles while minimizing the inclusion of irrelevant content. To evaluate the effectiveness of our approach, AI experts manually labeled a set of 200 articles as either AI-related or non-AI, and we used these labels to calculate the precision and recall of our approach. The results yielded a precision of 90% and a recall of 94%. We also randomly selected a set of articles from the WoS database to validate the recall.
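The following is a minimal sketch of how the four steps above could be merged, assuming each step yields a set of WoS record identifiers; the function and variable names are illustrative and not part of any real WoS or OpenAI API.

```python
# Hedged sketch of the multi-step merge: rule-based hits are unioned first,
# and only the remaining articles are passed to the fine-tuned GPT classifier.

def build_final_corpus(initial_corpus, citation_topic_hits, core_query_hits,
                       category_hits, is_ai_related):
    """Merge rule-based hits, then let the GPT classifier judge the rest."""
    final = set()
    final |= citation_topic_hits   # step 2: AI-related WoS citation topics
    final |= core_query_hits       # step 1: core lexical (keyword) query
    final |= category_hits         # step 3: WoS category "Artificial Intelligence"
    remaining = set(initial_corpus) - final
    final |= {uid for uid in remaining if is_ai_related(uid)}  # step 4: GPT model
    return final
```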
Our results demonstrate that our combined approach significantly increased the coverage of AI-related articles with high precision, successfully capturing around 94% of all AI-related articles in the entire WoS corpus. The entire process of our approach is shown in the flowchart below (Figure 1).

Figure 1: Overview of the search approach

### Construction of keyword-based search strategy for artificial intelligence research articles

The volume of scientific literature available can make it difficult to identify relevant articles related to artificial intelligence, since researchers typically do not have full access to the entire WoS database. In addition, running a fine-tuned GPT classifier on the entire WoS corpus would be time-consuming and expensive. We therefore needed a search method that retrieves a portion of the WoS database for further analysis. To minimize the number of AI-related articles excluded from our study, this search strategy must include as many AI-related articles as possible.

To begin with, we used the search term TS="Artificial Intelligence" to retrieve all the articles related to AI from the Web of Science database. The publication time range was set from January 1, 2013 to December 31, 2022, and the same time range was used for all subsequent searches so that we could analyze the AI-related articles from the past 10 years. A total of 95,835 results from the Web of Science Core Collection were returned. We then conducted a high-frequency keyword analysis to identify the most common and relevant keywords related to AI. We downloaded the full records of all the articles returned above and extracted the "Author Keywords" and "Keywords Plus" fields from the records. These keywords were ranked according to their total number of appearances (a minimal counting sketch is given at the end of this subsection). We then manually reviewed the keywords that appeared 200 or more times and discarded those unrelated to AI. When necessary, we consulted Wikipedia and WoS search results to clarify the meaning of a keyword. Since the aim of this step is to ensure a high recall rate for further classification, we retained a keyword whenever we were not sure whether it would bring us more AI-related articles. As a result, a total of 196 keywords were retained.

To further ensure that we were not leaving any important search terms out, we also reviewed the search strategies used in previous studies. As mentioned above, Liu et al. proposed a comprehensive search strategy for AI-related articles that has been widely used by researchers in scientometrics; it consists of a core lexical query, two expanded lexical queries, and the WoS category "Artificial Intelligence". We also reviewed all WoS citation topics and determined 10 topics that belong to Artificial Intelligence, including Natural Language Processing, Face Recognition, Defect Detection, Reinforcement Learning, Video Summarization, Action Recognition, Object Tracking, Deep Learning, Artificial Intelligence & Machine Learning, and Visual Servoing. Our selected search terms (referred to as Tang's approach) were combined with the search terms proposed in Liu's paper. The final search strategy is shown in Table 1. This combined approach allowed us to retrieve a large number of AI-related articles that were relevant to our research objectives.
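The high-frequency keyword ranking described above can be reproduced with a few lines of Python. This is a hedged sketch that assumes a tab-delimited WoS "Full Record" export in which the DE column holds Author Keywords and the ID column holds Keywords Plus; the file name and the 200-occurrence cutoff handling are illustrative.

```python
# Count keyword occurrences across the exported WoS records and list the
# candidates (appearing 200+ times) that were then reviewed manually.
import csv
from collections import Counter

counts = Counter()
with open("wos_ai_full_records.txt", encoding="utf-8-sig") as fh:
    for rec in csv.DictReader(fh, delimiter="\t"):
        for field in ("DE", "ID"):  # Author Keywords, Keywords Plus
            for kw in (rec.get(field) or "").split(";"):
                kw = kw.strip().lower()
                if kw:
                    counts[kw] += 1

candidates = [(kw, n) for kw, n in counts.most_common() if n >= 200]
```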
To make sure our results are comparable with previous results, we used our search strategy in the WoS Science Citation Index Expanded (SCI-Expanded) and Social Sciences Citation Index (SSCI) databases, and set the same time limit January 1, 2013 to December 31, 2022. 626,913 articles were retrieved with Liu's approach. When combined with Tang's approach, there are a total of 2,490,817 articles for further screening and analysis. The results indicate that most of the AI-related articles from WoS cannot be captured by a simple search of TS="Artificial Intelligence". \begin{tabular}{|c|c|l|} \hline **Author** & \begin{tabular}{c} **Search** \\ **Strategy** \\ \end{tabular} & **Search terms** \\ \hline \multirow{4}{*}{\begin{tabular}{c} Liu et \\ al. \\ \end{tabular} } & \multirow{4}{*}{ \begin{tabular}{c} Core lexical \\ query \\ \end{tabular} } & TS=("Artificial Intelligence" or "Neural Net" or "Machine" Learning" or "Expert System$" or "Natural Language Processing" or "Deep Learning" or "Reinforcement Learning" or "Learning Algorithm$" or "Supervised Learning" or "Intelligent Agent") \\ \cline{1-1} & & \\ \ \begin{tabular}{|c|l|} \hline & TS=(("Backpropagation Learning" or "Back-propagation Learning" or "Bp Learning") or ("Backpropagation Algorithm** or "Back-propagation Algorithm**) or "Long Short-term Memory" or ((Pcnn$ not Pcnt) or "Pulse Coupled Neural Net**) or "Perceptron$" or ("Neuro-evolution" or Neuroevolution) or "Liquid State Machine**" or "Deep Belief Net**" or ("Radial Basis Function Net** or Rbfnn* or "Rbfent** or "Rbfent** or "Deep Net** or Autoencoder* or "Committee \\ \multirow{2}{*}{Expanded lexical query} & Machine** or "Training Algorithm$" or ("Backpropagation Net** or "Back-propagation Net** or "Back-propagation Net** or "Back-propagation Net** or "Bp Network**) or "Q learning" or "Convolution* Net** or "Actor-critic Algorithm$" or ("Feedforward Net** or "Feed-Forward Net**) or "Hopfeld Net**" or Neocognitron* or Xgboost* or "Boltzmann Machine**" or "Activation Function$" or ("Neurodynamic Programming" or "Neuro dynamic Programming") or "Learning Model**" or (Neuro computing or "Neuro-Computing") or "Temporal Difference Learning" or "Echo State* Net**) \\ \hline & TS=("Transfer Learning" or "Gradient Boosting" or "Adversarial \\ & Learning" or "Feature Learning" or "Generative Adversarial Net** or "Representation Learning" or "Multiagent Learning" or "Multi-agent Learning") or "Reservoir Computing" or "Co-training" or ("Pac \\ & Learning" or "Probabl* Approximate* Correct Learning") or "Extreme Learning Machine** or "Ensemble Learning" or "Machine* \\ & Intelligent** or ("Neuro fuzzy" or "Neurofuzzy") or "Lazy Learning" or ("Multi* instance Learning" or "Multi-instance Learning") or ("Multi* task Learning" or "Multitask Learning") or "Computation* \\ & Intelligent** or "Neural Model**" or ("Multi* label Learning" or "Multi label Learning" or "Multi label Learning") or "Multilabel Learning") or "Similarity Learning" or "Statistical \\ & Relation* Learning" or "Support* Vector* Regression" or "Manifold \\ & Regularization" or "Decision Forest** or "Generalization Error** or "Transductive Learning" or (Neurorobotic* or "Neuro-robotic*") or "Inductive Logic Programming" or "Natural Language \\ & 2 & Understanding" or (Ada-boost* or "Adaptive Boosting") or \\ & "Incremental Learning" or "Random Forest*" or "Metric Learning" or "Neural Gas" or "Grammatical Inference" or "Support* Vector* \\ & Machine** or ("Multi* label Classification" or "Multilabel \\ & Classification") or 
"Conditional Random Field**" or ("Multi* class \\ & Classification" or "Multiclass Classification") or "Mixture Of \\ & Expert** or "Concept* Drift" or "Genetic Programming" or "String \\ & Kernel** or ("Learning To Rank** or "Machine-learned Ranking") or "Boosting Algorithm$" or "Robot* Learning" or "Relevance Vector* \\ & Machine** or Connectionis* or ("Multi* Kernel$ Learning" or "Multikernel$ Learning") or "Graph Learning" or "Naive bayes* \\ & Classif** or "Rule-based System$" or "Classification Algorithm**" or "Graph* Kernel*" or "Rule* induction" or "Manifold Learning" or "Label Propagation" or "Hypergraph* Learning" or "One class \\ \hline & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular} \begin{tabular}{|p{28.5pt}|p{284.5pt}|p{284.5pt}|} \hline & Classif*" or "Intelligent Algorithm*") \\ \hline WoS category & WC=("Artificial Intelligence") \\ \hline & TS=("action recognition" OR "activation function$" OR "activity recognition" OR "adaboost" OR "AI" OR "algorithm$" OR "anfis" OR "ann" OR "anomaly detection" OR "ant colony optimization" OR "artificial bee colony" OR "artificial neural-network$" OR "artificial-intelligence" OR "attribute reduction" OR "augmented reality" OR "autoencoder$" OR "automa* detection" OR "automa* segmentation" OR "automa* classification" OR "background subtraction" OR "backpropagation" OR "bankruptcy prediction" OR "bayesian network$" OR "bayesian-inference" OR "bidirectional lstm" OR "big data" OR "bootstrap" OR "brain-computer interface$" OR "canonical correlation-analysis" OR "cellular neural-network$" OR "classifier$" OR ("cluster-analysis" OR "cluster analysis") OR "cnn" OR "community detection" OR "complex dynamical network$" OR "component analysis" OR "computational intelligent*" OR "computer vision" OR "computer-aided detection" OR "concept drift" OR "consensus model" OR ("convolutional network$" OR "convolutional neural-network$") OR "corpus" OR "crack detection" OR "cross-validation" OR "damage detection" OR "data augmentation" OR "data fusion" OR "data mining" OR "decision tree$" OR "deconvolution" OR "deep neural-network$" OR "defect detection" OR "dempster-shafer theory" OR "differential evolution" OR "dimensionality reduction" OR "discriminant-analysis" OR "dynamical network$" OR "edge-detection" OR "eigenface$" OR "emotion recognition" OR "energy minimization" OR "event detection" OR "evidential belief function" OR "evidential reasoning approach" OR "expert-system" OR "exponential stability" OR "exponential synchronization" OR "expression recognition" OR "extended kalman filter" OR "extreme learning-machine$" OR ("face recognition" OR "face-recognition") OR "facial expression recognition" OR "fault-diagnosis" OR "feature subset-selection" OR "feature-extraction" OR "feedforward networks" OR "fuzzy c-means" OR "fuzzy inference system" OR "fuzzy-logic" OR "fuzzy-set$" OR "fuzzy-system$" OR "gan" OR "gaussian process regression" OR "generative adversarial network$" OR "gesture recognition" OR "global exponential stability" OR "gradient descent" OR "grey wolf optimizer" OR "group decision-making" OR "hidden markov-models" OR "human activity recognition" OR "image classification" OR "image registration" OR "image segmentation" OR "image-analysis" OR "imbalanced data" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR 
"image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR 
"image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image 
segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " "image segmentation" OR "image segmentation" OR " "image segmentation" OR " "image segmentation" OR "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " "image segmentation" OR "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR " "image segmentation" OR " "image segmentation" OR " "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR "image segmentation" OR " "image segmentation" OR "image segmentation" OR " \begin{tabular}{|p{28.5pt}|p{28.5pt}|} \hline & OR "inference system" OR "information extraction" OR "in-silico prediction" OR "intrusion detection" OR "kalman filter" OR "kernel" OR "k-svd" OR "lasso" OR "lda" OR "leader-following consensus" OR "learning-model" OR "learning-based optimization" OR 
"learning-model" OR "least-squares" OR "linear discriminant-analysis" OR "local binary patterns" OR "logistic-regression" OR "lstm" OR "machine vision" OR "markovian jump systems" OR "metaheuristics" OR "multiagent system$" OR "multilayer feedforward network$" OR "multilayer perceptron" OR "multiobjective optimization" OR "naive bayes" OR "nearest-neighbor" OR "neural-control" OR "neural-network$" OR "nonlinear dimensionality reduction" OR "nonlinear-systems" OR "novelty detection" OR ("object detection" OR "object recognition") OR "object tracking" OR "outlier detection" OR "parameter-estimation" OR "parameter-identification" OR "partial least-squares" OR "particle swarm" OR "pattern-classification" OR "pattern-recognition" OR "pca" OR "pcnn" OR "pedestrian detection" OR "perceptron" OR "permutation entropy" OR "person reidentification" OR "pls" OR "pose estimation" OR "principal component analysis" OR "pso" OR "quantile regression" OR "random forest$" OR "recommender system$" OR "recurrent neural-network$" OR "regression-analysis" OR "regression-models" OR "representation model" OR "robot" OR "robot manipulator$" OR "roc curve" OR "rough set$" OR "rule extraction" OR "scene classification" OR "seizure detection" OR "self-organizing map$" OR "semantic segmentation" OR "semantic similarity" OR "semantic web" OR "sentiment analysis" OR "sequence-based predictor" OR "short-term-memory" OR "smote" OR "sparse representation" OR "species distribution model$" OR ("support vector machine$" OR "support-vector-machine") OR "support vector regression" OR "svm" OR "svr" OR "target detection" OR "text classification" OR "texture analysis" OR "texture classification" OR "time-series" OR "time-varying delay$" OR "traffic flow prediction" OR "trajectory tracking" OR "travel-time prediction" OR "variable selection" OR "variational mode decomposition" OR "visual tracking" OR "visual-attention") \\ \hline & 4.48.672 Natural Language Processing \\ \cline{2-3} & 4.17.118 Face Recognition \\ \cline{2-3} & 4.17.1950 Defect Detection \\ \cline{2-3} & 4.116.862 Reinforcement Learning \\ \cline{2-3} & 4.17.1802 Video Summarization \\ \cline{2-3} & 4.17.630 Action Recognition \\ \cline{2-3} & 4.17.953 Object Tracking \\ \hline \end{tabular} ### Filtering AI-related articles by search strategy We then downloaded the corpus (referred to as the initial corpus) via the search strategy described in Table 1. Since we tried to keep the recall as high as possible in the aforementioned strategy, the next step would be further refining the corpus to increase the precision. To retrieve the initial batch of AI-related articles, we used a combination of WoS citation topics and keyword-based search. The 10 citation topics included in our search strategy were reviewed, and 20 articles were randomly selected from each topic. The authors manually reviewed these 200 articles to determine whether they were AI-related. In each topic, at least 19 out of 20 articles were considered to be AI-related. Using those citation topics, 184,058 articles were retrieved from WoS. We added them to our final result of AI-related articles (referred to as the final corpus). Next, the authors extracted high-frequency keywords from "author keywords" and "keywords plus" fields of the records, then ranked them according to their frequency. We manually reviewed the top 200 high frequency keywords and only retained the keywords which were considered as highly related to AI. 
Similarly, we consulted Wikipedia and WoS search results whenever necessary, and only retained the keywords considered inherent parts of AI, i.e., keywords for which at least 19 out of 20 sampled search results were deemed AI-related. The search terms in Liu's core lexical query were also added to the high-frequency keywords and subjected to the same review process. Finally, we used the resulting keywords to construct a search strategy which we refer to as our core lexical query (shown in Table 2). A total of 459,004 articles (excluding the ones already in the final corpus) were added to the final corpus via this search strategy. We also noted that there were 77,091 articles in the WoS category "Artificial Intelligence" not included in the previous steps. They were added to our final corpus to ensure a high coverage of our search strategy.

Table 1: Preliminary search approach for artificial intelligence

Table 2: Search approach for the final corpus

| Search strategy | Search terms |
| --- | --- |
| Core lexical query | TS=("artificial intelligence*" OR "artificial-intelligen**" OR "autoencoder" OR "backpropagation" OR "back-propagation" OR "lstm" OR "computational intelligence*" OR "computer vision" OR "convolutional net**" OR "deep learning" OR "deep-learning" OR "extreme learning-machine" OR "generative adversarial network" OR "grey wolf optimize" OR "learning framework" OR "machine learning" OR "machine vision" OR "multiagent system" OR "pcnn" OR "perceptron" OR "random forest" OR "random-forest" OR "semantic segmentation" OR "sentiment analysis" OR "smote" OR "support vector regression" OR "support-vector-machine" OR "natural language processing" OR "NLP" OR "neural net**" OR "neural-net" OR "reinforcement learning" OR "learning algorithm" OR "supervised learning") |
| WoS category | WC=("Artificial Intelligence") |
| Citation topics | 4.48.672 Natural Language Processing; 4.17.118 Face Recognition; 4.17.1950 Defect Detection; 4.116.862 Reinforcement Learning; 4.17.1802 Video Summarization; 4.17.630 Action Recognition; 4.17.953 Object Tracking; 4.17.128 Deep Learning; 4.61 Artificial Intelligence & Machine Learning; 4.116.2066 Visual Servoing |

### Fine-tuning a GPT classification model to further identify AI-related articles

After screening with the aforementioned search terms, a portion of the articles in the initial corpus were already classified as AI-related via citation topics, category, or keyword search, leaving 1.84 million articles in the initial corpus unclassified. The initial corpus was retrieved using a search strategy that aimed to keep the recall as high as possible, but this does not guarantee that all retrieved articles are AI-related. Therefore, a novel approach is necessary to classify the remaining texts and ensure that only AI-related articles are included in the final corpus. A text classification model fine-tuned with GPT provides a more efficient and accurate way to identify relevant articles, which is crucial for scientometric analysis of emerging research fields.
Fine-tuning a GPT model allows for more effective use of the API models by providing improved results, the ability to train on a larger number of examples, token savings, and reduced request latency [1]. GPT has been pre-trained on a large amount of text from the open internet and can often intuit the task being performed from just a few examples, known as "few-shot learning". Fine-tuning builds on this by training on many more examples than can fit in a prompt, leading to better results on a wider range of tasks. Once a model has been fine-tuned, there is no need to provide examples in the prompt, which reduces costs and allows for lower-latency requests. Fine-tuning involves preparing and uploading training data, training a new fine-tuned model, and using the fine-tuned model for the desired tasks.

In this case, the authors utilized GPT to fine-tune a text classification model for identifying AI-related articles. From the 1.84 million remaining articles in the initial corpus, as well as from WoS at large, we manually classified a batch of sample articles into AI-related and non-AI categories and used them to train the text classification model. We fine-tuned the GPT model using the Ada base model, chosen for its low cost and high speed in classification tasks while avoiding any major loss in performance [1]. To prepare the training dataset, we selected a random set of 200 titles of scholarly articles from the initial corpus, each manually classified as either AI-related or other. The dataset was formatted as JSONL, with each line containing a prompt-completion pair representing a single training example (a minimal sketch is given at the end of this subsection). For the fine-tuning process, we utilized the OpenAI CLI to train our Ada model on the dataset with default parameters.

After training the text classification model on this initial sample of 200 manually labeled articles, the authors randomly selected another 200 titles from the initial corpus and used the fine-tuned model to classify them. The results were manually reviewed, and falsely classified examples were added to the training set with manually corrected labels. To augment the data, the authors searched the WoS and added titles similar to the examples in the dataset. This process was repeated several times until over 1,000 (1,014) training samples were accumulated. Of the 1,014 samples, 456 (45.0%) were positive, and the rest were negative. Finally, those samples were used to train the final classification model.

We then applied the fine-tuned Ada classifier to the remaining 1.84 million unclassified articles in the initial corpus. The classifier identified 184,073 additional AI-related articles from this vast pool of unclassified titles. By combining these newly identified articles with the previous results obtained through keyword searches and citation topic analyses, we arrived at a final corpus consisting of 904,226 AI-related articles. This comprehensive dataset serves as the foundation for our scientometric analysis, allowing us to draw valuable insights into the rapidly evolving field of artificial intelligence research. The use of a fine-tuned GPT classification model, combined with keyword-based search and citation topic strategies, proved to be an effective and efficient approach to creating a robust corpus of AI-related articles.
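The following is a hedged sketch of the legacy OpenAI fine-tuning workflow described above (openai-python before version 1.0, with the "ada" base model). The prompt/label separators, example titles, and the fine-tuned model name are illustrative assumptions, not the authors' exact settings.

```python
# 1. Training data: one JSONL line per example, as prompt-completion pairs.
import json
import openai

examples = [
    {"prompt": "Deep learning for defect detection in castings\n\n###\n\n", "completion": " ai"},
    {"prompt": "A survey of coral reef bleaching events\n\n###\n\n", "completion": " other"},
]
with open("ai_classifier_train.jsonl", "w", encoding="utf-8") as fh:
    for ex in examples:
        fh.write(json.dumps(ex) + "\n")

# 2. Fine-tune via the legacy OpenAI CLI (run in a shell):
#    openai api fine_tunes.create -t ai_classifier_train.jsonl -m ada

# 3. Classify a new title with the fine-tuned model (model name is illustrative).
resp = openai.Completion.create(
    model="ada:ft-personal-2023-04-01",
    prompt="Graph neural networks for traffic flow prediction\n\n###\n\n",
    max_tokens=1,
    temperature=0,
)
label = resp["choices"][0]["text"].strip()  # expected: "ai" or "other"
```

As a consistency check, the component counts reported in this and the previous subsection add up exactly to the stated final corpus size:

\[
\underbrace{184{,}058}_{\text{citation topics}} + \underbrace{459{,}004}_{\text{core lexical query}} + \underbrace{77{,}091}_{\text{WoS category}} + \underbrace{184{,}073}_{\text{GPT classifier}} = 904{,}226.
\]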
This methodology has the potential to be applied to other emerging research fields, providing an accurate and reliable means of identifying relevant articles for scientometric analysis and other research purposes.

## Results

### Performance Evaluation

At the beginning of this section, we examine the overlap between the different approaches used to identify AI-related articles. The Venn diagram below (Figure 2) illustrates the coverage and intersection of the three approaches: Liu's approach, the Category="Artificial Intelligence" approach, and Tang's approach. The WoS Category="Artificial Intelligence" approach, which contains 188,414 articles, is included in both Liu's and Tang's approaches. Liu's approach is almost entirely contained in Tang's approach, with the exception of 12,772 articles. Tang's approach contains the largest number of articles among the three, with 249,050 unique articles. The significant overlap between the approaches indicates a shared understanding of the AI research landscape; however, the disparities in the number of articles identified by Liu's approach and Tang's approach suggest that there may be unique aspects captured by our methods.

Figure 2: Comparison of different artificial intelligence search approaches

Next, we evaluated the performance of various methods to identify AI-related articles. The data used for evaluation came from 200 manually labeled samples provided by two third-party AI experts from the computer science department of Zhejiang University. Those samples were randomly selected from our initial corpus. The experts read the title, keywords and abstract of each article and manually classified it as "AI-related" or "other". When the experts disagreed on whether an article should be deemed AI-related, one of the authors read the full text and discussed its classification with the experts; when necessary, a majority vote was used to determine the result.

The performance of each method is evaluated in terms of precision and recall. Precision measures the fraction of true AI-related articles among those identified by the method, while recall measures the fraction of true AI-related articles identified by the method out of all the AI-related articles in the sample. The following chart (Figure 3) summarizes the results.

Figure 3: Comparison of the performance of different artificial intelligence search approaches

In this chart, the Category=AI approach refers to the simple query WoS category="Artificial Intelligence". Liu's approach refers to the combination of the core lexical query, the expanded lexical queries and the WoS category "Artificial Intelligence". Initial Training means classifying with a GPT model fine-tuned on 200 manually labeled samples. Final Fine Tuning is the result of GPT classification with a model fine-tuned on the full set of 1,014 samples. Tang's approach is our final approach, which uses a combination of citation topics, the WoS category, the core lexical query and GPT classification with the final fine-tuned classifier. We noted that the simple TS="Artificial Intelligence" search strategy yielded a very low recall of below 10%, so it is omitted from the discussion for simplicity. WIPO (WIPO 2019) also employed a keyword-based approach to delineate the field of AI, and it returned 37% more results than Liu's approach; however, a manual check of the difference showed that few of those extra articles are related to AI, indicating a significantly lower precision.
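For reference, the precision and recall plotted in Figure 3 follow the standard definitions (TP: AI-related articles correctly identified; FP: non-AI articles wrongly included; FN: AI-related articles missed), and the F1 score used below is their harmonic mean:

\[
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}.
\]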
To account for AI-related articles not included in our initial corpus, we randomly downloaded 2,500 articles published in 2022 from the WoS and subjected them to our final fine-tuned GPT classifier. The random sampling was implemented by randomly selecting a day in 2022, downloading all articles published on that day, and then randomly selecting 2,500 articles from the result. An article is considered AI-related if it is classified as relevant by the classifier and/or deemed relevant by the third-party experts. Out of the 110 relevant articles in the 2,500 random samples, 107 are included in our initial corpus, and 105 of them are included in the final corpus. The results indicate that our initial corpus successfully captures 97% of all the AI-related articles in the entire WoS corpus, and that the 94% recall of our approach is not significantly affected by the AI-related articles omitted by our preliminary search strategy. While the final corpus has a very slightly lower recall than the initial corpus, it offers a much higher precision, given that it captured 105 of the 107 AI-related articles in the initial corpus while containing only 36.3% of the initial corpus's articles.

The simple Category=AI approach yields an 85% precision, since not all texts in this category were classified as AI-related by our experts. For example, articles dealing with human-machine interfaces, chemometrics and bilevel optimization were deemed non-AI by our experts, and we excluded them from the AI-related articles. It has a recall of only 27%, which is lower than that of the other methods, meaning it fails to identify a significant portion of AI-related articles. Liu's approach, Initial Training, and Final Fine Tuning show improvements in recall compared to the Category=AI method, with comparable precision. Our final combined approach achieved a precision of 90% and a recall of 94%. Its F1 score of 0.921 outperforms the other methods mentioned above, indicating the best balance between precision and recall among them. Considering both the initial evaluation and the evaluation of the WoS random samples, Tang's combined approach demonstrates the most promising performance in terms of precision and recall, making it a suitable method for identifying AI-related articles within the given dataset.

### Publication Volume Trends

The analysis of publication trends for artificial intelligence (AI) articles demonstrates a continuous growth pattern from 2013 to 2022. This growth follows an exponential trajectory, with a marked acceleration from 2015 onwards, reaching a peak in 2022. This surge in AI research publications is evident in the increase in both the number of AI-related publications and their percentage in the Web of Science (WoS) database. As of 2022, we estimate that more than 6% of the publications in that year are associated with AI. Meanwhile, the total number of articles in the WoS category "Artificial Intelligence" published each year increased only moderately, and slightly declined in the last year. This indicates a rapid growth of interdisciplinarity in the field of artificial intelligence.

Researchers from around the world contribute to the AI publication landscape, with the dataset including articles from numerous countries. However, the majority of publication output is associated with a small group of leading countries. The top ten countries contribute more than 70% of the total worldwide AI articles published in the period 2013-2022.
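As a quick consistency check of the figures above (using the rounded 90% precision and 94% recall, and the 110 relevant articles found among the 2,500 random WoS samples):

\[
F_1 \approx \frac{2 \times 0.90 \times 0.94}{0.90 + 0.94} \approx 0.92, \qquad
\frac{107}{110} \approx 97\% \ \text{(initial corpus)}, \qquad
\frac{105}{110} \approx 95\% \ \text{(final corpus)}.
\]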
China and the US emerge as the most productive countries in terms of AI publications, followed by India, South Korea, England, Germany, Spain, Canada, Iran and Italy.

Figure 4: AI-related publications and percentage of publications by year

The distribution of AI publications by year reveals a consistent increase in the number of AI-related publications. This growth is particularly notable for China, which witnessed a rapid increase in both the number and the world share of its AI publications from 2013 to 2022. While the US has maintained a steady pace of growth in terms of total publication count, its world share has been slowly dropping since 2018. India ranks third globally, with a steadily increasing number and world share over the past decade. South Korea's growth has been impressive as well, with the number of publications almost sextupling between 2013 and 2022. European countries like England, Germany, Spain, and Italy have experienced more modest growth in scientific publications. Although these countries continue to contribute significantly to global scientific output, their growth rates are relatively lower compared to countries like China and India.

Figure 5: Annual AI-related publications by country

Top organizations contributing to AI research include the Chinese Academy of Sciences, UDICE-French Research Universities, the University of California System, the University of Chinese Academy of Sciences (CAS), Harvard University, the University of London, the Centre National de la Recherche Scientifique (CNRS), the Egyptian Knowledge Bank (EKB), Tsinghua University, and Nanyang Technological University. These organizations have consistently increased their AI research output throughout the period 2013-2022, with the Chinese Academy of Sciences clearly ranked first in terms of total publication count since 2017. In 2022, it reached an annual publication count of 8,980, 2.52 times that of the University of California System, which ranked second.

Discipline-wise, in terms of WoS categories, AI research has been particularly prominent in fields such as Engineering (Electrical & Electronic), Computer Science (Artificial Intelligence, Information Systems, Interdisciplinary Applications, Theory & Methods, Software Engineering), Telecommunications, Automation & Control Systems, Instruments & Instrumentation, and Engineering (Multidisciplinary). We note that the category "Artificial Intelligence" itself ranks only second in the chart, accounting for only 15.8% of all AI-related publications in 2022. The distribution of AI publications across these disciplines has also grown over the years, showcasing the interdisciplinary nature of AI research.

Figure 6: Annual world share of AI-related publications by country

Figure 7: Annual AI-related publications by institution

### Citation analysis

Citation analysis plays a critical role in assessing the impact and relevance of scientific research. While publication numbers provide a measure of research output, citation analysis offers deeper insight into the influence and significance of individual studies. By examining the frequency and patterns of citations, we can gauge the extent to which an organization has contributed to the advancement of knowledge in its field. Including citation analysis in our evaluation of research performance enables a more comprehensive understanding of a country's contribution to scientific progress and innovation.
The data provided represents various metrics of scientific research output for different countries, including the number of publications, total citations, H-index, the number of publications in the top 1% and 10% cited, and the average citation per publication. Figure 8: Annual AI-related publications by WoS category \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Country** & **Publications** & **Total citations** & **H-index** & **Top 1\% cited** & **Top 10\% cited** & **Average citation** \\ \hline China & 328,552 & 4,330,629 & 365 & 2,718 & 29,964 & 13.18 \\ \hline USA & 111,011 & 2,373,245 & 408 & 2,101 & 15,246 & 21.38 \\ \hline India & 48,330 & 554,195 & 173 & 249 & 3,731 & 11.47 \\ \hline South Korea & 33,652 & 390,270 & 160 & 191 & 2,583 & 11.60 \\ \hline Iran & 27,021 & 417,681 & 154 & 180 & 3,205 & 15.46 \\ \hline England & 25,031 & 541,074 & 222 & 442 & 3,522 & 21.62 \\ \hline Germany & 23,861 & 439,354 & 203 & 341 & 2,666 & 18.41 \\ \hline Spain & 22,689 & 337,329 & 161 & 188 & 2,231 & 14.87 \\ \hline Italy & 20,518 & 312,534 & 145 & 144 & 2,191 & 15.23 \\ \hline Canada & 19,026 & 362,394 & 176 & 247 & 2,337 & 19.05 \\ \hline \end{tabular} \end{table} Table 3: Publications and citations by country China leads in the number of publications, total citations, top 1% cited, and top 10% cited. With 328,552 publications, China has accumulated 4,330,629 citations, 2,718 publications in the top 1% cited, and 29,964 publications in the top 10% cited. This demonstrates that China is not only producing a high volume of research but also has a significant impact in terms of citations. The USA follows China with 111,011 publications and 2,373,245 total citations. However, the USA has a higher H-index (408) compared to China's 365, and a higher average citation per publication (21.38) compared to China's 13.18. This suggests that while the USA has fewer publications than China, its research output might be of higher overall quality. India, South Korea, and Iran have a lower number of publications compared to China and the USA. However, they have comparable H-indices and a significant number of publications in the top 1% and 10% cited categories, which indicates that these countries are also producing impactful research. England, Germany, Spain, Italy, and Canada have a lower number of publications than the aforementioned Asian countries, but they exhibit strong performance in total citations, H-index, and average citation per publication. This suggests that these countries prioritize quality over quantity in their research output. In summary, the data reveals that China leads in the number of scientific publications, total citations, top 1% cited, and top 10% cited. The USA, while having fewer publications, has a higher H-index and average citation per publication. Other countries like England, Germany, Spain, Italy, and Canada show strong performance in total citations, H-index, and average citation per publication, suggesting a focus on research quality. This analysis emphasizes the importance of considering multiple metrics to assess the true value and influence of scientific research output across different countries. 
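For reference, the H-index reported in Table 3 is the standard Hirsch index computed over each country's AI-related publications:

\[
h = \max \{\, h \in \mathbb{N} : \text{at least } h \text{ publications have received} \ge h \text{ citations each} \,\}.
\]

For example, China's H-index of 365 means that 365 of its AI-related publications have each been cited at least 365 times.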
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Title** & **Year** & **Main Technique(s)** & **Citations** \\ \hline Generative Adversarial Networks & 2020 & GANs & 25,050 \\ \hline Deep learning & 2015 & Deep Learning, CNNs & 22,413 \\ \hline Dropout: A Simple Way to Prevent Neural Networks from Overfitting & 2014 & Dropout & 19,468 \\ \hline RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies & 2014 & Maximum Likelihood estimation & 19,217 \\ \hline ImageNet Large Scale Visual Recognition Challenge & 2015 & object detection & 17,012 \\ \hline ImageNet Classification with Deep Convolutional Neural Networks & 2017 & Deep Convolutional Neural Networks & 15,905 \\ \hline Human-level control through deep & 2015 & Deep Reinforcement & 10,998 \\ \hline \end{tabular} The top 10 cited articles in AI showcase the significant advancements made in the field over the last few years. Notably, many of these articles focus on deep learning techniques and their applications in various domains such as image recognition, natural language processing, and reinforcement learning. For example, the highly cited "Generative Adversarial Networks" [14] and "Deep Learning" [15] papers have enabled the development of powerful generative models capable of producing realistic images and improving performance in speech recognition and object detection. Similarly, the "Dropout" [16] paper introduced a simple yet effective method to prevent overfitting in deep neural networks, which has had widespread impact across various applications. The list also highlights the importance of large-scale datasets and benchmarks, such as the ImageNet Large Scale Visual Recognition Challenge, in driving progress in the field. These datasets have facilitated the development of more sophisticated models like the deep convolutional neural networks (CNNs), which have significantly improved object recognition and detection. Furthermore, articles like "Human-level control through deep reinforcement learning" [17] demonstrate the potential of AI to achieve human-level performance in complex tasks using end-to-end learning approaches. Overall, this analysis reflects the rapid advancements made in AI research, with a strong emphasis on deep learning techniques and their wide-ranging applications. #### Major Research Themes In this section, we discuss the major research themes in AI by examining the high-frequency keywords and research subfields in the AI-related publications. The analysis of keywords provides insights into the prominent topics and trends in the \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline reinforcement learning & & Learning, Deep Q-networks & \\ \hline Deep learning in neural networks: An overview & 2015 & Deep Learning, Supervised Learning, Unsupervised Learning & 8,696 \\ \hline A new criterion for assessing discriminant validity in variance-based structural equation modeling & 2015 & Variance-based Structural Equation Modeling & 8,537 \\ \hline DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs & 2018 & Deep Convolutional Nets, Atrous Convolution & 8,377 \\ \hline \end{tabular} \end{table} Table 4: Top 10 cited articles field. Here are the top 20 high-frequency keywords and their occurrence in the publications: These high-frequency keywords reveal the core focus areas in AI research, such as machine learning and deep learning. 
\begin{table}
\begin{tabular}{|l|l|l|}
\hline **Rank** & **Keyword** & **Occurrence** \\
\hline 1 & Machine learning & 83,483 \\
\hline 2 & Deep learning & 66,644 \\
\hline 3 & Artificial intelligence & 30,261 \\
\hline 4 & Feature extraction & 24,556 \\
\hline 5 & Neural networks & 17,299 \\
\hline 6 & Classification & 15,881 \\
\hline 7 & Optimization & 15,352 \\
\hline 8 & Convolutional neural network & 14,433 \\
\hline 9 & Training & 13,024 \\
\hline 10 & Neural network & 12,965 \\
\hline 11 & Task analysis & 12,947 \\
\hline 12 & Artificial neural network & 12,946 \\
\hline 13 & Reinforcement learning & 10,648 \\
\hline 14 & Genetic algorithm & 10,039 \\
\hline 15 & Convolutional neural networks & 9,995 \\
\hline 16 & Support vector machine & 9,940 \\
\hline 17 & Feature selection & 9,302 \\
\hline 18 & Random forest & 9,095 \\
\hline 19 & Data mining & 9,013 \\
\hline 20 & Artificial neural networks & 8,955 \\
\hline
\end{tabular}
\end{table}
Table 5: High-frequency keywords in AI-related publications

These high-frequency keywords reveal the core focus areas in AI research, such as machine learning and deep learning. Techniques such as feature extraction, classification, and optimization are central to AI research. Additionally, the prominence of various neural network architectures, including convolutional neural networks, artificial neural networks, and reinforcement learning, indicates the importance of these models in the field. The analysis also highlights the use of specific algorithms, such as support vector machines, genetic algorithms, random forests, and feature selection techniques. Data mining, a technique used to analyze and extract patterns from large datasets, also emerges as a significant theme in AI research.

In order to gain a deeper understanding of the research fields and their distribution within AI, we randomly selected the title and abstract of 10,000 articles from the final corpus. GPT 3.5 turbo is the same model used in the ChatGPT product (as of April 2023); it is recommended by OpenAI as a general-purpose chat model due to its strong performance and relatively low cost (OpenAI). We then utilized GPT 3.5 turbo to identify the subfields of AI to which each article belongs. For each of the randomly selected 10,000 articles, we sent its title and abstract to GPT 3.5 turbo and instructed it to reply with the subfield names.
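As a rough illustration of this classification step, the request can be sketched as follows with the legacy (pre-1.0) `openai` Python client. The prompt wording, the `classify_subfields` helper, and the parameter choices are illustrative assumptions rather than the exact prompt used in this study.

```python
import openai  # legacy (<1.0) client interface

openai.api_key = "YOUR_API_KEY"  # placeholder

def classify_subfields(title, abstract):
    """Ask gpt-3.5-turbo which AI subfield(s) one article belongs to (illustrative prompt)."""
    prompt = (
        "Identify the AI subfield(s) this article belongs to, e.g. machine learning, "
        "computer vision, natural language processing, robotics. "
        "Reply with the subfield names only, separated by commas.\n\n"
        f"Title: {title}\nAbstract: {abstract}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response["choices"][0]["message"]["content"]
    return [s.strip().lower() for s in answer.split(",") if s.strip()]

# Example call (title and abstract shortened for illustration):
# classify_subfields("Mask R-CNN", "We present a framework for object instance segmentation ...")
```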
After calculating the high-frequency research fields that appeared in the answers, we obtained the following chart:

Figure 9: Annual AI-related publications by subfield

Considering the provided data, which comprises 10,000 randomly sampled articles out of a 904,226-article corpus, we can analyze the relative trends in various AI subfields from 2013 to 2022. It is important to note that these numbers may not be entirely accurate, but they do offer a representative snapshot of the overall trends. The growth in each subfield should be understood in the context of the overall growth of the entire final corpus. Here is a summary of the findings:

1. Machine Learning has consistently been a popular subfield in AI research. The increase in the number of publications in our sample suggests that Machine Learning remains an area of focus as the AI field expands, with ongoing interest in its widespread applications across industries.
2. Computer Vision has shown steady growth in the number of publications in our sample, in line with the growth of the entire AI-related corpus. This suggests that image and video processing technologies continue to be an integral part of AI research as the field evolves.
3. Robotics has experienced consistent growth throughout the years, with the number of publications in the sample increasing alongside the overall growth of the corpus. This highlights the expanding role of robotics in scientific research and its increasing relevance in various sectors.
4. Natural Language Processing (NLP) has exhibited a growing trend in the number of publications in our sample. The growth of NLP publications is in line with the overall growth of the corpus, which demonstrates the increasing importance of NLP.
5. Optimization has seen a steady increase in publications in the sample over the years, reflecting the growing relevance of optimization techniques in AI research and their applicability across various subfields as the field continues to expand.
6. Deep Learning, which had no publications in the sample until 2016, has experienced rapid growth in recent years. The substantial increase in the number of publications, from 5 in 2016 to 320 in 2022, suggests that deep learning techniques have gained prominence in the AI field.

In summary, the data suggest that the growth in AI research across all subfields is consistent with the overall growth of the entire AI-related corpus. Among the main AI subfields, deep learning has experienced particularly rapid growth; its share of the AI-related corpus has been increasing significantly in recent years. This trend highlights the continued expansion and influence of AI research, with increasing interest in various subfields and their applications across multiple sectors.

We then examined the co-occurrence network of various subfields to gain insights into their relationships and prominence. Among the 10,000 samples, edges with weights below 50 were omitted to ensure a clear view of the core of the graph (a code sketch of this filtering step is given below). The data revealed several key patterns in the interdisciplinary nature of AI research.

1. Machine Learning emerged as a central hub of AI research, with connections to 21 other fields. This observation underscores the significance of machine learning in the AI landscape, demonstrating its wide-ranging applications and influence on other research areas.
2. The strongest connections between Machine Learning and other fields were observed with the following subfields, ranked by the weight of their relationships: (a) Computer Vision, (b) Robotics, (c) Natural Language Processing, (d) Data Mining, and (e) Deep Learning. These findings indicate that these subfields are highly intertwined with machine learning, suggesting that advancements in these areas are likely to be driven by machine learning techniques and methodologies.
3. Other notable relationships within the AI domain include the connections between Computer Vision and Robotics, Natural Language Processing and Deep Learning, and Computational Biology and Bioinformatics. These relationships highlight the synergies between various subfields and their potential for collaborative research.

Figure 10: AI-related publications co-occurrence network by subfield

In summary, our scientometric analysis of the AI domain has revealed the interdisciplinary nature of the field, with machine learning playing a central role in connecting and driving progress in various subfields. This information provides valuable insights for researchers, funding agencies, and other stakeholders interested in understanding the landscape of AI research and identifying potential areas for collaboration and further investigation.
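As a rough sketch of how such a co-occurrence graph can be assembled and thresholded, the snippet below assumes each sampled article carries the list of subfield labels returned by GPT; the toy data, variable names, and use of `networkx` are illustrative rather than the exact pipeline of this study.

```python
from collections import Counter
from itertools import combinations

import networkx as nx

# Hypothetical output of the GPT labelling step: one list of subfields per sampled article.
article_subfields = [
    ["machine learning", "computer vision"],
    ["machine learning", "natural language processing", "deep learning"],
    ["robotics", "computer vision"],
    # ... one entry per article in the 10,000-article sample
]

# Count how often each pair of subfields is assigned to the same article.
pair_counts = Counter()
for labels in article_subfields:
    for a, b in combinations(sorted(set(labels)), 2):
        pair_counts[(a, b)] += 1

# Build the co-occurrence graph, dropping edges with weight below 50 (as in the text).
MIN_WEIGHT = 50
G = nx.Graph()
for (a, b), weight in pair_counts.items():
    if weight >= MIN_WEIGHT:
        G.add_edge(a, b, weight=weight)

# Rank subfields by the number of other fields they connect to (degree);
# on the real sample, machine learning is the expected hub.
print(sorted(G.degree(), key=lambda node_deg: node_deg[1], reverse=True)[:5])
```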
## Discussion

In this study, we analyzed research trends in the field of Artificial Intelligence using a combined approach of search terms and a fine-tuned GPT classification model. The performance of our approach was evaluated with manually labeled data from the initial corpus. The results indicate that we achieved significantly increased recall and similar precision compared with strategies from previous research. By further applying the GPT classifier to a batch of randomly selected articles from the entire WoS database, we were able to show that our approach captured nearly all AI-related articles in the entire database. This verification is particularly difficult for traditional search-term-based approaches, since it would be very time-consuming to manually label thousands of articles without the help of a GPT classifier.

We noted that previous research has employed machine learning to delineate AI research scientometrically [11, 10]. Siebert et al. achieved a self-reported accuracy of 85%, but when applied to arXiv data, Dunham et al. estimated that their approach had a precision of 74% and a recall of 49%. Dunham et al.'s own approach had a precision of 83% and a recall of 85%, but their estimation is based on existing categories. Thus, the fact that only a portion of AI-related publications belong to the 'Artificial Intelligence' category would make the actual performance significantly lower than the reported values. In conclusion, our approach has the best performance among the approaches employing machine learning to classify AI-related research articles.
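For reference, the precision and recall figures discussed above can be computed from a manually labeled validation sample as in the following sketch; the labels shown are invented for illustration, and scikit-learn is assumed here only as a convenience.

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# 1 = AI-related, 0 = not AI-related (hypothetical manual labels vs. classifier predictions).
y_true = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1]

print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))
```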
While our study provides valuable insights, there are limitations to our approach. As mentioned earlier, the concept of artificial intelligence itself lacks a clear and widely accepted definition [23]. Previous definitions vary from "The science of making machines do things that would require intelligence if done by men" [12] to "the endowment of machines with human-like capabilities through simulating human consciousness and thinking processes using advanced algorithms or models" [15]. In the manual labeling process, we inevitably encountered cases in which the experts were unsure. We tried using different definitions, and the performance evaluation results can vary; however, the differences were small and did not affect our conclusions.

During the GPT classifier fine-tuning, we limited the training set size to around 1,000. According to the OpenAI documentation [10], a minimum of 200 samples is generally recommended, while roughly five times more samples are usually needed to substantially increase the performance. We began with 200 samples and, since it would be very time-consuming to label 5,000 samples, stopped after gathering around 1,000 samples. We tried altering a few parameters during the training and adding abstracts and keywords into the training data, but the results did not show any visible improvement. Future research can try further optimizing the fine-tuning process for a better result.

Moreover, the selection of the WoS database alone is not necessarily representative enough. While it can be considered a good source for scientometric analysis, it may overrepresent publications in English and may be biased in favor of the Natural Sciences, Engineering, or Biomedical Research [14].

In light of the growing importance of AI, future work can extend our approach to scientometric analysis of patent data in AI and text-mining of websites. This proposition is supported by the findings of previous studies. Researchers have conducted a landscape analysis of AI innovation dynamics and technology evolution using a new AI patent search strategy, incorporating patent analyses, network analyses, and source path link count algorithms [15]. Our combined approach of search terms and machine learning can be applied to further enhance the understanding of AI patenting trends and cross-organization knowledge flows. Another study [1] employed topic modeling, a text-mining approach, on archived website data to investigate sales growth for green goods enterprises. This study demonstrated the potential of website data to gauge internal capabilities and market responsiveness. By utilizing the natural language processing ability of GPT models to enhance text-mining performance, future work can unlock new insights into the strategic management of innovation and entrepreneurship. Furthermore, our approach can be directly applied to other fields of research, such as nanotechnology [13], synthetic biology [12], and cancer research [21].

## Conclusion

In conclusion, this study presents a comprehensive approach that combines search-term-based retrieval with machine learning techniques, specifically GPT-3, to address the challenges of scientometric analysis in the rapidly evolving field of Artificial Intelligence. Our approach demonstrates a high precision of 90% and successfully captures about 94% of AI-related articles in the Web of Science (WoS) corpus. Our analysis reveals exponential growth in AI research publications, particularly since 2015, with machine learning, computer vision, and deep learning experiencing the most significant growth. This trend underscores the continued expansion and influence of AI research, highlighting the increasing interest in various subfields and their applications across multiple sectors. The interdisciplinary nature of the AI field is evident, with machine learning playing a pivotal role in connecting and driving progress in diverse subfields.

The investigation into key players in the AI domain showcases China's dominant position in several metrics, while the USA exhibits a higher H-index and average citation per publication. Other countries, such as England, Germany, Spain, Italy, and Canada, demonstrate strong performance in total citations, H-index, and average citation per publication, emphasizing their focus on research quality.

This study highlights the potential of our comprehensive approach in facilitating accurate scientometric analysis in emerging research fields like AI. By incorporating GPT into the text-mining process of different data sources, various stakeholders, such as researchers, policymakers, and industry practitioners, can be enabled to leverage the power of large language models for future research, policy decisions, and technological advancements. Our findings emphasize the importance of a combined approach, utilizing large language models and conventional methods, in text mining and bibliometric studies to effectively understand the evolving landscape of AI research and its far-reaching implications.

## Acknowledgment

Lie Tang: Conceptualization, Methodology, Analysis and interpretation of data, Writing, Visualization. Xianke Zhou: Analysis and interpretation of data. Min Lu: Conceptualization, Methodology, Writing, Supervision. ChatGPT-4 is used in writing and visualization.
2303.02575
MITFAS: Mutual Information based Temporal Feature Alignment and Sampling for Aerial Video Action Recognition
We present a novel approach for action recognition in UAV videos. Our formulation is designed to handle occlusion and viewpoint changes caused by the movement of a UAV. We use the concept of mutual information to compute and align the regions corresponding to human action or motion in the temporal domain. This enables our recognition model to learn from the key features associated with the motion. We also propose a novel frame sampling method that uses joint mutual information to acquire the most informative frame sequence in UAV videos. We have integrated our approach with X3D and evaluated the performance on multiple datasets. In practice, we achieve 18.9% improvement in Top-1 accuracy over current state-of-the-art methods on UAV-Human(Li et al., 2021), 7.3% improvement on Drone-Action(Perera et al., 2019), and 7.16% improvement on NEC Drones(Choi et al., 2020).
Ruiqi Xian, Xijun Wang, Dinesh Manocha
2023-03-05T04:05:17Z
http://arxiv.org/abs/2303.02575v2
MITFAS: Mutual Information based Temporal Feature Alignment and Sampling for Aerial Video Action Recognition ###### Abstract We present a novel approach for action recognition in UAV videos. Our formulation is designed to handle occlusion and viewpoint changes caused by the movement of a UAV. We use the concept of mutual information to compute and align the regions corresponding to human action or motion in the temporal domain. This enables our recognition model to learn from the key features associated with the motion. We also propose a novel frame sampling method that uses joint mutual information to acquire the most informative frame sequence in UAV videos. We have integrated our approach with X3D and evaluated the performance on multiple datasets. In practice, we achieve 18.9% improvement in Top-1 accuracy over current state-of-the-art methods on UAV-Human(Li et al., 2021), 7.3% improvement on Drone-Action(Perera et al., 2019), and 7.16% improvement on NEC Drones(Choi et al., 2020). We will release the code at the time of publication. Machine Learning, ICML ## 1 Introduction Unmanned aerial vehicles (UAVs) are increasingly used for different applications, including search and rescue, agriculture, security, construction and aerial surveillance. This results in many challenging perception problems related to detection, tracking, re-identification, and recognition. In particular, action recognition using UAV videos is an important problem. While deep learning based methods(Feichtenhofer, 2020; Carreira and Zisserman, 2017) have achieved good performance for video action recognition on ground camera videos(Carreira and Zisserman, 2017; Monfort et al., 2020), there are many challenges with respect to using them on aerial videos. Compared to ground camera videos, the human actors in UAV videos appear rather small due to high camera altitude (see Figure 1). A wider area of the background occupies most of the pixels in the video frame, and only a small fraction (e.g., less than 10%) corresponds to a human action. Since these videos are captured from a moving (or dynamic) UAV, the position and orientation of the human actor may change considerably between the frames. This can result in making the model infer more from the background changes, as opposed to action information, during training. The motion of the UAV camera can also result in blurry frames and some techniques have been proposed to handle them (Li et al., 2021; Zhao et al., 2021; Kothandaraman et al., 2022). It is harder to collect and annotate UAV videos. Overall, there are fewer and smaller UAV video datasets, as compared to ground video datasets. Additionally, because of continuous changes in the altitude and the camera angle, Figure 1: \(F_{t}\) and \(F_{t+1}\) are two frames at time \(t\) and \(t+1\), respectively, from the same UAV video. The human actor in the two frames occupies less than 10% of the pixels due to the high camera altitude (top images). (a) MITFAS will focus on the regions corresponding to salient motions and use mutual information to find more informative frame. (b) Because of UAV’s motion, the position of human actor in \(F_{t+1}\) appears to be relatively behind comparing to \(F_{t}\). Our algorithm (MITFAS) computes and align these regions, so that the recognition model will infer more from the human motions. As shown in the right image, the main body of the human actor in two frames overlap after feature alignment. videos captured using UAVs tend to be more diversified and have unique viewpoints. 
Some parts of the human actor may be occluded, and not all parts of the human body that contribute to the action can be seen from the camera. Hence, some of the frames in the video are less informative, and this reduces the overall accuracy (Zhi et al., 2021; Wu et al., 2019; Ren et al., 2020; Gowda et al., 2021). Main Contribution:We present a novel approach for video action recognition in UAV videos with dynamic backgrounds and moving cameras. We take advantage of the mutual information to obtain and align the useful features corresponding to the human actor in the temporal domain. Our alignment method is used to identify the region of the human action, and find the most similar features in the video sequence. As a result, our learning-based recognition model is able to focus more on the human action, rather than the background regions. Due to the varying viewpoints generated by the movement of a UAV camera, not all the human body parts that contribute to the action are visible. We present a novel frame sampling method based on joint mutual information for dynamic UAV videos, which can compute the most informative and distinctive frame sequence for training aerial action recognition models. We have integrated our temporal feature alignment and frame sampling methods with X3D(Feichtenhofer, 2020) and use them for aerial action recognition (as shown in Figure 2). The novel components of our work include: 1. We use mutual information as a criterion to obtain and align the features at the same time. Our method takes the movement of the UAV into account and estimates the overlapping features by maximizing the mutual information. Given a reference image frame, our approach finds the most similar features in the subsequent frames. 2. We present a new frame sampling method for UAV videos. Our approach is designed to compute the most informative frame sequence in the video such that all the frames are mostly different from each other. We combine mutual information and joint mutual information to extract the frame. Our method is flexible and can deal with different variations in the video sequences. Extensive experiments show our sampling method overperforms peers. We test our method on 3 public UAV video datasets. We achieve 20.2% improvement over the baseline method and 18.9% improvement over current state-of-the-art method on UAV-Human (Li et al., 2021). Our method improves the top-1 accuracy on Drone Action (Perera et al., 2019) by 16.6% over the baseline method and 7.3% over the current state-of-the-art methods. On NEC Drones (Choi et al., 2020), our method get 78.62% top-1 accuracy, which is 7.18% higher than the current state-of-the-art and 12.47% over baseline model by using \(1/2\) input frame size. ## 2 Related Work ### Similarity Measurement and Mutual Information Many methods have been proposed to measure the similarity between two image patches. Euclidean distance is a well-known metric (Zhi et al., 2021). However, given the characteristics of UAV videos, small resolution of the human actor and moving cameras, Euclidean distance computations will be dominated by the background changes and may not work well due to shaking frames. Cosine similarity (Hoe et al., 2021) is another measure used for high dimensional data, but it does not take the magnitude of pixel values into account. Mutual information is used as a similarity measure between images by (Viola and Wells, 1995; Maes et al., 1997). 
As a similarity measure, mutual information has been widely used in medical imaging domain (Pluim et al., 2003; Klein et al., 2007). Liu et al. (Liu et al., 2022) have explored the possibility to use mutual information for person pose estimation task. Ji et al. (Ji et al., 2018) have proposed an unsupervised image clustering and segmentation method by maximizing the mutual information between spatial region pairs. Bachman et al. (Bachman et al., 2019) have proposed a self-supervised representation learning based on maximizing mutual information between features across views. Inspired by the success of mutual information for image processing, we use this concept for temporal feature alignment and frame sampling. Compared to cosine similarity or Euclidean distance, mutual information measures the statistical dependence or information redundancy between two images using pixel value distributions, which makes it more robust. ### Video Recognition for Aerial Videos Aerial video action recognition is a challenging task, especially when the camera is moving. The performance of action recognition on ground-camera video datasets has increased as a result of recent advancements in deep learning techniques. However, we don't get similar level of accuracy on videos captured using UAV cameras (Nguyen et al., 2022). For aerial video, (Geraldes et al., 2019),(Mliki et al., 2020),(Mishra et al., 2020),(Mou et al., 2020),(Barbed et al., 2020),(Gammulle et al., 2019),(Mou et al., 2020) apply 2D CNNs (e.g., ResNet, MobileNet) as the backbones to perform single-frame classification and combine the outputs of all frames in the video for recognition. (Barekatain et al., 2017),(Perera et al., 2019),(Perera et al., 2020) leverage two-stream CNNs to utilize attributes from the human motion and the appearance. 3D CNNs are widely used for aerial action recognition. (Choi et al., 2020),(Demir et al., 2021),(Li et al., 2021),(Mou et al., 2020),(Sultani and Shah, 2021) use I3D network (Carreira and Zisserman, 2017) to learn from spatial-temporal features from human actors and surroundings. (Peng and Razi, 2020) uses 3D convolutions with the Inception-ResNet model for aerial video processing. Other techniques based on transformer-based solutions. To better focus on the target actor in the video, (Kothandaraman et al., 2022) have proposed an attention mechanism with Fourier transform for better feature extraction. In terms of efficiency, (Ding et al., 2020) have presented a lightweight action recognition model by using MobileNet with a focal loss and self-attention. Our feature alignment and sampling method could also be combined with these action recognition methods to improve their accuracy. Given a video captured from a UAV, classic feature representation algorithms for aerial video action recognition are limited by the small size of the human actors in aerial videos. Sometimes, these approaches improperly identify the camera's motion as a feature (Washington et al., 2021; Mi, 2020). (Jain et al., 2013) have proposed 2D affine motion models to approximate the camera motion between the adjacent frames. (Jiang et al., 2012) have proposed a method where the motion patterns of dense trajectories are clustered to characterize foreground-foreground or foreground-background relationships. In iDT (Wang et al., 2016; Mi, 2020), dense trajectory locations between two adjacent frames are transformed to a fixed view for camera motion compensation. 
Inspired by prior works, our method aligns the human centered views that are transformed from UAV videos to learn from key features corresponding to the parts of the human body that contribute most to the actions. ## 3 Video Recognition using Mutual Information We present a mutual information-based method for action recognition on UAV videos. Our approach is mainly designed for moving cameras and dynamic backgrounds. Our method takes the characteristics of the UAV videos into consideration and uses mutual information as the criterion to compute and align the regions that existing salient motions in the video. We also use joint mutual information to sample the frame sequences that convey most information about human action. Table. 1 highlights the notation and symbols used in this section. ### Mutual Information Mutual information is a concept in information theory that essentially measures the amount of information given by one variable when observing another variable. It can also be interpreted as the reduction of the uncertainty of one variable given the other. Mutual information is highly correlated with entropy and joint entropy. The mutual information between Figure 2: Given a starting frame \(F_{t}\) in a UAV video, we use a localization network to localize the human action, and crop the region containing the human motion as the reference image \(F_{r}\). At time \(t+1\), we use our feature alignment algorithm to estimate the optimal operation parameter \(\omega_{t+1}^{*}\) and find a region in \(L_{\omega_{t+1}^{*}}(F_{t+1})\subset F_{t+1}\) that the mutual information between \(L_{\omega_{t+1}^{*}}(F_{t+1})\) and the reference image \(F_{r}\) is maximized. Next, we use \(L_{\omega_{t+1}^{*}}(F_{t+1})\) as the new reference image to find the optimal parameter \(\omega_{t+2}^{*}\) at time \(t+2\) and repeat for subsequent frames. Then, we use the criterion illustrated in Section. 3.3 Eq. 17 to find a sequence of the most distinctive and informative frames. We use a temporal inference backbone network (e.g., X3D(Feichtenhofer, 2020)) to generate the predicted action label from the spatial-temporal features associated to the sampled frame sequence. image pairs \(X\) and \(Y\) can be equivalently expressed as: \[I(X;Y)=H(X)+H(Y)-H(X,Y), \tag{1}\] where \(H(X)\) and \(H(Y)\) correspond to the entropy of \(X\) and \(Y\), respectively. The entropy quantifies the complexity of all possible outcomes of \(X\) or \(Y\). Given \(p_{X}(x)\), \(x\in\mathcal{X}\) the probability mass function (PMF) of \(X\), the entropy of \(X\), \(H(X)\) can be calculated as: \[H(X)=-\sum_{x\in\mathcal{X}}p_{X}(x)\log p_{X}(x). \tag{2}\] \(H(X,Y)\) is the joint entropy that examines the overall randomness given both \(X\) and \(Y\): \[H(X,Y)=-\sum_{x\in\mathcal{X},y\in\mathcal{Y}}p_{XY}(x,y)\log p_{XY}(x,y), \tag{3}\] where \(p_{XY}(x,y),x\in\mathcal{X},y\in\mathcal{Y}\) is the joint probability distribution of intensities of pixels associated with \(X\) and \(Y\). The joint entropy \(H(X,Y)\) is minimized if and only if there is a one-to-one mapping function \(G\) such that \(p_{X}(x)=p_{Y}(G(x))=p_{XY}(x,G(x))\). It increases when the inherent statistical relationship between \(X\) and \(Y\) weakens. Therefore, as pixels in \(X\) become more distinctive from the counterparts in \(Y\), \(H(X,Y)\) gets larger and \(I(X;Y)\) gets smaller. 
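As a concrete illustration of Eqs. 1-3 (not part of the original method description), the mutual information between two equally sized grayscale patches can be estimated from a joint intensity histogram, which is the approximation used later in Sec. 3.2; the bin count and the 8-bit intensity range below are assumptions made for this sketch.

```python
import numpy as np

def mutual_information(patch_x, patch_y, bins=32):
    """Estimate I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint intensity histogram."""
    hist, _, _ = np.histogram2d(patch_x.ravel(), patch_y.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    p_xy = hist / hist.sum()          # joint PMF approximation
    p_x = p_xy.sum(axis=1)            # marginal PMF of X
    p_y = p_xy.sum(axis=0)            # marginal PMF of Y

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return entropy(p_x) + entropy(p_y) - entropy(p_xy)

# Identical patches give maximal MI; an unrelated noise patch gives MI close to 0.
x = np.random.randint(0, 256, (64, 64))
y = np.random.randint(0, 256, (64, 64))
print(mutual_information(x, x), mutual_information(x, y))
```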
Note that, if the image or region pairs \(X\) and \(Y\) are completely independent from each other, then: \[\begin{split}& H(X,Y)=H(X)+H(Y),\\ & I(X;Y)=0.\end{split} \tag{4}\] In our case, we use mutual information to obtain and align the region pairs in the temporal domain of a video. Therefore, \(X\) and \(Y\) are always correlated and \(I(X;Y)\neq 0\). Moreover, as we calculate mutual information using probability distribution of discrete pixels, we use sums instead of integrals in Eq. 2 and 3. We use Eq. 2 and 3 to express the mutual information on Eq. 1 using probability distributions. Therefore: \[I(X;Y)=\sum_{x\in\mathcal{X},y\in\mathcal{Y}}p_{XY}(x,y)\log\frac{p_{XY}(x,y) }{p_{X}(x)p_{Y}(y)}. \tag{5}\] From the equation above, we can see that the mutual information quantifies the dependence between two random variables by measuring the distance between the real joint distribution \(p_{XY}(x,y)\) and the distribution under assumption of complete independence of \(p_{X}(x)p_{Y}(y)\). Intuitively, as Viola (Viola & Wells, 1995) observes, maximizing the mutual information between two images or regions tends to find the most complex overlapping areas (by maximizing the individual entropy) such that at the same time they explain each other well (by minimizing the joint entropy). The joint mutual information is an extension of mutual information. It measures the statistical relationship between a single variable and a set of other variables. Given one image \(Y\) and a set of images \(X_{1},X_{2}\), the joint mutual information is expressed as: \[I(X_{1},X_{2};Y)=I(X_{1};Y)+I(X_{2};Y|X_{1}). \tag{6}\] where \(I(X_{2};Y|X_{1})\) is the conditional mutual information that measures the dependence between \(X_{2}\) and \(Y\) when observing \(X_{1}\). ### Temporal Feature Alignment In this section, we describe our approach that uses mutual information (illustrated in Section. 3.1) to obtain and align the features that correspond to salient motions in the temporal domain. In UAV videos, human actors appear significantly small in aerial data, and most pixels in the frame belong to the background. Therefore, we have redundant information about the background in the video that may decrease the performance of our learning model. Moreover, the position of the human actor may change considerably between adjacent frames, which makes the recognition model infer more from the pixels corresponding to redundant background information than the human body movements. Thus, our objective is to find the region that contains dominant information about the action for each frame in the video and the pixels related to the human actors are well matched. Let's assume that all the images have the same 2D image coordinate with the origin positioned in the top left corner, with the \(x\) axis along the rows and \(y\) axis along the columns. Given a video \(V\), which corresponds to a sequence of raw frames at different times, \(V=\{\cup F_{t},t\in N\}\). We generate the reference image \(F_{r}\) that is transformed from a region in the raw frame \(F_{t}\). The reference image \(F_{r}\) is a human centred image that mainly contains salient actions of the human actor. To compute \(F_{r}\), suppose \(\Omega_{t}\) contains all feasible operation parameters \(\omega_{t}\), such that for \(\omega_{t}\in\Omega_{t}\), we can generate a region from \(F_{t}\) using an operation \(L_{\omega_{t}}\). 
We can consider \(L_{\omega_{t}}\) as a transformation from 2D raw frame coordinates of \(F_{t}\) to \begin{table} \begin{tabular}{c c} \hline \hline Notation & Term \\ \hline I() & Mutual information, joint mutual information \\ H(0) & Entropy, joint entropy \\ p & Probability mass function \\ h & Joint histogram \\ F & Frame sequence in the video \\ L & Operations to get aligned region \\ R & Rotation matrix \\ D & Translation matrix \\ S & Scaling operation \\ \(\omega_{t}\) & Operations parameters \\ M & Mapping function from frames to features \\ C & Candidate pool for frame sampling \\ \hline \hline \end{tabular} \end{table} Table 1: Notation and symbols used in the paper. the 2D reference frame coordinates corresponding to \(F_{r}\), followed by scaling to the same size of \(F_{r}\). Thus, \(L_{\omega_{t}}\) consists of rotation operation \(R(\theta_{t})\), translation operation \(D(d_{t})\) and scaling operation \(S(s_{t})\), where \(\omega_{t}=(\theta_{t},d_{t},s_{t})\in\Omega_{t}\): \[L_{\omega_{t}}=R(\theta_{t})\cdot D(d_{t})\cdot S(s_{t}) \tag{7}\] Our objective is to find \(\omega_{t}^{*}\in\Omega_{t}\) for every \(t\) such that: \[\omega_{t}^{*}=\arg\max_{\omega_{t}\in\Omega_{t}}I(L_{\omega_{t}}(F_{t});F_{r }), \tag{8}\] where \[I(L_{\omega_{t}}(F_{t});F_{r})=H(L_{\omega_{t}}(F_{t}))+H(F_{r})-H(L_{\omega_{ t}}(F_{t}),F_{r}). \tag{9}\] We use this equation to compute the optimal parameter \(\omega_{t}^{*}\), so as to compute the target region in \(F_{t}\) that is aligned with \(F_{r}\). We need to calculate the mutual information between two images \(L_{\omega_{t}}(F_{t})\) and \(F_{r}\). There is no exact mathematical model known to precisely calculate the actual probability distributions related to each image. In general, marginal and joint histograms are used (Viola & Wells, 1995) to approximate the respective distributions. Let \(v_{\omega_{t}}(p)\) denote the value of the pixel at position \(p\) in \(L_{\omega_{t}}(F_{t})\) and \(z_{\omega_{t}}(p)\) the intensity of the corresponding pixel in \(F_{r}\). The joint histogram \(h_{\omega_{t}}(v,z)\) can be computed by binning the values of the pixel pairs \((v_{\omega_{t}}(p),z_{\omega_{t}}(p))\) for all possible \(p\). We conduct ablation experiments on the impact of bin numbers that are used to generate histogram in Section. B. Then, the marginal probability distribution \(p_{V\omega_{t}}(v)\),\(p_{Z\omega_{t}}(z)\) and joint probability distribution \(p_{VZ\omega_{t}}(v,z)\) of \(v\) and \(z\) can be obtained by normalizing the joint histogram \(h_{\omega_{t}}(v,z)\): \[p_{VZ\omega_{t}}(v,z) =\frac{h_{\omega_{t}}(v,z)}{\sum_{v,z}h_{\omega_{t}}(v,z)}, \tag{10}\] \[p_{V\omega_{t}}(v) =\sum_{z}p_{VZ\omega_{t}}(v,z),\] \[p_{Z\omega_{t}}(z) =\sum_{v}p_{VZ\omega_{t}}(v,z).\] The mutual information can be calculated as: \[I(L_{\omega_{t}}(F_{t});F_{r})=\sum_{v,z}p_{VZ\omega_{t}}(v,z)\log\frac{p_{VZ \omega_{t}}(v,z)}{p_{V\omega_{t}}(v)p_{Z\omega_{t}}(z)} \tag{11}\] Mutual information is computed using histograms of low-level pixel values on both target and reference patches, which is similar to the mean shift tracking. However, our method uses histograms to approximate the joint probability distribution and measure the inherent statistical dependence between target and reference patch. Also, it can be applied at the feature level. We use a feature extractor to get the features for both \(F_{t}\) and \(F_{r}\). 
Suppose the mapping function between the RGB images to the features is \(M\), the features extracted from \(F_{t}\) and \(F_{r}\) are \(M(F_{t})\) and \(M(F_{r})\). Our objective reduces to finding a subset \(M_{s}(F_{t})\subset M(t)\) such that \[M_{s}(F_{t})^{*}=\arg\max_{M_{s}(F_{t})\subset M(F_{t})}I(M_{s}(F_{t});M(F_{r})), \tag{12}\] where \[I(M_{s}(F_{t});M(F_{r}))= H(M_{s}(F_{t}))+H(M(F_{t})) \tag{13}\] \[-H(M_{s}(F_{t}),M(F_{t})).\] ### Mutual Information Sampling Because of high camera altitude, many parts of the human body are not visible. Some parts of the human body that result in the action may be occluded by some other parts that do not contribute to the action. Also, there are lots of "duplicated" frames because of the high frame rate, which essentially contains the redundant information. Therefore, not all the video frames are useful for the training and using some of them may even decrease the overall accuracy. To solve this issue, we present a novel frame sampling method using combination of mutual information and joint mutual information to find the frame sequences that contain more information about action changes in the UAV videos. The main idea behind our method is to find out more informative frame sequences in the video given a start frame. Consider a video as a sequence of frames across time. Suppose we have already sampled \(i\) frames and our goal is to find the \(i+1\)th frame \(F_{i+1}\) in the candidate pool \(C_{i+1}\) where \(C_{i+1}\) consists of all the possible frames that we could choose for \(F_{i+1}\). Let \(F_{s}=\{\cup F_{0},F_{1},F_{2}\cdots F_{i}\}\) denote the set that contains all the sampled frames. Our approach is to choose \(F_{i+1}\) that is the most distinctive as compared with \(F_{i}\) as well as the set of all previously sampled frames, so that it provides more unseen features for the recognition model training: \[F_{i+1}=\arg\min_{F_{i+1}\in C_{i+1}}\alpha I(F_{i};F_{i+1})+\beta I(F_{s};F_{i +1}). \tag{14}\] The first term is used to minimize the mutual information between the current frame and the previous frame. It tends to sample adjacent frames that are least similar, so that the newly sampled frame will contain more information for training. The second term is used to minimize the joint mutual information with all the sampled frames, which could decrease the information redundancy over the whole sampling sequence. We can decompose it using the chain rule of joint mutual information: \[I(F_{s};F_{i+1}) =I(F_{0},F_{1},F_{2}\cdots F_{i};F_{i+1}) \tag{15}\] \[=\sum_{j=0}^{i}I(F_{j};F_{i+1}|F_{j-1}F_{j-1}\cdots F_{0})\] In practice, the conditional mutual information is hard to compute as the conditional probability distribution is hard to calculate. However, to make the problem more tractable, we use the low-dimensional approximation to estimate the joint mutual information between \(F_{i+1}\) and \(F_{s}\)(Gao et al., 2017; Brown et al., 2012). \[I(F_{s};F_{i+1})\approx\frac{1}{i+1}\sum_{j=0}^{i}I(F_{j};F_{i+1}) \tag{16}\] So the overall expression becomes: \[F_{i+1}=\arg\min_{F_{i+1}\in C_{i+1}}\alpha I(F_{i};F_{i+1})+\frac{\beta}{i+1} \sum_{j=0}^{i}I(F_{j};F_{i+1}) \tag{17}\] Here, we add weights \(\alpha\),\(\beta\) to the two terms in Eq. 17 to adjust to different scenarios. We analyze the behavior of \(\alpha\) and \(\beta\) in the Appendix B.5. ### MITFAS: Aerial Video Recognition In this section, we present our overall method for aerial video recognition (see Fig. 2). 
We use temporal feature alignment and frame sampling and combine them with a temporal inference backbone network (e.g, X3D(Feichtenhofer, 2020)) to disentangle the human actor from superfluous backgrounds and learn from key features associated with the human motions. In our benchmarks, most of the videos available are captured on a UAV camera with anti-shake technology which could stabilize the camera and reduce the camera vibration, we assume no rotation is needed, i.e., \(R(\theta_{t})=Identity\). For general videos, \(R(\theta_{t})\) is the rotation matrix represented and computed as a 2D transformation: \[R(\theta_{t})=\begin{bmatrix}\cos\theta_{t}&-\sin\theta_{t}\\ \sin\theta_{t}&\cos\theta_{t}\end{bmatrix} \tag{18}\] We localize the human actor at the start frame and enlarge the region by about 10% of its height to obtain the reference \(F_{r}\)(Hasan et al., 2021). We conduct ablation studies on the size of \(F_{r}\) in Section. B. Considering the human actor may perform actions that have large vertical changes like stretching arms, we add 15% height as the margin on the top of \(F_{r}\) to ensure all the information about the action are included and crop the region as our final reference image. Therefore, we enlarge the region by 25% vertically and 10% horizontally to get \(F_{r}\). We use the sliding window strategy with scalable window sizes to find the aligned regions or features in all the frames. To make the process more efficient, we do not apply sliding window search over the entire frame. Instead, once we compute \(\omega_{t}^{*}\) at time \(t\), we use the same operation at \(t+1\) to obtain the region \(L_{\omega_{t}^{*}}(F_{t+1})\). We expand \(L_{\omega_{t}^{*}}(F_{t+1})\) by 25% as the searching area at \(t+1\). In this way, we could significantly decrease the overall mutual information computations by only searching in the searching area which is a subset of \(F_{t+1}\). In order to improve the reliability, we occasionally re-perform localization to update the searching area. More ablation studies on the impact of searching area size is given in the supplementary. Once all the \(\omega_{t}^{*}\) are found for all time \(t\), well-aligned frames are obtained by the transformation. We use our frame sampling method illustrated in Section. 3.3 to generate a sequence of 8 or 16 frames for model training. We will randomly pick a start frame as \(F_{0}\), denoting the index of \(F_{0}\) in the sequence as \(k_{0}\). To maintain the randomness in our sampling strategy, we set a randomly generated stride \(r_{1}\) when sampling \(F_{1}\). We compute our candidate pool \(C_{1}\) by a set of all the frames that have index greater than \(k_{0}\), but not exceed \(k_{0}+r_{1}\). Next, we find the most informative frame in the candidate pool using Eq. 17 and use it as \(F_{1}\). We follow the same strategy to sample all the subsequent frames. After obtaining all the sampled frames, we use a temporal inference backbone network to extract and learn from spatial-temporal features from the human actions. We employ X3D(Feichtenhofer, 2020) as the backbone in our method for its efficiency and performance on video tasks. However, our method could be combined with any action recognition models for better behavior understandings on UAV videos. ## 4 Results In this section, we describe our implementation and present the results. We compare the performance with other state-of-the-art video action recognition methods on 3 UAV datasets. 
The implementation and training details are ### Results on UAV Human UAV Human is currently the largest UAV-based human behavior understanding dataset. It contains scenarios captured from both indoor and outdoor environment with different lighting and weather conditions. The videos are captured in dynamic backgrounds with different UAV motions and flying altitudes. It has 155 annotated actions, many of them are hard to distinguish such as squeeze and yawn. We compare our method against prior state-of-the-art methods on UAV Human. As shown in Table 2, we implement our method and compare the performance with other state-of-the-art methods in various configurations in terms of backbone network, frame rates, frame input sizes and weights initialization. We use X3D-M as the backbone of our method with two different initialization settings. One of them is training from scratch and the other is initialized with Kinetics pretrained weights. First, when using the same configuration (frames, input size, initialization), our method outperforms all the prior methods by a large margin. When training from scratch, we achieve a 12.6% improvement over current state-of-the-art methods. We get 18.9% improvement when using Kinetics pretrained weights. This indicates the effectiveness of our method, which reduces the information redundancy and makes the model learn more from the motion changes rather than background variations. ### Results on NEC Drone NEC Drone is an indoor datasets contains 5,250 videos with 16 actions performed by 19 actors. The videos are captured using a UAV flying at low altitude on a basketball court. Compare to UAV Human, NEC Drone has more consistent lighting conditions while bringing more noises caused by light reflections. We present the results on NEC Drone in Table 4. We obtain a Top-1 accuracy of 78.6%. We compare our method against the baseline X3D-M and shows an improvement of 12.5%. Our approach outperforms the current SOTA FAR on NEC Drone by 7.2%. Note that, the improvement we achieved is obtained with \(1/2\) input frame size, which further demonstrate the advantage of our method. \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & Frames & Input Size & Init. & Top-1 \\ \hline X3D-M & \(8\) & \(960\times 540\) & Kinetics & \(66.1\) \\ FAR & \(8\) & \(960\times 540\) & Kinetics & \(71.4\) \\ **Ours** & \(8\) & \(540\times 540\) & Kinetics & **78.6** \\ \hline \hline \end{tabular} \end{table} Table 4: Results on NEC Drones. Our method shows an improvement of 12.5% on top-1 accuracy against the baseline X3D-M(Feichtenhofer, 2020), 7.2% over current state-of-the-art FAR (Kothandaraman et al., 2022). \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & Backbone & Frames Number & Input Size & Initialization & Top-1 Acc. 
(\%) \(\uparrow\) \\ \hline X3D-M (Feichtenhofer, 2020) & - & \(16\) & \(224\times 224\) & None & \(27.0\) \\ X3D-L (Feichtenhofer, 2020) & - & \(16\) & \(224\times 224\) & None & \(27.6\) \\ FAR (Kothandaraman et al., 2022) & X3D-M & \(16\) & \(224\times 224\) & None & \(27.6\) \\ **Ours (MITFAS)** & X3D-M & \(16\) & \(224\times 224\) & None & **40.2** \\ \hline FAR (Kothandaraman et al., 2022) & X3D-M & \(8\) & \(540\times 540\) & None & \(28.8\) \\ **Ours (MITFAS)** & X3D-M & \(8\) & \(540\times 540\) & None & **38.4** \\ \hline I3D (Carreira \& Zisserman, 2017) & ResNet-101 & \(8\) & \(540\times 960\) & Kinetics & \(21.1\) \\ FNet (Lee-Thorp et al., 2021) & I3D & \(8\) & \(540\times 960\) & Kinetics & \(24.3\) \\ FAR (Kothandaraman et al., 2022) & I3D & \(8\) & \(540\times 960\) & Kinetics & \(29.2\) \\ FAR (Kothandaraman et al., 2022) & X3D-M & \(8\) & \(620\times 620\) & Kinetics & \(39.1\) \\ **Ours (MITFAS)** & X3D-M & \(8\) & \(620\times 620\) & Kinetics & **46.6** \\ \hline X3D-M (Feichtenhofer, 2020) & - & \(16\) & \(224\times 224\) & Kinetics & \(30.6\) \\ MViT (Fan et al., 2021) & - & \(16\) & \(224\times 224\) & Kinetics & \(24.3\) \\ FAR (Kothandaraman et al., 2022) & X3D-M & \(16\) & \(224\times 224\) & Kinetics & \(31.9\) \\ **Ours (MITFAS)** & X3D-M & \(16\) & \(224\times 224\) & Kinetics & **50.8** \\ \hline \hline \end{tabular} \end{table} Table 2: Benchmarking UAV Human and comparisons with prior arts.. For \(224\times 224\) resolution and 16 frames input, when training from scratch, our approach achieves a \(13.2\%\) improvement over the baseline X3D-M and \(12.6\%\) over the current state-of-the-art FAR. For \(520\times 520\) resolution and 8 frames input, MITFAS overperforms the current state-of-the-art FAR by \(9.6\%\) when training from scratch. For \(224\times 224\) resolution and 16 frames input, when initializing with Kinetics pre-trained weights, MITFAS improves the top-1 accuracy over baseline by \(20.2\%\) and over SOTA method by \(18.9\%\). For resolution over \(620\times 620\) and 8 frames input, when initializing with Kinetics pretrained weights, MITFAS overperforms the current state-of-the-art FAR by \(7.5\%\). Our method obtains better performance in all settings, which illustrates the effectiveness of our proposed MITFAS. \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & Frames & Input Size & Init. & Top-1 \\ \hline X3D-M & \(8\) & \(960\times 540\) & Kinetics & \(66.1\) \\ FAR & \(8\) & \(960\times 540\) & Kinetics & \(71.4\) \\ **Ours** & \(8\) & \(540\times 540\) & Kinetics & **78.6** \\ \hline \hline \end{tabular} \end{table} Table 3: Results on Drone Action. Our method achieves 100% top-1 accuracy, 16.6% over the baseline method X3D-M(Feichtenhofer, 2020), outperforming current state-of-the-art method FAR(Kothandaraman et al., 2022) by 7.3% under same configuration. (HLPF (Jhuang et al., 2013), PCNN (Cheron et al., 2015)) ### Results on Drone Action Drone Action is an outdoor video dataset that was captured using a free-flying UAV in low altitude and low speed. It contains 240 video across 13 human actions performed by 10 human actors. Drone Action is the smallest dataset we used, but it is collected using a free-flying UAV that results in continuous position changes of the human actor. As shown in Table 3, we achieve 100% Top-1 accuracy which outperform current SOTA by 7.3% under same configuration, which further illustrate the benefits of our proposed MITFAS. 
### Ablation Experiments In this subsection, we mainly show the results of ablation experiments to demonstrate the effectiveness of the two components of our approach: Temporal Feature Alignment(TFA) and Mutual Information Sampling(MIS). More ablation studies are given in Appendix. We randomly pick 30% videos for each action label int UAV-Human and conduct the ablation experiments on this UAV-Human subset. We use X3D-M(Feichtenhofer, 2020) as the temporal inference backbone network. All results are generated by using a sequence of 16 frames with resolution \(224\times 224\). **Effectiveness of Temporal Feature Alignment** For Temporal Feature Alignment (TFA), our objective is to solve the small resolution corresponding to the human actor and viewpoint changes in the UAV videos. Our TFA finds and aligns the region that contains dominant information about the action for each frame in the video. As shown in Table. 5, our TFA improves the top-1 accuracy by 16 - 17.5% when it is integrated with X3D and different sampling methods. We also compare our TFA with bounding box tracking method (Demir et al., 2021) which applies the person detector for foreground patch detection on all the temporal frames and then extracts the foreground patch based on the bounding boxes. The results are generated using X3D and uniform sampling with same configurations. As shown in Table. 6, our method improves the top-1 accuracy over bounding box tracking method by 3.4% on UAV-Human and 4.1% on Drone Action. Such improvement is attributed to our proposed TFA can not only extract the foreground patches, but also align all the patches so that the main body of the human actor are well matched in the temporal domain. Therefore, the model could focus on the pixels corresponding to the parts of the human body that contribute most to the actions during training. **Effectiveness of Mutual Information Sampling** For Mutual Information Sampling (MIS), our goal is to sample the informative frames which better represent the video for the action recognition methods. We compare it with three other sampling methods. First, we compare with two baseline methods: (1) Random sampling (Fischler and Bolles, 1981) where frames are randomly picked (2) Uniform sampling (Krizhevsky et al., 2017) where frames are sampled uniformly given a randomly generated start and end point. Then, we compare with the current state-of-the-art MG Sampler (Zhi et al., 2021) which uses adaptive sampling strategy based on temporal consistency between adjacent frames. As shown in Table. 5, compared with other sampling methods, MIS results in 0.6 - 6.4% improvement in Top-1 accuracy for UAV videos, which demonstrates the effectiveness of our proposed method. ## 5 Conclusion, Limitations and Future Work We propose a novel approach for video action recognition on UAVs. Our approach is designed to handle the varying and small resolution of the human, large changes in the positions of the human actor between frames and partially occluded key points of the actions caused by continuously movement of the UAVs. We present a mutual information based feature alignment to obtain and align the action features in the temporal domain. Our method is efficient and works well on UAV videos. We also present a novel frame sampling method to find the most informative frames in the video. We compare with prior approaches and demonstrate improvements in Top-1 accuracy on 3 UAV datasets. Our approach has a few limitations. 
First, we assume that there is no long-range spatial relationship between the human actor and the background. Second, we assume the input videos contain only one scripted human agent performing some action. We would like to explore the possibility of extending our method to multi-human or multi-action videos.

\begin{table}
\begin{tabular}{c c c}
\hline \hline
Method & UAV-Human & Drone Action \\
\hline
Bounding box tracking & \(47.4\) & \(95.9\) \\
**TFA(ours)** & **50.8** & **100.0** \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Comparison with a bounding box tracking method (Demir et al., 2021).

\begin{table}
\begin{tabular}{c c|c c}
\hline \hline
Sampling Method & Top-1 & Sampling Method & Top-1 \\
\hline
Random & \(23.8\) & TFA + Random & \(39.8\) \\
Uniform & \(25.8\) & TFA + Uniform & \(42.2\) \\
MG Sampler & \(28.1\) & TFA + MG Sampler & \(45.5\) \\
MIS & **28.7** & TFA + MIS & **46.2** \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Temporal Feature Alignment (TFA) and Mutual Information Sampling (MIS) ablation studies on UAV-Human-Subset. The baseline is vanilla X3D with random (Fischler and Bolles, 1981) and uniform sampling (Krizhevsky et al., 2017), and we add our methods TFA and MIS step by step. From our experiments, TFA boosts the accuracy by 16-17.5%. MIS outperforms random sampling, uniform sampling, and the MG Sampler (Zhi et al., 2021).
2303.12766
Spherical Transformer for LiDAR-based 3D Recognition
LiDAR-based 3D point cloud recognition has benefited various applications. Without specially considering the LiDAR point distribution, most current methods suffer from information disconnection and limited receptive field, especially for the sparse distant points. In this work, we study the varying-sparsity distribution of LiDAR points and present SphereFormer to directly aggregate information from dense close points to the sparse distant ones. We design radial window self-attention that partitions the space into multiple non-overlapping narrow and long windows. It overcomes the disconnection issue and enlarges the receptive field smoothly and dramatically, which significantly boosts the performance of sparse distant points. Moreover, to fit the narrow and long windows, we propose exponential splitting to yield fine-grained position encoding and dynamic feature selection to increase model representation ability. Notably, our method ranks 1st on both nuScenes and SemanticKITTI semantic segmentation benchmarks with 81.9% and 74.8% mIoU, respectively. Also, we achieve the 3rd place on nuScenes object detection benchmark with 72.8% NDS and 68.5% mAP. Code is available at https://github.com/dvlab-research/SphereFormer.git.
Xin Lai, Yukang Chen, Fanbin Lu, Jianhui Liu, Jiaya Jia
2023-03-22T17:30:14Z
http://arxiv.org/abs/2303.12766v1
# Spherical Transformer for LiDAR-based 3D Recognition ###### Abstract LiDAR-based 3D point cloud recognition has benefited various applications. Without specially considering the LiDAR point distribution, most current methods suffer from information disconnection and limited receptive field, especially for the sparse distant points. In this work, we study the varying-sparsity distribution of LiDAR points and present **SphereFormer** to directly aggregate information from dense close points to the sparse distant ones. We design radial window self-attention that partitions the space into multiple non-overlapping narrow and long windows. It overcomes the disconnection issue and enlarges the receptive field smoothly and dramatically, which significantly boosts the performance of sparse distant points. Moreover, to fit the narrow and long windows, we propose exponential splitting to yield fine-grained position encoding and dynamic feature selection to increase model representation ability. Notably, our method ranks 1\({}^{\text{st}}\) on both nuScenes and SemanticKITTI semantic segmentation benchmarks with \(81.9\%\) and \(74.8\%\) mIoU, respectively. Also, we achieve the 3\({}^{\text{nd}}\) place on nuScenes object detection benchmark with \(72.8\%\) NDS and \(68.5\%\) mAP. Code is available at [https://github.com/dvlab-research/SphereFormer.git](https://github.com/dvlab-research/SphereFormer.git). ## 1 Introduction Nowadays, point clouds can be easily collected by LiDAR sensors. They are extensively used in various industrial applications, such as autonomous driving and robotics. In contrast to 2D images where pixels are arranged densely and regularly, LiDAR point clouds possess the varying-sparsity property -- points near the LiDAR are quite dense, while points far away from the sensor are much sparser, as shown in Fig. 2 (a). However, most existing work [12, 13, 24, 25, 55, 69, 70, 71] does not specially consider the the varying-sparsity point distribution of outdoor LiDAR point clouds. They inherit from 2D CNNs or 3D indoor scenarios, and conduct local operators (_e.g._, SparseConv [24, 25]) uniformly for all locations. This causes inferior results for the sparse distant points. As shown in Fig. 1, although decent performance is yielded for the dense close points, it is difficult for these methods to deal with the _sparse distant points_ optimally. We note that the root cause lies in limited receptive field. For sparse distant points, there are few surrounding neighbors. This not only results in inconclusive features, but also hinders enlarging receptive field due to information disconnection. To verify this finding, we visualize the Effective Receptive Field (ERF) [40] of the given feature (shown with the yellow star) in Fig. 2 (d). The ERF cannot be expanded due to disconnection, which is caused by the extreme sparsity of the distant _car_. Although window self-attention [22, 30], dilated self-attention [42], and large-kernel CNN [10] have been proposed to conquer the limited receptive field, these methods do not specially deal with LiDAR point distribution, and remain to enlarge receptive field by stacking local operators as before, leaving the information disconnection issue still unsolved. As shown in Fig. 1, the method of cubic self-attention brings a limited improvement. In this paper, we take a new direction to _aggregate long-range information directly in a single operator_ to suit the varying-sparsity point distribution. 
We propose the module of _SphereFormer_ to perceive useful information from points 50+ meters away and yield a large receptive field for feature extraction. Specifically, we represent the 3D space using spherical coordinates \((r,\theta,\phi)\) with the sensor being the origin, and partition the scene into multiple non-overlapping windows. Unlike the cubic window shape, we design radial windows that are long and narrow. They are obtained by partitioning only along the \(\theta\) and \(\phi\) axis, as shown in Fig. 2 (b). It is noteworthy that we make it a plugin module to conveniently insert into existing mainstream backbones. The proposed module does not rely on stacking local operators to expand the receptive field, thus avoiding the disconnection issue, as shown in Fig. 2 (e). Also, it enables the sparse distant points to aggregate information from the dense-point region, which is often semantically rich. So, the performance of the distant points can be improved significantly (_i.e_., +17.1% mIoU) as illustrated in Fig. 1. Figure 1: Semantic segmentation performance on nuScenes _val_ set for points at different distances. Moreover, to fit the long and narrow radial windows, we propose _exponential splitting_ to obtain fine-grained relative position encoding. The radius \(r\) of a radial window can be over 50 meters, which causes large splitting intervals. It thus results in coarse position encoding when converting relative positions into integer indices. Besides, to let points at varying locations treat local and global information differently, we propose _dynamic feature selection_ to make further improvements. In total, our contribution is three-fold. * We propose SphereFormer to directly aggregate long-range information from the dense-point region. It increases the receptive field smoothly and helps improve the performance of _sparse distant points_. * To accommodate the radial windows, we develop exponential splitting for relative position encoding. Our dynamic feature selection further boosts performance. * Our method achieves new state-of-the-art results on multiple benchmarks of both semantic segmentation and object detection tasks. ## 2 Related Work ### LiDAR-based 3D Recognition Semantic Segmentation.Segmentation [6, 14, 15, 31, 32, 34, 49, 59, 60, 61, 82] is a fundamental task for vision perception. Approaches for LiDAR-based semantic segmentation can be roughly grouped into three categories, _i.e_., view-based, point-based, and voxel-based methods. View-based methods either transform the LiDAR point cloud into a range view [67, 68, 43, 3, 46], or use a bird-eye view (BEV) [79] for a 2D network to perform feature extraction. 3D geometric information is simplified. Point-based methods [58, 72, 56, 44, 45, 28, 30] adopt the point features and positions as inputs, and design abundant operators to aggregate information from neighbors. Moreover, the voxel-based solutions [25, 13, 24] divide the 3D space into regular voxels and then apply sparse convolutions. Further, methods of [12, 17, 29, 37, 55, 70, 88] propose various structures for improved effectiveness. All of them focus on capturing local information. We follow this line of research, and propose to directly aggregate long-range information. Recently, RPVNet [69] combines the three modalities by feature fusion. Furthermore, 2DPASS [71] incorporates 2D images during training, and [48] fuses multi-modal features. Despite extra 2D information, the performance of these methods still lags behind ours. Figure 2: Effective Receptive Field (ERF) of SparseConv and ours. (a) LiDAR point cloud. (b) Radial window partition. Only a single radial window is shown. Points inside the window are marked in red. (c) Zoom-in sparse distant points. A sparse _car_ is circled in yellow. (d) ERF of SparseConv, given the point of interest (with yellow star). White and red denote high contribution. (e) ERF of ours. Object Detection.3D object detection frameworks can be roughly categorized into single-stage [11, 26, 36, 75, 83, 84] and two-stage [19, 41, 50, 51] methods. VoxelNet [85] extracts voxel features by PointNet [44] and applies RPN [47] to obtain the proposals. SECOND [73] is efficient thanks to the accelerated sparse convolutions. VoTr [42] applies cubic window attention to voxels. LiDARMultiNet [77] unifies semantic segmentation, panoptic segmentation, and object detection into a single multi-task network with multiple types of supervision. Our experiments are based on CenterPoint [78], which is a widely used anchor-free framework. It is effective and efficient. We aim to enhance the features of sparse distant points, and our proposed module can be conveniently inserted into existing frameworks. ### Vision Transformer Recently, Transformers [64] have become popular in various 2D image understanding tasks [5, 16, 20, 21, 38, 42, 54, 62, 63, 65, 66, 74, 80, 87]. ViT [21] tokenizes every image patch and adopts a Transformer encoder to extract features. Further, PVT [66] presents a hierarchical structure to obtain a feature pyramid for dense prediction. It also proposes Spatial Reduction Attention to save memory. Also, Swin Transformer [38] uses window-based attention and proposes the shifted window operation in the successive Transformer block. Moreover, methods of [16, 20, 74] propose different designs to incorporate long-range dependencies. There are also methods [22, 30, 42, 53, 81] that apply Transformers to 3D vision. Few of them consider the point distribution of LiDAR point clouds. In our work, we utilize the varying-sparsity property, and design radial window self-attention to capture long-range information, especially for the sparse distant points. ## 3 Our Method In this section, we first elaborate on radial window partition in Sec. 3.1. Then, we propose the improved position encoding and dynamic feature selection in Sec. 3.2 and 3.3. ### Spherical Transformer To model the long-range dependency, we adopt the window-attention [38] paradigm. However, unlike the cubic window attention [22, 30, 42], we take advantage of the varying-sparsity property of LiDAR point cloud and present the SphereFormer module, as shown in Fig. 3. Radial Window Partition.Specifically, we represent LiDAR point clouds using the spherical coordinate system \((r,\theta,\phi)\) with the LiDAR sensor being the origin. We partition the 3D space along the \(\theta\) and \(\phi\) axis. We thus obtain a number of non-overlapping radial windows with a long and narrow 'pyramid' shape, as shown in Fig. 3. We obtain the window index for the token at (\(r_{i}\), \(\theta_{i}\), \(\phi_{i}\)) as \[win\_index_{i}=\left(\left\lfloor\frac{\theta_{i}}{\Delta\theta}\right\rfloor,\left\lfloor\frac{\phi_{i}}{\Delta\phi}\right\rfloor\right), \tag{1}\] where \(\Delta\theta\) and \(\Delta\phi\) denote the window size corresponding to the \(\theta\) and \(\phi\) dimensions, respectively. Tokens with the same window index would be assigned to the same window. The multi-head self-attention [64] is conducted within each window independently as follows. 
\[\hat{\mathbf{q}}=\mathbf{f}\cdot\mathbf{W}_{q},\ \ \ \hat{\mathbf{k}}=\mathbf{f}\cdot \mathbf{W}_{k},\ \ \ \hat{\mathbf{v}}=\mathbf{f}\cdot\mathbf{W}_{v}, \tag{2}\] where \(\mathbf{f}\in\mathbb{R}^{n\times c}\) denotes the input features of a window, \(\mathbf{W}_{q},\mathbf{W}_{k},\mathbf{W}_{v}\in\mathbb{R}^{c\times c}\) are the linear projection weights, and \(\hat{\mathbf{q}},\hat{\mathbf{k}},\hat{\mathbf{v}}\in\mathbb{R}^{n\times c}\) are the projected features. Then, we split the projected features \(\hat{\mathbf{q}},\hat{\mathbf{k}},\hat{\mathbf{v}}\) into \(h\) heads (_i.e_., \(\mathbb{R}^{n\times(h\times d)}\)), and reshape them as \(\mathbf{q},\mathbf{k},\mathbf{v}\in\mathbb{R}^{h\times n\times d}\). For each head, we perform dot product and weighted sum as \[\mathbf{attn}_{k} =\mathbf{softmax}(\mathbf{q}_{k}\cdot\mathbf{k}_{k}^{T}), \tag{3}\] \[\hat{\mathbf{z}}_{k} =\mathbf{attn}_{k}\cdot\mathbf{v}_{k}, \tag{4}\] where \(\mathbf{q}_{k},\mathbf{k}_{k},\mathbf{v}_{k}\in\mathbb{R}^{n\times d}\) denote the features of the \(k\)-th head, and \(\mathbf{attn}_{k}\in\mathbb{R}^{n\times n}\) is the corresponding attention weight. Finally, we concatenate the features from all heads and apply the final linear projection with weight \(\mathbf{W}_{proj}\in\mathbb{R}^{c\times c}\) to yield the output \(\mathbf{z}\in\mathbb{R}^{n\times c}\) as \[\hat{\mathbf{z}}=\mathbf{concat}(\{\hat{\mathbf{z}}_{0},\hat{\mathbf{z}}_{1},...,\hat{\mathbf{z}}_{h-1}\}). \tag{5}\] \[\mathbf{z}=\hat{\mathbf{z}}\cdot\mathbf{W}_{proj}. \tag{6}\] SphereFormer serves as a plugin module and can be conveniently inserted into existing mainstream models, _e.g_., Figure 3: Cubic vs. Radial window partition. The radial window can directly harvest information from the dense-point region, especially for the sparse distant points. SparseConvNet [24, 25], MinkowskiNet [13], local window self-attention [22, 30, 42]. In this paper, we find that inserting it into the end of each stage works well, and the network structure is given in the supplementary material. The resulting model can be applied to various downstream tasks, such as semantic segmentation and object detection, with strong performance as produced in experiments. SphereFormer is effective for the sparse distant points to get long-range information from the dense-point region. Therefore, the sparse distant points overcome the disconnection issue, and increase the effective receptive field. Comparison with Cylinder3D.Although both Cylinder3D [88] and ours use polar or spherical coordinates to match LiDAR point distribution, there are two essential differences yet. First, Cylinder3D aims at a more balanced point distribution, while our target is to enlarge the receptive field smoothly and enable the sparse distant points to directly aggregate long-range information from the dense-point region. Second, what Cylinder3D does is replace the cubic voxel shape with the fan-shaped one. It remains to use local neighbors as before and still suffers from limited receptive field for the sparse distant points. Nevertheless, our method changes the way we find neighbors in a single operator (_i.e_., self-attention) and it is not limited to local neighbors. It thus avoids information separation between near and far objects and connects them in a natural way. ### Position Encoding For the 3D point cloud network, the input features have already incorporated the absolute \(xyz\) position. Therefore, there is no need to apply absolute position encoding. 
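As a concrete reading of the radial window partition in Eq. (1), here is a minimal sketch (our own illustration, not the released SphereFormer code) that converts Cartesian points to spherical coordinates with the sensor at the origin and buckets token indices by window, i.e., the grouping within which the self-attention above would run. The 2° angular window size echoes the implementation details reported later; taking \(\theta\) as the polar angle and \(\phi\) as the azimuth is our assumption about the convention.

```python
import numpy as np
from collections import defaultdict

def radial_window_index(xyz, dtheta=np.deg2rad(2.0), dphi=np.deg2rad(2.0)):
    """Assign every point to a radial window following Eq. (1).

    The space is split only along the polar (theta) and azimuth (phi) axes,
    so each window is a long, narrow 'pyramid' reaching from the sensor outwards.
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))  # polar angle
    phi = np.arctan2(y, x)                                          # azimuth angle
    return np.stack([theta // dtheta, phi // dphi], axis=1).astype(int)

def group_by_window(xyz):
    """Bucket point indices per radial window; attention would be applied per bucket."""
    win = radial_window_index(xyz)
    buckets = defaultdict(list)
    for i, key in enumerate(map(tuple, win)):
        buckets[key].append(i)
    return buckets

# Each bucket mixes dense close points and sparse distant points that share the
# same viewing direction, which is what lets distant points attend to the
# semantically rich close region.
```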
Also, we notice that Stratified Transformer [30] develops the contextual relative position encoding. It splits a relative position into several discrete parts uniformly, which converts the continuous relative positions into integers to index the positional embedding tables. This method works well with local cubic windows. But in our case, the radial window is narrow and long, and its radius \(r\) can exceed 50 meters, which could cause large intervals during discretization and thus coarse-grained positional encoding. As shown in Fig. 4 (a), because of the large interval, \(key_{1}\) and \(key_{2}\) correspond to the same index. But there is still a considerable distance between them. Exponential Splitting.Specifically, since the \(r\) dimension covers long distances, we propose _exponential splitting_ for the \(r\) dimension as shown in Fig. 4 (b). The splitting interval grows exponentially when the index increases. In this way, the intervals near the \(query\) are much smaller, and \(key_{1}\) and \(key_{2}\) can be assigned to different position encodings. Meanwhile, we keep the _uniform splitting_ for the \(\theta\) and \(\phi\) dimensions. In notation, we have a query token \(q_{i}\) and a key token \(k_{j}\). Their relative position \((r_{ij},\theta_{ij},\phi_{ij})\) is converted into the integer index \((\mathbf{idx}_{ij}^{r},\mathbf{idx}_{ij}^{\theta},\mathbf{idx}_{ij}^{\phi})\) as \[\mathbf{idx}_{ij}^{r}=\left\{\begin{array}{ll}-\max(0,\lceil\log_{2}(\frac{-r_{ij}}{a})\rceil)-1&r_{ij}<0\\ 0&r_{ij}=0\\ \max(0,\lceil\log_{2}(\frac{r_{ij}}{a})\rceil)&r_{ij}>0\end{array}\right.,\] \[\mathbf{idx}_{ij}^{\theta}=\lfloor\frac{\theta_{ij}}{\mathbf{interval}_{\theta}}\rfloor,\quad\mathbf{idx}_{ij}^{\phi}=\lfloor\frac{\phi_{ij}}{\mathbf{interval}_{\phi}}\rfloor,\] \[\mathbf{idx}^{x}=\mathbf{idx}^{x}+\frac{L}{2},\ \ \ x\in\{r,\theta,\phi\},\] where \(a\) is a hyper-parameter to control the starting splitting interval, and \(L\) is the length of the positional embedding tables. Note that we also offset the indices by \(\frac{L}{2}\) to make sure they are non-negative. The above indices \((\mathbf{idx}_{ij}^{r},\mathbf{idx}_{ij}^{\theta},\mathbf{idx}_{ij}^{\phi})\) are then used to index their positional embedding tables \(\mathbf{t}_{r},\mathbf{t}_{\theta},\mathbf{t}_{\phi}\in\mathbb{R}^{L\times(h\times d)}\) to find the corresponding position encodings \(\mathbf{p}_{ij}^{r},\mathbf{p}_{ij}^{\theta},\mathbf{p}_{ij}^{\phi}\in\mathbb{R}^{h\times d}\), respectively. Then, we sum them up to yield the resultant positional encoding \(\mathbf{p}\in\mathbb{R}^{h\times d}\), which then performs a dot product with the features of \(q_{i}\) and \(k_{j}\), respectively. The original Eq. (3) is updated to \[\mathbf{p} =\mathbf{p}_{ij}^{r}+\mathbf{p}_{ij}^{\theta}+\mathbf{p}_{ij}^{\phi},\] \[\mathbf{pos\_bias}_{k,ij} =\mathbf{q}_{k,i}\cdot\mathbf{p}_{k}^{T}+\mathbf{k}_{k,j}\cdot\mathbf{p}_{k}^{T},\] \[\mathbf{attn}_{k} =\mathbf{softmax}(\mathbf{q}_{k}\cdot\mathbf{k}_{k}^{T}+\mathbf{pos\_bias}_{k}),\] where \(\mathbf{pos\_bias}\in\mathbb{R}^{h\times n\times n}\) is the positional bias to the attention weight, \(\mathbf{q}_{k,i}\in\mathbb{R}^{d}\) means the \(k\)-th head of the \(i\)-th query feature, and \(\mathbf{p}_{k}\in\mathbb{R}^{d}\) is the \(k\)-th head of the position encoding \(\mathbf{p}\). The _exponential splitting_ strategy provides smaller splitting intervals for near token pairs and larger intervals for distant ones. 
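The piecewise index above is easy to sanity-check in isolation. The short sketch below (our illustration, with a placeholder starting interval \(a\) and table length \(L\) rather than the paper's actual values) maps signed radial offsets to table indices exactly as written.

```python
import math

def exp_split_index(r_ij, a=0.0625, L=48):
    """Map a relative radial offset to a discrete index with exponential splitting.

    Intervals are small near the query and grow exponentially with distance,
    mirroring the idx^r definition above. `a` (starting interval) and the table
    length `L` are placeholder values, not the ones used in the paper.
    """
    if r_ij == 0:
        idx = 0
    elif r_ij > 0:
        idx = max(0, math.ceil(math.log2(r_ij / a)))
    else:
        idx = -max(0, math.ceil(math.log2(-r_ij / a))) - 1
    return idx + L // 2          # shift so indices are non-negative

# Nearby offsets land in distinct bins, distant offsets share coarser bins.
for d in (0.05, 0.1, 0.4, 1.6, 6.4, 25.6):
    print(d, exp_split_index(d), -d, exp_split_index(-d))
```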
This operation enables a fine-grained position representation between near token pairs, and still maintains the same number of intervals in the meanwhile. Even though the splitting intervals become larger for distant token pairs, this solution actually works well since distant token pairs require less fine-grained relative position. Figure 4: Comparison between (a) uniform splitting and (b) exponential splitting. The \(query\) is at the leftmost point. ### Dynamic Feature Selection Point clouds scanned by LiDAR have the varying-sparsity property -- close points are dense and distant points are much sparser. This property makes points at different locations perceive different amounts of local information. For example, as shown in Fig. 5, a point of the _car_ (circled in green) near the LiDAR is with rich local geometric information from its dense neighbors, which is already enough for the model to make a correct prediction - incurring more global contexts might be contrarily detrimental. However, a point of _bicycle_ (circled in red) far away from the LiDAR lacks shape information due to the extreme sparsity and even occlusion. Then we should supply long-range contexts as a supplement. This example shows treating all the query points equally is not optimal. We thus propose to dynamically select local or global features to address this issue. As shown in Fig. 6, for each token, we incorporate not only the radial contextual information, but also local neighbor communication. Specifically, input features are projected into query, key and value features as Eq. (2). Then, the first half of the heads are used for radial window self-attention, and the remaining ones are used for cubic window self-attention. After that, these two features are concatenated and then linearly projected to the final output \(\mathbf{z}\) for feature fusion. It enables different points to dynamically select local or global features. Formally, the Equations (3-5) are updated to \[\mathbf{attn}_{k}^{radial}=\mathbf{softmax}(\mathbf{q}_{k}^{radial}\cdot \mathbf{k}_{k}^{radial}T),\] \[\mathbf{\hat{z}}_{k}^{radial}=\mathbf{attn}_{k}^{radial}\cdot \mathbf{v}_{k}^{radial},\] \[\mathbf{attn}_{k}^{cubic}=\mathbf{softmax}(\mathbf{q}_{k}^{cubic}\cdot \mathbf{k}_{k}^{cubic}T),\] \[\mathbf{\hat{z}}_{k}^{cubic}=\mathbf{attn}_{k}^{cubic}\cdot \mathbf{v}_{k}^{cubic},\] \[\mathbf{\hat{z}}=\mathbf{concat}(\{\mathbf{\hat{z}}_{0}^{radial},\mathbf{\hat {z}}_{1}^{radial},...,\mathbf{\hat{z}}_{h/2-1}^{radial},z_{h/2}^{cubic},..., \mathbf{\hat{z}}_{h-1}^{cubic}\}),\] where \(\mathbf{q}_{k}^{cubic},\mathbf{k}_{k}^{cubic},\mathbf{v}_{k}^{cubic}\in \mathbb{R}^{n^{cubic}\times d}\) denote the query, key and value features for the \(k\)-th head with cubic window partition, and \(\mathbf{attn}_{k}^{cubic}\in\mathbb{R}^{n^{cubic}\times n^{cubic}}\) denotes the cubic window attention weight for the \(k\)-th head. ## 4 Experiments In this section, we first introduce the experimental setting in Sec. 4.1. Then, we show the semantic segmentation and object detection results in Sec. 4.2 and 4.3. The ablation study and visual comparison are shown in Sec. 4.4 and 4.5. Our code and models will be made publicly available. ### Experimental Setting Network Architecture.For semantic segmentation, we adopt the encoder-decoder structure and follow U-Net [49] to concatenate the fine-grained encoder features in the decoder. We follow [88] to use SparseConv [24, 25] as our baseline model. 
There are a total of 5 stages whose channel numbers are \([32,64,128,256,256]\), and there are two residual blocks at each stage. Our proposed module is stacked at the end of each encoding stage. For object detection, we adopt CenterPoint [78] as our baseline model, where the backbone possesses 4 stages whose channel numbers are \([16,32,64,128]\). Our proposed module is stacked at the end of the second and third stages. Note that our proposed module incurs negligible extra parameters, and more details are given in the supplementary material. Datasets.Following previous work, we evaluate methods on nuScenes [4], SemanticKITTI [3], and Waymo Open Dataset [52] (WOD) for semantic segmentation. For object detection, we evaluate our methods on the nuScenes [4] dataset. The details of the datasets are given in the supplementary material. Implementation Details.For semantic segmentation, we use 4 GeForce RTX 3090 GPUs for training. We train the models for 50 epochs with AdamW [39] optimizer and 'poly' scheduler where _power_ is set to Figure 5: Varying-sparsity property of LiDAR point clouds. The dense close _car_ is marked with a green circle and the sparse distant _bicycle_ is marked with a red circle (best viewed in color). Figure 6: Dynamic feature selection. We split the heads to conduct radial and cubic window self-attention respectively. 0.9. The learning rate and weight decay are set to \(0.006\) and \(0.01\), respectively. Batch size is set to 16 on nuScenes, and 8 on both SemanticKITTI and Waymo Open Dataset. The window size is set to \([120m,2^{\circ},2^{\circ}]\) for \((r,\theta,\phi)\) on both nuScenes and SemanticKITTI, and \([80m,1.5^{\circ},1.5^{\circ}]\) on Waymo Open Dataset. During data preprocessing, we confine the input scene to the range from \([-51.2m,-51.2m,-4m]\) to \([51.2m,51.2m,2.4m]\) on SemanticKITTI and \([-75.2m,-75.2m,-2m]\) to \([75.2m,75.2m,4m]\) on Waymo. Also, we set the voxel size to \(0.1m\) on both nuScenes and Waymo, and \(0.05m\) on SemanticKITTI. For object detection, we adopt the OpenPCDet [57] codebase and follow the default CenterPoint [78] to set the training hyper-parameters. We set the window size to \([120m,1.5^{\circ},1.5^{\circ}]\). ### Semantic Segmentation Results The results on SemanticKITTI _test_ set are shown in Table 1. Our method yields \(74.8\%\) mIoU, a new state-of-the-art result. Compared to the methods based on range images [43, 67] and Bird-Eye-View (BEV) [79], ours gives a result with over \(20\%\) mIoU performance gain. Moreover, thanks to the capability of directly aggregating long-range information, our method significantly outperforms the models based on sparse convolution [12, 55, 69, 70, 88]. It is also notable that our method outperforms 2DPASS [71] that uses extra 2D images in training by \(1.9\%\) mIoU. In Tables 2 and 3, we also show the semantic segmentation results on nuScenes _test_ and _val_ set, respectively. Our method consistently outperforms others by a large margin, and achieves the \(1^{\text{st}}\) place on the benchmark. It is intriguing to note that our method is purely based on LiDAR data, and it works even better than approaches of [23, 71, 89] that use additional 2D information. Moreover, we demonstrate the semantic segmentation results on Waymo Open Dataset _val_ set in Table 4. Our model outperforms the baseline model with a substantial gap of \(3.3\%\) mIoU. Also, it is worth noting that our method achieves a \(9.3\%\) mIoU performance gain for the _far_ points, _i.e_., the sparse distant points. 
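Since the evaluation repeatedly reports mIoU broken down into close (≤20 m), medium (20–50 m) and far (>50 m) points, the following small NumPy sketch shows how such a distance-binned metric can be computed; the bin edges follow the ablation protocol, while the per-class IoU accumulation is a plain illustration rather than the benchmarks' official tooling.

```python
import numpy as np

def miou(pred, gt, num_classes, ignore=255):
    """Standard mIoU over a set of points, skipping an ignore label."""
    keep = gt != ignore
    pred, gt = pred[keep], gt[keep]
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

def miou_by_distance(xyz, pred, gt, num_classes, edges=(0, 20, 50, np.inf)):
    """mIoU computed separately for close (<=20m), medium (20-50m) and far (>50m) points."""
    r = np.linalg.norm(xyz, axis=1)
    out = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (r > lo) & (r <= hi)
        out[f"({lo},{hi}]m"] = miou(pred[m], gt[m], num_classes)
    out["overall"] = miou(pred, gt, num_classes)
    return out
```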
### Object Detection Results Our method also achieves strong performance in object detection. As shown in Table 8, our method outperforms other published methods on the nuScenes _test set_, and ranks 3rd on the LiDAR-only benchmark. It shows that directly aggregating long-range information is also beneficial for object detection. It also manifests the capability of our method to generalize to instance-level tasks. ### Ablation Study To verify the effectiveness of each component, we conduct an extensive ablation study and list the results in Table 5. Experiment I (Exp. I for short) is our baseline model of SparseConv. Unless otherwise specified, we train the models on the nuScenes _train_ set and make evaluations on the nuScenes _val_ set for the ablation study. To comprehensively reveal the effect, we also report the performance at different distances, _i.e_., close (\(\leq 20m\)), medium (\(>20m\) & \(\leq 50m\)), and far (\(>50m\)) distances. Window Shape.By comparing Experiments I and II in Table 5, we can conclude that the radial window shape is beneficial. Further, the improvement stems mainly from better handling the _medium_ and _far_ points, where we yield \(5.67\%\) and \(13.39\%\) mIoU performance gains, respectively. This result exactly verifies the benefit of aggregating long-range information with the radial window shape. Moreover, we also compare the radial window shape with the cubic one proposed in [42, 22, 30]. As shown in Table 6, the radial window shape considerably outperforms the cubic one. Besides, we investigate the effect of window size as shown in Table 7. Setting it too small may make it hard to capture meaningful information, while setting it too large may increase the optimization difficulty. \begin{table} \begin{tabular}{c|c c c c} \hline \hline Method & close & medium & far & overall \\ \hline Cubic & 79.21 & 54.31 & 19.31 & 76.19 \\ Radial & 80.80 & 60.78 & 30.38 & 78.41 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison between radial and cubic window shapes. Exponential Splitting.Compared to Exp. IV, Exp. V improves by \(1.36\%\) mIoU, which shows the effectiveness. Moreover, a consistent conclusion can be drawn from Experiments II and III, where we witness \(3.88\%\) and \(4.43\%\) more mIoU for the _medium_ and _far_ points, respectively. Also, we notice that with exponential splitting, all the _close_, _medium_, and _far_ points are better dealt with. Dynamic Feature Selection.From the comparison between Experiments III and V, we note that dynamic feature selection brings a \(0.8\%\) mIoU performance gain. Interestingly, we further notice that the gain mainly comes from the _close_ points, which indicates that the _close_ points may not rely too much on global information, since the dense local information is already enough for correct predictions for the dense close points. It also reveals the fact that points at varying locations should be treated differently. Moreover, the comparison between Exp. II and IV leads to a consistent conclusion. Although the performance of _medium_ and _far_ decreases a little, the _overall_ mIoU still increases, since the number of such points is much smaller than that of the _close_ points. ### Visual Comparison As shown in Fig. 7, we visually compare the baseline model (_i.e_., SparseConv) and ours. 
It visually indicates that with our proposed module, more sparse distant objects are recognized, which are highlighted with cyan boxes. More examples are given in the supplementary material. ## 5 Conclusion We have studied and dealt with the varying-sparsity LiDAR point distribution. We proposed SphereFormer to enable the sparse distant points to directly aggregate information from the close ones. We designed radial window self-attention, which enlarges the receptive field of the sparse distant points and lets them directly interact with the dense close ones. Also, we presented exponential splitting to yield more detailed position encoding. Dynamically selecting local or global features is also helpful. Our method demonstrates powerful performance, ranking 1\({}^{\text{st}}\) on both nuScenes and SemanticKITTI semantic segmentation benchmarks and achieving 3\({}^{\text{rd}}\) place on the nuScenes object detection benchmark. It shows a new way to further enhance 3D visual understanding. Our limitations are discussed in the supplementary material. Figure 7: Visual comparison between vanilla SparseConv and ours (best viewed in color and by zoom-in). The brown box is the zoom-in of the cyan box. The last two columns are the difference maps with the ground truth. More examples are given in the supplementary material.
2307.11940
Envisioning a Safety Island to Enable HPC Devices in Safety-Critical Domains
HPC (High Performance Computing) devices increasingly become the only alternative to deliver the performance needed in safety-critical autonomous systems (e.g., autonomous cars, unmanned planes) due to deploying large and powerful multicores along with accelerators such as GPUs. However, the support that those HPC devices offer to realize safety-critical systems on top is heterogeneous. Safety islands have been devised to be coupled to HPC devices and complement them to meet the safety requirements of an increased set of applications, yet the variety of concepts and realizations is large. This paper presents our own concept of a safety island with two goals in mind: (1) offering a wide set of features to enable the broadest set of safety applications for each HPC device, and (2) being realized with open source components based on RISC-V ISA to ease its use and adoption. In particular, we present our safety island concept, the key features we foresee it should include, and its potential application beyond safety.
Jaume Abella, Francisco J. Cazorla, Sergi Alcaide, Michael Paulitsch, Yang Peng, Inês Pinto Gouveia
2023-07-21T23:44:25Z
http://arxiv.org/abs/2307.11940v1
# Envisioning a Safety Island to Enable HPC Devices in Safety-Critical Domains ###### Abstract HPC (High Performance Computing) devices increasingly become the only alternative to deliver the performance needed in safety-critical autonomous systems (e.g., autonomous cars, unmanned planes) due to deploying large and powerful multicores along with accelerators such as GPUs. However, the support that those HPC devices offer to realize safety-critical systems on top is heterogeneous. Safety islands have been devised to be coupled to HPC devices and complement them to meet the safety requirements of an increased set of applications, yet the variety of concepts and realizations is large. This paper presents our own concept of a safety island with two goals in mind: (1) offering a wide set of features to enable the broadest set of safety applications for each HPC device, and (2) being realized with open source components based on RISC-V ISA to ease its use and adoption. In particular, we present our safety island concept, the key features we foresee it should include, and its potential application beyond safety. ## I Introduction HPC processors can deliver the performance needed for future embedded systems of autonomous cars and aircraft, which will rely heavily on performance-hungry Artificial Intelligence (AI) software, as well as a high number of processes to manage simultaneous events triggered by a plethora of sensors and by timers periodically. However, the support that those devices offer for their use in safety-related applications is different across devices, and so are the guarantees or additional support they need - if any - for their use in applications with varying needs in terms of integrity level, fail-safe or fail-operational needs, performance predictability, and time to recover from different types of errors. In general, HPC processors can be used for fail-safe applications as long as a high-integrity microcontroller unit (MCU) is deployed along with the HPC processor, and the overall system architected so that the MCU can manage all errors affecting both, the HPC processor or the MCU itself, and guide the system to a safe state timely in accordance with application safety requirements. However, if the MCU cannot manage all errors in the HPC processor preserving safety requirements, or if the HPC processor must remain fault-tolerant to meet such safety requirements, then additional support is needed beyond that of the MCU. Such support can be part of the HPC device itself, or be delivered in the form of an enhanced MCU with extended safety capabilities to assist the HPC device. To overcome the potential limitations that some HPC devices may offer for their use in safety-critical applications, _safety islands_ have been proposed recently [1, 2, 3, 4]. While the term _safety island_ includes a heterogeneous set of devices, they normally offer two key sets of features: (1) a safe enclave to run safety-related applications, in the same way an MCU does. In fact, an MCU can be regarded as a safety island. However, safety islands offer lower performance than HPC devices, at least for a set of performance-demanding applications, and hence the need for an accompanying HPC device. (2) A safety island also offers specific support to manage an external device that needs to run safety-related functionalities. 
Such support may include watchdog services, performance monitoring capabilities, ability to initiate test capabilities in the other device or to test it externally, or support to orchestrate diverse redundant execution in the external device to name a few. When considering the deployment of a safety island along with an HPC device, considerations about integration cannot be neglected. For instance, new cost-efficient chip production and packaging options (multi-die chips - EMIB1, Foveros [5]) provide a solution to deploy separate dies in the same package, which benefits a number of safety island requirements like fault independence with different dies while preserving high bandwidth communication. Footnote 1: Embedded Multi-Die Interconnect Bridge This paper presents our own concept of a safety island, as well as our strategy to realize it based on open source RISC-V [6] components with the aim of easing its use and adoption. In our case, we aim at providing a rich set of safety services to the HPC device to enable the deployment of safety-relevant applications on top with varying safety requirements, despite the potentially limited support that the HPC device may offer natively. Those services foreseen for our safety island include the following: * Controllability features to configure the HPC device for fault management and containment, and for providing predictable performance. * Observability features for system validation, error diagnosis, and as the basis to build informed safety measures on top. * Safety measures to contain and manage errors, and to mitigate abnormal performance conditions. In this paper we provide the following contributions for the design and deployment of our view of a safety island: 1. We present our concept of a safety island along with examples for its practical realization focusing on open source SoCs based on the RISC-V ISA. 2. We identify existing open source components providing controllability and observability features, and safety measures, appropriate for our Safety Island. 3. We identify further features to be developed in the future to complete and complement our safety island. 4. We devise further applications of the Safety Island beyond safety, such as security, reliability, and power and temperature management. To some extent, the foreseen security extensions are comparable to the "Security Island" already offered by some processors (e.g., Arm's TrustZone [7]). The rest of the paper is organized as follows. Section II provides some background and other solutions comparable or related to the safety island proposed in this paper. Section III presents our concept of safety island. Section IV describes convenient components for our safety island, whether they already exist or not. Section V shows how the safety island could be used for other applications. Finally, Section VI summarizes this paper. ## II Background ### _Safety-Critical System Development Process_ Domain specific safety standards describe the development process to be followed for safety-critical systems. Examples of such standards include ISO26262 [8] for the automotive domain, EN5012x [9, 10, 11] for the railway domain, and IEC61508 [12] for industrial systems. Those development processes follow a "V" model (see Figure 1). First, the safety goals of the system need being specified. Safety requirements are obtained out of those safety goals. Then, the architecture of the system is devised ensuring that all safety requirements are mapped to specific items that will have to fulfill them. 
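As a toy illustration of the mapping step just described (every safety requirement mapped to architectural items that must fulfill it), the following sketch checks that each requirement is covered by at least one item; the requirement and item names are invented for the example and do not come from any standard or project.

```python
def check_traceability(requirements, mapping):
    """Toy traceability check for the left branch of the V-model.

    `requirements` is a set of safety requirement ids; `mapping` maps each
    architectural item to the requirement ids it fulfills. Every requirement
    must be covered by at least one item. Names are purely illustrative.
    """
    covered = set()
    for item, reqs in mapping.items():
        covered.update(reqs)
    return requirements - covered      # empty set means full coverage

reqs = {"SR-01", "SR-02", "SR-03"}
arch = {"watchdog": {"SR-01"}, "lockstep_core": {"SR-02", "SR-03"}}
assert not check_traceability(reqs, arch)
```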
Safety-critical systems are designed to be correct by construction, hence meaning that both, hardware and software, are error free. This is achieved by following specific rules and processes when architectecting the system and implementing it. However, random hardware faults due to radiation, sporadic deadline violations, voltage fluctuations, and the like cannot be fully avoided. Hence, safety measures must be included in the system design to manage errors emanating from those faults (e.g., tolerating them, or detecting them and reaching a safe state timely). Once the system has been architected, its different components are implemented adhering to specific constraints dictated by the standards (e.g., avoiding unobvious control flow in the software). Verification activities for the system architecture and implementation must be included in the process as a way to assess that the design adheres to its requirements and specifications. Controllability means are generally needed along with the design and verification activities to avoid problematic behavior during operation and to enforce specific behavior for verification purposes. Validation activities (the right part of the "V") include testing activities for individual components, as well as for the subsequent integrations with the goal of spotting design errors and, in their absence, gain confidence on the safety of the system. Those testing activities must be as little intrusive as possible to observe the system in operation conditions. Using observability means allow provision of system information without altering the behavior of a system. Overall, safety-critical systems need to include safety measures, as well as controllability and observability channels to ease their development in accordance with safety standards. ### _Relevant Concepts_ In this paper we build on a number of concepts with broad application across domains, but different names. For the sake of consistency, we resort to automotive naming only (i.e. from ISO26262). **Integrity levels**. There are multiple integrity levels that describe different levels of acceptable risk. They are referred to as Automotive Safety Integrity Levels (ASIL) in the context of automotive, with ASIL-D being the most stringent ASIL, and ASIL-A the least stringent yet with some safety requirements. Finally, an additional level named Quality Managed (QM) is used to refer to items with no safety requirement at all. **Dual (DMR) and Triple Modular Redundancy (TMR)**. Systems with the highest integrity levels are often realized building on DMR or TMR for error detection and/or correction. In the particular case of automotive systems, ASIL-D components are generally deployed on Dual-Core Lockstep (DCLS) CPUs. DCLS is an efficient implementation of time and space redundancy for computing cores where the same software runs on two identical cores, but with some staggering (time shift) among them so that a fault affecting both cores simultaneously would cause different errors - if any - due to their different state. Hence, errors are detectable by means of simple comparison since redundancy is _diverse_. The potential sources for faults affecting redundant cores, as well as their impact, have been carefully analyzed [13], and caveats to deal with the impact of such faults provided [14]. **Safe state**. Many systems have a safe state, i.e. a state that, whenever reached, guarantees system safety. 
For instance, the safe state could consist of a successful transfer of the control to the driver in the case of partially autonomous cars, or stopping the car in a safe location. Reaching such safe state may imply that the corresponding safety system is no longer working. Hence, the safe state may affect the availability of the faulty component or the overall system. Systems lacking a safe state are often referred to as fail-operational systems, whereas those with a safe state are referred to as fail-safe systems. Note that fault tolerance (for fail-operational systems) and safe states (for fail-safe systems) are intended to be achieved at a given system level, but failures may be allowed at lower levels. For instance, a job of the task controlling the braking system may fail due to a soft error, so that we have a failure at the scope of such microcontroller. However, at a higher level we may have multiple redundant units and a voting system to achieve fault tolerance, or mechanisms to detect the error and potentially enforce a lower driving speed (i.e. a potential safe state) if such fault occurs too often and challenges the timeliness of the braking system. **Fault Tolerant Time Interval (FTTI)**. From the time a fault occurs until it is properly controlled, there is a maximum time affordable in which no hazard can occur. That time interval is referred to as FTTI in automotive, and determines the time needed to recover normal operation, or to reach a safe state where, despite availability may be harmed, safety is preserved. ### _Related Work_ This paper presents a safety island explicitly conceived to provide advanced safety services to an HPC device. However, Fig. 1: V-model of the development process of a safety-critical system. other works such as that by Siemens [1] already envision their own safety island, and some HPC platforms targeting mainly the automotive domain, such as the Intel Go platform [2] and the NVIDIA DRIVE AGX Orin [3, 4] already include an automotive ASIL-D compliant microcontroller as a form of "safety island". In the case of Siemens, their safety island [1] aims at providing a safe enclave for execution and post-mortem test capabilities for the HPC device but, to our knowledge, it lacks advanced observability and controllability features, as well as safety measures, intended to preserve safe operation in the HPC device despite errors, as opposed to our concept. In the case of the HPC platforms with ASIL-D compliant microcontrollers, such automotive microcontroller is an Infineon AURIX processor in both cases, which is primarily intended to operate as a standalone microcontroller. However, if configured properly, it can provide some of the services provided by the safety island described in this paper, potentially with lower efficiency due to the lack of explicit hardware support for some features, such as multicore interference monitoring and diverse redundancy support for the cores in the HPC device. For the realm of IoT, the safety island on Intel's Atom(r) x6000FE Series [15] enables functional safety (FuSa) capabilities that detect and attenuate a system fault before it causes or exacerbates further errors (if using the fault \(\rightarrow\) error \(\rightarrow\) failure sequence terms as described in [16]). 
Functionality includes fault monitoring and reporting, on-demand diagnostics measurements, watchdog timers, temperature monitoring, self-diagnostics of the safety island itself and encoding/decoding of communication protocols between the safety island and other external elements. In the area of security, some Arm processors include the Arm TrustZone [7], which is a form of highly-coupled security island providing a security enclave in Arm processors. Hence, despite with different purposes (security instead of safety), and with a particular degree of coupling (only for a highly-coupled implementation), Arm's TrustZone has some commonalities with the safety island. ## III Architecture of our Safety Island Our goal is presenting a safety island with a number of specific characteristics as follows: * Include features for monitoring and controlling the behavior of an HPC device, as well as capabilities to accurately diagnose the cause of any error so that the most effective remedy can be applied. * Be suitable for its realization as a chiplet, hence easing integration with COTS HPC devices, and minimizing integration challenges. * Be built on open source technologies as much as possible to ease its adoption and extension by the community. This section presents our safety island, including key design and integration considerations for its effectiveness. ### _Safety Features of our safety island_ The purpose of our safety island is providing HPC devices with appropriate capabilities to execute performance-demanding applications while preserving their safety requirements. As explained before, safety-related systems require a number of features that can be generally classified into the following categories. #### Iii-A1 Controllability Features Controllability features are needed to guarantee that execution conditions for safety-relevant applications are controlled to a sufficient extent (e.g., limiting mixed-criticality interference, guaranteeing some performance levels, etc.). Proactive FeaturesThis type of features includes the ability to restrict the operation of HPC components whenever needed by setting appropriate configurations (e.g., network-on-chip (NoC) policies, cache replacement policies, cache sharing policies, shared resource usage quotas, etc.) so that safety requirements can be implemented through the safety island without introducing disruptive changes in the HPC processor. Reactive FeaturesAlternatively, if some behavior cannot be avoided by means of appropriate configurations, monitoring capabilities are needed to detect misbehavior (e.g., abnormal error detection rates in caches, overutilization of some components, etc.), diagnose the cause, and take actions to mitigate such misbehavior (e.g., stalling the offending process for a while). #### Iii-A2 Observability Features Observability features are critically important during both system validation and operation. During system validation, they allow collecting detailed information of the behavior of the overall system. The information can be from individual hardware and software components. This information can then be processed offline to detect any misbehavior. Such observability features typically decrease the need for longer test campaigns, since they ease obtaining the evidence; evidence whether some behavior occurs or not. In some circumstances, observability feature may be mandatory to detect certain behavior that is not observable otherwise. 
Those features are expected not only to identify specific situations, but to provide enough information to ease diagnosis to mitigate the source of the unexpected situations. Observability features are fundamental during operation since they can be linked to - reactive - controllability features, as indicated before, as well as to safety measures that require detailed diagnostics to guarantee safety without impacting other key metrics such as availability and performance. The type of information to be observed and how it is collected is highly diverse and must be properly tailored to not incur excessive cost. For instance, one could trace all transactions across two components (e.g., across a shared cache and DRAM memory) or summarize them by counting the amount of data transferred over a period of time. The former may require tracking each individual address accessed along with some information about the transaction (e.g., whether it is a read or store operation, amount of data transferred, etc.), which requires huge storage but provides highly detailed information, whereas the latter provides much less information but only needs few counters to track the amount of data transferred potentially broken down across components originating the transactions (e.g., across cores, accelerators, etc.). Overall, it is key to enable the safety island with appropriate support to collect different types of traces, and either output them at high speed, or compress them on-the-fly to generate compressed logs for a "post-mortem" analysis. Information to be monitored includes abnormal performance behavior, activity in shared resources (e.g., NoCs, caches and memory controllers), etc. Also, programmable filters to monitor only specific subsets of information are highly convenient. #### Iii-A3 Safety Measures Random hardware faults and sporadic abnormal performance conditions are generally unavoidable. The safety island must provide capabilities to monitor faults, diagnose the cause, tolerate some of them, and contain the impact of all of them so that software layers can easily and timely manage errors and preserve fault-free operation at all times. Some safety measures can be implemented by mostly relying on controllability and observability features already described before, but some others may require additional support. For instance, hardware or software monitors may be needed so that, upon the detection of abnormal behavior in the HPC device, specific corrective actions are taken (e.g., resetting specific components, switching to a degraded operation mode, etc.). Since those monitors may have the highest integrity levels, they may require DMR or TMR support in the safety island. Similarly, error detection capabilities for the HPC device may require the safety island to orchestrate some form of diverse and redundant execution on the HPC device regardless of whether the HPC device has specific hardware support for that. ### _Hardware Integration Considerations_ Integration of the safety-relevant features with the HPC device (aka as HPC island) to be mastered is a challenging concern. The higher the coupling between the safety island and the HPC island, the higher the efficiency of the safety island due to having higher controllability and observability, but the lower the modularity since mutual dependencies across the safety and HPC islands would increase. 
For instance, two extremes of the integration could be as follows: * **Coupled integration**: the islands could be integrated connecting the safety island as a master to the different interconnects in the HPC island (e.g., all AMBA interconnects). This would grant detailed observability and controllability to the safety island, which would have direct information from the different internal interfaces, and could react almost immediately to any predefined event. On the other hand, such integration would be highly device-specific, hence potentially requiring non-negligible modifications to tailor the safety island to a particular HPC device. * **Loose integration**: the safety island could be encapsulated into a chiplet with standard predefined interfaces agnostic of the particular characteristics of the HPC device to be mastered. This approach would favor portability and modularity, but would be detrimental for observability and controllability purposes since (i) some interfaces may not be directly observable, and (ii) access to the connected interfaces may have higher latency than in the coupled approach, hence increasing reaction time, which ultimately may inhibit the use of some safety measures requiring immediate actions (e.g., to avoid error propagation). When integrating both islands, the safety island and the HPC island, it is critically important to understand what is reachable (and how) through the existing interfaces to program the safety island accordingly (e.g., monitoring modules, traffic injection modules). Note that, even if some parts of the chip are not directly reachable by the safety island, they may still be managed indirectly. For instance, one could inject traffic reaching specific devices attached to not directly reachable interconnects, and observe the latency to obtain a response to guess the amount of load in that interconnect. #### Iii-B1 Chiplets While integrating all features in a single chip die generally provides advantages in terms of power and performance, some issues are driving industry towards the use of chiplets instead. In particular, single-chip solutions hinder chip reuse, may lead to lower yield due to the increased number of transistors per chip, and provide diminishing returns in terms of performance as the chip size grows. Instead, chiplets provide a number of key advantages, particularly relevant for the safety island: (1) they allow for chip reuse, so the same safety island can be used for different HPC devices. (2) Designs can be specialized for efficiency reasons, hence not needing to build a single chip for all targets. (3) Heterogeneous technologies can be used across different chiplets, hence easing integration. (4) Due to being smaller and simpler devices than monolithic chip solutions, they ease testability. Finally, (5) their smaller size allows increasing yield. Still, chiplets have to face some challenges that relate to using larger boards to accommodate multiple chiplets, increased latency for chiplet-to-chiplet communication, increased reliability concerns due to the increased number of soldered joints, thermal/mechanical constraints to place multiple chiplets together, and the lack of chiplet-to-chiplet communication standards, although the Universal Chiplet Interconnect Express (UCIe) has recently appeared to mitigate the latter. #### Iii-B2 Physical and Logical Integration A number of integration aspects related to the physical location of the different chips and the communication interfaces emerge in chiplet Fig. 
2: Schematic of our Safety Island. based solutions. For instance, UCIe provides a physical solution to manage the integration of multiple chiplets. However, the particular protocols to be used to communicate chiplets over such communication interface remain to be defined. Analogously, whenever multiple chiplets are deployed, the physical location of the different chiplets, including memories, must be carefully laid out to maximize performance, while minimizing power, area and reliability concerns. Also, it must be carefully analyzed which chiplets need to interact. For instance, it is unclear whether all or just a subset of the chiplets need to access main memory, whether it is better deploying memory controllers in all chiplets or concentrating memory access through a single chiplet, whether some specific memory technologies are more convenient than others when using chiplets. Overall, new integration-related challenges emerge when using chiplets. Aspects related to the power supply, power domains, and power monitoring must also be taken into account since the safety island is intended to be used for safety critical functionalities. In general, the safety island inherits similar requirements to those of any other safety-relevant microcontroller. Using other chiplets for other devices (e.g., for the HPC device) brings increased costs, as mentioned before, but may ease implementing some safety measures since physical segregation, in general, reduces the number of potential single-point faults. ### _System Software Considerations_ A number of observability and controllability features in the safety island may require accessing specific modules in the HPC island such as, memory interfaces to inject traffic, or configuration registers to exercise control on the HPC device. However, existing Memory Management Units (MMUs) or Input/Output MMUs (IOMMUs) may exercise control on the permissions and privileges to access (and potentially modify) different components in the HPC island. Therefore, it becomes critically important that the safety island - or the system software on its behalf - is capable of properly configuring permissions and privileges (e.g., at boot time) so that it can perform its work during operation. Note that such boot and configuration process is not exempt of security risks, and, hence, is a delicate process that needs to follow appropriate rules for a secure boot and configuration. Additionally, in order to retain flexibility, namely in regards to fault tolerance and containment, permissions and privileges should as well be configurable at runtime. However, such flexibility can equally incur security risks if the reconfiguration interface represents a single point of failure and can be directly manipulated by exploiting, e.g., a page table vulnerability. Another software aspect relates to the fact that user level software with safety requirements may need to run on the safety island. For instance, control applications needing native DCLS only available in the safety island may require running on the cores of the safety island. Similarly, some parts of the applications needing to run on the HPC island may also need to run on the safety island. 
To guarantee a safe environment, virtualization becomes mandatory as well as an appropriate hypervisor or real-time operating system (RTOS) providing partitioning services to those applications, such as frontISS 'XtratuM [17], SYSGO's PikeOS [18], GMV-Portugal's Air [19], Lynx Software Technologies' LynxSecure, Wind River's hypervisor, Green Hill's Integrity, Continental's OSEK VDX, and Erika Enterprise, to name a few. ### _System Safety Considerations_ The integration of the safety and HPC islands is not exempt of some safety considerations, mostly related to the hardware and software considerations above. A key safety consideration relates to the latency to retrieve information from the HPC island as well as to exercise control. Coupled integrations generally lead to lower latencies, which favor decreased safety risks, as opposed to loose integrations. The ability to observe and control the HPC island with low latency is key for a number of safety-related aspects such as fault containment and reaction times at system level. For instance, in the context of automotive safety-relevant systems, an FTTI is defined, as explained before. Exceeding that FTTI implies that hazards can occur potentially violating the safety goals of the system. Therefore, the safety island design and integration must be planned to adhere to safety considerations relevant at hardware, software and system level. ### _Security Considerations_ While the goal of the safety island is providing safety capabilities, it must realize some security support to avoid its improper use. Security aspects relevant for the safety island include authentication, permissions and secure boot. The safety island must realize a secure boot process, in cooperation with software, to guarantee that software executed during booting, as well as drivers loaded, come from legitimate sources. Authentication is also key if any such software requires being updated with updates being initiated by external sources. During operation, in order to monitor and even control the HPC device, appropriate permissions must be set in the configuration of the HPC device so that safety island actions are allowed. In that context, authentication becomes fundamental since analogous actions triggered by devices other than the safety island must not be allowed. Some technologies, such as Intel's CSME (Converged Security and Management Engine) [20] provide capabilities to authenticate and load firmware into relevant IPs. Such type of technology could be expanded towards use among chiplets, allowing for example the safety island to authenticate itself towards the cores in the HPC device. ### _Extendability Considerations_ The use of a safety island, especially if conceived as a separate chiplet, brings opportunities in terms of extensions and updates. For instance, improved safety island designs can be deployed along with a given HPC device without needing to update the latter. Such improvements may come in the form of extensions to enable more efficient means to manage safety aspects (e.g., resorting to hardware support instead of software-only solutions), such as enhanced monitoring units, or more powerful cores in the safety island to perform specific services (e.g., implementing domain-specific ISA extensions in the safety island cores). Such extensions can be also realized deploying an eFPGA in the safety island so that a firmware update suffices, hence avoiding physical changes. 
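To illustrate how the FTTI constrains the reaction path discussed above, here is a deliberately simplified Python model (not firmware, and not tied to any particular safety island implementation) of a watchdog on the safety island that expects periodic heartbeats from the HPC island and triggers a safe-state reaction when the interval is exceeded; all names and the 100 ms figure are illustrative assumptions.

```python
import time

class FttiWatchdog:
    """Toy model of a safety-island watchdog bounded by the FTTI.

    The HPC island is expected to refresh the heartbeat periodically; if the
    fault-tolerant time interval elapses without a refresh, the monitor fires
    the configured reaction (e.g., reset a component or enter the safe state).
    The class and method names are invented for illustration.
    """

    def __init__(self, ftti_s, on_timeout):
        self.ftti_s = ftti_s
        self.on_timeout = on_timeout
        self.last_kick = time.monotonic()

    def kick(self):
        """Called on every heartbeat received from the HPC island."""
        self.last_kick = time.monotonic()

    def poll(self):
        """Called periodically by the safety island, far more often than the FTTI."""
        if time.monotonic() - self.last_kick > self.ftti_s:
            self.on_timeout()

# Example reaction: degrade in a controlled way instead of failing silently.
wd = FttiWatchdog(ftti_s=0.1,
                  on_timeout=lambda: print("FTTI exceeded: entering safe state"))
```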
## IV Key Components and Technologies

Realizing a safety island requires an SoC capable of executing safety-relevant functionalities as well as of providing safety services to the HPC island. To build a functional and open source safety island, we identify a number of existing and under-development components and technologies that need to be consistently integrated to form the safety island. Some of those components and technologies are introduced in this section.

### _Baseline MPSoC_

As part of the H2020 SELENE project, an open source RISC-V based MPSoC suitable for the space, automotive, and railway domains has been released [21]. The SELENE SoC offers a 6-core multicore based on Gaisler's NOEL-V cores [22] and other GPL IPs [23]. Moreover, it includes a wide subset of the IPs described in the remainder of this section, which makes it particularly appropriate as the starting point to develop a safety island. However, there are other alternatives. Unfortunately, high-performance RISC-V cores, such as SiFive's P650, are mostly proprietary. Some open source cores have recently been compared [24], including Rocket [25], BOOM [26], CVA6 [27], and SHAKTI [28] C-Class implementations. No core proves superior to the others on all fronts, with varying conclusions for both ASIC and FPGA realizations when considering performance, power efficiency, area, or maintainability.

### _Multicore Interference Monitors_

A key safety service in multicores relates to monitoring the interference across cores or other types of devices (e.g., accelerators), since such interference may affect real-time guarantees for safety-critical real-time tasks. Recently, the Safe Statistics Unit (SafeSU) [29, 30] has been proposed. It provides capabilities to monitor the traffic in AMBA interfaces such as AHB and AXI4, although its design has been made modular to enable its porting to other interfaces. The SafeSU allows measuring the interference each master device causes on every other device in different interfaces, and it has been successfully integrated in the SELENE SoC [31]. It remains to be studied how to tailor it to monitor the traffic in remote interconnects rather than those in the safety island itself.

### _Multicore Interference Quotas_

The SafeSU [29, 30] has also been equipped with an interference control mechanism building on its interference monitoring capabilities. In particular, the SafeSU allows programming interference quotas that, upon being exceeded, trigger interrupts that can be immediately captured by the hypervisor or RTOS so that any action needed can be taken, in accordance with system needs (e.g., dropping the offending task, stalling it for a while, or increasing QoS guarantees for the offended task). These interrupts have been properly connected to the corresponding interrupt controller at the hardware level and successfully captured by the operating system on top, so the integration of the SafeSU with the software layers is simple (a rough configuration sketch is given below).

### _Performance Validation_

While monitoring and quota features are key during operation, the system must also be thoroughly tested to guarantee that timing overruns will not occur, making the risk of deadline violations residual. Software tests provide limited controllability to exercise all performance corners, since multicore interference scenarios can only be induced indirectly and, in some cases, without synchronous control.
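As a rough illustration of the quota mechanism described above, the sketch below programs a per-master interference quota and handles the resulting interrupt. The register layout (`SAFESU_BASE` and offsets) and the RTOS hooks are purely hypothetical placeholders; the actual SafeSU programming interface is the one documented in [29, 30].

```c
#include <stdint.h>

/* Hypothetical memory map and fields -- NOT the real SafeSU registers. */
#define SAFESU_BASE        0xFFF00000u
#define QUOTA_REG(m)       ((volatile uint32_t *)(SAFESU_BASE + 0x10u + 4u * (m)))
#define QUOTA_IRQ_STATUS   ((volatile uint32_t *)(SAFESU_BASE + 0x00u))
#define QUOTA_IRQ_CLEAR    ((volatile uint32_t *)(SAFESU_BASE + 0x04u))

/* Assumed RTOS hooks, also placeholders. */
extern void rtos_suspend_task_on_core(unsigned core);
extern void rtos_raise_qos(unsigned core);

/* Program the maximum interference (e.g., contention cycles) that master
 * 'core' may cause on the monitored interconnect during one window. */
static void safesu_set_quota(unsigned core, uint32_t max_interference)
{
    *QUOTA_REG(core) = max_interference;
}

/* Interrupt handler invoked when a quota is exceeded: the offending
 * master is stalled and the offended one gets higher QoS, mirroring the
 * possible reactions listed in the text. */
void safesu_quota_irq_handler(void)
{
    uint32_t offenders = *QUOTA_IRQ_STATUS;   /* one bit per master */

    for (unsigned core = 0; core < 32; core++) {
        if (offenders & (1u << core)) {
            rtos_suspend_task_on_core(core);
            rtos_raise_qos(0 /* offended core, system-specific */);
        }
    }
    *QUOTA_IRQ_CLEAR = offenders;             /* acknowledge */
}
```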
In software-only tests, for instance, traffic with long bursts can only be produced by devices such as Ethernet ports or the Direct Memory Access (DMA) controller, which are hard to synchronize with the traffic produced by the computing cores. To tackle this issue, the Safe Traffic Injector (SafeTI) [32] has recently been proposed. It allows programming arbitrary traffic patterns fully synchronously, including delays between consecutive transactions, and generating any type of traffic (read/write, arbitrary data transfer sizes, with/without burst behavior, etc.), including repeated traffic as well as fixed-size and infinite traffic patterns. As for the SafeSU, it remains to be studied how to tailor the SafeTI to inject traffic into the HPC island from the safety island.

### _Diverse Redundancy for Cores_

Functionalities with the highest integrity level (e.g., ASIL-D in automotive) require diverse redundancy in several domains, which is efficiently implemented with DCLS. Hence, at least some cores in the safety island need to implement DCLS. The SafeLS realizes DCLS for NOEL-V cores in the SELENE SoC [33]. However, as explained before, DCLS is generally expensive when not needed for some tasks, since the redundant cores are not user visible. Hence, different flavors of diverse redundancy can be deployed, providing different tradeoffs, such as allowing cores to be used independently, although failing to provide diverse redundancy for I/O code. This is the case of the Safe Diversity Monitor (SafeDM) module [34], which allows measuring whether diversity exists across two cores. Conversely, the Safe Diversity Enforcement (SafeDE) [35] module allows enforcing some time staggering, and hence diversity, across two cores running a task redundantly. The SafeSoftDR software module [36] could be used instead, since it provides the same functionality as SafeDE in a less efficient manner but without requiring any hardware support. A comparison across the different mechanisms can be found in [37]. Note that DCLS is intrinsically coupled with the redundant cores and, hence, is only available in the safety island. Instead, SafeDE, SafeDM, and SafeSoftDR can manage diversity for non-DCLS cores. Therefore, they can be tailored to deliver diverse redundancy to cores in the HPC island from the safety island.

### _Diverse Redundancy for Accelerators_

Full redundancy for accelerators such as GPUs is generally not present in HPC devices. Therefore, it is not possible to orchestrate diverse redundancy across multiple accelerator instances as done for cores with SafeDE and SafeSoftDR. However, accelerators are often highly parallel and offer large internal redundancy; this has been leveraged in some works to implement some form of diverse redundancy with appropriate software and hardware support [38, 39]. This type of support can potentially be integrated in the safety island, which can, for instance, offload redundant kernels to a GPU of the HPC island, inducing diversity by different means (e.g., intrinsics support, scheduling policy characteristics). In the context of Deep Neural Networks (DNNs), high - yet not perfect - accuracy rates are obtained for processes such as object detection and classification. DNNs often rely on approximation and stochastic behavior and, hence, do not generally require bit-level precision. Instead, high - yet not full - precision is wanted at the semantic level (e.g., properly detecting and classifying an object) regardless of whether the accuracy is a bit higher or lower.
In that context, it is possible to deploy lower-cost, approximate (e.g., lower-precision arithmetic) accelerators in the safety island that provide diverse redundancy for large and precise accelerators in the HPC island, as long as the former are capable of detecting large deviations in the predictions of the latter [40]. Such a DMR scheme has already been realized in [41].

### _Watchdogs_

As part of the architectural design of safety-related functionalities, watchdogs are popular since they allow checking the aliveness of specific components. At the hardware level, watchdogs are also popular and, in the context of the safety island, they can be deployed to monitor the aliveness of specific components in the safety island as well as in the HPC island (or the complete HPC island). Generally, watchdogs are expected to be made sufficiently independent of the item being monitored, e.g., with an independent clock and power supply. Hence, this is expected to hold by construction in the case of a loose integration of the safety island. However, specific design rules must be followed both for a coupled integration of the safety and HPC islands and for watchdogs monitoring components that are part of the safety island itself. Watchdogs can monitor clock signals, cycle counters, instruction counters, or time-to-response for some devices. For instance, one could couple a watchdog to the SafeTI so that the latter sends a request requiring a response to a specific component in the HPC island, while the watchdog waits for an answer within a specific time bound. If the response does not arrive in time, the watchdog may raise an interrupt to be captured by the system software in the safety island.

### _Virtualization Extensions_

Hypervisors and RTOSs, often required in safety-critical systems, need appropriate virtualization capabilities to offer partitioning services to the guest operating system. Such virtualization can only be realized if supported by the hardware platform. Hence, virtualization extensions become mandatory for the safety island. For instance, the aforementioned NOEL-V cores in the SELENE SoC implement such an extension and have proven effective for running hypervisors on top, such as fentISS' XtratuM [17, 42].

### _Logging Support_

Most HPC devices include some form of tracing support. However, the traced information can be abundant and produced continuously, which, in general, requires a host computer to process it. Such information can be of much use to diagnose the source of some errors or, at least, to enable reproducibility for diagnostics purposes. The safety island can act as such a host computer. However, storage capabilities are limited in the safety island (e.g., typically KBs for on-chip storage and MBs for off-chip storage), and hence information is either dropped (for instance, by retaining only the most recently traced information) or summarized in the form of logs. Retaining recent information can easily be done with trace buffers where information is stored using a FIFO policy. Logging requires, instead, a tradeoff between the details retained and the hardware cost to retain them. The higher the degree of information loss, the lower the cost to store the remaining information. For instance, one can track timestamps for specific events, which would require large storage capabilities or restricting trace recording to a limited time window.
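As a minimal sketch of the FIFO trace-buffer option just described, the snippet below keeps only the most recent traced events in a small circular buffer. The buffer size, event encoding, and timestamp source are assumptions made for illustration, not features of any particular tracing IP.

```c
#include <stdint.h>

#define TRACE_ENTRIES 256u   /* assumed on-chip budget: 256 entries (~KBs) */

struct trace_entry {
    uint64_t timestamp;      /* cycle counter or wall-clock time           */
    uint32_t event_id;       /* e.g., error code, interconnect event, ...  */
};

static struct trace_entry trace_buf[TRACE_ENTRIES];
static uint32_t trace_head;  /* next slot to write; wraps around (FIFO)    */

extern uint64_t read_cycle_counter(void);  /* placeholder time source */

/* Record an event, silently overwriting the oldest one when full. */
void trace_record(uint32_t event_id)
{
    trace_buf[trace_head % TRACE_ENTRIES] = (struct trace_entry){
        .timestamp = read_cycle_counter(),
        .event_id  = event_id,
    };
    trace_head++;
}
```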
Alternatively, one could use counters to track occurrences of those events - potentially broken down across multiple categories - with much lower storage cost, but losing information about the timing of those events. For instance, some authors proposed an error logger for caches that tracks error locations to diagnose permanent faults [43].

### _Chiplet Integration Technologies_

In the case of a loose integration with a chiplet-based safety island, the Universal Chiplet Interconnect Express (UCIe) emerges as the standardizing solution for die-to-die interconnectivity. The layered protocol specifies a die-to-die adapter layer and a protocol layer, the latter supporting PCIe or CXL, with further protocol mappings planned. This requires, however, that communicating chiplets adhere to standards. For instance, UCIe's specification does not cover the packaging/bridging technology used to provide the physical link between chiplets. It is bridge-agnostic, meaning chiplets can be linked via different mechanisms such as fanout bridges, silicon interposers (i.e., 2.5D packaging), or other packaging technologies such as 3D packaging. Nevertheless, standards, such as the bump pitch, must be taken into account, meaning RISC-V platforms would require dedicated, standardized support for UCIe, which could potentially hinder observability. In terms of packaging technology, for instance, Intel's EMIB (Embedded Multi-Die Interconnect Bridge) is a 2.5D packaging technique used to connect dies on the same substrate. 2.5D refers to the integration of dies/chiplets on a substrate using an interposer. It brings specific advantages such as larger die counts and package configurations, lower cost than a full-size silicon interposer, and support for high data rate signaling between adjacent dies. 3D packaging is an alternative to interposers. 3D packaging refers to the direct high-density interconnection of chips through TSVs (through-silicon vias). In 3D packaging, chiplets are placed on top of one another instead of horizontally next to one another, forming a 3D structure with each chiplet occupying a layer. Finally, the interposer connects the 3D assembly of chiplets with the substrate. For instance, Foveros is a high-performance 3D packaging technology.

## V Other Applications

The safety island can be used, for obvious reasons, in contexts where system requirements include the combination of high-performance needs and safety requirements. However, most of its features can provide other types of services to HPC devices, such as security and Reliability, Availability and Serviceability (RAS). Therefore, there is a broad area of application for the safety island beyond strictly safety requirements. For the sake of illustration, we introduce the use of the safety island for RAS and security applications in the remainder of this section.

### _RAS_

HPC devices used for servers and supercomputers, to name some application domains, have strict RAS requirements. This relates to the relatively higher criticality of the applications run in those domains when compared with most desktop computers, as well as the much higher exposure to faults due to very high occupancy and, in the context of supercomputers, typically thousands of computing nodes operating cooperatively in parallel. For instance, in the case of a supercomputer where parallel applications require on average 1,000 CPUs, acceptable failure rates would decrease by at least a factor of 1,000 with respect to single-CPU computers.
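To make this scaling explicit - a back-of-the-envelope argument assuming independent node failures at a constant rate, not a statement from the cited works - note that if each node fails with rate \(\lambda\), a job spanning \(N\) nodes is interrupted by the first failure among them, so

\[
\lambda_{\text{job}} = N\,\lambda \quad\Longleftrightarrow\quad \text{MTBF}_{\text{job}} = \frac{\text{MTBF}_{\text{node}}}{N}.
\]

Keeping the job-level failure rate at the level acceptable for a single-CPU computer therefore requires \(\lambda \leq \lambda_{\text{single}}/N\), i.e., a per-node failure rate roughly 1,000 times lower for \(N = 1{,}000\).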
Components such as the SafeSU or watchdogs can provide error detection capabilities able to trigger recovery actions at the software level. Analogously, logging features can be used to assist recovery by providing relevant information about the detected error, or to anticipate unrecoverable errors by monitoring recoverable ones. For instance, permanent faults can be detected by diagnosing error locations with specific logging capabilities, so that appropriate actions (e.g., CPU replacement in a supercomputer) can be taken before permanent faults lead to any unrecoverable error.

### _Security_

It is well known that all safety-critical systems are also security-critical. This relates to the fact that unintended failures in safety-critical systems could be produced intentionally by an attacker, hence creating at least similar risks. The opposite, however, is not true, and systems can be security-critical but not safety-critical (e.g., a system managing personal information). A subset of the security concerns behave analogously to safety concerns, the only difference being whether their root cause is intended (security concerns) or unintended (safety concerns). For instance, abuse in the access to shared resources may be caused by a faulty application or by a malicious attack. In both cases, the SafeSU could be leveraged to, at least, detect the abuse and take corrective actions. Tracing and logging features could be used to discern intended attacks from unintended faults based on their history or frequency of occurrence, for instance. Security concerns may often require countermeasures to stop or fool attacks. Components such as the SafeTI may be leveraged for that purpose, generating traffic that degrades the ability of the attacker to deduce information from the victim. For instance, in the case of side-channel attacks that learn from the memory access patterns of the victim, traffic can be injected to make the attacker believe such traffic belongs to the victim, so that wrong conclusions are reached (e.g., obtaining a wrong key or failing to sufficiently narrow down the possibilities for the victim's key). In any case, if the safety island is also used for security purposes, a number of additional security-specific technologies may need to be added to the safety (now safety-and-security) island, such as cryptographic accelerators and means for detecting and defeating attacks. Note that these security-related features focus on providing security services to the HPC device, whereas those discussed in Section III-E focus on making the safety island's own operation secure.

## VI Conclusions

There is an increasing need for the use of HPC devices in safety-critical systems, but those devices lack sufficient controllability and observability channels, as well as adequate support to realize key safety measures. Hence, solutions are required to enable the safe use of HPC devices. This paper presents our concept of a safety island and its main constituents to enable the safe use of HPC devices for safety-critical systems. In particular, we analyzed some key tradeoffs related to the degree of coupling of the safety and HPC islands, identified key components that are needed or highly convenient to have in the safety island to fulfill its duties, and assessed some other types of applications of the safety island with overlapping needs, such as applications with RAS and/or security requirements.
## Acknowledgements

The BSC authors' contribution is part of the ISOLDE project, funded by MCIN/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR, and by the European Union's Horizon Europe Programme under the KDT Joint Undertaking (JU) grant agreement No 101112274. This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant PID2019-107255GB-C21, funded by MCIN/AEI/10.13039/501100011033.
2307.00695
Convergence Rate of LQG Mean Field Games with Common Noise
This paper focuses on exploring the convergence properties of a generic player's trajectory and empirical measures in an N-player Linear-Quadratic-Gaussian Nash game, where Brownian motion serves as the common noise. The study establishes three distinct convergence rates concerning the representative player and empirical measure. To investigate the convergence, the methodology relies on a specific decomposition of the equilibrium path in the N-player game and utilizes the associated Mean Field Game framework.
Jiamin Jian, Qingshuo Song, Jiaxuan Ye
2023-07-03T00:59:40Z
http://arxiv.org/abs/2307.00695v1
# Convergence Rate of LQG Mean Field Games with Common Noise ###### Abstract This paper focuses on exploring the convergence properties of a generic player's trajectory and empirical measures in an \(N\)-player Linear-Quadratic-Gaussian Nash game, where Brownian motion serves as the common noise. The study establishes three distinct convergence rates concerning the representative player and empirical measure. To investigate the convergence, the methodology relies on a specific decomposition of the equilibrium path in the \(N\)-player game and utilizes the associated Mean Field Game framework. ## 1 Introduction Mean Field Game (MFG) theory was introduced by Lasry and Lions in their seminal paper ([19]), and by Huang, Caines, and Malhame ([15, 13, 14, 12]). It aims to provide a framework for studying the asymptotic behavior of \(N\)-player differential games being invariant under the reshuffling of the players' indices. For a comprehensive overview of recent advancements and relevant applications of MFG theory, it is recommended to refer to the two-volume book by Carmona and Delarue ([4, 5]) published in 2018 and the references provided therein. Mean Field Games (MFG) have become widely accepted as an approximation for \(N\)-player games, particularly when the number of players, \(N\), is large enough. A fundamental question that arises in this context concerns the convergence rate of this approximation. Convergence can be analyzed from different perspectives, such as convergence in value, the trajectory followed by the representative player, or the behavior of the mean field term. Each of these perspectives offers valuable insights into the behavior and characteristics of the MFG approximation. Furthermore, they raise a variety of intriguing questions within this context. To be more concrete, we examine the behavior of the triangular array \(\hat{X}_{t}^{(N)}=(\hat{X}_{it}^{(N)}:1\leq i\leq N)\) as \(N\to\infty\), where \(\hat{X}_{it}^{(N)}\) represents the equilibrium state of the \(i\)-th player at time \(t\) in the \(N\)-player game, defined within the probability space \(\left(\Omega^{(N)},\mathcal{F}^{(N)},\mathbb{F}^{(N)},\mathbb{P}^{(N)}\right)\). Additionally, we denote \(\hat{X}_{t}\) as the equilibrium path at time \(t\) derived from the associated MFG, defined in the probability space \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\). Considering the identical but not independent distribution \(\mathcal{L}(\hat{X}_{it}^{(N)})\), the first question pertains to the convergence of \(\hat{X}_{1t}^{(N)}\), which represents the generic path. It can be framed as follows: 1. The \(\mathbb{W}_{p}\)-convergence rate of the representative equilibrium path, \[\mathbb{W}_{p}\left(\mathcal{L}\left(\hat{X}_{1t}^{(N)}\right),\mathcal{L} \left(\hat{X}_{t}\right)\right)=O\left(N^{-\gamma}\right).\] Here, \(\mathbb{W}_{p}\) denotes the \(p\)-Wasserstein metric. The existing literature extensively explores the convergence rate in this context. For (Q1), Theorem 2.4.9 of the monograph [3] establishes a convergence rate of \(O(N^{-1/2})\) using the \(\mathbb{W}_{1}\) metric. More recently, [17] addresses (Q1) by introducing displacement monotonicity and controlled common noise, and Theorem 2.23 applies the maximum principle of forward-backward propagation of chaos to achieve the same convergence rate. Within the LQG framework, [18] also provides a convergence rate of \(1/2\) for the representative player. 
The second question pertains to the convergence of the mean-field term, which is equivalent to the convergence of the empirical measure \(\rho(\hat{X}_{t}^{(N)})=\frac{1}{N}\sum_{i=1}^{N}\delta_{\hat{X}_{it}^{(N)}}\) of \(N\) players. Given the Brownian motion, denoted as \(\tilde{W}_{t}\), to be the common noise, the problem lies in determining the rate of convergence of the empirical measures to the MFG equilibrium measure \[\hat{m}_{t}=\mathcal{L}\left(\left.\hat{X}_{t}\right|\mathcal{F}_{t}^{\tilde{W }}\right),\quad\forall t\in(0,T].\] Thus, the second question can be stated as follows: 1. The \(\mathbb{W}_{p}\)-convergence rate of empirical measures in \(L^{p}\) sense, \[\left(\mathbb{E}\left[\mathbb{W}_{p}^{p}\left(\rho\left(\hat{X}_{t}^{(N)} \right),\mathcal{L}\left(\left.\hat{X}_{t}\right|\mathcal{F}_{t}^{\tilde{W}} \right)\right)\right]\right)^{\frac{1}{p}}=O\left(N^{-\gamma}\right).\] As for (Q2), Theorem 3.1 of [8] provides an answer, stating that the empirical measures exhibit a convergence rate of \(O(N^{-1/(2p)})\) in the \(\mathbb{W}_{p}\) distance for \(p\in[1,2]\). In [8], they also explore a related question that is both similar and more intriguing, which concerns the uniform \(\mathbb{W}_{p}\)-convergence rate: 1. The \(t\)-uniform \(\mathbb{W}_{p}\)-convergence rate of empirical measures in \(L^{p}\) sense, \[\left(\mathbb{E}\left[\sup_{t\in[0,T]}\mathbb{W}_{p}^{p}\left(\rho\left(\hat{ X}_{t}^{(N)}\right),\mathcal{L}\left(\left.\hat{X}_{t}\right|\mathcal{F}_{t}^{ \tilde{W}}\right)\right)\right]\right)^{\frac{1}{p}}=O\left(N^{-\gamma}\right).\] The answer provided by Theorem 3.1 in [8] reveals that the uniform convergence rate, as formulated in (Q3), is considerably slower compared to the convergence rate mentioned in (Q2). Specifically, the convergence rate for (Q3) is \(O\left(N^{-1/(d+8)}\right)\) when \(p=2\), where \(d\) represents the dimension of the state space. In our paper, we specifically focus on a class of one-dimensional Linear-Quadratic-Gaussian (LQG) Mean Field Nash Games with Brownian motion as the common noise. It is important to note that the assumptions made in the aforementioned papers except [18] only account for linear growth in the state and control elements for the running cost, thus excluding the consideration of LQG. It is also noted that differences between [18] and the current paper lie in various aspects: (1) The problem setting in our paper considers Brownian motion as the common noise, whereas [18] employs a Markov chain. This discrepancy leads to significant differences in the subsequent analysis; (2) The work in [18] does not address the questions posed in (Q2) and (Q3). Our main contribution is the establishment of the convergence rate of all three questions in the above in LQG framework. Firstly, the paper establishes that the convergence rate of the \(p\)-Wasserstein metric for the distribution of the representative player is \(O(N^{-1/2})\) for \(p\in[1,2]\). Secondly, it demonstrates that the convergence rate of the \(p\)-Wasserstein metric for the empirical measure in the \(L^{p}\) sense is \(O(N^{-1/(2p)})\) for \(p\in[1,2]\). Lastly, the paper shows that the convergence rate of the uniform \(p\)-Wasserstein metric for the empirical measure in the \(L^{p}\) sense is \(O(N^{-1/(2p)})\) for \(p\in(1,2]\), and \(O(N^{-1/2}\ln(N))\) for \(p=1\). It is worth noting that the convergence rates obtained for (Q1) and (Q2) in the LQG framework align with the results found in existing literature, albeit under different conditions. 
Additionally, it is revealed that the uniform convergence rate of (Q3) may be slower than that of (Q2), which is consistent with the observations made by [8] from a similar perspective. Interestingly, when considering the specific case where \(p=2\) and \(d=1\), the uniform convergence rate of (Q3) is established as \(O(N^{-1/9})\) according to [8], while it is determined to be \(O(N^{-1/4})\) within our framework that incorporates the LQG structure. Regarding (Q2), if the states \((\hat{X}^{(N)}_{it}:1\leq i\leq N)\) were independent, the convergence rate could be determined as \(1/(2p)\) based on Theorem 1 of [10] and Theorem 5.8 of [4], which provide convergence rates for empirical measures of independent and identically distributed sequences. However, in the mean-field game, the states \(\hat{X}^{(N)}_{it}\) are not independent of each other, despite having identical distributions. The correlation is introduced mainly by two factors: One is the system coupling arising from the mean-field term and the other is the common noise. Consequently, determining the convergence rate requires understanding the contributions of these two factors to the correlation among players. In our proof, we rely on a specific decomposition (refer to Lemma 6 and the proof of the main theorem) of the underlying states. This decomposition reveals that the states can be expressed as a sum of a weakly correlated triangular array and a common noise. By analyzing the behavior of these components, we can address the correlation and establish the convergence rate. Additionally, it is worth mentioning that a similar technique of dimension reduction in \(N\)-player LQG games have been previously utilized in [16] and related papers to establish decentralized Nash equilibria and the convergence rate in terms of value functions. The remainder of the paper is organized as follows: Section 2 outlines the problem setup and presents the main result. The proof of the main result, which relies on two propositions, is provided in Section 3. We establish the proof for these two propositions in Section 4 and Section 5. Some lemmas are given in the Appendix. ## 2 Problem setup and main results ### The formulation of equilibrium in Mean Field Game In this section, we present the formulation of the Mean Field Game in the sample space \(\Omega\). Let \(T>0\) be a given time horizon. We assume that \(W=\{W_{t}\}_{t\geq 0}\) is a standard Brownian motion constructed on the probability space \((\bar{\Omega},\bar{\mathcal{F}}=\bar{\mathcal{F}}_{T},\bar{\mathbb{P}},\bar{ \mathbb{F}}=\{\bar{\mathcal{F}}_{t}\}_{t\geq 0})\). Similarly, the process \(\tilde{W}=\{\tilde{W}_{t}\}_{t\geq 0}\) is a standard Brownian motion constructed on the probability space \((\tilde{\Omega},\tilde{\mathcal{F}}=\tilde{\mathcal{F}}_{T},\tilde{\mathbb{P} },\bar{\mathbb{F}}=\{\bar{\mathcal{F}}_{t}\}_{t\geq 0})\). We define the product structure as follows: \[\Omega=\bar{\Omega}\times\tilde{\Omega},\quad\mathcal{F},\quad\mathbb{F}=\{ \mathcal{F}_{t}\}_{t\geq 0},\quad\mathbb{P},\] where \((\mathcal{F},\mathbb{P})\) is the completion of \((\bar{\mathcal{F}}\otimes\tilde{\mathcal{F}},\bar{\mathbb{P}}\otimes\tilde{ \mathbb{P}})\) and \(\mathbb{F}\) is the complete and right continuous augmentation of \(\{\bar{\mathcal{F}}_{t}\otimes\tilde{\mathcal{F}}_{t}\}_{t\geq 0}\). Note that, \(W\) and \(\tilde{W}\) are two Brownian motions from separate sample spaces \(\bar{\Omega}\) and \(\tilde{\Omega}\), they are independent of each other in their product space \(\Omega\). 
In our manuscript, \(W\) is called individual or idiosyncratic noise, and \(\tilde{W}\) is called common noise, see their different roles in the problem formulation later defined via fixed point condition (4). To proceed, we denote by \(L^{p}:=L^{p}(\Omega,\mathbb{P})\) the set of random variables \(X\) on \((\Omega,\mathcal{F},\mathbb{P})\) with finite \(p\)-th moment with norm \(\|X\|_{p}=(\mathbb{E}\,[|X|^{p}])^{1/p}\) and by \(L^{p}_{\mathbb{F}}:=L^{p}_{\mathbb{F}}(\Omega\times[0,T])\) the space of all \(\mathbb{R}\) valued \(\mathbb{F}\)-progressively measurable random processes \(\alpha\) such that \[\mathbb{E}\left[\int_{0}^{T}|\alpha_{t}|^{p}dt\right]<\infty.\] Let \(\mathcal{P}_{p}(\mathbb{R})\) denote the Wasserstein space of probability measures \(\mu\) on \(\mathbb{R}\) satisfying \(\int_{\mathbb{R}}x^{p}d\mu(x)<\infty\) endowed with \(p\)-Wasserstein metric \(\mathbb{W}_{p}(\cdot,\cdot)\) defined by \[\mathbb{W}_{p}(\mu,\nu)=\inf_{\pi\in\Pi(\mu,\nu)}\left(\int_{\mathbb{R}\times \mathbb{R}}|x-y|^{p}d\pi(x,y)\right)^{\frac{1}{p}},\] where \(\Pi(\mu,\nu)\) is the collection of all probability measures on \(\mathbb{R}\times\mathbb{R}\) with its marginals agreeing with \(\mu\) and \(\nu\). Let \(X_{0}\in L^{2}\) be a random variable that is independent with \(W\) and \(\tilde{W}\). For any control \(\alpha\in L^{2}_{\mathbb{F}}\), consider the state \(X=\{X_{t}\}_{t\geq 0}\) of the generic player is governed by a stochastic differential equation (SDE) \[dX_{t}=\alpha_{t}dt+dW_{t}+d\tilde{W}_{t} \tag{1}\] with the initial value \(X_{0}\), where the underlying process \(X:[0,T]\times\Omega\mapsto\mathbb{R}\). Given a random measure flow \(m:(0,T]\times\Omega\mapsto\mathcal{P}_{2}(\mathbb{R})\), the generic player wants to minimize the expected accumulated cost on \([0,T]\): \[J(x,\alpha)=\mathbb{E}\left[\left.\int_{0}^{T}\left(\frac{1}{2}\alpha_{s}^{2} +F(X_{s},m_{s})\right)\,ds\right|X_{0}=x\right] \tag{2}\] with some given cost function \(F:\mathbb{R}\times\mathcal{P}_{2}(\mathbb{R})\mapsto\mathbb{R}\). The objective of the control problem for the generic player is to find its optimal control \(\hat{\alpha}\in\mathcal{A}:=L^{4}_{\mathbb{F}}\) to minimize the total cost, i.e., \[V[m](x)=J[m](x,\hat{\alpha})\leq J[m](x,\alpha),\quad\forall\alpha\in \mathcal{A}. \tag{3}\] Associated to the optimal control \(\hat{\alpha}\), we denote the optimal path by \(\hat{X}=\{\hat{X}_{t}\}_{t\geq 0}\). Next, to introduce the MFG Nash equilibrium, it is useful to emphasize the dependence of the optimal path and optimal control of the generic player, as well as its associated value, on the underlying measure flow \(m\). These quantities are denoted as \(\hat{X}_{t}[m]\), \(\hat{\alpha}_{t}[m]\), \(J[m]\), and \(V[m]\), respectively. We now present the definitions of the equilibrium measure, equilibrium path, and equilibrium control. Please also refer to page 127 of [5] for a general setup with a common noise. **Definition 1**.: _Given an initial distribution \(\mathcal{L}(X_{0})=m_{0}\in\mathcal{P}_{2}(\mathbb{R})\), a random measure flow \(\hat{m}=\hat{m}(m_{0})\) is said to be an MFG equilibrium measure if it satisfies the fixed point condition_ \[\hat{m}_{t}=\mathcal{L}\left(\left.\hat{X}_{t}[\hat{m}]\right|\tilde{F}_{t} \right),\ \forall 0<t\leq T,\ \text{ almost surely in }\mathbb{P}. 
\tag{4}\] _The path \(\hat{X}\) and the control \(\hat{\alpha}\) associated with \(\hat{m}\) are called the MFG equilibrium path and equilibrium control, respectively._ The flowchart of the MFG diagram is given in Figure 1. It is noted from the optimality condition (3) and the fixed point condition (4) that \[J[\hat{m}](x,\hat{\alpha})\leq J[\hat{m}](x,\alpha),\quad\forall\alpha\] holds for the equilibrium measure \(\hat{m}\) and its associated equilibrium control \(\hat{\alpha}\), while it is not \[J[\hat{m}](x,\hat{\alpha})\leq J[m](x,\alpha),\quad\forall\alpha,m.\] Otherwise, this problem turns into a McKean-Vlasov control problem, which is essentially different from the current Mean Field Games setup. Readers refer to [7, 6] to see the analysis of this different model as well as some discussion of the differences between these two problems. ### The formulation of Nash equilibrium in \(N\)-player game In this subsection, we set up \(N\)-player game and define the Nash equilibrium of \(N\)-player game in the sample space \(\Omega^{(N)}\). Firstly, let \(W^{(N)}=(W^{(N)}_{i}:i=1,2,\ldots,N)\) be an \(N\)-dimensional standard Brownian motion constructed on the space \((\bar{\Omega}^{(N)},\tilde{\mathcal{F}}^{(N)},\bar{\mathbb{P}}^{(N)},\bar{ \mathbb{F}}^{(N)}=\{\tilde{\mathcal{F}}^{(N)}_{t}\}_{t\geq 0})\) and \(\tilde{W}=\{\tilde{W}_{t}\}_{t\geq 0}\) be the common noise in MFG defined in Section 2.1 on \((\bar{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}})\). The probability space for the \(N\)-player game is \(\left(\Omega^{(N)},\mathcal{F}^{(N)},\mathbb{F}^{(N)},\mathbb{P}^{(N)}\right)\), which is constructed via the product structure with \[\Omega^{(N)}=\bar{\Omega}^{(N)}\times\tilde{\Omega},\quad\mathcal{F}^{(N)}, \quad\mathbb{F}^{(N)}=\left\{\mathcal{F}^{(N)}_{t}\right\}_{t\geq 0},\quad \mathbb{P}^{(N)}.\] where \((\mathcal{F}^{(N)},\mathbb{P}^{(N)})\) is the completion of \((\bar{\mathcal{F}}^{(N)}\otimes\tilde{\mathcal{F}},\bar{\mathbb{P}}^{(N)} \otimes\tilde{\mathbb{P}})\) and \(\mathbb{F}^{(N)}\) is the complete and right continuous augmentation of \(\{\tilde{\mathcal{F}}^{(N)}_{t}\otimes\tilde{\mathcal{F}}_{t}\}_{t\geq 0}\). Consider a stochastic dynamic game with \(N\) players, where each player \(i\in\{1,2,\ldots,N\}\) controls a state process \(X^{(N)}_{i}=\{X^{(N)}_{it}\}_{t\geq 0}\) in \(\mathbb{R}\) given by \[dX^{(N)}_{it}=\alpha^{(N)}_{it}dt+dW^{(N)}_{it}+d\tilde{W}_{t},\quad X^{(N)}_{ i0}=x^{(N)}_{i} \tag{5}\] with a control \(\alpha^{(N)}_{i}\) in an admissible set \(\mathcal{A}^{(N)}:=L^{4}_{\mathbb{P}^{(N)}}\) and random initial state \(x^{(N)}_{i}\). Given the strategies \(\alpha^{(N)}_{-i}=(\alpha^{(N)}_{1},\ldots,\alpha^{(N)}_{i-1},\alpha^{(N)}_{i+ 1},\ldots,\alpha^{(N)}_{N})\) from other players, the objective of player \(i\) is to select a control \(\alpha^{(N)}_{i}\in\mathcal{A}^{(N)}\) to minimize her expected total cost given by \[J^{N}_{i}\left(x^{(N)},\alpha^{(N)}_{i};\alpha^{(N)}_{-i}\right)=\mathbb{E} \left[\,\int_{0}^{T}\left(\frac{1}{2}\left(\alpha^{(N)}_{it}\right)^{2}+F \left(X^{(N)}_{it},\rho\left(X^{(N)}_{t}\right)\right)\right)dt\right|X^{(N)}_ {0}=x^{(N)}\right], \tag{6}\] where \(x^{(N)}=(x^{(N)}_{1},x^{(N)}_{2},\ldots,x^{(N)}_{N})\) is a \(\mathbb{R}^{N}\)-valued random vector in \(\Omega^{(N)}\) to denote the initial state for \(N\) players, and \[\rho\left(x^{(N)}\right)=\frac{1}{N}\sum_{i=1}^{N}\delta_{x^{(N)}_{i}}\] is the empirical measure of the vector \(x^{(N)}\) with Dirac measure \(\delta\). 
We use the notation \(\alpha^{(N)}:=(\alpha^{(N)}_{i},\alpha^{(N)}_{-i})=(\alpha^{(N)}_{1},\alpha^{( N)}_{2},\ldots,\alpha^{(N)}_{N})\) to denote the control from \(N\) players as a whole. Next, we give the equilibrium value function and equilibrium path in the sense of the Nash game. Figure 1: The MFG diagram. **Definition 2**.: 1. _The value function of player_ \(i\) _for_ \(i=1,2,\ldots,N\) _of the Nash game is defined by_ \(V^{N}=(V^{N}_{i}:i=1,2,\ldots,N)\) _satisfying the equilibrium condition_ \[V^{N}_{i}\left(x^{(N)}\right):=J^{N}_{i}\left(x^{(N)},\hat{\alpha}^{(N)}_{i}; \hat{\alpha}^{(N)}_{-i}\right)\leq J^{N}_{i}\left(x^{(N)},\alpha^{(N)}_{i}; \hat{\alpha}^{(N)}_{-i}\right),\quad\forall\alpha^{(N)}_{i}\in\mathcal{A}^{(N)}.\] (7) 2. _The equilibrium path of the_ \(N\)_-player game is the_ \(N\)_-dimensional random path_ \(\hat{X}^{(N)}_{t}=(\hat{X}^{(N)}_{1t},\hat{X}^{(N)}_{2t},\ldots,\hat{X}^{(N)}_ {Nt})\) _driven by (_5_) associated to the control_ \(\hat{\alpha}^{(N)}_{t}\) _satisfying the equilibrium condition of (_7_)._ ### Main result We consider three convergence questions on \(N\)-player game defined in \(\Omega^{(N)}\): The first one is the convergence of the representative path \(\hat{X}^{(N)}_{it}\), the second one is the convergence of the empirical measure \(\rho(\hat{X}^{(N)}_{t})\), while the last one is the \(t\)-uniform convergence of the empirical measure \(\rho(\hat{X}^{(N)}_{t})\). To be precise, we shall assume the following throughout the paper: **Assumption 1**.: * \(\mathbb{E}[|X_{0}|^{q}]<\infty\) _for some_ \(q>4\)_._ * _The initials_ \(X^{(N)}_{i0}\) _of the_ \(N\)_-player game is i.i.d. random variables in_ \(\Omega^{(N)}\) _with the same distribution as_ \(\mathcal{L}(X_{0})\) _in the MFG._ Note that the equilibrium path \(\hat{X}^{(N)}_{t}=(\hat{X}^{(N)}_{it}:i=1,2,\ldots,N)\) is a vector-valued stochastic process. Due to the Assumption 1, the game is invariant to index reshuffling of \(N\) players and the elements in \((\hat{X}^{(N)}_{it}:i=1,2,\ldots,N)\) have identical distributions, but they are not independent of each other. So, the first question on the representative path is indeed about \(\hat{X}^{(N)}_{1t}\) in \(\Omega^{(N)}\) and we are interested in how fast it converges to \(\hat{X}_{t}\) in \(\Omega\) in distribution: 1. The \(\mathbb{W}_{p}\)-convergence rate of the representative equilibrium path, \[\mathbb{W}_{p}\left(\mathcal{L}\left(\hat{X}^{(N)}_{1t}\right),\mathcal{L} \left(\hat{X}_{t}\right)\right)=O\left(N^{-?}\right).\] The second question is about the convergence of the empirical measure \(\rho(\hat{X}^{(N)}_{t})\) of the \(N\)-player game defined by \[\rho\left(\hat{X}^{(N)}_{t}\right)=\frac{1}{N}\sum_{i=1}^{N}\delta_{\hat{X}^{ (N)}_{it}}.\] We are interested in how fast this converges to the MFG equilibrium measure given by \[\hat{m}_{t}=\mathcal{L}\left(\left.\hat{X}_{t}\right|\tilde{\mathcal{F}}_{t} \right),\quad\forall t\in(0,T].\] 2. The \(\mathbb{W}_{p}\)-convergence rate of empirical measures, \[\mathbb{W}_{p}\left(\rho\left(\hat{X}^{(N)}_{t}\right),\mathcal{L}\left(\left. \hat{X}_{t}\right|\tilde{\mathcal{F}}_{t}\right)\right)=O\left(N^{-?}\right).\] Note that the left-hand side of the above equality is a random quantity and one shall be more precise about what the Big \(O\) notation means in this context. Indeed, by the definition of the empirical measure, \(\rho(\hat{X}^{(N)}_{t})\) is a random distribution measurable by \(\sigma\)-algebra generated by the random vector \(\hat{X}_{t}^{(N)}\). 
On the other hand, \(\mathcal{L}(\hat{X}_{t}|\tilde{\mathcal{F}}_{t})\) is a random distribution measurable by the \(\sigma\)-algebra \(\tilde{\mathcal{F}}_{t}\). Therefore, from the construction of the product probability space \(\Omega^{(N)}\) in Section 2.2, both random distributions \(\rho(\hat{X}_{t}^{(N)})\) and \(\mathcal{L}(\hat{X}_{t}|\tilde{\mathcal{F}}_{t})\) are measurable with respect to \(\mathcal{F}_{t}^{(N)}=\bar{\mathcal{F}}_{t}^{(N)}\otimes\tilde{\mathcal{F}}_ {t}\). Consequently, \(\mathbb{W}_{p}(\rho(\hat{X}_{t}^{(N)}),\mathcal{L}(\hat{X}_{t}|\tilde{ \mathcal{F}}_{t}))\) is a random variable in the probability space \((\Omega^{(N)},\mathcal{F}^{(N)},\mathbb{P}^{(N)})\) and we will focus on a version of (Q2') in the \(L^{p}\) sense: 1. The \(\mathbb{W}_{p}\)-convergence rate of empirical measures in \(L^{p}\) sense for each \(t\in[0,T]\), \[\left(\mathbb{E}\left[\mathbb{W}_{p}^{p}\left(\rho\left(\hat{X}_{t}^{(N)} \right),\mathcal{L}\left(\left.\hat{X}_{t}\right|\tilde{\mathcal{F}}_{t}\right) \right)\right]\right)^{\frac{1}{p}}=O\left(N^{-?}\right).\] In addition, we also study the following related question: 2. The \(t\)-uniform \(\mathbb{W}_{p}\)-convergence rate of empirical measures in \(L^{p}\) sense, \[\left(\mathbb{E}\left[\sup_{0\leq t\leq T}\mathbb{W}_{p}^{p}\left(\rho\left( \hat{X}_{t}^{(N)}\right),\mathcal{L}\left(\left.\hat{X}_{t}\right|\tilde{ \mathcal{F}}_{t}\right)\right)\right]\right)^{\frac{1}{p}}=O\left(N^{-?}\right).\] In this paper, we will study the above three questions (Q1), (Q2), and (Q3) in the framework of LQG structure with Brownian motion as a common noise with the following function \(F\) in the cost functional (2). **Assumption 2**.: _Let the function \(F:\mathbb{R}\times\mathcal{P}_{2}(\mathbb{R})\mapsto\mathbb{R}\) be given in the form of_ \[F(x,m)=k\int_{\mathbb{R}}(x-z)^{2}m(dz)=k(x^{2}-2x[m]_{1}+[m]_{2}) \tag{8}\] _for some \(k>0\), where \([m]_{1},[m]_{2}\) are the first and second moment of the measure \(m\)._ The main result of this paper is presented below. Let us recall that \(q\) denotes the parameter defined in Assumption 1. **Theorem 1**.: _Under Assumptions 1-2, for any \(p\in[1,2]\), we have_ 1. _The_ \(\mathbb{W}_{p}\)_-convergence rate of the representative equilibrium path is_ \(1/2\)_, i.e.,_ \[\mathbb{W}_{p}\left(\mathcal{L}\left(\hat{X}_{1t}^{(N)}\right),\mathcal{L} \left(\hat{X}_{t}\right)\right)=O\left(N^{-\frac{1}{2}}\right).\] 2. _The_ \(\mathbb{W}_{p}\)_-convergence rate of empirical measures in_ \(L^{p}\) _sense is_ \[\mathbb{E}\left[\mathbb{W}_{p}^{p}\left(\rho\left(\hat{X}_{t}^{(N)}\right), \mathcal{L}\left(\left.\hat{X}_{t}\right|\tilde{\mathcal{F}}_{t}\right) \right)\right]=O\left(N^{-\frac{1}{2}}\right).\] 3. _The uniform_ \(\mathbb{W}_{p}\)_-convergence rate of empirical measures in_ \(L^{p}\) _sense is_ \[\mathbb{E}\left[\sup_{0\leq t\leq T}\mathbb{W}_{p}^{p}\left(\rho\left(\hat{X}_ {t}^{(N)}\right),\mathcal{L}\left(\left.\hat{X}_{t}\right|\tilde{\mathcal{F}} _{t}\right)\right)\right]=\begin{cases}O\left(N^{-\frac{1}{2}}\ln(N)\right),& \text{ if }p=1,\\ O\left(N^{-\frac{1}{2}}\right),&\text{ if }1<p\leq 2.\end{cases}\] We would like to provide some additional remarks on our main result. Firstly, the cost function \(F\) defined in (6) applies to the running cost for the \(i\)-th player in the \(N\)-player game, and it takes the form: \[F\left(X_{it}^{(N)},\rho\left(X_{t}^{(N)}\right)\right)=\frac{k}{N}\sum_{j=1}^{N }\left(X_{it}^{(N)}-X_{jt}^{(N)}\right)^{2}. 
\tag{9}\] Interestingly, if \(k<0\), although \(F\) does satisfy the Lasry-Lions monotonicity ([2]) as demonstrated in Appendix 6.1 of [18], there is no global solution for MFG due to the concavity in \(x\). On the contrary, when \(k>0\), \(F\) satisfies the displacement monotonicity proposed in [11] as shown by the following derivation: \[\mathbb{E}\left[(F_{x}(X_{1},\mathcal{L}(X_{1}))-F_{x}(X_{2},\mathcal{L}(X_{2 })))(X_{1}-X_{2})\right]=2k\left(\mathbb{E}\left[(X_{1}-X_{2})^{2}\right]-( \mathbb{E}[X_{1}-X_{2}])^{2}\right)\geq 0.\] ## 3 Proof of the main result with two propositions Our objective is to investigate the relations between \((\hat{X}_{1t}^{(N)},\hat{X}_{2t}^{(N)},\ldots,\hat{X}_{Nt}^{(N)})\) and \(\hat{X}_{t}\) described in (Q1), (Q2), and (Q3). In this part, we will give the proof of Theorem 1 based on two propositions whose proof will be given later. **Proposition 1**.: _Under Assumptions 1-2, the MFG equilibrium path \(\hat{X}=\hat{X}[\hat{m}]\) is given by_ \[d\hat{X}_{t}=-2a(t)\left(\hat{X}_{t}-\hat{\mu}_{t}\right)dt+dW_{t}+d\tilde{W}_ {t},\quad\hat{X}_{0}=X_{0}, \tag{10}\] _where \(a\) is the solution of_ \[a^{\prime}(t)-2a^{2}(t)+k=0,\quad a(T)=0, \tag{11}\] _and \(\hat{\mu}\) is_ \[\hat{\mu}_{t}:=\mathbb{E}\left[\left.\hat{X}_{t}\right|\tilde{\mathcal{F}}_{t }\right]=\mathbb{E}[X_{0}]+\tilde{W}_{t}.\] _Moreover, the equilibrium control follows_ \[\hat{\alpha}_{t}=-2a(t)\left(\hat{X}_{t}-\hat{\mu}_{t}\right). \tag{12}\] **Proposition 2**.: _Suppose Assumptions 1-2 hold. For the \(N\)-player game, the path and the control of player \(i\) under the equilibrium are given by_ \[d\hat{X}_{it}^{(N)}=-2a^{N}(t)\left(\hat{X}_{it}^{(N)}-\frac{1}{N-1}\sum_{j \neq i}^{N}\hat{X}_{jt}^{(N)}\right)dt+dW_{it}^{(N)}+d\tilde{W}_{t}, \tag{13}\] _and_ \[\hat{\alpha}_{it}^{(N)}=-2a^{N}(t)\left(\hat{X}_{it}^{(N)}-\frac{1}{N-1}\sum_{ j\neq i}^{N}\hat{X}_{jt}^{(N)}\right)\] _respectively for \(i=1,2,\ldots,N\), where \(a^{N}\) is the solution of_ \[a^{\prime}-\frac{2(N+1)}{N-1}a^{2}+\frac{N-1}{N}k=0,\quad a(T)=0. \tag{14}\] ### Preliminaries We first recall the convergence rate of empirical measures of i.i.d. sequence provided in Theorem 1 of [10] and Theorem 5.8 of [4]. **Lemma 1**.: _Let \(d=1\) or \(2\). Suppose \(\{X_{i}:i\in\mathbb{N}\}\) is a sequence of \(d\) dimensional i.i.d. random variables with \(\mathbb{E}[|X_{1}|^{q}]<\infty\) for some \(q>4\). Then, the empirical measure_ \[\rho^{N}(X)=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{i}}\] _satisfies_ \[\mathbb{E}\left[\mathbb{W}_{p}^{p}\left(\rho^{N}(X),\mathcal{L}(X_{1})\right) \right]=\begin{cases}O\left(N^{-1/2}\right),&\text{if }p\in(1,2],\\ O\left(N^{-1/2}\right),&\text{if }p=1,d=1,\\ O\left(N^{-1/2}\ln N\right),&\text{if }p=1,d=2.\end{cases}\] Next, we give the definition of some notations that will be used in the following part. Denote \(C_{b}(\mathbb{R}^{d})\) to be the collection of bounded and continuous functions on \(\mathbb{R}^{d}\), and let \(C_{b}^{1}(\mathbb{R}^{d})\subset C_{b}(\mathbb{R}^{d})\) be the space of functions on \(\mathbb{R}^{d}\) whose first order derivative is also bounded and continuous. **Lemma 2**.: _Suppose \(m_{1},m_{2}\) are two probability measures on \(\mathcal{B}(\mathbb{R}^{d})\) and \(f\in C_{b}^{1}(\mathbb{R}^{d},\mathbb{R})\), where \(\mathcal{B}(\mathbb{R}^{d})\) is the Borel set on \(\mathbb{R}^{d}\). 
Then,_ \[\mathbb{W}_{p}(f_{*}m_{1},f_{*}m_{2})\leq|Df|_{0}\mathbb{W}_{p}(m_{1},m_{2}),\] _where \(f_{*}m_{j}\) is the pushforward measure for \(j=1,2\), and \(|Df|_{0}=\sup_{x\in\mathbb{R}^{d}}\max\{|\partial_{x_{i}}f(x)|:i=1,2,\ldots,d\}\)._ Proof.: We define a function \(F(x,y)=(f(x),f(y)):\mathbb{R}^{2d}\mapsto\mathbb{R}^{2}\). Note that, for any \(\pi\in\Pi(m_{1},m_{2})\), \(F_{*}\pi\in\Pi(f_{*}m_{1},f_{*}m_{2})\), i.e., \[F_{*}\Pi(m_{1},m_{2})\subset\Pi(f_{*}m_{1},f_{*}m_{2}).\] Therefore, we have the following inequalities: \[\mathbb{W}_{p}^{p}(f_{*}m_{1},f_{*}m_{2}) =\inf_{\pi^{\prime}\in\Pi(f_{*}m_{1},f_{*}m_{2})}\int_{\mathbb{R} ^{2}}|x-y|^{p}\pi^{\prime}(dx,dy)\] \[\leq\inf_{\pi^{\prime}\in F_{*}\Pi(m_{1},m_{2})}\int_{\mathbb{R} ^{2}}|x-y|^{p}\pi^{\prime}(dx,dy)\] \[=\inf_{\pi\in\Pi(m_{1},m_{2})}\int_{\mathbb{R}^{2d}}|f(x)-f(y)|^{ p}\pi(dx,dy)\] \[\leq|Df|_{0}^{p}\inf_{\pi\in\Pi(m_{1},m_{2})}\int_{\mathbb{R}^{2d }}|x-y|^{p}\pi(dx,dy)\] \[=|Df|_{0}^{p}\mathbb{W}_{p}^{p}(m_{1},m_{2}).\] **Lemma 3**.: _Let \(\{X_{i}:i\in\mathbb{N}\}\) be a sequence of \(d\) dimensional random variables in \((\Omega,\mathcal{F},\mathbb{P})\). Let \(f\in C_{b}^{1}(\mathbb{R}^{d})\). We also denote by \(f(X)\) the sequence \(\{f(X_{i}):i\in\mathbb{N}\}\). Then_ \[\mathbb{W}_{p}\left(\rho^{N}(f(X)),\mathcal{L}(f(X_{1}))\right)\leq|Df|_{0} \mathbb{W}_{p}\left(\rho^{N}(X),\mathcal{L}(X_{1})\right),\ \ \text{almost surely}\] _where \(|Df|_{0}=\sup_{x\in\mathbb{R}^{d}}\max\{|\partial_{x_{i}}f(x)|:i=1,2,\ldots,d\}\)._ Proof.: For any sequence \(\{c_{i}:i\in\mathbb{N}\}\) in \(\mathbb{R}^{d}\), the empirical measure \(\rho^{N}(c):=\frac{1}{N}\sum_{i=1}^{N}\delta_{c_{i}}\) satisfies \[\rho^{N}(f(c))=f_{*}\rho^{N}(c),\] since \[\langle\phi,\rho^{N}(f(c))\rangle=\frac{1}{N}\sum_{i=1}^{N}\phi(f(c_{i}))= \langle\phi\circ f,\rho^{N}(c)\rangle,\quad\forall\phi\in C_{b}(\mathbb{R}^{d}).\] This implies that \[\rho^{N}(f(X))=f_{*}\rho^{N}(X),\ \text{ almost surely}.\] On the other hand, we also have \[\mathcal{L}(f(X_{1}))(A)=\mathbb{P}(f(X_{1})\in A)=\mathbb{P}(X_{1}\in f^{-1} (A))=f_{*}\mathcal{L}(X_{1})(A),\quad\forall A\in\mathcal{B}(\mathbb{R}^{d}).\] Therefore, the conclusion follows by applying Lemma 2. ### Empirical measures of a sequence with a common noise We are going to apply lemmas from the previous subsection to study the convergence of empirical measures of a sequence with a common noise in the following sense. **Definition 3**.: _We say a sequence of random variables \(X=\{X_{i}:i\in\mathbb{N}\}\) is a sequence with a common noise, if there exists a random variable \(\beta\) such that_ * \(X-\beta=\{X_{i}-\beta:i\in\mathbb{N}\}\) _is a sequence of i.i.d. random variables,_ * \(\beta\) _is independent to_ \(X-\beta\)_._ By this definition, a sequence with a common noise is i.i.d. if and only if \(\beta\) is a deterministic constant. **Example 1**.: _Let \(q>4\) be a given constant and \(X=\{X_{i}:i\in\mathbb{N}\}\) be a \(1\)-dimensional sequence of \(L^{q}\) random variables with a common noise term \(\beta\), where_ \[X_{i}-\beta=\gamma_{i}+\sigma\alpha_{i}.\] _In above, \(\{(\alpha_{i},\gamma_{i}):i\in\mathbb{N}\}\) is a sequence of \(2\)-dimensional i.i.d. random variables independent to \(\beta\), and \(\sigma\) is a given non-negative constant. Let \(\rho^{N}(X)\) be the empirical measure defined by_ \[\rho^{N}(X)=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{i}}.\] The first question is 1. In Example 1, where does \(\rho^{N}(X)\) converge to? 
For any test function \(\phi\in C_{b}(\mathbb{R})\), \[\langle\phi,\rho^{N}(X)\rangle=\frac{1}{N}\sum_{i=1}^{N}\phi(X_{i})=\frac{1}{ N}\sum_{i=1}^{N}\phi(\gamma_{i}+\sigma\alpha_{i}+\beta).\] Since \(\beta\) is independent to \((\alpha_{i},\gamma_{i})\), by Example 4.1.5 of [9] together with the Law of Large Numbers, we have \[\frac{1}{N}\sum_{i=1}^{N}\phi(\gamma_{i}+\sigma\alpha_{i}+c)\to\mathbb{E}[\phi( \gamma_{1}+\sigma\alpha_{1}+c)]=\mathbb{E}[\phi(\gamma_{1}+\sigma\alpha_{1}+ \beta)|\beta=c],\quad\forall c\in\mathbb{R}.\] Therefore, we conclude that \[\langle\phi,\rho^{N}(X)\rangle \to\mathbb{E}[\phi(\gamma_{1}+\sigma\alpha_{1}+\beta)|\beta], \quad\beta-a.s.\] \[=\langle\phi,\mathcal{L}(\gamma_{1}+\sigma\alpha_{1}+\beta|\beta )),\quad\beta-a.s.\] Hence, the answer for the (Qa) is * \(\rho^{N}(X)\Rightarrow\mathcal{L}(X_{1}|\beta)\), \(\beta\)-a.s. More precisely, since all random variables are square-integrable, the weak convergence implies, for all \(p\in[1,2]\), \[\mathbb{W}_{p}\left(\rho^{N}(X),\mathcal{L}\left(X_{1}|\beta\right)\right) \to 0,\quad\beta-a.s.\] The next question is * In Example 1, what's the convergence rate in the sense \(\mathbb{E}\left[\mathbb{W}_{p}^{p}\left(\rho^{N}(X),\mathcal{L}\left(X_{1}| \beta\right)\right)\right]\)? Since \(\beta\) is independent to \(\gamma_{1}+\sigma\alpha_{1}\), by Example 4.1.5 of [9], we have \[\mathbb{E}[\phi(\gamma_{1}+\sigma\alpha_{1}+\beta)|\beta=c]=\mathbb{E}[\phi( \gamma_{1}+\sigma\alpha_{1}+c)],\quad\forall\phi\in C_{b}(\mathbb{R}),c\in \mathbb{R},\] or equivalently, if one takes \(c=\beta(\omega)\), \[\mathcal{L}(X_{1}|\beta)(\omega)=\mathcal{L}(\gamma_{1}+\sigma\alpha_{1}+\beta |\beta)(\omega)=\mathcal{L}(\gamma_{1}+\sigma\alpha_{1}+c).\] On the other hand, with \(c=\beta(\omega)\), \[\rho^{N}(X)(\omega)=\rho^{N}(X(\omega))=\frac{1}{N}\sum_{i=1}^{N}\delta_{ \gamma_{i}(\omega)+\sigma\alpha_{i}(\omega)+c}.\] From the above two identities, with \(c=\beta(\omega)\), we can write \[\mathbb{W}_{p}\left(\rho^{N}(X)(\omega),\mathcal{L}(X_{1}|\beta=c)(\omega) \right)=\mathbb{W}_{p}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{\gamma_{i}(\omega )+\sigma\alpha_{i}(\omega)+c},\mathcal{L}(\gamma_{1}+\sigma\alpha_{1}+c) \right).\] (15) Now we can conclude (Qb) in the next lemma. **Lemma 4**.: _Let \(p\in[1,2]\) be a given constant. For a sequence \(X=\{X_{i}:i\in\mathbb{N}\}\) with a common noise \(\beta\) as of Example 1, we have_ \[\mathbb{E}\left[\mathbb{W}_{p}^{p}\left(\rho^{N}(X),\mathcal{L}(X_{1}|\beta) \right)\right]=O\left(N^{-\frac{1}{2}}\right).\] Proof.: Originally, \(X_{i}=\gamma_{i}+\sigma\alpha_{i}+\beta\) of Example 1 are dependent due to the common term \(\beta\). We apply (49) in Lemma 11 in Appendix to (15) and obtain \[\mathbb{W}_{p}\left(\rho^{N}(X)(\omega),\mathcal{L}(X_{1}|\beta) (\omega)\right) =\mathbb{W}_{p}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{\gamma_{i}( \omega)+\sigma\alpha_{i}(\omega)+\beta(\omega)},\mathcal{L}(\gamma_{1}+\sigma \alpha_{1}+\beta(\omega))\right)\] \[=\mathbb{W}_{p}\left(\rho^{N}(\gamma(\omega)+\sigma\alpha(\omega) ),\mathcal{L}(\gamma_{1}+\sigma\alpha_{1})\right).\] Now, the convergence of empirical measures is equivalent to the ones of i.i.d. sequence \(\{\gamma_{i}+\sigma\alpha_{i}:i\in\mathbb{N}\}\). The conclusion follows from Lemma 1. Next, we present the uniform convergence rate by combining Lemma 3. **Lemma 5**.: _In Example 1, we use \(X(\sigma)\) to denote \(X\) to emphasize its dependence on \(\sigma\). 
Then,_ \[\mathbb{E}\left[\sup_{\sigma\in[0,1]}\mathbb{W}_{p}^{p}\left(\rho^{N}(X(\sigma)),\mathcal{L}\left(X_{1}(\sigma)|\beta\right)\right)\right]=\begin{cases}O \left(N^{-\frac{1}{2}}\ln(N)\right),&\text{ if }p=1,\\ O\left(N^{-\frac{1}{2}}\right),&\text{ if }1<p\leq 2.\end{cases}\] Proof.: Note that, by (49) in Lemma 11 in Appendix, \[\mathbb{W}_{p}^{p}\left(\rho^{N}(X(\sigma)),\mathcal{L}\left(X_{1}(\sigma)| \beta\right)\right)=\mathbb{W}_{p}^{p}\left(\rho^{N}(\gamma_{i}+\sigma\alpha_{ i}),\mathcal{L}\left(\gamma_{1}+\sigma\alpha_{1}\right)\right).\] Next, applying Lemma 3 with \(f(x,y)=x+\sigma y\), we obtain \[\sup_{\sigma\in[0,1]}\mathbb{W}_{p}^{p}\left(\rho^{N}(\gamma_{i} +\sigma\alpha_{i}),\mathcal{L}\left(\gamma_{1}+\sigma\alpha_{1}\right)\right) \leq\sup_{\sigma\in[0,1]}\max\{1,\sigma^{p}\}\mathbb{W}_{p}^{p} \left(\rho^{N}((\gamma,\alpha)),\mathcal{L}\left((\gamma_{1},\alpha_{1}) \right)\right)\] \[=\mathbb{W}_{p}^{p}\left(\rho^{N}((\gamma,\alpha)),\mathcal{L} \left((\gamma_{1},\alpha_{1})\right)\right).\] At last, using Lemma 1 for the \(2\)-dimensional i.i.d. sequence \(\{(\gamma_{i},\alpha_{i}):i\in\mathbb{N}\}\), we obtain the desired conclusion. ### Generalization of the convergence to triangular arrays Unfortunately, \((\hat{X}_{1t}^{(N)},\hat{X}_{2t}^{(N)},\ldots,\hat{X}_{Nt}^{(N)})\) of the \(N\)-player's game does not have a clean structure with a common noise term \(\beta\) given in Example 1. Therefore, we need a generalization of the convergence result in Example 1 to a triangular array. To proceed, we provide the following lemma. **Lemma 6**.: _Let \(\sigma>0\), \(q>4\), and_ \[X_{i}^{N}(\sigma)=\gamma_{i}^{N}+\sigma\alpha_{i}^{N}+\Delta_{i}^{N}(\sigma)+ \beta,\text{ and }\hat{X}(\sigma)=\hat{\gamma}+\sigma\hat{\alpha}+\beta,\] _where_ * \((\gamma^{N},\alpha^{N})=\{(\gamma_{i}^{N},\alpha_{i}^{N}):i\in\mathbb{N}\}\) _is a sequence of_ \(2\)_-dimensional i.i.d. random variables with distribution identical to_ \(\mathcal{L}((\hat{\gamma},\hat{\alpha}))\) _with_ \((\hat{\gamma},\hat{\alpha})\in L^{q}\) _for some_ \(q>4\)_,_ * \(\beta\in L^{q}\) _is independent to the random variables_ \((\gamma_{i}^{N},\alpha_{i}^{N},\hat{\gamma},\hat{\alpha})\)_,_ * \(\max_{i=1,2,\ldots,N}\mathbb{E}\left[\sup_{\sigma\in[0,1]}|\Delta_{i}^{N}( \sigma)|^{2}\right]=O(N^{-1})\)_._ _Let \(\rho^{N}(X^{N})\) be the empirical measure given by_ \[\rho^{N}(X^{N})=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{i}^{N}}.\] _Then, we have the following three results: For \(p\in[1,2]\),_ \[\mathbb{W}_{p}\left(\mathcal{L}\left(X_{1}^{N}(\sigma)\right), \mathcal{L}\left(\hat{X}(\sigma)\right)\right)=O\left(N^{-\frac{1}{2}}\right), \tag{16}\] \[\sup_{\sigma\in[0,1]}\mathbb{E}\left[\mathbb{W}_{p}^{p}\left(\rho ^{N}\left(X^{N}(\sigma)\right),\mathcal{L}\left(\hat{X}(\sigma)\Big{|}\,\beta \right)\right)\right]=O\left(N^{-\frac{1}{2}}\right), \tag{17}\] _and_ \[\mathbb{E}\left[\sup_{\sigma\in[0,1]}\mathbb{W}_{p}^{p}\left(\rho^{N}\left(X^ {N}(\sigma)\right),\mathcal{L}\left(\hat{X}(\sigma)\Big{|}\,\beta\right)\right) \right]=\begin{cases}O\left(N^{-\frac{1}{2}}\ln(N)\right),&\text{ if }p=1,\\ O\left(N^{-\frac{1}{2}}\right),&\text{ if }p>1.\end{cases} \tag{18}\] Proof.: We will omit the dependence of \(\sigma\) if there is no confusion, for instance, we use \(X\) in lieu of \(X(\sigma)\). 
Since \(\mathcal{L}(\hat{X})=\mathcal{L}(X_{1}^{N}-\Delta_{1}^{N})\), the first result (16) directly follows from \[\mathbb{W}_{p}^{p}\left(\mathcal{L}\left(X_{1}^{N}\right),\mathcal{L}\left(\hat{X}\right)\right)\leq\mathbb{E}\left[\left|\Delta_{1}^{N}\right|^{p}\right]\leq\left(\mathbb{E}\left[\left|\Delta_{1}^{N}\right|^{2}\right]\right)^{\frac{p}{2}}=O\left(N^{-\frac{p}{2}}\right).\] Next, we set \(Y_{i}^{N}(\sigma)=\gamma_{i}^{N}+\sigma\alpha_{i}^{N}+\beta\). By the definition of empirical measures, we have \[\mathbb{W}_{p}^{p}\left(\rho^{N}\left(X^{N}\right),\rho^{N}\left(Y^{N}\right)\right)\leq\frac{1}{N}\sum_{i=1}^{N}\left|X_{i}^{N}-Y_{i}^{N}\right|^{p}=\frac{1}{N}\sum_{i=1}^{N}\left|\Delta_{i}^{N}\right|^{p}. \tag{19}\] From the third condition on \(\Delta_{i}^{N}\), we obtain \[\mathbb{E}\left[\mathbb{W}_{p}^{p}\left(\rho^{N}\left(X^{N}\right),\rho^{N}\left(Y^{N}\right)\right)\right]=O\left(N^{-\frac{p}{2}}\right).\] By Lemma 4, we also have \[\mathbb{E}\left[\mathbb{W}_{p}^{p}\left(\rho^{N}\left(Y^{N}\right),\mathcal{L}\left(\left.\hat{X}\right|\beta\right)\right)\right]=O\left(N^{-\frac{1}{2}}\right).\] In the end, (17) follows from the triangle inequality together with the fact that \(p\geq 1\). Finally, for the proof of (18), we first use (19) and then, applying Lemma 5 and the third condition on \(\Delta_{i}^{N}(\sigma)\), we can conclude (18).

### Proof of Theorem 1

For simplicity, let us introduce the following notations: \[\mathcal{E}_{t}(a)=\exp\left\{\int_{0}^{t}a(s)ds\right\},\quad\mathcal{E}_{t}(a,M)=\int_{0}^{t}\mathcal{E}_{s}(a)dM_{s}\] for a deterministic function \(a(\cdot)\) and a martingale \(M=\{M_{t}\}_{t\geq 0}\). With these notations, one can write the solution to the Ornstein-Uhlenbeck process \[dX_{t}=-a_{t}X_{t}dt+dM_{t}\] for a deterministic function \(a\) in the form of \[\mathcal{E}_{t}(a)X_{t}=X_{0}+\mathcal{E}_{t}(a,M). \tag{20}\] For the MFG equilibrium, we define \[\tilde{X}_{t}=\hat{X}_{t}-\hat{\mu}_{t}.\] According to (10) in Proposition 1, \(\tilde{X}\) satisfies the following equation: \[\tilde{X}_{t}=\tilde{X}_{0}-\int_{0}^{t}2a_{s}\tilde{X}_{s}ds+W_{t}.\] Next, we express the solution of the above SDE in the form of \[\tilde{Y}_{t}:=\mathcal{E}_{t}(2a)\tilde{X}_{t}=\tilde{X}_{0}+\mathcal{E}_{t}(2a,W).\] Note that \(\tilde{Y}\) and \(\hat{\mu}\) are independent.
Therefore, \(\hat{X}\) admits a decomposition into two independent processes as \[\hat{X}_{t}=\tilde{X}_{t}+\hat{\mu}_{t}.\] Furthermore, we have \[\hat{Y}_{t}:=\mathcal{E}_{t}(2a)\hat{X}_{t}=\tilde{X}_{0}+\mathcal{E}_{t}(2a,W)+\mathcal{E}_{t}(2a)\left(\hat{\mu}_{0}+\tilde{W}_{t}\right).\] In the \(N\)-player game, we define the following quantities: \[\bar{X}_{t}^{(N)}=\frac{1}{N}\sum_{i=1}^{N}\hat{X}_{it}^{(N)},\quad\bar{W}_{t}^{(N)}=\frac{1}{N}\sum_{i=1}^{N}W_{it}^{(N)},\] and \[\tilde{X}_{it}^{(N)}=\hat{X}_{it}^{(N)}-\bar{X}_{t}^{(N)}.\] It is worth noting that, by Proposition 2, we have \[\hat{X}_{it}^{(N)}=\hat{X}_{i0}^{(N)}-\int_{0}^{t}2\frac{N}{N-1}a^{N}(s)\left(\hat{X}_{is}^{(N)}-\frac{1}{N}\sum_{j=1}^{N}\hat{X}_{js}^{(N)}\right)ds+W_{it}^{(N)}+\tilde{W}_{t}\] for all \(i=1,2,\ldots,N\), then the mean-field term satisfies \[\bar{X}_{t}^{(N)}=\bar{X}_{0}^{(N)}+\bar{W}_{t}^{(N)}+\tilde{W}_{t}\] and the \(i\)-th player's path deviated from the mean-field path can be rewritten as \[\tilde{X}_{it}^{(N)}=\tilde{X}_{i0}^{(N)}-\int_{0}^{t}2\hat{a}^{N}(s)\tilde{X}_{is}^{(N)}ds+W_{it}^{(N)}-\bar{W}_{t}^{(N)},\] where \[\hat{a}^{N}=\frac{N}{N-1}a^{N}.\] Next, we introduce \[\hat{Y}_{it}^{(N)}=\mathcal{E}_{t}\left(2\hat{a}^{N}\right)\hat{X}_{it}^{(N)},\quad\tilde{Y}_{it}^{(N)}=\mathcal{E}_{t}\left(2\hat{a}^{N}\right)\tilde{X}_{it}^{(N)},\quad\bar{Y}_{t}^{(N)}=\mathcal{E}_{t}\left(2\hat{a}^{N}\right)\bar{X}_{t}^{(N)}.\] Consequently, we obtain the following relationships: \[\tilde{Y}_{it}^{(N)}=\tilde{X}_{i0}^{(N)}+\mathcal{E}_{t}\left(2\hat{a}^{N},W_{i}^{(N)}-\bar{W}^{(N)}\right),\] \[\bar{Y}_{t}^{(N)}=\mathcal{E}_{t}\left(2\hat{a}^{N}\right)\left(\bar{W}_{t}^{(N)}+\tilde{W}_{t}+\bar{X}_{0}^{(N)}\right),\] and \[\hat{Y}_{it}^{(N)}=\bar{Y}_{t}^{(N)}+\tilde{Y}_{it}^{(N)}.\] To compare the process \(\hat{Y}_{it}^{(N)}\) with the target process \[\begin{split}\hat{Y}_{t}&=\tilde{X}_{0}+\mathcal{E}_{t}\left(2a,W\right)+\mathcal{E}_{t}(2a)\left(\hat{\mu}_{0}+\tilde{W}_{t}\right)\\ &=\tilde{X}_{0}+\sigma_{t}Z_{t}+\mathcal{E}_{t}(2a)\left(\hat{\mu}_{0}+\tilde{W}_{t}\right),\end{split} \tag{21}\] where \[\sigma_{t}=\left(\int_{0}^{t}\mathcal{E}_{s}(4a)ds\right)^{1/2},\] and \[Z_{t}=\sigma_{t}^{-1}\mathcal{E}_{t}\left(2a,W\right)\sim\mathcal{N}(0,1),\] we write \(\hat{Y}_{it}^{(N)}\) as \[\begin{split}\hat{Y}_{it}^{(N)}&=\tilde{X}_{i0}^{(N)}+\mathcal{E}_{t}\left(2a,W_{i}^{(N)}\right)+\Delta_{it}^{(N)}+\mathcal{E}_{t}(2a)\left(\hat{\mu}_{0}+\tilde{W}_{t}\right)\\ &=\tilde{X}_{i0}^{(N)}+\sigma_{t}Z_{it}^{(N)}+\Delta_{it}^{(N)}+\mathcal{E}_{t}(2a)\left(\hat{\mu}_{0}+\tilde{W}_{t}\right),\end{split} \tag{22}\] where \[Z_{it}^{(N)}=\sigma_{t}^{-1}\mathcal{E}_{t}\left(2a,W_{i}^{(N)}\right)\sim\mathcal{N}(0,1),\] and \[\begin{split}\Delta_{it}^{(N)}&=\left(\mathcal{E}_{t}\left(2\hat{a}^{N},W_{i}^{(N)}\right)-\mathcal{E}_{t}\left(2a,W_{i}^{(N)}\right)\right)\\ &\qquad-\mathcal{E}_{t}\left(2\hat{a}^{N},\bar{W}^{(N)}\right)\\ &\qquad+\left(\mathcal{E}_{t}\left(2\hat{a}^{N}\right)-\mathcal{E}_{t}(2a)\right)\left(\hat{\mu}_{0}+\tilde{W}_{t}\right)\\ &\qquad+\mathcal{E}_{t}\left(2\hat{a}^{N}\right)\left(\bar{X}_{0}^{(N)}-\hat{\mu}_{0}+\bar{W}_{t}^{(N)}\right)\\ &:=I_{it}^{(N)}+II_{t}^{(N)}+III_{t}^{(N)}+IV_{t}^{(N)}.\end{split} \tag{23}\] To apply Lemma 6 to the processes of (22) and (21), we only need to show that the second moment of \(\sup_{t\in[0,T]}\big|\Delta_{it}^{(N)}\big|\) is \(O(N^{-1})\) for each \(i=1,2,\ldots,N\). 
In the following analysis, we will utilize the explicit solution of the ODE: * Let \(c,d>0\) be two constants. The solution of \[v^{\prime}(t)-c^{2}v^{2}(t)+d^{2}=0,\quad v(T)=0\] is \[v(t)=\frac{d}{c}\cdot\frac{1-e^{2dc(t-T)}}{1+e^{2dc(t-T)}}.\] (24) We will employ this solution to derive the second-moment estimations of \(\sup_{t\in[0,T]}\Delta_{it}^{(N)}\). 1. From (24), we have an estimation of \[\left|a^{N}(t)-a(t)\right|=\frac{k|T-t|}{N}+o\left(N^{-1}\right).\] (25) Therefore, we have \[\left|\mathcal{E}_{t}(2\hat{a}^{N})-\mathcal{E}_{t}(2a)\right|=\frac{2t(T-t) }{N}+o\left(N^{-1}\right)\] (26) and thus by Burkholder-Davis-Gundy (BDG) inequality \[\mathbb{E}\left[\sup_{t\in[0,T]}\left(I_{it}^{(N)}\right)^{2}\right] =\mathbb{E}\left[\sup_{t\in[0,T]}\left(\int_{0}^{t}\left(\mathcal{ E}_{s}\left(2\hat{a}^{N}\right)-\mathcal{E}_{s}\left(2a\right)\right)dW_{is}^{(N)} \right)^{2}\right]\] \[\leq C\mathbb{E}\left[\left(\int_{0}^{T}\left(\mathcal{E}_{s} \left(2\hat{a}^{N}\right)-\mathcal{E}_{s}\left(2a\right)\right)dW_{is}^{(N)} \right)^{2}\right]\text{ for some constant }C>0\] \[=C\int_{0}^{T}\left(\mathcal{E}_{s}\left(2\hat{a}^{N}\right)- \mathcal{E}_{s}\left(2a\right)\right)^{2}ds\] \[=O\left(N^{-2}\right).\] 2. Since \(\hat{a}^{N}\) is uniformly bounded by \(\sqrt{k/2}\), \(II_{t}^{(N)}\) is a martingale with its quadratic variance \[[II^{(N)}]_{T}=\frac{1}{N}\int_{0}^{T}\mathcal{E}_{s}(4\hat{a}^{N})ds=O\left( N^{-1}\right).\] So, we have \[\mathbb{E}\left[\sup_{t\in[0,T]}\left(II_{t}^{(N)}\right)^{2}\right]=O\left( N^{-1}\right).\] 3. From the estimation (26), we also have \[\mathbb{E}\left[\sup_{t\in[0,T]}\left(III_{t}^{(N)}\right)^{2}\right]=O\left( N^{-2}\right).\] 4. By the assumption of i.i.d. initial states, we have \[\mathbb{E}\left[\sup_{t\in[0,T]}\left(IV_{t}^{(N)}\right)^{2}\right]=\mathcal{ E}_{T}\left(4\hat{a}^{N}\right)\left(Var\left(\bar{X}_{0}^{(N)}\right)+ \mathbb{E}\left[\sup_{t\in[0,T]}\left(\bar{W}_{t}^{(N)}\right)^{2}\right] \right)=O\left(N^{-1}\right).\] As a result, we have the following expression: \[\mathbb{E}\left[\sup_{t\in[0,T]}\left(\Delta_{it}^{(N)}\right)^{2}\right]=O \left(N^{-1}\right),\quad\forall i=1,2,\ldots,N. \tag{27}\] By combining equations (21), (22), and (27), we can conclude Theorem 1 by applying Lemma 6. ## 4 Proposition 1: Derivation of the MFG path This section is dedicated to proving Proposition 1, which provides insights into the MFG solution. To proceed, in Subsection 4.1, we begin by reformulating the MFG problem, assuming a Markovian structure for the equilibrium. Then, in Subsection 4.2, we solve the underlying control problem and derive the corresponding Riccati system. Finally, in Subsection 4.3, we examine the fixed-point condition of the MFG problem, leading to the conclusion. ### Reformulation To determine the equilibrium measure, as defined in Definition 2, one needs to explore the infinite-dimensional space of random measure flows \(m:(0,T]\times\Omega\to\mathcal{P}_{2}(\mathbb{R})\) until a measure flow satisfies the fixed-point condition \(m_{t}=\mathcal{L}(\hat{X}_{t}|\tilde{\mathcal{F}}_{t})\) for all \(t\in(0,T]\), as illustrated in Figure 1. 
The first observation is that the cost function \(F\) in (8) is only dependent on the measure \(m\) through the first two moments with the quadratic cost structure, which is given by \[F(x,m)=k(x^{2}-2x[m]_{1}+[m]_{2}).\] Consequently, the underlying stochastic control problem for MFG can be entirely determined by the input given by the \(\mathbb{R}^{2}\) valued random processes \(\mu_{t}=[m_{t}]_{1}\) and \(\nu_{t}=[m_{t}]_{2}\), which implies that the fixed point condition can be effectively reduced to merely checking two conditions: \[\mu_{t}=\mathbb{E}\left[\,\hat{X}_{t}\Big{|}\,\tilde{\mathcal{F}}_{t}\right], \;\nu_{t}=\mathbb{E}\left[\,\hat{X}_{t}^{2}\Big{|}\,\tilde{\mathcal{F}}_{t} \right].\] This observation effectively reduces our search from the space of random measure-valued processes \(m:(0,T]\times\Omega\mapsto\mathcal{P}_{2}(\mathbb{R})\) to the space of \(\mathbb{R}^{2}\)-valued random processes \((\mu,\nu):(0,T]\times\Omega\mapsto\mathbb{R}^{2}\). It is important to note that if the underlying MFG does not involve common noise, the aforementioned observation is adequate to transform the original infinite-dimensional MFG into a finite-dimensional system. In this case, the moment processes \((\mu,\nu)\) become deterministic mappings \([0,T]\to\mathbb{R}^{2}\). However, the following example demonstrates that this is not applicable to MFG with common noise, which presents a significant drawback in characterizing LQG-MFG using a finite-dimensional system. **Example 2**.: _To illustrate this point, let's consider the following uncontrolled mean field dynamics: Let the mean field term \(\mu_{t}:=\mathbb{E}[\hat{X}_{t}|\tilde{\mathcal{F}}_{t}]\), where the underlying dynamic is given by_ \[d\hat{X}_{t}=-\mu_{t}\tilde{W}_{t}dt+dW_{t}+d\tilde{W}_{t},\quad\hat{X}_{0}=X _{0}.\] _Here are two key observations:_ * \(\mu_{t}\) _is path dependent on entire path of_ \(\tilde{W}\)_, i.e.,_ \[\mu_{t}=\mu_{0}e^{-\int_{0}^{t}\tilde{W}_{s}ds}+e^{-\int_{0}^{t}\tilde{W}_{s} ds}\int_{0}^{t}e^{\int_{0}^{s}\tilde{W}_{r}dr}d\tilde{W}_{s}.\] _This implies that the_ \((t,\tilde{W})\mapsto\mu_{t}\) _is a function on an infinite dimensional domain._ * \(\mu_{t}\) _is Markovian, i.e.,_ \[d\mu_{t}=-\mu_{t}\tilde{W}_{t}dt+d\tilde{W}_{t}.\] _It is possible to express the_ \(\mu_{t}\) _via a SDE with finite-dimensional coefficient functions of_ \((t,\mu_{t})\)_._ To make the previous idea more concrete, we propose the assumption of a Markovian structure for the first and second moments of the MFG equilibrium. In other words, we restrict our search for equilibrium to a smaller space \(\mathcal{M}\) of measure flows that capture the Markovian structure of the first and second moments. 
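The two observations in Example 2 are easy to confirm numerically. The following minimal Python sketch (an illustration only, with arbitrary step size, horizon and initial mean) simulates one path of the common noise, computes \(\mu_t\) both from the path-dependent closed-form expression and from an Euler discretization of the Markovian SDE \(d\mu_t=-\mu_t\tilde{W}_t\,dt+d\tilde{W}_t\), and checks that the two agree up to discretization error.

```python
import numpy as np

# Illustrative parameters (not from the text): horizon, number of steps, initial mean.
T, n, mu0 = 1.0, 20000, 1.0
dt = T / n
rng = np.random.default_rng(0)

dWt = rng.normal(0.0, np.sqrt(dt), n)          # increments of the common noise W~
Wt = np.concatenate(([0.0], np.cumsum(dWt)))   # path of W~ on the time grid

# Euler scheme for the Markovian form: d mu = -mu * W~_t dt + d W~_t.
mu_euler = np.empty(n + 1)
mu_euler[0] = mu0
for i in range(n):
    mu_euler[i + 1] = mu_euler[i] - mu_euler[i] * Wt[i] * dt + dWt[i]

# Path-dependent closed form:
#   mu_t = e^{-int_0^t W~_s ds} ( mu_0 + int_0^t e^{int_0^s W~_r dr} d W~_s ).
int_W = np.concatenate(([0.0], np.cumsum(Wt[:-1] * dt)))          # int_0^t W~_s ds
stoch_int = np.concatenate(([0.0], np.cumsum(np.exp(int_W[:-1]) * dWt)))
mu_closed = np.exp(-int_W) * (mu0 + stoch_int)

print("max |Euler - closed form| =", np.max(np.abs(mu_euler - mu_closed)))
```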
**Definition 4**.: _The space \(\mathcal{M}\) is the collection of all \(\tilde{\mathcal{F}}_{t}\)-adapted measure flows \(m:[0,T]\times\Omega\mapsto\mathcal{P}_{2}(\mathbb{R})\), whose first moment \([m_{t}]_{1}:=\mu_{t}\) and second moment \([m_{t}]_{2}:=\nu_{t}\) satisfy a system of SDE_ \[\mu_{t} =\mu_{0}+\int_{0}^{t}\left(w_{1}(s)\mu_{s}+w_{2}(s)\right)ds+ \tilde{W}_{t}, \tag{28}\] \[\nu_{t} =\nu_{0}+\int_{0}^{t}\left(w_{3}(s)\mu_{s}+w_{4}(s)\nu_{s}+w_{5}(s )\mu_{s}^{2}+w_{6}(s)\right)ds+2\int_{0}^{t}\mu_{s}d\tilde{W}_{s},\] _for some smooth deterministic functions \((w_{i}:i=1,2,\ldots,6)\) for all \(t\in[0,T]\)._ The MFG problem originally given by Definition 1 can be recast as the following combination of stochastic control problem and fixed point condition: * RLQG(Revised LQG): Given smooth functions \(w=(w_{i}:i=1,2,\ldots,6)\), we want to find the value function \(\bar{V}=\bar{V}[w]:[0,T]\times\mathbb{R}^{3}\to\mathbb{R}\) and optimal path \((\hat{X},\hat{\mu},\hat{\nu})[w]\) from the following control problem: \[\bar{V}(t,x,\bar{\mu},\bar{\nu})=\inf_{\alpha\in\mathcal{A}}\mathbb{E}\left[ \left.\int_{t}^{T}\left(\frac{1}{2}\alpha_{s}^{2}+\bar{F}(X_{s},\mu_{s},\nu_{s} )\right)\ ds\right|X_{t}=x,\mu_{t}=\bar{\mu},\nu_{t}=\bar{\nu}\right]\] with the underlying process \(X\) of (1) and \((\mu,\nu)\) of (28) and with the cost functions: \(\bar{F}:\mathbb{R}^{3}\mapsto\mathbb{R}\) given by \[\bar{F}(x,\bar{\mu},\bar{\nu})=k(x^{2}-2x\bar{\mu}+\bar{\nu}),\] where \(\bar{\mu},\bar{\nu}\) are scalars, while \(\mu,\nu\) are used as processes. * RFP(Revised fixed point condition): Determine \(w\) satisfying the following fixed point condition: \[\hat{\mu}_{s}=\mathbb{E}\left[\left.\hat{X}_{s}\right|\tilde{\mathcal{F}}_{s} \right]\text{ and }\hat{\nu}_{s}=\mathbb{E}\left[\left.\hat{X}_{s}^{2}\right| \tilde{\mathcal{F}}_{s}\right],\quad\forall s\in[0,T].\] (29) The equilibrium measure is then \(\mathcal{N}(\hat{\mu}_{t},\hat{\nu}_{t}-\hat{\mu}_{t}^{2})\). **Remark 1**.: _It is important to highlight that the Markovian structure for the first and second moments of the MFG equilibrium in this manuscript differs significantly from that presented in [18]. In [18], the processes \(\mu_{t}\) and \(\nu_{t}\) are pairs of processes with finite variation, while in our case, they are quadratic variation processes._ _Specifically, in [18], the coefficient functions depend on the common noise \(Y\), whereas in (28), the coefficient functions \((w_{i}:i=1,2,\ldots,6)\) are independent of the common noise \(\tilde{W}\). Instead, the first and second moments of the MFG equilibrium are only influenced by the common noise through an additive term._ ### The generic player's control with a given population measure This section is devoted to the control problem RLQG parameterized by \(w\). #### 4.2.1 HJB equation To simplify the notation, let's denote each function \(w_{i}(t)\) as \(w_{i}\) for \(i\in\{1,2,\ldots,6\}\). 
Assuming sufficient regularity conditions, and according to the dynamic programming principle (refer to [20] for more details), the value function \(\bar{V}\) defined in the RLQG problem can be obtained as a solution \(v\) of the following Hamilton-Jacobi-Bellman (HJB) equation \[\left\{\begin{aligned} &\partial_{t}v+\inf_{a\in\mathbb{R}}\left(a\partial_{x}v+\frac{1}{2}a^{2}\right)+\left(w_{1}\bar{\mu}+w_{2}\right)\partial_{\bar{\mu}}v+\left(w_{3}\bar{\mu}+w_{4}\bar{\nu}+w_{5}\bar{\mu}^{2}+w_{6}\right)\partial_{\bar{\nu}}v+\partial_{xx}v+\frac{1}{2}\partial_{\bar{\mu}\bar{\mu}}v\\ &\qquad\qquad\qquad+\partial_{x\bar{\mu}}v+2\bar{\mu}^{2}\partial_{\bar{\nu}\bar{\nu}}v+2\bar{\mu}\partial_{\bar{\mu}\bar{\nu}}v+2\bar{\mu}\partial_{x\bar{\nu}}v+k(x^{2}-2\bar{\mu}x+\bar{\nu})=0,\\ & v(T,x,\mu_{T},\nu_{T})=0.\end{aligned}\right.\] Therefore, the optimal control has to admit the feedback form of \[\hat{\alpha}(t)=-\partial_{x}v\left(t,\hat{X}_{t},\mu_{t},\nu_{t}\right), \tag{30}\] and then the HJB equation can be reduced to \[\left\{\begin{aligned} &\partial_{t}v-\frac{1}{2}(\partial_{x}v)^{2}+\left(w_{1}\bar{\mu}+w_{2}\right)\partial_{\bar{\mu}}v+\left(w_{3}\bar{\mu}+w_{4}\bar{\nu}+w_{5}\bar{\mu}^{2}+w_{6}\right)\partial_{\bar{\nu}}v+\partial_{xx}v+\frac{1}{2}\partial_{\bar{\mu}\bar{\mu}}v\\ &\qquad\qquad\qquad+\partial_{x\bar{\mu}}v+2\bar{\mu}^{2}\partial_{\bar{\nu}\bar{\nu}}v+2\bar{\mu}\partial_{\bar{\mu}\bar{\nu}}v+2\bar{\mu}\partial_{x\bar{\nu}}v+k(x^{2}-2\bar{\mu}x+\bar{\nu})=0,\\ & v(T,x,\mu_{T},\nu_{T})=0.\end{aligned}\right. \tag{31}\] Next, we identify the conditions needed to equate the control problem RLQG with the above HJB equation. Denote \(\mathcal{S}\) to be the set of \(v\) such that \(v\in C^{\infty}\) satisfies \[\left(1+|x|^{2}\right)^{-1}\left(|v|+|\partial_{t}v|\right)+\left(1+|x|+|\mu|\right)^{-1}\left(|\partial_{x}v|+|\partial_{\mu}v|\right)+\left(|\partial_{xx}v|+|\partial_{x\mu}v|+|\partial_{\mu\mu}v|+|\partial_{\nu}v|\right)<K\] for all \((t,x,\mu,\nu)\) for some positive constant \(K\). **Lemma 7**.: _Consider the control problem RLQG with some given smooth functions \(w=(w_{i}:i=1,2,\ldots,6)\)._ 1. _(Verification theorem) Suppose there exists a solution_ \(v\in\mathcal{S}\) _of (_31_). Then_ \(v(t,x,\bar{\mu},\bar{\nu})=\bar{V}(t,x,\bar{\mu},\bar{\nu})\)_, and an optimal control is provided by (_30_)._ 2. _Suppose that the value function_ \(\bar{V}\) _belongs to_ \(\mathcal{S}\)_. Then_ \(\bar{V}(t,x,\bar{\mu},\bar{\nu})\) _solves the HJB equation (_31_). Moreover,_ \(\hat{\alpha}\) _of (_30_) is an optimal control._ Proof.: 1. First, we prove the verification theorem. 
Since \(v\in\mathcal{S}\), for any admissible \(\alpha\in\mathcal{H}_{\mathbb{F}}^{4}\), the process \(X^{\alpha}\) is well defined and one can apply Ito's formula to obtain \[\mathbb{E}\left[v(T,X_{T},\mu_{T},\nu_{T})\right]=v(t,x,\bar{\mu},\bar{\nu})+ \mathbb{E}\left[\int_{t}^{T}\mathcal{G}^{\alpha(s)}v(s,X_{s},\mu_{s},\nu_{s}) ds\right],\] where \[\mathcal{G}^{a}f(s,x,\bar{\mu},\bar{\nu})=\left(\partial_{t}+a \partial_{x}+\partial_{xx}+\left(w_{1}\bar{\mu}+w_{2}\right)\partial_{\bar{ \mu}}+\left(w_{3}\bar{\mu}+w_{4}\bar{\nu}+w_{5}\bar{\mu}^{2}+w_{6}\right) \partial_{\bar{\nu}}\\ +\frac{1}{2}\partial_{\bar{\mu}\bar{\mu}}+2\bar{\mu}^{2}\partial_ {\bar{\nu}\bar{\nu}}+\partial_{x\bar{\mu}}+2\bar{\mu}\partial_{\bar{\mu}\bar{ \nu}}+2\bar{\mu}\partial_{x\bar{\nu}}\right)\!f(s,x,\bar{\mu},\bar{\nu}).\] Note that the HJB equation actually implies that \[\inf_{a}\left\{\mathcal{G}^{a}v+\frac{1}{2}a^{2}\right\}=-\bar{F},\] which again yields \[-\mathcal{G}^{a}v\leq\frac{1}{2}a^{2}+\bar{F}.\] Hence, we obtain that for all \(\alpha\in\mathcal{H}_{\mathbb{F}}^{4}\), \[\quad v(t,x,\bar{\mu},\bar{\nu})\] \[= \mathbb{E}\left[\int_{t}^{T}-\mathcal{G}^{\alpha(s)}v(s,X_{s}, \mu_{s},\nu_{s})ds\right]+\mathbb{E}\left[v(T,X_{T},\mu_{T},\nu_{T})\right]\] \[\leq \mathbb{E}\left[\int_{t}^{T}\left(\frac{1}{2}\alpha^{2}(s)+\bar{F }(X_{s},\mu_{s},\nu_{s})\right)ds\right]\] \[= J(t,x,\alpha,\bar{\mu},\bar{\nu}).\] In the above, if \(\alpha\) is replaced by \(\hat{\alpha}\) given by the feedback form (30), then since \(\partial_{x}v\) is Lipschitz continuous in \(x\), there exists corresponding optimal path \(\hat{X}\in\mathcal{H}_{\mathbb{F}}^{4}\). Thus, \(\hat{\alpha}\) is also in \(\mathcal{H}_{\mathbb{F}}^{4}\). One can repeat all the above steps by replacing \(X\) and \(\alpha\) by \(\hat{X}\) and \(\hat{\alpha}\), and \(\leq\) sign by \(=\) sign to conclude that \(v\) is indeed the optimal value. 2. The opposite direction of the verification theorem follows by taking \(\theta\to t\) for the dynamic programming principle, for all stopping time \(\theta\in[t,T]\), \[\bar{V}(t,x,\bar{\mu},\bar{\nu})\] \[= \mathbb{E}\left[\int_{t}^{\theta}\left(\frac{1}{2}\alpha_{s}^{2} +\bar{F}(X_{s},\mu_{s},\nu_{s})\right)ds+\bar{V}(\theta,X_{\theta},\mu_{\theta },\nu_{\theta})\right|X_{t}=x,\mu_{t}=\bar{\mu},\nu_{t}=\bar{\nu}\right],\] which is valid under our regularity assumptions on all the partial derivatives. #### 4.2.2 LQG solution It is worth noting that the costs \(\bar{F}\) of RLQG are quadratic functions in \((x,\bar{\mu},\bar{\nu})\), while the drift function of the process \(\nu\) of (28) is not linear in \((x,\bar{\mu},\bar{\nu})\). Therefore, the stochastic control problem RLQG does not fit into the typical LQG control structure. Nevertheless, similarly to the LQG solution, we guess the value function to be a quadratic function in the form of \[v(t,x,\bar{\mu},\bar{\nu})=a(t)x^{2}+b(t)\bar{\mu}^{2}+c(t)\bar{\nu}+d(t)+e(t) x+f(t)\bar{\mu}+g(t)x\bar{\mu}. 
\tag{32}\] Under the above setup for the value function \(v\), for \(t\in[0,T]\), the optimal control is given by \[\hat{\alpha}_{t}=-\partial_{x}v(t,\hat{X}_{t},\mu_{t},\nu_{t})=-2a(t)\hat{X}_{t}-e(t)-g(t)\mu_{t}, \tag{33}\] and the optimal path \(\hat{X}\) is \[\begin{cases}d\hat{X}_{t}=\left(-2a(t)\hat{X}_{t}-e(t)-g(t)\mu_{t}\right)dt+dW_{t}+d\tilde{W}_{t},\\ \hat{X}_{0}=X_{0}.\end{cases} \tag{34}\] To proceed, we introduce the following Riccati system of ODEs for \(t\in[0,T]\), \[\begin{cases}a^{\prime}-2a^{2}+k=0,\\ b^{\prime}-\frac{1}{2}g^{2}+2bw_{1}+cw_{5}=0,\\ c^{\prime}+cw_{4}+k=0,\\ d^{\prime}-\frac{1}{2}e^{2}+fw_{2}+cw_{6}+2a+b+g=0,\\ e^{\prime}-2ae+w_{2}g=0,\\ f^{\prime}-eg+w_{1}f+2bw_{2}+cw_{3}=0,\\ g^{\prime}-2ag+w_{1}g-2k=0,\end{cases} \tag{35}\] with terminal conditions \[a(T)=b(T)=c(T)=d(T)=e(T)=f(T)=g(T)=0. \tag{36}\] **Lemma 8**.: _Suppose there exists a unique solution \((a,b,c,d,e,f,g)\) to the Riccati system of ODEs (35)-(36) on \([0,T]\). Then the value function of RLQG is given by_ \[\begin{split}&\bar{V}(t,x,\bar{\mu},\bar{\nu})=v(t,x,\bar{\mu},\bar{\nu})\\ =& a(t)x^{2}+b(t)\bar{\mu}^{2}+c(t)\bar{\nu}+d(t)+e(t)x+f(t)\bar{\mu}+g(t)x\bar{\mu}\end{split} \tag{37}\] _for \(t\in[0,T]\) and the optimal control and optimal path are given by (33) and (34), respectively._ Proof.: With the form of the value function \(v\) given in (32) and the dynamics of the first and second moment processes \((\mu_{t},\nu_{t})\) given in (28), we have \[\begin{split}&\partial_{t}v=a^{\prime}(t)x^{2}+e^{\prime}(t)x+b^{\prime}(t)\bar{\mu}^{2}+f^{\prime}(t)\bar{\mu}+g^{\prime}(t)x\bar{\mu}+c^{\prime}(t)\bar{\nu}+d^{\prime}(t),\\ &\partial_{x}v=2xa(t)+e(t)+g(t)\bar{\mu},\\ &\partial_{xx}v=2a(t),\\ &\partial_{\bar{\mu}}v=2b(t)\bar{\mu}+f(t)+g(t)x,\\ &\partial_{\bar{\nu}}v=c(t),\\ &\partial_{\bar{\mu}\bar{\mu}}v=2b(t),\\ &\partial_{x\bar{\mu}}v=g(t),\\ &\partial_{\bar{\mu}\bar{\nu}}v=\partial_{\bar{\nu}\bar{\nu}}v=\partial_{x\bar{\nu}}v=0.\end{split}\] Plugging them back into the HJB equation (31) and equating the coefficients of the \(x\)-, \(\bar{\mu}\)-, and \(\bar{\nu}\)-type terms, with the terminal conditions given in (36), we get the system of ODEs in (35). Therefore, any solution \((a,b,c,d,e,f,g)\) of the system of ODEs (35) leads to a solution of the HJB equation (31) in the form of the quadratic function given by (37). Since \((a,b,c,d,e,f,g)\) are differentiable functions on the closed interval \([0,T]\), they are also bounded, and thus the regularity conditions needed for \(v\in\mathcal{S}\) are valid. Finally, we invoke the verification theorem given by Lemma 7 to conclude the desired result. ### Fixed point condition and the proof of Proposition 1 Returning to the ODE system (35), there are 7 equations, whereas we need to determine a total of 13 deterministic functions on \([0,T]\) to characterize the MFG. These are \[(a,b,c,d,e,f,g)\quad\text{ and }\quad(w_{i}:i=1,2,\ldots,6).\] Below, we identify the missing 6 equations by checking the fixed point condition of RFP. This leads to a complete characterization of the equilibrium for the MFG in Definition 1. 
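As a side check on the Riccati system (35)-(36) from Lemma 8, note that for any fixed smooth input \(w=(w_{i})\) it is a standard terminal-value ODE system that can be integrated backward in time. The following Python sketch does so with a hypothetical (here constant) choice of the \(w_{i}\) and of \(k\), chosen purely for illustration; it is not part of the argument.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0                                  # illustrative cost parameter
w = [0.1, 0.0, 0.2, -0.3, 0.0, 2.0]      # hypothetical smooth (here constant) w_1..w_6
T = 1.0

def rhs(t, y):
    # y = (a, b, c, d, e, f, g); derivatives solved from the equations in (35).
    a, b, c, d, e, f, g = y
    w1, w2, w3, w4, w5, w6 = w
    da = 2 * a**2 - k
    db = 0.5 * g**2 - 2 * b * w1 - c * w5
    dc = -c * w4 - k
    dd = 0.5 * e**2 - f * w2 - c * w6 - 2 * a - b - g
    de = 2 * a * e - w2 * g
    df = e * g - w1 * f - 2 * b * w2 - c * w3
    dg = 2 * a * g - w1 * g + 2 * k
    return [da, db, dc, dd, de, df, dg]

# Integrate backward from the terminal condition (36): all functions vanish at T.
sol = solve_ivp(rhs, (T, 0.0), np.zeros(7), dense_output=True, rtol=1e-8, atol=1e-10)
a0, b0, c0, d0, e0, f0, g0 = sol.sol(0.0)
print("(a, c, g) at t = 0:", a0, c0, g0)
```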
**Lemma 9**.: _With the dynamic of the optimal path \(\hat{X}\) defined in (34), the fixed point condition (29) implies that the first moment \(\hat{\mu}_{s}:=\mathbb{E}[\hat{X}_{s}|\tilde{\mathcal{F}}_{s}]\) and the second moment \(\hat{\nu}_{s}:=\mathbb{E}[\hat{X}_{s}^{2}|\tilde{\mathcal{F}}_{s}]\) of the optimal path conditioned on \(\tilde{\mathcal{F}}_{t}\) satisfy_ \[\begin{cases}\hat{\mu}_{s}=\bar{\mu}+\int_{t}^{s}\left(\left(-2a(r)-g(r) \right)\hat{\mu}_{r}-e(r)\right)dr+\tilde{W}_{s},\\ \hat{\nu}_{s}=\bar{\nu}+\int_{t}^{s}\left(2-4a(r)\hat{\nu}_{r}-2e(r)\hat{\mu}_ {r}-2g(r)\hat{\mu}_{r}^{2}\right)dr+\int_{t}^{s}2\hat{\mu}_{r}d\tilde{W}_{r}, \end{cases} \tag{38}\] _for \(s\geq t\), and thus the coefficient functions \(w=(w_{i}:i=1,2,\ldots,6)\) in (28) satisfy the following equations:_ \[w_{1}=-2a-g,\ w_{2}=-e,\ w_{3}=-2e,\ w_{4}=-4a,\ w_{5}=-2g,\ w_{6}=2,\quad \forall t\in[0,T]. \tag{39}\] Proof.: With the dynamic of the optimal path \(\hat{X}\) given by (34), we have \[\hat{X}_{t}=X_{0}+\int_{0}^{t}\left(-2a(s)\hat{X}_{s}-e(s)-g(s)\hat{\mu}_{s} \right)ds+W_{t}+\tilde{W}_{t},\] and since the functions \(a,e,g\) are continuous on \([0,T]\), then we can change of order of integration and expectation and it yields \[\hat{\mu}_{t} =\mathbb{E}\left[\left.\hat{X}_{t}\right|\tilde{\mathcal{F}}_{t}\right]\] \[=\mathbb{E}\left[\left.X_{0}\right|\tilde{\mathcal{F}}_{t}\right] +\int_{0}^{t}\left(-2a(s)\hat{\mu}_{s}-e(s)-g(s)\hat{\mu}_{s}\right)ds+ \mathbb{E}\left[\left.W_{t}\right|\tilde{\mathcal{F}}_{t}\right]+\mathbb{E} \left[\left.\tilde{W}_{t}\right|\tilde{\mathcal{F}}_{t}\right]\] \[=\mathbb{E}\left[\left.X_{0}\right|\tilde{\mathcal{F}}_{t}\right] +\int_{0}^{t}\left(-2a(s)\hat{\mu}_{s}-e(s)-g(s)\hat{\mu}_{s}\right)ds+ \tilde{W}_{t}.\] Similarly, applying Ito's formula, we obtain \[\hat{X}_{t}^{2}=X_{0}^{2}+\int_{0}^{t}\left(2-4a(s)\hat{X}_{s}^{2}-2e(s)\hat{ X}_{s}-2g(s)\hat{\mu}_{s}\hat{X}_{s}\right)ds+\int_{0}^{t}2\hat{X}_{s}dW_{s}+\int_{0}^{t }2\hat{X}_{s}d\tilde{W}_{s},\] and it follows that \[\hat{\nu}_{t} =\mathbb{E}\left[\left.\hat{X}_{t}^{2}\right|\tilde{\mathcal{F}}_ {t}\right]\] \[=\mathbb{E}\left[\left.X_{0}^{2}\right|\tilde{\mathcal{F}}_{t} \right]+\int_{0}^{t}\left(2-4a(s)\hat{\nu}_{s}-2e(s)\hat{\mu}_{s}-2g(s)\hat{ \mu}_{s}^{2}\right)ds+\mathbb{E}\left[\left.\int_{0}^{t}2\hat{X}_{s}dW_{s} \right|\tilde{\mathcal{F}}_{t}\right]+\mathbb{E}\left[\left.\int_{0}^{t}2\hat {X}_{s}d\tilde{W}_{s}\right|\tilde{\mathcal{F}}_{t}\right]\] \[=\mathbb{E}\left[\left.X_{0}^{2}\right|\tilde{\mathcal{F}}_{t} \right]+\int_{0}^{t}\left(2-4a(s)\hat{\nu}_{s}-2e(s)\hat{\mu}_{s}-2g(s)\hat{ \mu}_{s}^{2}\right)ds+\int_{0}^{t}2\hat{\mu}_{s}d\tilde{W}_{s}.\] Thus the desired result in (38) is obtained. Next, comparing the terms in (28) and (38), to satisfy the fixed point condition in MFG, we require another 6 equations in (39) for the coefficient functions \(w=(w_{i}:i=1,2,\ldots,6)\) Using further algebraic structures, one can reduce the ODE system of 13 equations composed by (35) and (39) into a system of 4 equations. **Proof of Proposition 1.** Let the smooth and bounded functions \(\{w_{i}:i=1,2,\ldots,6\}\) be given, the functions \((a,b,c,d,e,f,g)\) in (35) is a coupled linear system, and thus their existence, uniqueness and boundedness is shown by Theorem 12.1 in [1]. 
Plugging the 6 equations in (39) into the ODE system (35), we obtain \[\begin{cases}a^{\prime}-2a^{2}+k=0,\\ b^{\prime}-\frac{1}{2}g^{2}-4ab-2bg-2cg=0,\\ c^{\prime}-4ac+k=0,\\ d^{\prime}-\frac{1}{2}e^{2}-ef+2c+2a+b+g=0,\\ e^{\prime}-2ae-eg=0,\\ f^{\prime}-eg-2af-gf-2be-2ce=0,\\ g^{\prime}-4ag-g^{2}-2k=0,\end{cases}\] with the terminal conditions \[a(T)=b(T)=c(T)=d(T)=e(T)=f(T)=g(T)=0.\] Let \(l=2a+g\); it is easy to obtain \[l^{\prime}(t)-l^{2}(t)=0,\quad l(T)=0,\] which implies that \(l(t)=2a(t)+g(t)=0\) for all \(t\in[0,T]\). This gives the result that \(g=-2a\), which yields \(e^{\prime}=0\). Then with \(e(T)=0\), we have \(e(t)=0\) for all \(t\in[0,T]\) and thus one can obtain \(f^{\prime}=0\), which indicates that \(f(t)=0\) for all \(t\in[0,T]\) as \(f(T)=0\). Therefore the ODE system (35) can be simplified to the following form for \((a(t),b(t),c(t),d(t):t\in[0,T])\): \[\begin{cases}a^{\prime}(t)-2a^{2}(t)+k=0,\\ b^{\prime}(t)-2a^{2}(t)+4a(t)c(t)=0,\\ c^{\prime}(t)-4a(t)c(t)+k=0,\\ d^{\prime}(t)+b(t)+2c(t)=0,\end{cases} \tag{40}\] with the terminal conditions \[a(T)=b(T)=c(T)=d(T)=0. \tag{41}\] The unique solvability of the Riccati system (40)-(41) is proven in Lemma 12 in the Appendix. Note that the solution \(a\) of (11) is consistent with the solution of the Riccati system given by equations (40)-(41). In this case, since \(2a+g=0\) and \(e=0\) for all \(t\in[0,T]\), it follows that \(\hat{\mu}_{s}=\bar{\mu}+\tilde{W}_{s}\) for all \(s\in[t,T]\) from the fixed point result (38). Similarly, \[\hat{\nu}_{s}=\bar{\nu}+\int_{t}^{s}\left(2+4a(r)\hat{\mu}_{r}^{2}-4a(r)\hat{\nu}_{r}\right)\,dr+\int_{t}^{s}2\hat{\mu}_{r}\,d\tilde{W}_{r},\quad\forall s\in[t,T].\] Plugging \(e=0\) and \(\hat{\mu}_{s}=\bar{\mu}+\tilde{W}_{s}\) back into (33), we obtain the optimal control \[\hat{\alpha}_{s}=2a(s)(\bar{\mu}+\tilde{W}_{s}-\hat{X}_{s}).\] Moreover, since \(e=f=0\) and \(g=-2a\) for \(s\in[t,T]\), the value function can be simplified from (32) to \[v(t,x,\bar{\mu},\bar{\nu})=a(t)x^{2}-2a(t)x\bar{\mu}+b(t)\bar{\mu}^{2}+c(t)\bar{\nu}+d(t).\] This concludes Proposition 1. ## 5 The \(N\)-Player Game This section focuses on proving Proposition 2 regarding the corresponding \(N\)-player game. For simplicity, we omit the superscript \((N)\) when referring to the processes in the sample space \(\Omega^{(N)}\) whenever there is no confusion. To begin, we address the \(N\)-player game in Subsection 5.1, where we solve it and obtain a Riccati system containing \(O(N^{3})\) equations. Subsequently, we reduce the relevant Riccati system to an ODE system in Subsection 5.2, which has a dimension independent of \(N\). This simplified system forms the fundamental component of the convergence result. ### Characterization of the \(N\)-player game by Riccati system It is important to emphasize that based on the problem setting in Subsection 2.2 and the running cost for each player specified in (9), the \(N\)-player game can be classified as an \(N\)-coupled stochastic LQG problem. 
As a result, the value function and optimal control for each player can be determined by means of the following Riccati system: For \(i=1,2,\ldots,N\), consider \[\begin{cases}A_{i}^{\prime}-2A_{i}^{\top}e_{i}e_{i}^{\top}A_{i}-4\sum_{j\neq i }^{N}A_{j}^{\top}e_{j}e_{j}^{\top}A_{i}+\frac{k}{N}\sum_{j\neq i}^{N}\left(e_{ i}-e_{j}\right)\left(e_{i}-e_{j}\right)^{\top}=0,\\ B_{i}^{\prime}-2A_{i}^{\top}e_{i}e_{i}^{\top}B_{i}-2\sum_{j\neq i}^{N}\left(A_ {i}^{\top}e_{j}e_{j}^{\top}B_{j}+A_{j}^{\top}e_{j}e_{j}^{\top}B_{i}\right)=0, \\ C_{i}^{\prime}-\frac{1}{2}B_{i}^{\top}e_{i}e_{i}^{\top}B_{i}-\sum_{j\neq i}^{ N}B_{j}^{\top}e_{j}e_{j}^{\top}B_{i}+2tr(A_{i})=0,\\ A_{i}(T)=B_{i}(T)=C_{i}(T)=0,\end{cases} \tag{42}\] where \(A_{i}\) is \(N\times N\) symmetric matrix, \(B_{i}\) is \(N\)-dimensional vector, \(C_{i}\in\mathbb{R}\) is a real constant, and \(e_{i}\) is the \(i\)-th natural basis in \(\mathbb{R}^{N}\) for each \(i=1,2,\ldots,N\). **Lemma 10**.: _Suppose \((A_{i},B_{i},C_{i}:i=1,2,\ldots,N)\) is the solution of the Riccati system (42). Then, the value functions of \(N\)-player game defined by (7) is_ \[V_{i}\left(x^{(N)}\right)=\left(x^{(N)}\right)^{\top}A_{i}(0)x^{(N)}+\left(x^ {(N)}\right)^{\top}B_{i}(0)+C_{i}(0),\quad i=1,2,\ldots,N.\] _Moreover, the path and the control under the equilibrium are given by_ \[d\hat{X}_{it}^{(N)}=\left(-2(A_{i}(t))_{i}^{\top}\hat{X}_{t}^{(N)}-(B_{i}(t) )_{i}\right)dt+dW_{it}^{(N)}+d\tilde{W}_{t}, \tag{43}\] \[\hat{\alpha}_{it}^{(N)}=-2(A_{i}(t))_{i}^{\top}\hat{X}_{t}^{(N)}-(B_{i}(t))_{i}\] _for each \(i=1,2,\ldots,N\), where \((A)_{i}\) denotes the \(i\)-th column of matrix \(A\), \((B)_{i}\) denotes the \(i\)-th entry of vector \(B\) and \(\hat{X}_{t}^{(N)}=[\hat{X}_{1t}^{(N)},\hat{X}_{2t}^{(N)},\ldots,\hat{X}_{Nt}^{ (N)}]^{\top}\)._ Proof.: From the dynamic programming principle, it is standard that, under enough regularities, the players' value function \(V(x^{(N)})=(V_{1},V_{2},\ldots,V_{N})(x^{(N)})\) can be lifted to the solution \(v_{i}(t,x^{(N)})\) of the following system of HJB equations, for \(i=1,2,\ldots,N\), \[\begin{cases}\partial_{t}v_{i}+\inf_{a_{it}\in\mathbb{R}}\left(a_{it}\partial _{x_{i}}v_{i}+\frac{1}{2}a_{it}^{2}\right)+\sum_{j\neq i}^{N}a_{jt}\partial_{ x_{j}}v_{i}+\Delta v_{i}+\frac{k}{N}\sum_{j\neq i}^{N}\left((e_{i}-e_{j})^{\top}x^{ (N)}\right)^{2}=0,\\ v_{i}\left(T,x^{(N)}\right)=0.\end{cases}\] Note that with \(a_{it}=-\partial_{x_{i}}v_{i}\left(t,x^{(N)}\right)\) for each \(i=1,2,\ldots,N\), the term in the infimum attains the optimal value and thus the HJB equation can be reduced to \[\begin{cases}\partial_{t}v_{i}-\frac{1}{2}\left(\partial_{x_{i}}v_{i}\right) ^{2}-\sum_{j\neq i}^{N}\partial_{x_{j}}v_{j}\partial_{x_{j}}v_{i}+\Delta v_{i} +\frac{k}{N}\sum_{j\neq i}^{N}\left((e_{i}-e_{j})^{\top}x^{(N)}\right)^{2}=0, \\ v_{i}\left(T,x^{(N)}\right)=0.\end{cases} \tag{44}\] Then, the value functions \(V\) of \(N\)-player game defined by (7) is \(V_{i}(x^{(N)})=v_{i}(0,x^{(N)})\) for all \(i=1,2,\ldots,N\). Moreover, the path and the control under the equilibrium are given by \[d\hat{X}_{it}^{(N)}=-\partial_{x_{i}}v_{i}\left(t,\hat{X}_{t}^{(N)}\right)dt+ dW_{it}^{(N)}+d\tilde{W}_{t},\] and \[\hat{\alpha}_{it}^{(N)}=-\partial_{x_{i}}v_{i}\left(t,\hat{X}_{t}^{(N)}\right)\] for \(i=1,2,\ldots,N\). The proof is the application of Ito's formula and the details are omitted here. 
Due to its LQG structure, the value function leads to a quadratic function of the form \[v_{i}\left(t,x^{(N)}\right)=\left(x^{(N)}\right)^{\top}A_{i}(t)x^{(N)}+\left( x^{(N)}\right)^{\top}B_{i}(t)+C_{i}(t).\] Plugging \(V_{i}\) into (44), and matching the coefficient of variables, we get the Riccati system of ODEs in (42) and the desired results are obtained. ### Proof of Proposition 2: Reduced Riccati form for the equilibrium At present, the MFG and the corresponding \(N\)-player game can be characterized by Proposition 1 and Lemma 10, respectively. One of our primary objectives is to examine the convergence of the representative optimal path \(\hat{X}_{1t}^{(N)}\) generated by the \(N\)-player game defined in (42)-(43) to the optimal path \(\hat{X}_{t}\) of the MFG described in Proposition 1. It should be noted that \(\hat{X}_{t}\) is solely dependent on the function \(a(t)\), as indicated in the ODE (11). In contrast, \(\hat{X}_{1t}^{(N)}\) depends on \(O(N^{3})\) many functions derived from the solutions of a substantial Riccati system (42) involving matrices \((A_{it},B_{it}:i=1,2,\ldots,N)\). Consequently, comparing these two processes meaningfully becomes an exceedingly challenging task without gaining further insight into the intricate structure of the Riccati system (42). **Proof of Proposition 2.** Inspired from the setup in [18] and [16], we may seek a pattern for the matrix \(A_{i}\) in the following form: \[(A_{i})_{pq}=\begin{cases}a_{1}(t),&\text{ if }p=q=i,\\ a_{2}(t),&\text{ if }p=q\neq i,\\ a_{3}(t),&\text{ if }p\neq q,p=i\text{ or }q=i,\\ a_{4}(t),&\text{ otherwise.}\end{cases} \tag{45}\] The next result justifies the above pattern: the \(N^{2}\) entries of the matrix \(A_{i}\) can be embedded to a 2-dimensional vector space no matter how big \(N\) is. For the Riccati system (42), with the given of \(A_{i}\) and suppose each function in \(A_{i}\) is continuous on \([0,T]\), it is obvious to see that \(B_{i}=0\) for all \(t\in[0,T]\) and for all \(i=1,2,\ldots,N\). Note that in this case, for \(i=1,2,\ldots,N\), the optimal control is given by \[\hat{\alpha}_{i}=-2\sum_{j=1}^{N}(A_{i})_{ij}\hat{X}_{jt}^{(N)}=-2\left(A_{i} \right)_{i}^{\top}\hat{X}_{t}^{(N)},\] where \((A)_{i}\) is the \(i\)-th column of matrix \(A\). Plugging the pattern (45) into the differential equation of \(A_{i}\), we obtain the following system of ODEs: \[\begin{cases}a_{1}^{\prime}-2a_{1}^{2}-4(N-1)a_{3}^{2}+\frac{N-1}{N}k=0,\\ a_{2}^{\prime}-2a_{3}^{2}-4a_{1}a_{2}-4(N-2)a_{3}a_{4}+\frac{k}{N}=0,\\ a_{3}^{\prime}-2a_{1}a_{3}-4a_{1}a_{3}-4(N-2)a_{3}^{2}-\frac{k}{N}=0,\\ a_{3}^{\prime}-2a_{1}a_{3}-4a_{2}a_{3}-4(N-2)a_{3}a_{4}-\frac{k}{N}=0,\\ a_{4}^{\prime}-2a_{3}^{2}-4a_{2}a_{3}-4a_{1}a_{4}-4(N-3)a_{3}a_{4}=0\end{cases}\] with the terminal conditions \[a_{1}(T)=a_{2}(T)=a_{3}(T)=a_{4}(T)=0.\] It is worth noting that there are two ODEs for \(a_{3}\), and the two expressions should be equal, thus \[a_{1}a_{3}+(N-2)a_{3}^{2}=a_{2}a_{3}+(N-2)a_{3}a_{4},\] which implies that \((a_{1}+(N-2)a_{3})^{\prime}=(a_{2}+(N-2)a_{4})^{\prime}\) or \[2a_{1}^{2}+2(N-2)a_{1}a_{3}+4(N-1)a_{3}^{2}+4(N-2)a_{2}a_{3}+4(N- 2)^{2}a_{3}a_{4}-\frac{k}{N}\] \[= 2(N-1)a_{3}^{2}+4a_{1}a_{2}+4(N-2)(a_{2}a_{3}+a_{3}a_{4}+a_{1}a_{ 4})+4(N-2)(N-3)a_{3}a_{4}-\frac{k}{N}.\] After combining terms and substituting \(a_{2}+(N-2)a_{4}\) with \(a_{1}+(N-2)a_{3}\), we get \[a_{1}^{2}+(N-2)a_{1}a_{3}-(N-1)a_{3}^{2}=0,\] which yields \(a_{3}=a_{1}\) or \(a_{3}=-\frac{1}{N-1}a_{1}\). 
Note that, since \(a_{1}\) and \(a_{3}\) satisfy different differential equations, it follows that \(a_{3}\neq a_{1}\). Hence, we can conclude that \(a_{3}=-\frac{1}{N-1}a_{1}\). Next, from the equation \(a_{1}+(N-2)a_{3}=a_{2}+(N-2)a_{4}\), we have \[a_{4}=\frac{1}{N-2}a_{1}+a_{3}-\frac{1}{N-2}a_{2}.\] In conclusion, for \(i=1,2,\ldots,N\), \(A_{i}\) has the following expression: \[(A_{i})_{pq}=\begin{cases}a_{1}(t),&\text{if }p=q=i,\\ a_{2}(t),&\text{if }p=q\neq i,\\ -\frac{1}{N-1}a_{1}(t),&\text{if }p\neq q,p=i\text{ or }q=i,\\ \frac{1}{(N-1)(N-2)}a_{1}(t)-\frac{1}{N-2}a_{2}(t),&\text{ otherwise},\end{cases}\] where \(a_{1}\) and \(a_{2}\) satisfy the following system of ODEs: \[\begin{cases}a_{1}^{\prime}-\frac{2(N+1)}{N-1}a_{1}^{2}+\frac{N-1}{N}k=0,\\ a_{2}^{\prime}+\frac{2}{(N-1)^{2}}a_{1}^{2}-\frac{4N}{N-1}a_{1}a_{2}+\frac{k}{N}=0,\\ a_{1}(T)=a_{2}(T)=0.\end{cases} \tag{46}\] The existence and uniqueness of \(A_{i}\) in (42) are equivalent to the existence and uniqueness of a solution to (46). First, the existence, uniqueness, and boundedness of \(a_{1}\) in (46) follow from the same argument as for \(a\) in (40), which is given in the proof of Lemma 12 in the Appendix. The explicit solution of \(a_{1}\) is given by \[a_{1}(t)=\sqrt{\frac{k}{2}\frac{(N-1)^{2}}{N(N+1)}}\,\frac{1-e^{-2\sqrt{2}\sqrt{\frac{N+1}{N}k}\,(T-t)}}{1+e^{-2\sqrt{2}\sqrt{\frac{N+1}{N}k}\,(T-t)}}\] for all \(t\in[0,T]\). Next, given \(a_{1}\), the existence, uniqueness, and boundedness of \(a_{2}\) in (46) are guaranteed by Theorem 12.1 in [1]. Therefore, we can express the equilibrium paths and associated controls as follows: \[d\hat{X}_{it}^{(N)}=-2a_{1}^{N}(t)\left(\hat{X}_{it}^{(N)}-\frac{1}{N-1}\sum_{j\neq i}^{N}\hat{X}_{jt}^{(N)}\right)dt+dW_{it}^{(N)}+d\tilde{W}_{t}, \tag{47}\] and \[\hat{\alpha}_{it}^{(N)}=-2a_{1}^{N}(t)\left(\hat{X}_{it}^{(N)}-\frac{1}{N-1}\sum_{j\neq i}^{N}\hat{X}_{jt}^{(N)}\right)\] respectively for \(i=1,2,\ldots,N\), where \(a_{1}^{N}\) is the solution to the ODE for \(a_{1}\) in (46). This concludes Proposition 2. ## 6 Further remark We have now established Proposition 1 concerning the MFG in Section 4 and Proposition 2 regarding the \(N\)-player game in Section 5. With these propositions proven, we are now able to conclude the proof of Theorem 1, which was presented in Section 3.4. ## 7 Appendix **Lemma 11**.: _Let \(\mathbb{W}_{p}\) be the \(p\)-Wasserstein metric. If \(X\) and \(Y\) are two real-valued random variables and \(c\) is a constant, then_ \[\mathbb{W}_{p}(\mathcal{L}(X),\mathcal{L}(Y))=\mathbb{W}_{p}(\mathcal{L}(X+c),\mathcal{L}(Y+c)). \tag{48}\] _Moreover, if \(\alpha=\{\alpha_{i}:i\in\mathbb{N}\}\) is a sequence of random variables, then_ \[\mathbb{W}_{p}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{\alpha_{i}+c},\mathcal{L}(Y+c)\right)=\mathbb{W}_{p}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{\alpha_{i}},\mathcal{L}(Y)\right). \tag{49}\] Proof.: By definition of the \(p\)-Wasserstein metric, we have \[\mathbb{W}_{p}(\mathcal{L}(X),\mathcal{L}(Y))=\left(\inf_{\pi\in\Pi(\mathcal{L}(X),\mathcal{L}(Y))}\int_{\mathbb{R}^{2}}|x-y|^{p}d\pi(x,y)\right)^{\frac{1}{p}},\] where \(\Pi(\mathcal{L}(X),\mathcal{L}(Y))\) is the set of all joint probability measures with marginals \(\mathcal{L}(X)\) and \(\mathcal{L}(Y)\). 
Similarly, \[\mathbb{W}_{p}(\mathcal{L}(X+c),\mathcal{L}(Y+c))=\left(\inf_{\pi\in\Pi( \mathcal{L}(X+c),\mathcal{L}(Y+c))}\int_{\mathbb{R}^{2}}|x-y|^{p}d\pi(x,y) \right)^{\frac{1}{p}},\] where \(\Pi(\mathcal{L}(X+c),\mathcal{L}(Y+c))\) is the set of all joint probability measures with marginals \(\mathcal{L}(X+c)\) and \(\mathcal{L}(Y+c)\). Now, consider the mapping \(\Phi:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\) given by \(\Phi(x,y)=(x+c,y+c)\). For any \(\pi\in\Pi(\mathcal{L}(X),\mathcal{L}(Y))\), the pushforward measure of \(\pi\) under \(\Phi\) belongs to \(\Pi(\mathcal{L}(X+c),\mathcal{L}(Y+c))\), i.e., \(\pi^{\prime}=\Phi_{*}\pi\in\Pi(\mathcal{L}(X+c),\mathcal{L}(Y+c))\). Thus, we have \[\Phi_{*}\Pi(\mathcal{L}(X),\mathcal{L}(Y))\subset\Pi(\mathcal{L}(X+c), \mathcal{L}(Y+c)).\] Moreover, \(\Phi\) is bijective and measure preserving, then \[\int_{\mathbb{R}^{2}}|x-y|^{p}d\pi^{\prime}(x,y)=\int_{\mathbb{R}^{2}}|(x+c)- (y+c)|^{p}d\pi(x,y)=\int_{\mathbb{R}^{2}}|x-y|^{p}d\pi(x,y).\] Therefore, we know that \[\mathbb{W}_{p}^{p}\left(\mathcal{L}(X),\mathcal{L}(Y)\right) =\inf_{\pi\in\Pi(\mathcal{L}(X),\mathcal{L}(Y))}\int_{\mathbb{R}^ {2}}|x-y|^{p}d\pi(x,y)\] \[=\inf_{\pi\in\Pi(\mathcal{L}(X),\mathcal{L}(Y))}\int_{\mathbb{R}^ {2}}|x-y|^{p}d\Phi_{*}\pi(x,y)\] \[=\inf_{\pi^{\prime}\in\Phi_{*}\Pi(\mathcal{L}(X),\mathcal{L}(Y))} \int_{\mathbb{R}^{2}}|x-y|^{p}d\pi^{\prime}(x,y)\] \[\geq\mathbb{W}_{p}^{p}(\mathcal{L}(X+c),\mathcal{L}(Y+c)).\] by the definition of the \(p\)-Wasserstein metric. If we apply the above inequality to \(X^{\prime}=X+c\), \(Y^{\prime}=Y+c\), and \(c^{\prime}=-c\), the opposite inequality is provided. Thus, it completes the proof of (48). Next, we note that \[\frac{1}{N}\sum_{i=1}^{N}\delta_{\alpha_{i}+c}=\mathcal{L}(\alpha_{u}+c|\alpha),\] where \(u\) be a uniform random variable on \(\{1,2,\ldots,N\}\) independent to \(\alpha\). Using (48), we conclude (49) from \[\mathbb{W}_{p}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{\alpha_{i}+c}, \mathcal{L}(Y+c)\right) =\mathbb{W}_{p}\left(\mathcal{L}(\alpha_{u}+c|\alpha),\mathcal{L}( Y+c)\right)\] \[=\mathbb{W}_{p}\left(\mathcal{L}(\alpha_{u}|\alpha),\mathcal{L}( Y)\right)\] \[=\mathbb{W}_{p}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{\alpha_{i} },\mathcal{L}(Y)\right).\] **Lemma 12**.: _Under the Assumption 2, there exists a unique solution \((a(t),b(t),c(t),d(t):t\in[0,T])\) for the Riccati system of ODEs (40)-(41) and the solution can given explicitly by_ \[\begin{cases}a(t)=\sqrt{\frac{k}{2}}\frac{1-e^{-2\sqrt{2k}(T-t)}}{1+e^{-2\sqrt {2k}(T-t)}},\\ b(t)=\int_{t}^{T}\left(4a(s)c(s)-2a^{2}(s)\right)ds,\\ c(t)=k\int_{t}^{T}e^{\int_{t}^{s}-4a(r)dr}ds,\\ d(t)=\int_{t}^{T}\left(b(s)+2c(s)\right)ds.\end{cases}\] Proof.: Firstly, with the given of \(k>0\), we can solve the ODE \[a^{\prime}(t)-2a^{2}(t)+k=0,\quad a(T)=0\] explicitly by the method of separating variables. Note that with the differential form, we have \[\frac{da}{\left(\sqrt{2}a-\sqrt{k}\right)\left(\sqrt{2}a+\sqrt{k}\right)}= \frac{1}{2\sqrt{k}}\left(\frac{1}{\sqrt{2}a-\sqrt{k}}-\frac{1}{\sqrt{2}a+ \sqrt{k}}\right)da=dt.\] It follows that \[\ln\left(\left|\frac{\sqrt{2}a-\sqrt{k}}{\sqrt{2}a+\sqrt{k}}\right|\right)=2 \sqrt{2k}t+C_{1}\] for some constant \(C_{1}\) by taking integration on both sides. Thus by calculation, we obtain \[a(t)=\sqrt{\frac{k}{2}}\frac{1-C_{2}e^{2\sqrt{2k}t}}{1+C_{2}e^{2\sqrt{2k}t}}\] for some constant \(C_{2}\) to be determined. 
Since \(a(T)=0\), it follows that \(C_{2}=e^{-2\sqrt{2k}T}\) and thus \[a(t)=\sqrt{\frac{k}{2}}\frac{1-e^{-2\sqrt{2k}(T-t)}}{1+e^{-2\sqrt{2k}(T-t)}}.\] It is easy to verify that \(a(\cdot)\) is in \(C^{\infty}([0,T])\) and is bounded. Given \(a\), the equations for \((b,c,d)\) in the Riccati system (40)-(41) form a coupled linear system, and thus the existence, uniqueness, and boundedness of \((b,c,d)\) are given by Theorem 12.1 in [1].
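As a quick numerical sanity check of Lemma 12 (a sketch added here for illustration only, with arbitrary values of \(k\) and \(T\)), one can integrate the Riccati system (40)-(41) backward in time and compare the numerical solution with the closed-form expression for \(a\):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, T = 2.0, 1.0   # illustrative parameters

def rhs(t, y):
    # y = (a, b, c, d); derivatives rearranged from the equations in (40).
    a, b, c, d = y
    return [2 * a**2 - k, 2 * a**2 - 4 * a * c, 4 * a * c - k, -(b + 2 * c)]

# Backward integration from the terminal condition (41).
sol = solve_ivp(rhs, (T, 0.0), [0.0, 0.0, 0.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, T, 201)
a_num = sol.sol(t)[0]
a_exact = np.sqrt(k / 2) * (1 - np.exp(-2 * np.sqrt(2 * k) * (T - t))) \
                         / (1 + np.exp(-2 * np.sqrt(2 * k) * (T - t)))
print("max |a_num - a_exact| =", np.max(np.abs(a_num - a_exact)))
```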
2302.05133
Wellposedness, exponential ergodicity and numerical approximation of fully super-linear McKean--Vlasov SDEs and associated particle systems
We study a class of McKean--Vlasov Stochastic Differential Equations (MV-SDEs) with drifts and diffusions having super-linear growth in measure and space -- the maps have general polynomial form but also satisfy a certain monotonicity condition. The combination of the drift's super-linear growth in measure (by way of a convolution) and the super-linear growth in space and measure of the diffusion coefficient require novel technical elements in order to obtain the main results. We establish wellposedness, propagation of chaos (PoC), and under further assumptions on the model parameters we show an exponential ergodicity property alongside the existence of an invariant distribution. No differentiability or non-degeneracy conditions are required. Further, we present a particle system based Euler-type split-step scheme (SSM) for the simulation of this type of MV-SDEs. The scheme attains, in stepsize, the strong error rate $1/2$ in the non-path-space root-mean-square error metric and we demonstrate the property of mean-square contraction. Our results are illustrated by numerical examples including: estimation of PoC rates across dimensions, preservation of periodic phase-space, and the observation that taming appears to be not a suitable method unless strong dissipativity is present.
Xingyuan Chen, Goncalo dos Reis, Wolfgang Stockinger
2023-02-10T09:32:15Z
http://arxiv.org/abs/2302.05133v2
Wellposedness, exponential ergodicity and numerical approximation of fully super-linear McKean-Vlasov SDEs and associated particle systems ###### Abstract We study a class of McKean-Vlasov Stochastic Differential Equations (MV-SDEs) with drifts and diffusions having super-linear growth in measure and space - the maps have general polynomial form but also satisfy a certain monotonicity condition. The combination of the drift's super-linear growth in measure (by way of a convolution) and the super-linear growth in space and measure of the diffusion coefficient require novel technical elements in order to obtain the main results. We establish wellposedness, propagation of chaos (PoC), and under further assumptions on the model parameters we show an exponential ergodicity property alongside the existence of an invariant distribution. No differentiability or non-degeneracy conditions are required. Further, we present a particle system based Euler-type split-step scheme (SSM) for the simulation of this type of MV-SDEs. The scheme attains, in stepsize, the strong error rate \(1/2\) in the non-path-space root-mean-square error metric and we demonstrate the property of mean-square contraction. Our results are illustrated by numerical examples including: estimation of PoC rates across dimensions, preservation of periodic phase-space, and the observation that taming appears to be not a suitable method unless strong dissipativity is present. **Keywords:** McKean-Vlasov equations, split-step methods, ergodicity, interacting particle systems, super-linear growth in measure ## 1 Introduction In this work, we analyze a class of McKean-Vlasov Stochastic Differential Equations (MV-SDEs) having drift and diffusion components of convolution type, akin to the porous media equation or interaction kernel modelling, which allows for _super-linear growth in measure and space_ in both coefficients - one may think of the super-linearity as higher-order polynomials under some additional conditions regulating spatial and measure radial growth. We work with MV-SDE dynamics of the form \[\mathrm{d}X_{t} =\big{(}v(X_{t},\mu_{t}^{X})+b(t,X_{t},\mu_{t}^{X})\big{)} \mathrm{d}t+\overline{\sigma}(t,X_{t},\mu_{t}^{X})\mathrm{d}W_{t},\quad X_{0} \in L_{0}^{m}(\mathbb{R}^{d}), \tag{1.1}\] \[\text{where}\;\;v(x,\mu)=\int_{\mathbb{R}^{d}}f(x-y)\mu(\mathrm{d }y)+u(x,\mu),\quad\overline{\sigma}(t,x,\mu)=\sigma(t,x,\mu)+\int_{\mathbb{R }^{d}}f_{\sigma}(x-y)\mu(\mathrm{d}y). \tag{1.2}\] Above, \(\mu_{t}^{X}\) denotes the law of the solution process \(X\) at time \(t\), \(W\) is a multidimensional Brownian motion, \(u,b,\sigma\) and \(f,f_{\sigma}\) are measurable maps and \(X_{0}\) is a sufficiently integrable initial condition (in \(L_{0}^{m}\), for \(m\geq 2\)). Critically, \(f,f_{\sigma},u,\sigma\) are maps of super-linear growth, but not assumed to be differentiable and \(\sigma\) may degenerate. In terms of a modelling perspective in the context of particle dynamics, (1.1)-(1.2) model the dynamics of particle motion where the particle is affected by different sources of forcing. The map \(u\) represents a multi-well gradient potential confining the particle (and the source of super-linear growth in the spatial component) and the convolution map \(f\) contains information on the forces affecting the particles (attractive, repulsive), see [3, 40, 66]. 
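To make the role of the convolution terms concrete, the following Python sketch evaluates the drift and diffusion coefficients of the associated \(N\)-particle system (one-dimensional for simplicity), in which \(\mu^{X}_{t}\) is replaced by the empirical measure of the particles, so that \((f*\mu)(X_i)\approx\frac{1}{N}\sum_{j}f(X_i-X_j)\). The specific coefficients below (a cubic interaction kernel, a double-well confining drift, \(b\equiv 0\), constant \(\sigma\)) are hypothetical choices for illustration only and are not the assumptions of this paper.

```python
import numpy as np

# Hypothetical one-dimensional coefficients, for illustration only.
f       = lambda z: -z**3                 # super-linear interaction kernel
u       = lambda x, m: x - x**3           # double-well confining drift (ignores the measure)
sigma   = lambda t, x, m: 0.5             # constant diffusion part
f_sigma = lambda z: 0.1 * z               # interaction kernel inside the diffusion

def coefficients(t, X):
    """Drift and diffusion of the N-particle system at state X (shape (N,))."""
    diff = X[:, None] - X[None, :]            # pairwise differences X_i - X_j
    conv_f = f(diff).mean(axis=1)             # (f * mu^N)(X_i)
    conv_fs = f_sigma(diff).mean(axis=1)      # (f_sigma * mu^N)(X_i)
    drift = conv_f + u(X, None)               # v + b, with b = 0 in this sketch
    diffusion = sigma(t, X, None) + conv_fs   # sigma-bar
    return drift, diffusion

rng = np.random.default_rng(1)
X0 = rng.normal(size=500)                     # N = 500 particles
b0, s0 = coefficients(0.0, X0)
print("drift/diffusion of first particle:", b0[0], s0[0])
```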
As argued in [40], under certain assumptions, \(v\) (and \(f\)) adds inertia to the particle's dynamic in turn affecting its exit time from a domain of attraction (by accelerating or delaying it) and alters exit locations [3, 30, 40] (also [26]). To motivate the study of equations with a (nonlinear) convolution term \(f_{\sigma}\ast\mu(\cdot):=\int_{\mathbb{R}^{d}}f_{\sigma}(\cdot-y)\mu(\mathrm{ d}y)\) in the diffusion component, which is the main feature of our work, we first mention [38]. There, a Cucker-Smale model incorporating random communication is rewritten as a Cucker-Smale model with multiplicative noise (the diffusion coefficient has the form \((\mathbb{E}[X_{t}]-X_{t})=\int_{\mathbb{R}^{d}}(X_{t}-y)\mu_{t}(\mathrm{d}y)\)), which helps to stabilize flocking states as the effect of the noise diminishes the closer the particles concentrate around their mean; see also [4]. These works give a clear motivation to analyze convolution type diffusion maps (diffusions whose strength depends on the density) - also [31] studies a kinetic flocking model with a more general distance potential (communication rate) function than [38]. In addition, [13] considers general stochastic systems of interacting particles with Brownian noise to study models for the collective behavior (swarming) - more particularly, [13, Section 1.2.2] highlights several open-question model extensions to nonlinear diffusion coefficients (though beyond the scope of this work). The recent works [14, 20] investigate consensus-based optimization (CBO) methods for solving high-dimensional nonlinear unconstrained minimization problems. A CBO scheme updates the particle's position in an iterative manner to explore the optimization landscape. There, particles that are far away from the equilibrium state are expected to exhibit more exploration (i.e., the noise level should be larger) compared to particles close to it. Inspired by the above discussed works, we offer a new class of MV-SDEs adding a new element in the diffusion coefficient by means of a reversion to the population mean expressed through a _fully non-Lipschitz_\(f_{\sigma}*\mu\) significantly beyond the linear interaction diffusion coefficients studied in the mention works. More generally, the motivation to study this class of MV-SDEs and associated interacting particle systems is to present a unified framework to address wellposedness and establish properties useful for downstream applications. For instance, from emerging models of mean-field type in neuroscience [32], understating particle motion and exit times [3, 40, 41, 66, 71], parametric inference [35] (also [7, 25]) is an important consideration. We also point to Section 4 and 5 of [46] for a variety of general interacting systems that are subsumed by our class. Our results can also be viewed as an addition to the literature on granular media type equations as studied in [21, 37, 61]. The existence and uniqueness of solutions to MV-SDEs, in a strong and weak sense, has been extensively studied, see e.g., [19, 22, 44, 47, 50, 53, 55, 58, 62, 64] and references therein, but _none_ cover the setting presented here. To the best of our knowledge, the existence and uniqueness of strong solutions to equations with super-linear growth in the measure component of the drift and the diffusion has not been addressed in general. There exist various works considering super-linearly growing coefficients (in state) but do not incorporate \(f\) or \(f_{\sigma}\), see e.g., [49, 55, 69] and its references. 
In [3] the authors deal with a super-linear \(f\), \(f_{\sigma}\equiv 0\), and a (unbounded) uniformly Lipschitz continuous \(\sigma\), and derive wellposedness (i.e., existence and uniqueness of a strong solution) and large deviation results. Further, [72] allows for a setting similar to ours but requires upfront strong dissipativity and non-degeneracy. Their aim was to study ergodicity, nonetheless, it is unclear how to adapt their methodology if working with the goal of proving wellposedness over \([0,T]\) under milder conditions. From the initial work [3], our goal is to develop a general framework to study (1.1)-(1.2) in terms of wellposedness (over \([0,T]\)), with a super-linearly growing \(\sigma\) and \(f_{\sigma}\), ergodicity and approximation schemes. Our _first main contribution_ concerns wellposedness and propagation of chaos (PoC) results for the finite time horizon \([0,T]\) case. The critical nontrivial hurdle of this setting is in establishing \(L^{p}\)-moment bounds for \(p>2\) under the presence of the super-linear growths of \(f\), \(f_{\sigma}\) and \(\sigma\) - _this issue appears only because_ there is a simultaneous presence of nonlinearities (in space and measure) in the drift and diffusion, otherwise techniques like those of [3] or [49] would suffice. To overcome this hurdle, we introduce a new condition dubbed 'additional symmetry', which we have not found in the literature. For a quick perspective, we suggest the reader to glance at Lemma A.1 and A.2 (in Appendix) and the proof of Theorem 2.5 to see how one deals with the convolution terms and to note the importance of the 'additional symmetry' condition - a discussion on this latter condition is presented in Remark 2.6. We also address a propagation of chaos result for this class [27, 30, 33, 56, 64]. We show that the interacting particle system, obtained by replacing \(\mu^{X}\) by the system's \(N\)-particle empirical distribution recovers the original MV-SDE in the particle limit \(N\to\infty\). Under a mild higher-integrability assumption a convergence rate is available [33]. Our _second main contribution_ addresses another key element of MV-SDE theory which is the existence and uniqueness of an invariant probability measure and exponential convergence to it (i.e., ergodicity). This is a particularly important property in applications involving statistical inference [7] (usually in neuroscience [1, 16, 17, 34, 36]) or associated long-term behaviour connected to metastability [3, 66, 67]. We extend the wellposedness result to the infinite time horizon and then analyze the long-time behaviour of our class of MV-SDEs. We prove an exponential ergodicity property and the existence of an invariant measure. The proof arguments loosely follow those of [43, 52, 69] and, critically, do not make use of Lyapunov functions or the Krylov-Bogoliubov machinery. In fact, to reach this type of results for McKean-Vlasov equations, the Krylov-Bogoliubov machinery is not a suitable one due to the non-linearity of the involved semi-group and hence the classical tightness argument does not apply, see discussions in [43, 52, 69]. On the other hand, Lyapunov function arguments are also difficult to use for the particular MV-SDE class in this manuscript - this is due to the presence of a convolution operator with a function \(f\) (and \(f_{\sigma}\)) that is not of linear growth. If the polynomial power was on the convolution term, instead of inside it, then the Lyapunov machinery would successfully carry through [52]. 
For our case, and as mentioned before, an additional difficulty arises solely due to the presence of the simultaneous super-linear growth in \(f\), \(f_{\sigma}\) and \(\sigma\). The ergodicity/invariance proof arguments of [43, 52, 69] leverage the completeness of the space of probability measures with finite second order moments to identify the invariant measure, but our wellposedness result requires a sufficiently integrable initial condition (with strictly more than second order moments). This issue leads to a more involved proof - see discussion prior to Theorem 2.7. The _third main contribution_ of this work is a numerical method to approximate (1.1)-(1.2) over \([0,T]\) via its interacting particle system. Most of our theoretical results are only proven for the finite time case, but we successfully apply the scheme for the long-term simulation of a particle system as well. There are presently many studies for numerical methods allowing super-linear spatial growth of drifts (and diffusions): Euler type methods, e.g., taming [29], time-adaptive [60], semi-implicit methods [24], projection methods [8]; Milstein type methods e.g., [5; 6; 48; 59] with some allowing super-linear \(\sigma\) in space. There are variations on the assumptions, but _all_ these contributions require drifts and diffusions to be globally Lipschitz continuous in measure (with respect to the Wasserstein distance with quadratic cost). Two recent contributions [51; 57] allow for weaker continuity conditions than Lipschitz for the coefficients but require a linear growth in space and measure. Only [23] allows for general super-linear growth in \(f,u\) but still limits \(\overline{\sigma}\) to satisfy Lipschitz assumptions - we detail below the differences between [23] and this manuscript in more detail. The scheme we propose here belongs to the split-step method (SSM) class. It was analyzed for MV-SDEs with drifts which are Lipschitz in measure and diffusion coefficients satisfying a uniform Lipschitz condition [24]; the scheme appeared originally in [42] for standard SDEs. We follow the strategy of approximating (in time) the interacting particle system associated to the MV-SDE and using a quantitative propagation of chaos (PoC) convergence result, see [15] and [29; 59; 60] for earlier uses of this strategy. From a methodological point of view, the convergence proof of the numerical scheme relies on the stochastic \(C\)-stability and \(B\)-consistency mechanics proposed in [10]. Its use in the context of numerical schemes for MV-SDEs and interacting particle systems seems to be novel in the literature - except for the very recent [12] that obtain a higher order strong scheme for MV-SDE under non-differentiability conditions using a randomisation method. In [12], the authors work with generic Lipschitz assumptions and need to change the underpinning error norms to cope with the complexity arising from the randomization step due to an explicit non-differentiability assumption on the drift coefficient. Our approach and requirements differ, and hence so does the analysis (albeit similar at points). We show that it is possible to work directly with the concepts of [10] to deal with the interacting particle system, see Section 2.5 - we emphasize that the main goal of the analysis is to guarantee that the core estimates are independent of the number of particles \(N\) of the interacting system but may depend on the initial system's underlying dimension. 
As is common in the MV-SDE literature, results from the SDE numerics literature on super-linear growth do not carry over to the MV-SDE setting. Closest to our work with regard to the SSM is [23], where the authors propose an SSM scheme similar to the one here for interacting particle systems that have (1.1) (with \(f_{\sigma}\equiv 0\) and \(\sigma\) globally Lipschitz continuous in space and measure) as limit. There they overcome the barrier of super-linear growth in space and measure for the drift (which [24; 29; 60] do not), but work with a diffusion coefficient of Lipschitz type; the focus of [23] is solely the analysis of the numerical scheme and not wellposedness or ergodicity. Our setting is more involved than that of [23] and the above mentioned works, specifically due to the different sources of super-linearity in the drift and diffusion. In [23], bounds for higher order moments of the discrete process obtained by the time-stepping scheme can be derived under commonly used assumptions like \((\mathbf{A}^{u},\ \mathbf{A}^{\sigma})\) below. In our situation, the super-linearity in the diffusion coefficient (in space and measure) is meant to be controlled by a drift satisfying a suitable one-sided Lipschitz condition. However, the simultaneous appearance of nonlinearities in the diffusion and of the nonlinear convolution \(f*\mu\) in the drift causes difficulties. This is the reason why, for the scheme in this manuscript, only \(L^{2}\)-moment bounds are established and the proof methodology takes recourse to the stochastic \(C\)-stability and \(B\)-consistency mechanics of [10] (which do not require bounds for higher order moments of the SSM). It remains unclear how to obtain higher moments (even with the help of the new additional symmetry assumption).

In terms of findings, we show that the scheme achieves a strong convergence rate of order \(1/2\) and we provide sufficient conditions for mean-square stability in the sense of [24, Definition 2.8]. We present several numerical examples of interest and, for comparison, we implement, without proof, two intuitive versions of taming methods [29; 59]. Our examples show the SSM to perform very well for the approximation of a solution to (1.1) on \([0,T]\) in an \(L^{2}\)-sense and for the approximation of the ergodic distribution. The numerical results using taming are mixed but suffice to highlight which version can be expected to converge (theoretically), including a surprising element regarding the initial condition (see Section 3.1). The SSM is shown to preserve periodicity of the phase-space. Lastly, we provide one numerical example aimed at estimating the PoC rate across dimensions, which highlights a gap in the literature: we observe the rate of [27; 56] (which do not cover our setting) instead of those in [18; 33] (which are used to prove our PoC result). Future research will focus on uniform-in-time PoC results and strong convergence rates for the SSM on \([0,\infty)\).

**Organization of the paper.** Section 2 contains: notations, framework, wellposedness and ergodicity results, the particle approximation and the propagation of chaos statement, and the numerical scheme alongside the associated convergence results. Several numerical examples are provided in Section 3. They cover the non-dissipative case over short and long time horizons; approximation of the invariant distribution; preservation of periodicity in phase-space; and numerical estimation of PoC rates across dimensions. All proofs are postponed to Section 4.
Generic auxiliary results are given in the Appendix.

## 2 Main results

### Notation and Spaces

We follow the notation and framework set in [3; 24]. Let \(\mathbb{N}\) be the set of natural numbers starting at \(0\) and, for \(a,b\in\mathbb{N}\) with \(a\leq b\), define \([\![a,b]\!]:=[a,b]\cap\mathbb{N}=\{a,\cdots,b\}\). For \(x,y\in\mathbb{R}^{d}\) denote by \(\langle x,y\rangle\) the inner product of vectors and by \(|x|=(\sum_{j=1}^{d}x_{j}^{2})^{1/2}\) the Euclidean norm. Let \(\mathbb{1}_{B}\) be the indicator function of the set \(B\subset\mathbb{R}^{d}\). For a matrix \(A\in\mathbb{R}^{d\times l}\) we denote by \(A^{\intercal}\) its transpose and by \(|A|=\text{Trace}\{AA^{\intercal}\}^{1/2}\) its Frobenius norm. We introduce on the measurable space \((\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))\), where \(\mathcal{B}(\mathbb{R}^{d})\) denotes the Borel \(\sigma\)-field over \(\mathbb{R}^{d}\), the set of all probability measures \(\mathcal{P}(\mathbb{R}^{d})\) and its subset \(\mathcal{P}_{r}(\mathbb{R}^{d})\) of those with finite \(r\)-th moment, \(r\in[1,\infty)\). The space \(\mathcal{P}_{r}(\mathbb{R}^{d})\) is a Polish space when endowed with the Wasserstein distance

\[W^{(r)}(\mu,\nu):=\inf_{\pi\in\Pi(\mu,\nu)}\Big(\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|x-y|^{r}\pi(\mathrm{d}x,\mathrm{d}y)\Big)^{\frac{1}{r}},\quad\mu,\nu\in\mathcal{P}_{r}(\mathbb{R}^{d}),\]

where \(\Pi(\mu,\nu)\) is the set of couplings of \(\mu\) and \(\nu\), i.e., \(\pi\in\Pi(\mu,\nu)\) is a probability measure on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) with \(\pi(\cdot\times\mathbb{R}^{d})=\mu\) and \(\pi(\mathbb{R}^{d}\times\cdot)=\nu\). For a function \(f\) with domain \(\mathbb{R}^{d}\), \(x\in\mathbb{R}^{d}\) and \(\mu\in\mathcal{P}(\mathbb{R}^{d})\), the convolution operator \(*\) is defined as \((f*\mu)(x):=\int_{\mathbb{R}^{d}}f(x-y)\mu(\mathrm{d}y)\).

Let our probability space be a completion of \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\) with \(\mathbb{F}=\{\mathcal{F}_{t}\}_{t\geq 0}\) being the natural filtration of the \(l\)-dimensional Brownian motion \(W=(W^{1},\cdots,W^{l})\), augmented with a sufficiently rich sub-\(\sigma\)-algebra \(\mathcal{F}_{0}\) independent of \(W\). We denote by \(\mathbb{E}[\cdot]=\mathbb{E}^{\mathbb{P}}[\cdot]\) the expectation operator with respect to \(\mathbb{P}\). We consider a finite terminal time \(T<\infty\) and use the following notation for spaces, which is standard in the literature [24, 29]. For \(p\geq 1\), we denote by \(L^{p}\left(\Omega,\mathcal{F}_{t},\mathbb{P};\mathbb{R}^{d}\right)\) the space of \(\mathbb{R}^{d}\)-valued, \(\mathcal{F}_{t}\)-measurable random variables \(X\) with finite norm \(|X|_{L^{p}(\Omega,\mathcal{F}_{t},\mathbb{P};\mathbb{R}^{d})}=\mathbb{E}\left[|X|^{p}\right]^{1/p}\). Define \(\mathbb{S}^{p}([0,T])\) as the space of \(\mathbb{R}^{d}\)-valued, \(\mathbb{F}\)-adapted, continuous processes \(Z\) with finite norm \(\|Z\|_{\mathbb{S}^{p}}=\mathbb{E}\left[\sup_{0\leq t\leq T}|Z_{t}|^{p}\right]^{1/p}\).

Throughout the text, \(C\) denotes a generic positive real-valued constant that may depend on the problem's data and change from line to line, but is always independent of the constants \(h,M,N\) (associated with the numerical scheme and specified below).
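For illustration purposes only, the following minimal `numpy` sketch (not part of the mathematical development; all names are illustrative) shows the two objects used repeatedly below: the convolution of a kernel against an empirical measure, and the \(W^{(2)}\)-distance between two one-dimensional empirical measures with equally weighted atoms, for which sorting the samples gives the exact optimal coupling.

```python
import numpy as np

def empirical_convolution(f, x, samples):
    """(f * mu^N)(x) = (1/N) * sum_j f(x - X_j) for the empirical measure of `samples`."""
    return np.mean(f(x - samples), axis=0)

def wasserstein2_1d(samples_mu, samples_nu):
    """W^(2) between two 1-d empirical measures with the same number of equally weighted atoms."""
    xs, ys = np.sort(samples_mu), np.sort(samples_nu)
    return np.sqrt(np.mean((xs - ys) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=1000)                # samples of mu
    Y = rng.normal(loc=0.5, size=1000)       # samples of nu
    cubic = lambda z: -z * np.abs(z) ** 2    # an odd cubic kernel, cf. Example 2.3 below
    print(empirical_convolution(cubic, 0.0, X))
    print(wasserstein2_1d(X, Y))
```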
### Framework

Let \(v:\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}^{d}\), \(b:[0,\infty)\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}^{d}\) and \(\overline{\sigma}:[0,\infty)\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}^{d\times l}\) be measurable maps. The MV-SDE of interest in this work is equation (1.1) (for some \(m>2\)), where \(\mu_{t}^{X}\) denotes the law of the process \(X\) at time \(t\), i.e., \(\mu_{t}^{X}=\mathbb{P}\circ X_{t}^{-1}\). We make the following assumptions on the coefficients.

**Assumption 2.1**.: _The functions \(b\) and \(\sigma\) are \(1/2\)-Holder continuous in time, uniformly in \(x\in\mathbb{R}^{d}\) and \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), and \(\sup_{t\in[0,\infty)}\left(|b(t,0,\delta_{0})|+|\sigma(t,0,\delta_{0})|\right)\leq L\) for some constant \(L\geq 0\)._

\((\mathbf{A}^{b})\) _Let \(b\) be uniformly Lipschitz continuous in the sense that there exist \(L^{(1)}_{(b)},\ L^{(3)}_{(b)}\geq 0\) and \(L^{(2)}_{(b)}\in\mathbb{R}\) such that for all \(t\in[0,\infty)\), \(x,x^{\prime}\in\mathbb{R}^{d}\) and \(\mu,\mu^{\prime}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) we have_

\[|b(t,x,\mu)-b(t,x^{\prime},\mu^{\prime})|^{2}\leq L^{(1)}_{(b)}\big(|x-x^{\prime}|^{2}+\big(W^{(2)}(\mu,\mu^{\prime})\big)^{2}\big),\]
\[\langle x-x^{\prime},b(t,x,\mu)-b(t,x^{\prime},\mu^{\prime})\rangle\leq L^{(2)}_{(b)}|x-x^{\prime}|^{2}+L^{(3)}_{(b)}\big(W^{(2)}(\mu,\mu^{\prime})\big)^{2}.\]

\((\mathbf{A}^{u},\ \mathbf{A}^{\sigma})\) _Let \(u,\sigma\) satisfy: there exist \(L^{(1)}_{(u\sigma)}\in\mathbb{R}\) and \(L^{(2)}_{(u\sigma)},L^{(3)}_{(u\sigma)},L^{(4)}_{(u\sigma)},q_{1}\geq 0\) such that for all \(t\in[0,\infty)\), \(x,x^{\prime}\in\mathbb{R}^{d}\) and \(\mu,\mu^{\prime}\in\mathcal{P}_{2}(\mathbb{R}^{d})\), with \(m>2\), we have_

\[\langle x-x^{\prime},u(x,\mu)-u(x^{\prime},\mu^{\prime})\rangle+2(m-1)|\sigma(t,x,\mu)-\sigma(t,x^{\prime},\mu^{\prime})|^{2}\leq L^{(1)}_{(u\sigma)}|x-x^{\prime}|^{2}+L^{(2)}_{(u\sigma)}\big(W^{(2)}(\mu,\mu^{\prime})\big)^{2},\tag{2.1}\]
\[|u(x,\mu)-u(x^{\prime},\mu)|+|\sigma(t,x,\mu)-\sigma(t,x^{\prime},\mu)|\leq L^{(3)}_{(u\sigma)}(1+|x|^{q_{1}}+|x^{\prime}|^{q_{1}})|x-x^{\prime}|,\tag{2.2}\]
\[|u(x,\mu)-u(x,\mu^{\prime})|^{2}+|\sigma(t,x,\mu)-\sigma(t,x,\mu^{\prime})|^{2}\leq L^{(4)}_{(u\sigma)}\big(W^{(2)}(\mu,\mu^{\prime})\big)^{2}.\]

\((\mathbf{A}^{f},\ \mathbf{A}^{f_{\sigma}})\) _Let \(f,f_{\sigma}\) satisfy: there exist \(L^{(1)}_{(f)},L^{(3)}_{(f)}\in\mathbb{R}\) and \(L^{(2)}_{(f)},q_{2}\geq 0\) such that for all \(x,x^{\prime}\in\mathbb{R}^{d}\) and \(2<p\leq m\), we have_

\[\langle x-x^{\prime},f(x)-f(x^{\prime})\rangle+2(m-1)|f_{\sigma}(x)-f_{\sigma}(x^{\prime})|^{2}\leq L^{(1)}_{(f)}|x-x^{\prime}|^{2},\qquad\text{(One-sided Lipschitz)},\]
\[|f(x)-f(x^{\prime})|+|f_{\sigma}(x)-f_{\sigma}(x^{\prime})|\leq L^{(2)}_{(f)}(1+|x|^{q_{2}}+|x^{\prime}|^{q_{2}})|x-x^{\prime}|,\qquad\text{(Locally Lipschitz)},\]
\[f(x)=-f(-x),\qquad\text{(Odd function)},\]
\[(|x|^{p-2}-|x^{\prime}|^{p-2})\langle x+x^{\prime},f(x-x^{\prime})\rangle\leq L^{(3)}_{(f)}(|x|^{p}+|x^{\prime}|^{p}),\qquad\text{(Additional symmetry)}.\]

_Assume the normalization \(f(0)=f_{\sigma}(0)=0\)._

**Remark 2.2** (Time dependency for \(u\)).: _To avoid added complexity in an already complex work, we do not address time-dependence of \(u\).
A close inspection of the proof for wellposedness and convergence of the numerical scheme shows that as long as the time dependence does not interfere with constraints imposed by Assumption 2.1 the results will hold. Additionally one would require a \(1/2\)-Holder continuity property for the function._ All elements in the above assumption are standard, except the 'additional symmetry' restriction. The 'additional symmetry' is a new type of restriction which we have not found previously in the literature and we discuss it in more detail at several points in the text, in particular, in Remark 2.6. This condition is trivially satisfied when \(d=1\) (see (2.3)) or when the function is Lipschitz. We next provide a non-trivial example in \(d>1\) for \(f\) satisfying the 'extra symmetry' condition. **Example 2.3**.: _For \(x\in\mathbb{R}^{d}\) define \(f(x)=-x|x|^{2}\). Then, for any \(p>2\), \(x,y\in\mathbb{R}^{d}\) it holds that_ \[(|x|^{p-2}-|y|^{p-2})\langle x+y,-(x-y)|x-y|^{2}\rangle=-(|x|^{p-2}-|y|^{p-2})( |x|^{2}-|y|^{2})|x-y|^{2}\leq 0,\] _and the conclusion follows from the monotonicity of the polynomial function._ **Remark 2.4** (Implied properties).: _Let Assumption 2.1 hold with \(m>2\). We provide the following estimates for some positive constant \(C\) which may change line by line (see [23, Remark 2.2] for details). For all \(t\in[0,T]\), \(x,x^{\prime},z\in\mathbb{R}^{d}\) and \(\mu,\mu^{\prime}\in\mathcal{P}_{2}(\mathbb{R}^{d})\), we have_ \[\langle x,f(x)\rangle+2(m-1)|f_{\sigma}(x)|^{2} \leq L^{(1)}_{(f)}|x|^{2},\quad\langle x,u(x,\mu)\rangle+(m-1)| \sigma(t,x,\mu)|^{2}\leq C\big{(}1+|x|^{2}+(W^{(2)}(\mu,\delta_{0}))^{2}\big{)},\] \[\langle x-x^{\prime},u(x,\mu)-u(x^{\prime},\mu^{\prime})\rangle \leq C\big{(}|x-x^{\prime}|^{2}+(W^{(2)}(\mu,\mu^{\prime}))^{2} \big{)},\quad\langle x,b(t,x,\mu)\rangle\leq C(1+|x|^{2}+(W^{(2)}(\mu,\delta_{ 0}))^{2}),\] \[\langle x-x^{\prime},v(x,\mu)-v(x^{\prime},\mu)\rangle \leq(L^{(1)}_{(u\sigma)}+L^{(1)}_{(f)})|x-x^{\prime}|^{2},\quad|b( t,x,\mu)|^{2}\leq C\big{(}1+|x|^{2}+(W^{(2)}(\mu,\delta_{0}))^{2}\big{)}\] \[|\overline{\sigma}(t,x,\mu)|^{2} \leq 2|\sigma(t,x,\mu)|^{2}+2|\int_{\mathbb{R}^{d}}f_{\sigma}(x-y) \mu(\mathrm{d}y)|^{2}\leq 2|\sigma(t,x,\mu)|^{2}+2\int_{\mathbb{R}^{d}}|f_{ \sigma}(x-y)|^{2}\mu(\mathrm{d}y).\] _Let \(f:\mathbb{R}\to\mathbb{R}\) be an odd-function satisfying the one-sided Lipschitz condition, then \(f\) satisfies the additional symmetry condition, i.e., for \(x,y\in\mathbb{R},x\neq y,p\geq 2\), we have_ \[(|x|^{p-2}-|y|^{p-2})\langle x+y,f(x-y)\rangle=\frac{(|x|^{p-2}-|y|^{p-2})(x+y )}{x-y}\langle x-y,f(x-y)\rangle\leq C(|x|^{p}+|y|^{p}). \tag{2.3}\] _The following decomposition is crucial for the remaining parts of this work and allows us to incorporate a nonlinear \(f_{\sigma}\): For \(x\in\mathbb{R}^{d}\), \(m\geq p>2\), \(\mu\in\mathcal{P}_{m}(\mathbb{R}^{d})\) it holds that_ \[\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} |x|^{p-2}\langle x,f(x-y)\rangle\mu(\mathrm{d}y)\mu(\mathrm{d}x) =\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\langle|x|^{p-2}x-|y|^{ p-2}y,f(x-y)\rangle\mu(\mathrm{d}y)\mu(\mathrm{d}x)\] \[=\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\langle|x| ^{p-2}x-|x|^{p-2}y\rangle+(|x|^{p-2}y-|y|^{p-2}y),f(x-y)\rangle\mu(\mathrm{d}y )\mu(\mathrm{d}x)\] \[=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\big{(}\tfrac{1}{2}|x| ^{p-2}\langle x-y,f(x-y)\rangle+\tfrac{1}{4}(|x|^{p-2}-|y|^{p-2})\langle x+y,f (x-y)\rangle\big{)}\mu(\mathrm{d}y)\mu(\mathrm{d}x). 
\tag{2.4}\]

### Existence, uniqueness and ergodicity of the MV-SDE

Let us start by stating the wellposedness result for MV-SDE (1.1).

**Theorem 2.5** (Wellposedness).: _Let Assumption 2.1 hold with \(m>2q+2\). Then there exists a unique strong solution \(X\) to MV-SDE (1.1) satisfying the following estimates. For some constant \(C>0\), we have the pointwise estimate_

\[\sup_{t\in[0,T]}\mathbb{E}\big[|X_{t}|^{\widetilde{m}}\big]\leq C\left(1+\mathbb{E}[|X_{0}|^{\widetilde{m}}]\right)e^{CT},\qquad\text{for any }\ \widetilde{m}\in[2,m].\]

_Moreover, the result extends to \(\mathbb{S}^{\widehat{m}}([0,T])\) as_

\[\mathbb{E}\big[\sup_{t\in[0,T]}|X_{t}|^{\widehat{m}}\big]\leq C\left(1+\mathbb{E}[|X_{0}|^{\widehat{m}}]\right)e^{CT},\qquad\text{for any }\ \widehat{m}\in[2,m).\]

The proof of the wellposedness theorem is postponed to Section 4.1.

**Remark 2.6** (On the 'additional symmetry' restriction).: _The critical element of the proof of this result is the difficulty in establishing (finite) bounds for higher order moments of the solution process. The 'additional symmetry' assumption is a technical condition without which we were not able to establish \(L^{p}\)-moment bounds for \(p>2\) (and \(d>1\)) - proving \(L^{2}\)-moment bounds or uniqueness of the solution is straightforward and the condition is not needed there. The requirement of 'additional symmetry' stems solely from having a super-linearly growing \(\sigma\), \(f_{\sigma}\) and a super-linear growth of the convolution term appearing in the drift. If either of them is of linear growth (or \(d=1\)), then the 'additional symmetry' condition can be removed and the results still hold._

_The strategy used in [3] to establish \(L^{p}\)-moment bounds, working with Assumption 2.1 but with a linearly growing \(\sigma\), is to bound \(\mathbb{E}\big[|X_{t}|^{2p}\big]\) via_

\[\mathbb{E}\big[|X_{t}|^{2p}\big]\leq C\big(\mathbb{E}\big[|X_{t}-\mathbb{E}[X_{t}]|^{2p}\big]+\mathbb{E}\big[|X_{t}|^{2}\big]^{p}\big),\]

_and then noticing that_

\[\mathbb{E}\big[|X_{t}-\mathbb{E}[X_{t}]|^{2p}\big]=\int_{\mathbb{R}^{d}}\big|x-\int_{\mathbb{R}^{d}}y\,\mu_{t}(\mathrm{d}y)\big|^{2p}\mu_{t}(\mathrm{d}x)\leq\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}|x-y|^{2p}\mu_{t}(\mathrm{d}y)\mu_{t}(\mathrm{d}x)=\mathbb{E}\big[|X_{t}-\tilde{X}_{t}|^{2p}\big],\]

_with \(\tilde{X}\) an independent copy of \(X\) driven by its own independent Brownian motion; see Lemma A.1 and Lemma A.2 for extra details. This trick allows one to deal with the convolution term, exploiting its symmetry (see Lemma A.2), but it does not give control over the super-linear diffusion. To be precise, Ito's formula applied to \(|X-\tilde{X}|^{2p}\) forces one to use the polynomial growth condition (2.2) on \(\sigma\), which involves higher moments, instead of (2.1)._

_Without the trick described above, and following more classical approaches [59, Theorem 2.1], it is possible to control the super-linear growth of \(\sigma\) in space (via (2.1)), but it is unclear how to simultaneously control the super-linear growth of the convolution terms in a tractable way (the tricks of Lemma A.1 and Lemma A.2 do not carry over)._

_All in all, there is a competition between the growths of \(f\) and \(\sigma\), \(f_{\sigma}\), and neither of the techniques just described is adequate to establish \(L^{p}\)-moment estimates. The 'additional symmetry' condition offsets this difficulty; see the details of the proof in Section 4.1.
Lifting this restriction is left as an open question._

Next, recalling the constants \(q,m\) from Assumption 2.1, we consider the long-time behaviour: an exponential ergodicity property and the existence of an invariant measure for the MV-SDEs of interest. We point the reader to [69, 70] for a review of recent results. To this end, we need to estimate differences of solutions to (1.1) with different initial conditions and to introduce the associated nonlinear semigroup. Define the nonlinear semigroup \((P_{s,t}^{*})\) for \(0\leq s\leq t<\infty\) on \(\mathcal{P}_{\ell}(\mathbb{R}^{d})\), \(\ell>2q+2\), by setting \(P_{s,t}^{*}\mu:=\text{Law}(X_{s,t})\), where \((X_{s,t})_{t\geq s}\) is the solution to (1.1) started at time \(s\) with \(\text{Law}(X_{s,s})=\mu\). Note that the standard literature sets the semigroup in \(\mathcal{P}_{2}(\mathbb{R}^{d})\), but in this manuscript Theorem 2.5 requires higher integrability of the initial condition and working with \(\mathcal{P}_{\ell}(\mathbb{R}^{d})\), \(\ell>2q+2\), reflects that. In the notation introduced earlier, we have \(P_{0,t}^{*}\mu_{0}^{X}=\mu_{t}^{X}\), and more generally \(P_{s,t}^{*}=P_{r,t}^{*}\circ P_{s,r}^{*}\) for \(s\leq r\leq t\). Crucially, if \(b\) and \(\sigma\) in (1.1) are independent of time, then \(P_{s,t}^{*}=P_{0,t-s}^{*}\) (see [69]). We say that \(\bar{\mu}\) is an invariant distribution of the semigroup \(P^{*}\) if \(P_{0,t}^{*}\bar{\mu}=\bar{\mu}\) holds for all \(t\geq 0\). The semigroup satisfies an ergodic property if there exists \(\widehat{\mu}\in\mathcal{P}_{\ell}(\mathbb{R}^{d})\) such that \(\lim_{t\to\infty}P_{0,t}^{*}\nu=\widehat{\mu}\) (weakly) for all \(\nu\) (for the moment, we leave unspecified the space to which \(\nu\) belongs). In the proof of the theorem below, we show that this property holds true for any \(\nu\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\subset\mathcal{P}_{\ell}(\mathbb{R}^{d})\), \(\ell>2q+2\), with convergence taking place in the \(W^{(\ell)}\)-metric in \(\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\cap\mathcal{P}_{\ell}(\mathbb{R}^{d})\). These results differ from those in [69] and the proof requires further care.

**Theorem 2.7** (Contraction, exponential ergodicity property and invariance).: _Let the Assumptions of Theorem 2.5 hold with \(m>4q+2\). Assume that there exist constants \(L^{(1)}_{(bu\sigma)},L^{(3)}_{(bu\sigma)}\geq 0\) and \(L^{(2)}_{(bu\sigma)}\in\mathbb{R}\) such that for all \(t\in[0,\infty)\), \(x\in\mathbb{R}^{d}\) and \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) we have_

\[\big\langle x,u(x,\mu)+b(t,x,\mu)\big\rangle+(m-1)|\sigma(t,x,\mu)|^{2}\leq L^{(1)}_{(bu\sigma)}+L^{(2)}_{(bu\sigma)}|x|^{2}+L^{(3)}_{(bu\sigma)}\big(W^{(2)}(\mu,\delta_{0})\big)^{2}.\]

_Then the following three assertions hold:_

1. _For_ \(\mu,\nu\in\mathcal{P}_{\ell}(\mathbb{R}^{d})\)_, with_ \(2q+2<\ell\)_,_ \(L^{(1),+}_{(f)}=\max\{L^{(1)}_{(f)},0\}\) _and_ \(\rho_{1}=L^{(2)}_{(b)}+L^{(3)}_{(b)}+2L^{(1)}_{(u\sigma)}+2L^{(2)}_{(u\sigma)}+4L^{(1),+}_{(f)}\)_,_ \(t\in[0,T],\ T<\infty\)_, we have_ \[\big(W^{(2)}(P_{0,t}^{*}\mu,P_{0,t}^{*}\nu)\big)^{2}\leq e^{\rho_{1}t}\big(W^{(2)}(\mu,\nu)\big)^{2}.\tag{2.5}\]
2. _Let_ \(\mu\in\mathcal{P}_{\ell}(\mathbb{R}^{d})\) _with_ \(2q+2<\ell\leq m\)_,_ \(\rho_{2,\ell}=\ell\big(L^{(2)}_{(bu\sigma)}+L^{(3)}_{(bu\sigma)}+2L^{(1),+}_{(f)}+L^{(3)}_{(f)}/2\big)+(\ell-2)/\ell\)_,_ \(t\in[0,T],\ T<\infty\)_.
_Then, for some constant \(C\) depending on \(\ell\), \(L^{(1)}_{(bu\sigma)}\) and \(\sup_{t}|b(t,0,\delta_{0})|\), but independent of the time \(t\), we have_

\[\big(W^{(\ell)}(P_{0,t}^{*}\mu,\delta_{0})\big)^{\ell}\leq e^{\rho_{2,\ell}t}\big(W^{(\ell)}(\mu,\delta_{0})\big)^{\ell}+\frac{C}{\rho_{2,\ell}}\big(e^{\rho_{2,\ell}t}-1\big)\mathbb{1}_{\rho_{2,\ell}\neq 0}+Ct\,\mathbb{1}_{\rho_{2,\ell}=0}.\tag{2.6}\]

3. _Assume further that the functions \(b,\sigma\) are independent of time and that \(\rho_{1},\rho_{2,2\ell-2}<0\) with \(1+m/2\geq\ell>2q+2\). Then (2.5) yields exponential contraction, (2.6) yields bounded orbits, and there exists a unique invariant measure \(\bar{\mu}\in\mathcal{P}_{\ell}(\mathbb{R}^{d})\) such that, for any \(t>0\) and \(\nu_{0}\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\), we have_ \[W^{(2)}(P_{0,t}^{*}\bar{\mu},\bar{\mu})=0\qquad\text{and}\qquad W^{(2)}(P_{0,t}^{*}\nu_{0},\bar{\mu})\leq e^{\rho_{1}t/2}W^{(2)}\big(\nu_{0},\bar{\mu}\big).\] _In fact, for any \(\nu_{0}\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\) we have \(\lim_{t\to\infty}W^{(\ell)}(P_{0,t}^{*}\nu_{0},\bar{\mu})=0\)._

The proof is postponed to Section 4.2, and a quick inspection shows that for statements 1 and 2 one only needs \(\ell>2q+2\). Strictly speaking, the requirement \(\nu_{0}\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\) on the initial distribution is only needed for the final statement. The mechanism of choice for the proof is inspired by [69, Theorem 3.1]. In essence, (2.6) can be interpreted as the existence of a 'non-expanding orbit', i.e., there is a 'bounded orbit', and the exponential contractivity of the Wasserstein metric (under \(\rho_{1},\rho_{2,2\ell-2}<0\)) in (2.5) yields that all orbits are bounded - for further considerations see [52]. It is unclear how the Lyapunov functional techniques used in [43], [52] (or [39]) can be applied to the class of MV-SDEs of this work (it is the super-linear growth of \(f\) inside the convolution term that causes problems; if the polynomial growth is on the convolution integral instead of on \(f\), then the theory carries over) - we leave this issue for future research.

### Particle approximation of the MV-SDE

We now turn to the particle approximation of the MV-SDE with the ultimate goal of establishing a working numerical scheme for the equation. All results here are only concerned with the finite-time case. As in [15, 24, 60], we approximate the MV-SDE (1.1) (driven by the Brownian motion \(W\)) by an interacting particle system, i.e., a system of \(N\) interacting \(\mathbb{R}^{d}\)-valued particles. Let \(i\in[\![1,N]\!]\) and consider \(N\) particles \((X_{t}^{i,N})_{t\in[0,T]}\) with independent and identically distributed (i.i.d.) initial data \(X_{0}^{i,N}=X_{0}^{i}\) (independent copies of \(X_{0}\)) satisfying the \((\mathbb{R}^{d})^{N}\)-valued SDE

\[\mathrm{d}X_{t}^{i,N}=\big(v(X_{t}^{i,N},\mu_{t}^{X,N})+b(t,X_{t}^{i,N},\mu_{t}^{X,N})\big)\mathrm{d}t+\overline{\sigma}(t,X_{t}^{i,N},\mu_{t}^{X,N})\mathrm{d}W_{t}^{i},\quad X_{0}^{i,N}=X_{0}^{i},\tag{2.7}\]

where \(\mu_{t}^{X,N}(\mathrm{d}x):=\frac{1}{N}\sum_{j=1}^{N}\delta_{X_{t}^{j,N}}(\mathrm{d}x)\) with \(\delta_{x}\) the Dirac measure at \(x\in\mathbb{R}^{d}\), and the \(W^{i}\) are independent Brownian motions (also independent of the Brownian motion appearing in (1.1)). We introduce, similarly to [23, Remark 2.4], the auxiliary maps \(V\) and \(\hat{\Sigma}\) to view (2.7) as a system in \(\mathbb{R}^{Nd}\).
**Lemma 2.8**.: _Define \(V:\mathbb{R}^{Nd}\to\mathbb{R}^{Nd}\), \(\hat{\Sigma}:[0,T]\times\mathbb{R}^{Nd}\to\mathbb{R}^{Nd\times Nl}\) by \(V(x^{N})=\big(\cdots,v(x^{i,N},\mu^{x,N}),\cdots\big)\) and \(\hat{\Sigma}(t,x^{N})=\big(\cdots,\overline{\sigma}(t,x^{i,N},\mu^{x,N}),\cdots\big)\) with \(x^{N}=(x^{1,N},\cdots,x^{N,N})\in\mathbb{R}^{Nd}\), \(t\in[0,T]\)._

_Then, under Assumption 2.1 with \(m>2\), for any \(x^{N},y^{N}\in\mathbb{R}^{Nd}\) with corresponding empirical measures \(\mu^{x,N},\mu^{y,N}\), the functions \(V,\ \hat{\Sigma}\) also satisfy a monotonicity condition in \(\mathbb{R}^{Nd}\) (with constants independent of \(N\))._

Proof.: From Assumption 2.1, Remark 2.4 and Jensen's inequality, we deduce, for all \(x^{N},y^{N}\in\mathbb{R}^{Nd}\), \(t\in[0,T]\),

\[\langle x^{N}-y^{N},V(x^{N})-V(y^{N})\rangle+\frac{(m-1)}{2}|\hat{\Sigma}(t,x^{N})-\hat{\Sigma}(t,y^{N})|^{2}\]
\[\quad\leq\frac{1}{2N}\sum_{i=1}^{N}\sum_{j=1}^{N}\big\langle(x^{i,N}-x^{j,N})-(y^{i,N}-y^{j,N}),f(x^{i,N}-x^{j,N})-f(y^{i,N}-y^{j,N})\big\rangle\]
\[\quad\quad+\sum_{i=1}^{N}\Big(\big\langle x^{i,N}-y^{i,N},u(x^{i,N},\mu^{x,N})-u(y^{i,N},\mu^{y,N})\big\rangle+(m-1)|\sigma(t,x^{i,N},\mu^{x,N})-\sigma(t,y^{i,N},\mu^{y,N})|^{2}\Big)\]
\[\quad\quad+\frac{m-1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}|f_{\sigma}(x^{i,N}-x^{j,N})-f_{\sigma}(y^{i,N}-y^{j,N})|^{2}\leq C|x^{N}-y^{N}|^{2},\]

where \(C>0\) is independent of \(N\).

**Propagation of chaos (PoC).** In order to show that the particle approximation (2.7) is effective in approximating the underlying MV-SDE, we present a pathwise propagation of chaos result (convergence as the number of particles increases). To do so, we introduce the system of non-interacting particles

\[\mathrm{d}X_{t}^{i}=\big(v(X_{t}^{i},\mu_{t}^{X^{i}})+b(t,X_{t}^{i},\mu_{t}^{X^{i}})\big)\mathrm{d}t+\overline{\sigma}(t,X_{t}^{i},\mu_{t}^{X^{i}})\mathrm{d}W_{t}^{i},\quad t\in[0,T],\tag{2.8}\]

which are (decoupled) MV-SDEs with i.i.d. initial conditions \(X_{0}^{i}\) (independent copies of \(X_{0}\)). Since the \(X^{i}\)'s are independent, \(\mu_{t}^{X^{i}}=\mu_{t}^{X}\) for all \(i\) (with \(\mu_{t}^{X}\) the marginal law of the solution to (1.1)). We are interested in strong error-type metrics for the numerical approximation, and the relevant PoC result for our case is given in the next theorem; the proof is postponed to Section 4.

**Theorem 2.9** (Propagation of Chaos).: _Let the Assumptions of Theorem 2.5 hold for some \(m>2(q+1)\). Let \(X^{i}\) be the solution to (2.8) in the sense of Theorem 2.5. Then, there exists a unique solution \(X^{i,N}\) to (2.7), and for any \(1\leq p\leq m\) there exists \(C>0\) independent of \(N\) such that_

\[\sup_{i\in\llbracket 1,N\rrbracket}\sup_{t\in[0,T]}\mathbb{E}\big[|X^{i,N}_{t}|^{p}\big]\leq C(1+\mathbb{E}\big[|X_{0}|^{p}\big]).\]

_Moreover, suppose that \(m>2(q+1)\) and \(m>4\); then we have the following convergence result:_

\[\sup_{i\in\llbracket 1,N\rrbracket}\sup_{t\in[0,T]}\mathbb{E}\big[|X^{i,N}_{t}-X^{i}_{t}|^{2}\big]\leq C\left\{\begin{array}{ll}N^{-1/2},&d<4,\\ N^{-1/2}\log N,&d=4,\\ N^{-\frac{2}{d}},&d>4.\end{array}\right.\tag{2.9}\]

This result shows that the particle approximation converges to the MV-SDE with a given rate. Therefore, to establish convergence of our numerical scheme to the MV-SDE (in a strong sense), we only need to show that the discrete-time version of the particle system converges to the "true" particle system.
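For readers wishing to experiment numerically, the following minimal `numpy` sketch (illustrative only; the coefficient choices are placeholders and not the models of Section 3) assembles the drift and diffusion of the particle system (2.7) from pairwise differences. It is the direct analogue of the maps \(V\) and \(\hat{\Sigma}\) above, with a diagonal diffusion kept for simplicity.

```python
import numpy as np

# Illustrative placeholder coefficients (not the paper's models):
f       = lambda z: -z * np.sum(z ** 2, axis=-1, keepdims=True)  # odd, super-linear kernel
f_sigma = lambda z: np.zeros_like(z)                              # kernel in the diffusion (here zero)
u       = lambda x: -x ** 3 / 3.0
b       = lambda t, x: x
sigma   = lambda t, x: 1.0 + 0.25 * x ** 2                        # kept diagonal for simplicity

def particle_drift(t, X):
    """v(X_i, mu^{X,N}) + b(t, X_i, mu^{X,N}) for all particles; X has shape (N, d)."""
    diffs = X[:, None, :] - X[None, :, :]      # pairwise differences X_i - X_j, shape (N, N, d)
    conv = f(diffs).mean(axis=1)               # (1/N) * sum_j f(X_i - X_j)
    return u(X) + conv + b(t, X)

def particle_diffusion(t, X):
    """sigma(t, X_i, mu^{X,N}) + (f_sigma * mu^{X,N})(X_i), entrywise (diagonal) version."""
    diffs = X[:, None, :] - X[None, :, :]
    return sigma(t, X) + f_sigma(diffs).mean(axis=1)
```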
### \(C\)-stability and \(B\)-consistency for the particle system Before introducing our numerical scheme and the corresponding strong convergence result, we first present a definition of \(C\)-stability and \(B\)-consistency for the particle system. The following definitions and methodologies are modifications of the original work in [10] tailored to the present particle system setting. The probability space in this section supports (at least) the \(N\) driving Brownian motions of the particle system and the filtration corresponds to the enlarged filtration generated by all Brownian motions augmented by a rich enough \(\sigma\)-algebra \(\mathcal{F}_{0}\). **Definition 2.10**.: _Let \(h\in(0,T]\) be the stepsize and \(\Psi_{i}:\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\times[0,T]\times \Omega\to\mathbb{R}^{d}\) for all \(i\in\llbracket 1,N\rrbracket\) be a mapping satisfying the following measurability and integrability condition: For every \(t,t+h\in[0,T],\ h\in(0,1)\) and \(X=(X^{1},\cdots,X^{N})\in L^{2}\left(\Omega,\mathcal{F}_{t},\mathbb{P}; \mathbb{R}^{Nd}\right),\ \mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) it holds_ \[\Psi_{i}(X^{i},\mu,t,h)\in L^{2}\big{(}\Omega,\mathcal{F}_{t+h},\mathbb{P}; \mathbb{R}^{d}\big{)},\quad\Psi=(\Psi_{1},\cdots,\Psi_{N}). \tag{2.10}\] _Then, for \(M\in\mathbb{N},Mh=T,\ k\in\llbracket 0,M-1\rrbracket\), \(t_{k}=kh\), we say that a particle system \(\hat{X}^{N}_{k}=(\hat{X}^{1,N}_{k},\cdots,\hat{X}^{N,N}_{k})\in\mathbb{R}^{Nd}\) is generated by the stochastic one-step method \((\Psi,h,\xi)\) with initial condition \(\xi=(\xi^{1},\cdots,\xi^{N})\in L^{2}\left(\Omega,\mathcal{F}_{0},\mathbb{P}; \mathbb{R}^{Nd}\right)\), \(\Psi=(\Psi_{1},\cdots,\Psi_{N})\), if_ \[\hat{X}^{i,N}_{k+1}=\Psi_{i}(\hat{X}^{i,N}_{k},\hat{\mu}^{X,N}_{k},t_{k},h), \quad\hat{\mu}^{X,N}_{k}=\frac{1}{N}\sum_{j=1}^{N}\delta_{\hat{X}^{i,N}_{k}}( \mathrm{d}x),\] \[\hat{X}^{i,N}_{0}=\xi^{i},\quad i\in\llbracket 1,N\rrbracket.\] _We call \(\Psi\) the one-step map of the method._ **Definition 2.11**.: _A stochastic one-step method \((\Psi,h,\xi)\) is called stochastically \(C\)-stable if there exists a constant \(C>0\) and a parameter \(\eta\in(1,\infty)\) such that for all \(t,t+h\in[0,T],\ h>0\) and all random variables \(X^{i,N}_{t},Z^{i,N}_{t}\in L^{2}\left(\Omega,\mathcal{F}_{t},\mathbb{P}; \mathbb{R}^{d}\right),\ i\in\llbracket 1,N\rrbracket\), from identically distributed particle systems with their empirical measures \(\mu^{X,N}_{t},\ \mu^{Z,N}_{t}\in\mathcal{P}_{2}(\mathbb{R}^{d})\), it holds_ \[\mathbb{E}\Big{[}\Big{|}\mathbb{E}\big{[}\Psi_{i}(X^{i,N}_{t}, \mu^{X,N}_{t},t,h)-\Psi_{i}(Z^{i,N}_{t},\mu^{Z,N}_{t},t,h)\mid\mathcal{F}_{t} \big{]}\Big{|}^{2}\Big{]}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\leq(1+Ch )\,\mathbb{E}\big{[}|X^{i,N}_{t}-Z^{i,N}_{t}|^{2}\big{]}+Ch\big{(}W^{(2)}(\mu^{ X,N}_{t},\mu^{Z,N}_{t})\big{)}^{2}.\] Here, and in what follows we denote by \((\mathrm{id}-\mathbb{E}\left[\cdot\mid\mathcal{F}_{t}\right])Y=Y-\mathbb{E} \left[Y\mid\mathcal{F}_{t}\right]\) the projection of an \(\mathcal{F}_{t+h}\)-measurable random variables \(Y\) orthogonal to the conditional expectation \(\mathbb{E}\left[\cdot\mid\mathcal{F}_{t}\right]\). **Definition 2.12**.: _Let \(X^{i,N},\ i\in\llbracket 1,N\rrbracket\), be the unique strong solution to (2.7), with \(\mu^{X,N}\) being the corresponding empirical measure. 
A stochastic one-step method \((\Psi,h,\xi)\) is called stochastically \(B\)-consistent of order \(\gamma>0\) if there exists a constant \(C>0\) such that for all \(t,t+h\in[0,T],\ h\in(0,1)\), it holds_ \[\mathbb{E}\Big{[}\Big{|}\mathbb{E}\big{[}X^{i,N}_{t+h}-\Psi_{i}(X^{ i,N}_{t},\mu^{X,N}_{t},t,h)\mid\mathcal{F}_{t}\big{]}\Big{|}^{2}\Big{]} \leq Ch^{2\gamma+2},\] \[\mathbb{E}\Big{[}\Big{|}\big{(}\mathrm{id}-\mathbb{E}\big{[}\cdot \mid\mathcal{F}_{t}\big{]}\big{)}\left(X^{i,N}_{t+h}-\Psi_{i}(X^{i,N}_{t},\mu^{ X,N}_{t},t,h)\right)\Big{|}^{2}\Big{]} \leq Ch^{2\gamma+1}.\] Next, we show the convergence results based on the definitions above. **Lemma 2.13**.: _Let \((\Psi,h,\xi)\) be a stochastically \(C\)-stable one-step method with some \(\eta\in(1,\infty)\). For the particle system \(X^{i,N}\), given by (2.7) with its empirical distribution \(\mu^{X,N}\), we have_ \[\sup_{n\in\llbracket 0,M]}\sup_{i\in\llbracket 1,N \rrbracket}\mathbb{E}\big{[}|X^{i,N}_{n}-\hat{X}^{i,N}_{n}|^{2}\big{]}\leq e^{ CT}\bigg{[}\mathbb{E}\big{[}|X^{i,N}_{0}-\xi^{i}|^{2}\big{]}\] \[\qquad+\sum_{k=1}^{M}\sup_{i\in\llbracket 1,N \rrbracket}\Big{(}(1+h^{-1})\mathbb{E}\Big{[}\big{|}\mathbb{E}\big{[}X^{i,N}_{ k}-\Psi_{i}(X^{i,N}_{k-1},\mu^{X,N}_{k-1},h)\mid\mathcal{F}_{t_{k-1}}\big{]} \big{|}^{2}\Big{]}\] \[\qquad\qquad\qquad+C_{\eta}\ \mathbb{E}\Big{[}\Big{|}\left( \mathrm{id}-\mathbb{E}\big{[}\cdot\mid\mathcal{F}_{t_{k-1}}\big{]}\right) \big{(}X^{i,N}_{k}-\Psi_{i}(X^{i,N}_{k-1},\mu^{X,N}_{k-1},h)\big{)}\Big{|}^{2 }\big{]}\Big{)}\bigg{]}\] _where \(C_{\eta}=1+(\eta-1)^{-1}\) and \(\hat{X}^{i,N}_{n}\) denotes the particles generated by \((\Psi,h,\xi)\), with \(X^{i,N}_{k}=X^{i,N}_{t_{k}}\), \(\mu^{X,N}_{k}=\mu^{X,N}_{t_{k}}\), \(t_{k}=kh\) for all \(k\in\llbracket 0,M\rrbracket\)._ **Theorem 2.14**.: _Let the stochastic one-step method \((\Psi,h,\xi)\) be stochastically \(C\)-stable and stochastically \(B\)-consistent of order \(\gamma>0\). If \(\xi^{i}=X^{i,N}_{0}=\hat{X}^{i,N}_{0}\), then there exists a constant \(C\) independent of \(N,h\) such that_ \[\sup_{n\in\llbracket 0,M\rrbracket}\sup_{i\in\llbracket 1,N \rrbracket}\mathbb{E}\big{[}|X^{i,N}_{n}-\hat{X}^{i,N}_{n}|^{2}\big{]}\leq Ch^{ 2\gamma},\] _where \(X^{i,N}\) denotes the exact solution to (2.7) and \(\hat{X}^{i,N}\) is the particle generated by \((\Psi,h,\xi)\). In particular, \((\Psi,h,\xi)\) is strongly convergent of order \(\gamma\)._ ### The numerical scheme The split-step method (SSM) proposed here follows the steps of [23] and will be re-casted accordingly. The critical difficulty arises from the simultaneous appearance of the convolution component in \(v\) (1.1) and the super-linear diffusion coefficient. The presence of both nonlinearities is the main hindrance to prove moment bounds of order \(p>2\) for the numerical scheme. Therefore, we rely on the \(C\)-stability and \(B\)-consistency methodology, as this approach does not require to prove moment stability of higher order for the numerical scheme. This is in stark contrast to the techniques used in [23], where the time-stepping scheme has stable moments of higher order (depending on the regularity of the initial data) and strong convergence rates are proven without employing the \(C\)-stability and \(B\)-consistency procedure. Here, we wish to emphasize that even with the symmetry condition it is unclear how to prove \(L^{p}\)-moment bounds of the numerical scheme for \(p>2\). 
**Definition 2.15** (Definition of the SSM).: _Under the framework of Assumption 2.1, let \(h\) satisfy (2.14) and \(M\in\mathbb{N}\) such that \(Mh=T\). Define recursively the SSM approximating of (2.7) as: set \(\hat{X}^{i,N}_{0}=X^{i}_{0}\) for \(i\in\llbracket 1,N\rrbracket\); for \(n\in\llbracket 0,M-1\rrbracket\) and \(i\in\llbracket 1,N\rrbracket\) (recall Remark 2.8), \(t_{n}=nh\), we have_ \[Y^{*,N}_{n}=\hat{X}^{N}_{n}+hV(Y^{*,N}_{n}),\quad\hat{X}^{N}_{n}=(\cdots,\hat {X}^{i,N}_{n},\cdots),\quad Y^{*,N}_{n}=(\cdots,Y^{i,*,N}_{n},\cdots), \tag{2.11}\] \[\text{where}\ \ Y^{i,*,N}_{n}=\hat{X}^{i,N}_{n}+hv(Y^{i,*,N}_{n},\hat{\mu}^{ Y,N}_{n}),\qquad\hat{\mu}^{Y,N}_{n}(\mathrm{d}x):=\frac{1}{N}\sum_{j=1}^{N} \delta_{Y^{j,*,N}_{n}}(\mathrm{d}x), \tag{2.12}\] \[\hat{X}^{i,N}_{n+1}=Y^{i,*,N}_{n}+b(t_{n},Y^{i,*,N}_{n},\hat{\mu}^{ Y,N}_{n})h+\overline{\sigma}(t_{n},Y^{i,*,N}_{n},\hat{\mu}^{Y,N}_{n})\Delta W^{i }_{n},\qquad\Delta W^{i}_{n}=W^{i}_{t_{n+1}}-W^{i}_{t_{n}}. \tag{2.13}\] _The stepsize \(h\) satisfies (this constraint is soft, see [23, Remark 2.7] for details)_ \[h\in\Big{(}0,\min\left\{1,\frac{1}{\zeta}\right\}\Big{)}\text{ where }\zeta=\max\Big{\{}2(L^{(1)}_{(f)}+L^{(1)}_{(u\sigma)}),\ 2(2L^{(1),+}_{(f)}+L^{(1)}_{(u\sigma)}),\ 0\Big{\}}. \tag{2.14}\] The choice of \(h\) is further discussed in the following two remarks. **Remark 2.16** (Choice of \(h\)).: _Let Assumption 2.1 hold (the constraint on \(h\) in (2.14) comes from (4.12), (4.15), (4.16) and (4.17) below) and following the notation of these inequalities, under (2.14) with \(\zeta>0\), there exists \(\lambda\in(0,1)\) such that \(h<\lambda/\zeta\) and_ \[\max\left\{\frac{1}{1-2(L^{(1)}_{(f)}+L^{(1)}_{(u\sigma)})h},\ \frac{1}{1-2(2L^{(1),+}_{(f)}+L^{(1)}_{(u\sigma)}+L^{(2)}_{(u\sigma)})h} \right\}<\frac{1}{1-\lambda}.\] _For \(\zeta=0\), the result is trivial and we conclude that there exists a constant \(C\) independent of \(h\) such that_ \[\max\left\{\frac{1}{1-2(L^{(1)}_{(f)}+L^{(1)}_{(u\sigma)})h},\ \frac{1}{1-2(2L^{(1),+}_{(f)}+L^{(1)}_{(u\sigma)}+L^{(2)}_{(u\sigma)})h} \right\}\leq 1+Ch.\] _As argued in [23, Remark 2.7], the constraint on \(h\) can be lifted._ Recall that the function \(V\) satisfies a one-sided Lipschitz condition in \(\mathbb{R}^{Nd}\) (Remark 2.8), and hence (under (2.14)) a unique solution \(Y_{n}^{*,N}\) to (2.11) as a function of \(\hat{X}_{n}^{N}\) exists [23, Lemma 4.2]. After introducing the discrete scheme, we introduce its continuous extension and the main convergence results. **Definition 2.17** (Continuous extension of the SSM).: _Under the same choice of \(h\) and assumptions in Definition 2.15, for all \(t\in[t_{n},t_{n+1}]\), \(n\in[\![0,M-1]\!]\), \(t_{n}=nh\), \(i\in[\![1,N]\!]\), \(\hat{X}_{0}^{i,N}=X_{0}^{i}\), for \(X_{0}^{i}\) in (2.7), the continuous extension of the SSM is_ \[\mathrm{d}\hat{X}_{t}^{i,N} =\big{(}v(Y_{\kappa(t)}^{i,*,N},\hat{\mu}_{\kappa(t)}^{Y,N})+b( \kappa(t),Y_{\kappa(t)}^{i,*,N},\hat{\mu}_{\kappa(t)}^{Y,N})\big{)}\mathrm{d}t +\overline{\sigma}(\kappa(t),Y_{\kappa(t)}^{i,*,N},\hat{\mu}_{\kappa(t)}^{Y,N })\mathrm{d}W_{t}^{i},\] \[\hat{\mu}_{n}^{Y,N}(\mathrm{d}x): =\frac{1}{N}\sum_{j=1}^{N}\delta_{Y_{n}^{j,*,N}}(\mathrm{d}x), \quad\kappa(t)=\sup\big{\{}t_{n}:t_{n}\leq t,\ n\in[\![0,M-1]\!]\big{\}},\quad \hat{\mu}_{t_{n}}^{Y,N}=\hat{\mu}_{n}^{Y,N}.\] **Theorem 2.18** (Convergence of the SSM).: _Let the assumptions of Theorem 2.9 hold. 
Assume additionally that \(\eta>1\) and \(m>4q+4>\max\{2(q+1),4\}\), with \(X_{0}^{i}\in L_{0}^{m}(\mathbb{R}^{d})\) and \(q\) as defined in Assumption 2.1. Choose \(h\) as in (2.14). Then the SSM scheme defined in (2.11)-(2.13) has the following properties._

1. _The SSM is_ \(C\)_-stable;_
2. _The SSM is_ \(B\)_-consistent with_ \(\gamma=1/2\) _in the sense of Definition 2.12;_
3. _For_ \(i\in[\![1,N]\!]\)_, let_ \(X^{i,N}\) _be the solution to (2.7); then there exists a constant_ \(C>0\) _(independent of_ \(N\) _and_ \(h\)_) such that_ \[\sup_{i\in[\![1,N]\!]}\sup_{t\in[0,T]}\mathbb{E}\big[|X_{t}^{i,N}-\hat{X}_{t}^{i,N}|^{2}\big]\leq Ch.\]

Lastly, we present a result on the long-time stability of the proposed numerical scheme as a means to access the invariant distribution of the original MV-SDE by way of simulation. In other words, we provide sufficient conditions for our scheme to be _mean-square contractive_ as \(T\to\infty\) in the sense of [24, Definition 2.8].

**Theorem 2.19**.: _Let the Assumptions of Theorem 2.18 and Theorem 2.7 hold. Suppose that \(X_{0}\in L_{0}^{m}(\mathbb{R}^{d})\) and \(Z_{0}\in L_{0}^{m}(\mathbb{R}^{d})\) for \(m>4q+4\) as in Theorem 2.18, and let \(\hat{X}_{0}^{i,N}\) and \(\hat{Z}_{0}^{i,N}\) be i.i.d. copies of \(X_{0}\) and \(Z_{0}\) respectively, for all \(i\in[\![1,N]\!]\)._

_Set \(h>0\). For \(i\in[\![1,N]\!]\) and \(n\in[\![1,M]\!]\), define \((\hat{X}_{n}^{i,N},Y_{n}^{i,X,N})\) and \((\hat{Z}_{n}^{i,N},Y_{n}^{i,Z,N})\) as the output of the SSM (2.12)-(2.13) (i.e., \(\star=X,Z\)) corresponding to the empirical measure pairs \((\hat{\mu}_{n}^{X,N},\hat{\mu}_{n}^{Y,X,N})\) and \((\hat{\mu}_{n}^{Z,N},\hat{\mu}_{n}^{Y,Z,N})\) with initial conditions \(X_{0}^{i,N}\) and \(Z_{0}^{i,N}\) respectively. Then, for any \(n\in[\![1,M]\!]\),_

\[\sup_{i\in[\![1,N]\!]}\mathbb{E}\big[|\hat{X}_{n}^{i,N}-\hat{Z}_{n}^{i,N}|^{2}\big]\leq(1+\beta h)^{n}\sup_{i\in[\![1,N]\!]}\mathbb{E}\big[|\hat{X}_{0}^{i,N}-\hat{Z}_{0}^{i,N}|^{2}\big],\]

_where we recall the parameters of Theorem 2.7,_

\[\beta=\frac{\rho_{1}+2L_{0}^{(1)}h}{1-h(4L_{(f)}^{(1),+}+2L_{(u\sigma)}^{(1)}+2L_{(u\sigma)}^{(2)})},\quad\rho_{1}=4L_{(f)}^{(1),+}+2L_{(u\sigma)}^{(1)}+2L_{(u\sigma)}^{(2)}+2L_{(b)}^{(2)}+2L_{(b)}^{(3)}.\]

_Under the choice of \(h\) stated in Theorem 2.18, the quantity \(1+\beta h\) is always positive. If \(\rho_{1}<0\) and \(h\) is sufficiently small, then \(\beta<0\) and thus the SSM is mean-square contractive in the sense of [24, Definition 2.8]._

## 3 Examples of interest

We illustrate the performance of the SSM on several numerical examples. As the "true" solution of the considered models is unknown, the convergence rates for these examples are computed with respect to a proxy solution obtained from an approximation with a smaller timestep \(h\).
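Throughout the examples, each SSM step solves the implicit equation (2.11) for the whole particle system. The following minimal sketch (illustrative only; coefficient choices are placeholders, not the models below) performs one SSM step in a one-dimensional setting, using `scipy.optimize.root` as a stand-in for the Newton solver of [23, Appendix B] used in the actual experiments.

```python
import numpy as np
from scipy.optimize import root

# Placeholder one-dimensional coefficients (illustrative, not the paper's examples):
f    = lambda z: -z ** 3                   # convolution kernel in the drift
u    = lambda x: -0.25 * x ** 3
b    = lambda t, x: x
sbar = lambda t, x: x + 0.25 * x ** 2      # sigma-bar, here without a measure term

def drift_v(Y):
    """v(Y_i, mu^{Y,N}) = u(Y_i) + (1/N) sum_j f(Y_i - Y_j), vectorised over particles."""
    return u(Y) + f(Y[:, None] - Y[None, :]).mean(axis=1)

def ssm_step(X_hat, t_n, h, dW):
    """One SSM step (2.11)-(2.13): implicit drift stage, then explicit diffusion stage."""
    # Stage 1: solve Y* = X_hat + h * V(Y*) for the whole particle system.
    Y_star = root(lambda Y: Y - X_hat - h * drift_v(Y), X_hat).x
    # Stage 2: explicit step with b and sigma-bar evaluated at Y*.
    return Y_star + h * b(t_n, Y_star) + sbar(t_n, Y_star) * dW

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, h = 200, 1e-2
    X = rng.normal(size=N)
    X = ssm_step(X, 0.0, h, np.sqrt(h) * rng.normal(size=N))
```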
The strong error between the proxy ('true') solution \(X_{T}\) and the approximation \(\hat{X}_{T}\) is measured as

\[\text{root Mean-square error (rMSE)}=\Big(\mathbb{E}\big[|X_{T}-\hat{X}_{T}|^{2}\big]\Big)^{\frac{1}{2}}\approx\Big(\frac{1}{N}\sum_{j=1}^{N}|X_{T}^{j}-\hat{X}_{T}^{j}|^{2}\Big)^{\frac{1}{2}}.\]

We also consider the path-type strong error

\[\text{Strong error (path)}=\Big(\mathbb{E}\big[\sup_{t\in[0,T]}|X_{t}-\hat{X}_{t}|^{2}\big]\Big)^{\frac{1}{2}}\approx\Big(\frac{1}{N}\sum_{j=1}^{N}\sup_{n\in\llbracket 0,M\rrbracket}|X_{n}^{j}-\hat{X}_{n}^{j}|^{2}\Big)^{\frac{1}{2}}.\]

The propagation of chaos (PoC) rate between different particle systems \((\hat{X}_{T}^{i,N_{l}})_{i,l}\), where \(i\) denotes the \(i\)-th particle and \(N_{l}\) the size of the system, is measured by

\[\text{Propagation of chaos Error (PoC-Error)}\approx\Big(\frac{1}{N_{l}}\sum_{j=1}^{N_{l}}|\hat{X}_{T}^{j,N_{l}}-\hat{X}_{T}^{j,N_{l+1}}|^{2}\Big)^{\frac{1}{2}}.\tag{3.1}\]

Here \(N_{l+1}=2N_{l}\) and the first half of the \(N_{l+1}\) particles use the same Brownian motions as the whole \(N_{l}\)-particle system. In this section, the rMSE computations take \(h\in\{10^{-1},5\times 10^{-2},2\times 10^{-2},10^{-2},5\times 10^{-3},2\times 10^{-3},10^{-3}\}\) with \(N=1000\), and the proxy solution takes \(h=10^{-4}\). The PoC computations take \(N\in\{40,80,160,320,640,1280\}\) with \(h=10^{-3}\), and the proxy solution takes \(N=2560\).

**Remark 3.1** ('Taming' algorithm).: _For comparative purposes, we implement the 'Taming' algorithm [24, 29] - any convergence analysis of the taming algorithm in the framework of this manuscript is an open question. Of the many possible taming variants, we implement the following two cases: taming \(f\) (and similarly \(f_{\sigma}\)) inside the convolution term ('Taming-in') and taming the convolution itself ('Taming-out'). Concretely, set \(Mh=T\); then \(f\) is replaced by (for \(\alpha\in(0,1]\))_

* _'Taming-out':_ \(\int_{\mathbb{R}^{d}}f(\cdot-y)\mu(\mathrm{d}y)\) _is replaced by_ \(\int_{\mathbb{R}^{d}}f(\cdot-y)\mu(\mathrm{d}y)/\big(1+M^{\alpha}|\int_{\mathbb{R}^{d}}f(\cdot-y)\mu(\mathrm{d}y)|\big)\)_._
* _'Taming-in':_ \(f\) _is replaced by_ \(f\big/\big(1+M^{\alpha}|f|\big)\)_._

Note that the proxy solution for the SSM is computed using the SSM, and analogously for the taming schemes. For each example, the error rates of Taming and SSM are computed using the same Brownian motion paths and the same initial data. To avoid confusion later in the numerical results, we clarify that, due to the super-linear convolution kernel, we do not expect the Taming methods to converge. However, under mild initial conditions it is rare to observe divergence, so we test high-variance cases to show that the Taming methods do not work in general while the SSM works as expected. We remark that the first step (2.11) of the SSM requires solving an implicit equation in \(\mathbb{R}^{Nd}\), which is done employing Newton's method (see [23, Appendix B] for details).

Below, the symbol \(\mathcal{N}(\alpha,\beta)\) denotes the normal distribution with mean \(\alpha\in\mathbb{R}\) and variance \(\beta\in(0,\infty)\), the symbol \(U(a,b)\) denotes the uniform distribution over \([a,b]\) for \(-\infty<a<b<\infty\), and the symbol \(B(c,p)\) denotes the two-point distribution of random variables \(X\) such that \(X=0\) with probability \(p\) and \(X=c\) with probability \(1-p\).
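To make Remark 3.1 and the error estimators above concrete, here is a minimal one-dimensional sketch (illustrative only; function and variable names are not from the paper) of the two tamed convolution drifts and of the rMSE and PoC-Error computations.

```python
import numpy as np

def tamed_drift_out(f, Y, M, alpha=1.0):
    """'Taming-out': tame the whole convolution (1/N) sum_j f(Y_i - Y_j)."""
    conv = f(Y[:, None] - Y[None, :]).mean(axis=1)
    return conv / (1.0 + M ** alpha * np.abs(conv))

def tamed_drift_in(f, Y, M, alpha=1.0):
    """'Taming-in': tame the kernel f itself before averaging over the particles."""
    fz = f(Y[:, None] - Y[None, :])
    return (fz / (1.0 + M ** alpha * np.abs(fz))).mean(axis=1)

def rmse(X_proxy, X_hat):
    """root Mean-square error at time T, estimated over the N particles."""
    return np.sqrt(np.mean((X_proxy - X_hat) ** 2))

def poc_error(X_Nl, X_Nl_next):
    """PoC-Error (3.1): first N_l particles of the 2*N_l system vs the N_l system."""
    Nl = X_Nl.shape[0]
    return np.sqrt(np.mean((X_Nl - X_Nl_next[:Nl]) ** 2))
```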
### Example: Symmetric double-well type model

We consider an extension of the symmetric double-well model [66] of confinement type, with extra super-linearity in the diffusion coefficient [63, Section 5],

\[\mathrm{d}X_{t}=\big(v(X_{t},\mu_{t}^{X})+X_{t}\big)\mathrm{d}t+(X_{t}+\tfrac{1}{4}X_{t}^{2})\mathrm{d}W_{t},\qquad v(x,\mu)=-\tfrac{1}{4}x^{3}+\int_{\mathbb{R}}-\big(x-y\big)^{3}\mu(\mathrm{d}y).\tag{3.2}\]

The corresponding Fokker-Planck equation is \(\partial_{t}\rho=\nabla\cdot\big[\nabla\big(\tfrac{\rho}{2}|x+\tfrac{1}{4}x^{2}|^{2}\big)+\rho\nabla V+\rho\nabla(W*\rho)\big]\) with \(W=\frac{1}{4}|x|^{4}\), \(V=\frac{1}{16}|x|^{4}-\frac{1}{2}|x|^{2}\), where \(\rho\) is the corresponding density map. Due to the structure of the drift term, we expect three cluster states, around \(x\in\{-2,0,2\}\). The goal of this example is to simulate the interacting particle system associated with (3.2) up to \(T=10\) using the three numerical methods available. Note that Theorem 2.7 does not apply.

Figure 3.1 (a) and (c) show the evolution of the density map at \(T\in\{1,3,10\}\). In (a), with \(X_{0}\sim\mathcal{N}(0,1)\), all three methods yield similar results, but (c) shows that with \(X_{0}\sim B(50,0.5)\), Taming-out (blue, left) and Taming-in fail to produce acceptable results, while the SSM produces the expected ones. Figure 3.1 (b) shows the strong convergence of the methods; Taming-out failed to converge. Taming-in and the SSM converge for all time step choices (all satisfying (2.14)) and nearly attain the \(1/2\) strong error rate; the error of the SSM is one order of magnitude smaller than that of Taming-in. Figure 3.1 (d) shows the path-type strong convergence of the methods, and we observe that Taming-out and Taming-in fail to converge, or at least converge at a very low rate. The SSM converges for all time step choices, but the errors are one order of magnitude greater than the standard strong errors.

As mentioned earlier, we do not have any theoretical support for the convergence of the taming methods. This example shows that a convergence proof for Taming-in might be feasible, possibly under the caveat of an additional condition on the distribution/support of the initial condition - this was fully unforeseen. The results for Taming-out are discouraging; nonetheless, under strong dissipativity Taming-out seems stable (see the next example).

### Example: Approximating the invariant distribution

This example illustrates long-time simulation for the purpose of approximating the invariant distribution of the system

\[\mathrm{d}X_{t}=\big(v(X_{t},\mu_{t}^{X})-X_{t}\big)\mathrm{d}t+\tfrac{1}{4}(1-X_{t}^{2})\mathrm{d}W_{t},\qquad v(x,\mu)=-x^{3}+\int_{\mathbb{R}}-\big(x-y\big)^{3}\mu(\mathrm{d}y).\tag{3.3}\]

The corresponding Fokker-Planck equation is \(\partial_{t}\rho=\nabla\cdot\big[\nabla\big(\tfrac{\rho}{32}|1-x^{2}|^{2}\big)+\rho\nabla V+\rho\nabla(W*\rho)\big]\) with \(W=\tfrac{1}{4}|x|^{4}\), \(V=\tfrac{1}{4}|x|^{4}+\tfrac{1}{2}|x|^{2}\), where \(\rho\) is the corresponding density map. We know that there is a unique invariant distribution, see Theorem 2.7. Here, the cluster state is \(x=0\).

Figure 3.2 (a) and (c) show the evolution of the particle distribution under different initial conditions. All three methods produce similar outputs at \(T\in\{3,10\}\), with Taming-out taking longer to contract and converge than the other methods under \(X_{0}\sim\mathcal{N}(2,16)\) in (a) and \(X_{0}\sim U(4,12)\) in (c).
The similar results obtained at \(T\in\{3,10\}\) are due to the fact that the model (3.3) has an invariant distribution and the initial distribution is compactly supported around the cluster state \(x=0\). Figure 3.2 (b) illustrates the strong convergence of the three methods: they all converge with rates of order close to \(1/2\), and the SSM outperforms the other two methods by one to two orders of magnitude. Figure 3.2 (d) depicts the expected exponential decay rate for the SSM under the different initial conditions of Theorem 2.7: \(X_{1,0}\sim\mathcal{N}(0,1)\), \(X_{2,0}\sim U(-3,3)\), \(X_{3,0}\sim\mathcal{N}(2,16)\), \(X_{4,0}\sim\mathcal{N}(2,100)\) (same Brownian motion samples).

Figure 3.1: Simulation of the double-well model (3.2) with \(N=1000\) particles. All schemes are initialized on the exact same samples. (a) and (c) show the density map for Taming-out (left), Taming-in (middle) and SSM (right) with \(h=0.01\) at times \(T\in\{1,3,10\}\) seen top-to-bottom and with different initial distributions. (b) Strong error (rMSE) of SSM and Taming with \(X_{0}\sim\mathcal{N}(3,9)\) in log-scale. (d) Strong error (Path) of SSM and Taming with \(X_{0}\sim\mathcal{N}(3,9)\) in log-scale.

### Example: Kinetic 2d Van der Pol (VdP) oscillator and periodic phase-space

We consider a two-dimensional Van der Pol oscillator model with added super-linear terms. The VdP model was proposed to describe stable oscillations [45, Section 4.2 and 4.3], and for a system of many coupled oscillators in the presence of noise the limit model is an MV-SDE [2]. Here, we build a two-dimensional VdP-type model with mean-field components and super-diffusivity that features a periodic phase-space, in order to show that the SSM preserves the theoretical periodic behavior in simulation scenarios - see [16, Section 7.3]. Set \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\) and define the functions \(f,u,b,\sigma\) as

\[f(x)=-x|x|^{2},\ u(x)=\left[\begin{array}{c}-\frac{1}{3}x_{1}^{3}\\ 0\end{array}\right],\ b(x)=\left[\begin{array}{c}x_{1}-x_{2}\\ x_{1}\end{array}\right],\ \sigma(x)=\left[\begin{array}{cc}1+1/4\;x_{1}^{2}&0\\ 0&0\end{array}\right],\tag{3.4}\]

where \(f\) satisfies \((\mathbf{A}^{f})\). Figure 3.3 (a)-(o) shows the system's phase-space portraits (i.e., the parametric plots of \(t\mapsto(X_{1,t},X_{2,t})\) and \(t\mapsto(\mathbb{E}[X_{1,t}],\mathbb{E}[X_{2,t}])\)) for the three methods with different choices of \(N\). In the first row of Figure 3.3, (a)-(e) show the results of the Taming-out method; the system fails to converge for \(N>50\). The second and third rows of Figure 3.3 show the results of Taming-in and the SSM; both methods converge and the trajectories become smoother as more particles are used. However, there is a notable difference between the expectation trajectories of the SSM and those of Taming-in: the expectation trajectories of the SSM do not cross themselves, while those of Taming-in always do, which is not expected since the slope fields of the VdP model are smooth and do not admit such crossings. Moreover, comparing the first few steps of the sample paths, the particles generated by the SSM concentrate around the expectation path within two steps, while those generated by Taming-in take about 10 steps. This is because the SSM preserves the super-linear power of the convolution kernel, while Taming-in turns this power into an asymptotically linear one.
Thus, the SSM preserves more geometric properties than the taming methods, even though the approximation obtained via taming may not blow up.

Figure 3.2: Approximation of the invariant distribution of (3.3) with \(N=1000\) particles. The simulated Brownian motion paths and initial distribution are the same for all schemes. (a) and (c) show the distribution for Taming-out (left), Taming-in (middle) and SSM (right) with \(h=0.01\) at times \(T\in\{1,3,10\}\) seen top-to-bottom and with different initial distributions; \(x\)- and \(y\)-scales are fixed. (b) Strong error (rMSE) of SSM and Taming with \(X_{0}\sim\mathcal{N}(2,16)\). (d) Expected distance (in log-scale) between particles under different initial distributions with \(h=10^{-3}\) for the SSM.

### Example: Super-linear growth of measure components in diffusion

This example illustrates the effect of two additional types of measure-nonlinearities included in the diffusion term; Case 1 corresponds to a convolution term in the diffusion and Case 2 is a variance-type term (which is beyond the scope of the paper). Note that the assumptions of the wellposedness result are not satisfied, as the estimate (2.1) does not hold (although this could readily be achieved by slightly modifying the constants of the coefficients), which indicates that this bound is not sharp. We consider

\[\mathrm{d}X_{t}=\big(v(X_{t},\mu_{t}^{X})+X_{t}\big)\mathrm{d}t+\big(X_{t}+\tfrac{1}{4}X_{t}^{2}+f_{\sigma}(X_{t},\mu_{t}^{X})\big)\mathrm{d}W_{t},\tag{3.5}\]
\[\text{with }v(x,\mu)=-\tfrac{1}{4}x^{3}+\int_{\mathbb{R}}-\big(x-y\big)^{3}\mu(\mathrm{d}y),\qquad f_{\sigma}(x,\mu)=\begin{cases}\int_{\mathbb{R}}\big(x-y\big)^{2}\mu(\mathrm{d}y),&\text{Case 1,}\\ \int_{\mathbb{R}}\int_{\mathbb{R}}\big(y-z\big)^{2}\mu(\mathrm{d}y)\mu(\mathrm{d}z),&\text{Case 2.}\end{cases}\]

For Case 1, we have a nonlinear convolution kernel \(f_{\sigma}(x)=x^{2}\) for all \(x\in\mathbb{R}\). Figure 3.4, in particular subplots (a)-(c), illustrates that the SSM converges, in a pointwise sense, with strong order \(1/2\) and recovers reasonable density estimates for different choices of the initial distribution. Similar behaviour is not observed for the taming approaches, which fail to recover the anticipated strong convergence order of \(1/2\); we also observe that the taming schemes do not capture the density of the solution well for high-variance initial data. We conducted an analogous test with \(v(x,\mu)=-x^{3}/4\) in (d), i.e., with the convolution term removed from the drift, and the experiment failed in the sense that the approximate solutions computed by the SSM did not converge. This supports our theoretical results: a suitable drift compensation for the nonlinear measure component appearing in the diffusion is indeed needed.

Case 2 corresponds to an example where the convolution term is integrated once more, i.e., it resembles a variance-type term. We are not aware of an existing result that yields wellposedness of the underlying MV-SDE including such a term (even without the nonlinear convolution terms). Further, it is not clear which assumptions would be required for a numerical scheme to converge in a strong sense. The expected strong convergence order is observed for the SSM in (e), but no taming approach appears to be a reasonable alternative. We additionally conducted a numerical experiment for Case 2 with \(v(x,\mu)=-x^{3}/4\), in order to investigate whether the variance-type term requires a compensation term (similarly to the changed Case 1).
We also observed that no time-stepping scheme (i.e., neither taming nor the SSM) seemed to converge (the results are similar to (d) and we do not present them here), which again indicates that the drift's convolution term can also help to control variance-type terms in the diffusion.

Figure 3.3: Simulation of the VdP model (3.4) with different numbers of particles and \(h=10^{-2}\), \(T=12\), \(X_{1,0}\sim\mathcal{N}(2,16)\), \(X_{2,0}\sim\mathcal{N}(0,16)\). (a)(b)(c)(d)(e) are phase portraits of the Taming-out method for different choices of \(N\). (f)(g)(h)(i)(j) are phase portraits of the Taming-in method for different choices of \(N\). (k)(l)(m)(n)(o) are phase portraits of the SSM for different choices of \(N\).

### Example: Propagation of Chaos rate across dimensions

In this example, we estimate the PoC rate as a function of the dimension and compare the findings to the theoretical upper bounds established in Theorem 2.9. For equation (1.1)-(1.2) we make the following choices: let \(d\geq 2\), \(x=(x_{1},\cdots,x_{d})\in\mathbb{R}^{d}\), let the initial condition \(X_{0}\) be a vector of \(d\) independent \(\mathcal{N}(1,1)\)-random variables, and

\[f(x)=-x|x|^{2},\quad u(x)=-\frac{1}{3}\left[\begin{array}{cccc}x_{1}^{3},&x_{2}^{3},&\cdots,&x_{d}^{3}\end{array}\right]^{\intercal},\quad b(t,x,\mu)=x,\]
\[\overline{\sigma}(x)=\left[\begin{array}{cccc}x_{1}+1/4\;x_{1}^{2}&x_{2}&\cdots&x_{d}\\ x_{1}&x_{2}+1/4\;x_{2}^{2}&\cdots&x_{d}\\ \cdots&\cdots&\cdots&\cdots\\ x_{1}&x_{2}&\cdots&x_{d}+1/4\;x_{d}^{2}\end{array}\right].\tag{3.6}\]

This is a toy model with a high-dimensional, fully coupled convolution kernel and a super-linear diffusion term. We observe in Figure 3.5 a strong PoC rate, estimated via (3.1), of order roughly \(1/2\) across the dimension \(d\). Using ordinary least-squares linear regression, for dimensions \(d\in\{2,3,4,6,10\}\) the corresponding slopes are \(\{\text{slope}_{d}\}_{d}=\{-0.55,-0.57,-0.5,-0.50,-0.49\}\) and the corresponding \(R\)-squared values are \(\{R_{d}^{2}\}_{d}=\{0.81,0.75,0.92,0.91,0.98\}\). These findings are in line with those obtained in the one-dimensional example of [60, Example 4.1]. Theorem 2.9 only guarantees a strong convergence rate (in the number of particles, in a pathwise sense) of order \(1/4\) for dimensions \(d<4\), so the observed errors decay faster than the PoC upper bounds of Theorem 2.9 - this highlights a gap in the literature to be explored in future research.

For perspective, at a theoretical level the rate \(1/2\) in \(N\) is not new under stronger assumptions. It was obtained in [27, Lemma 5.1] or [65] when the drift and diffusion coefficients are assumed to satisfy strong regularity assumptions. Also in [56], for linear-type MV-SDEs featuring diffusions \(\mathbb{R}^{d}\ni x\mapsto\overline{\sigma}(x)\) and drifts with a structure of the type \(\mathbb{R}^{d}\ni x\mapsto\int_{\mathbb{R}^{d}}b(x,y)\mu(\mathrm{d}y)\), and requiring that \(b,\overline{\sigma}\) are uniformly Lipschitz, the convergence rate \(1/2\) in the number of particles is obtained; see also [28].

Figure 3.4: Approximation of (3.5) with \(N=1000\) particles. The simulated Brownian motion sample paths and initial distribution are the same for all schemes. (a) and (c) show the distribution for Taming-out (left), Taming-in (middle) and SSM (right) with \(h=0.01\) at times \(T\in\{1,3,10\}\) seen top-to-bottom and with different initial distributions; \(x\)- and \(y\)-scales are fixed.
(b), (d) and (e) show the strong error (rMSE) of SSM and Taming with \(X_{0}\sim\mathcal{N}(1,1)\) for different cases. The changed Case 1 in (d) is Case 1 with \(v(x,\mu)=-x^{3}/4\). ### Discussion We discuss the advantages of the SSM compared with the taming methods. The SSM converges in all cases, while the two types of taming fail to converge in some of them. The SSM requires an implicit solver for the convolution kernel, but its running time is only 2 to 3 times that of the taming methods. From the numerical examples, we see that: 1. The two types of strong errors of the SSM are of order 0.5 and consistently outperform those of the proposed taming schemes. In fact, the taming methods are not even expected to converge; however, for mild initial conditions the divergence is hard to observe. In the tests with high-variance initial distributions, the taming methods diverge while the SSM converges consistently. The SSM preserves convergence for larger time steps \(h\) (as indicated by comparatively lower errors) and is also suitable for long-time simulation. 2. The SSM preserves important geometric properties (the particles concentrate quickly, and the expected trajectory coincides with the vector field result), while the taming methods appear to fail to capture these crucial properties. 3. We applied the SSM to examples where the diffusion also involves certain nonlinear measure terms. As long as a suitable monotonicity condition is satisfied, the SSM yields promising results. 4. We perform a PoC rate test across dimensions with a non-trivial convolution kernel. The rate we observe numerically is better than the one suggested by the theoretical PoC results. ## 4 Proof of the main results ### Proof of Theorem 2.5: Wellposedness and moment stability Proof of Theorem 2.5.: The central difficulty in showing this result is to establish a-priori \(L^{p}\)-moment bounds for the MV-SDE. Once this objective is achieved, the existence and uniqueness follow from a modification of the methodologies used in [3, Theorem 3.5]. _Establishing \(L^{p}\)-moment bounds._ We show pointwise \(p\)-th moment estimates for \(m\geq p>2\) (the case \(p=2\) follows in a straightforward manner from the arguments below, where one would use Lemma A.1 and Lemma A.2 instead of the additional symmetry property - we discuss this in more detail in Section 4.2 as we prove Theorem 2.7). From Ito's formula, Assumption 2.1 and Remark 2.4, for all \(t\in[0,T]\), we deduce \[|X_{t}|^{p}\leq |X_{0}|^{p}+p\int_{0}^{t}|X_{s}|^{p-2}\langle X_{s},v(X_{s},\mu_{s }^{X})\rangle\mathrm{d}s+p\int_{0}^{t}|X_{s}|^{p-2}\langle X_{s},\overline{ \sigma}(s,X_{s},\mu_{s}^{X})\mathrm{d}W_{s}\rangle \tag{4.1}\] \[+p\int_{0}^{t}|X_{s}|^{p-2}\langle X_{s},b(s,X_{s},\mu_{s}^{X}) \rangle\mathrm{d}s+p(p-1)\int_{0}^{t}|X_{s}|^{p-2}\Big{(}|\sigma(s,X_{s},\mu_{ s}^{X})|^{2}+\int_{\mathbb{R}^{d}}|f_{\sigma}(X_{s}-y)|^{2}\mu_{s}^{X}(\mathrm{d}y) \Big{)}\mathrm{d}s\] \[\leq |X_{0}|^{p}+C\int_{0}^{t}\Big{(}1+|X_{s}|^{p}+\big{(}W^{(2)}(\mu_{ s}^{X},\delta_{0})\big{)}^{p}\Big{)}\mathrm{d}s+p\int_{0}^{t}|X_{s}|^{p-2} \langle X_{s},\overline{\sigma}(s,X_{s},\mu_{s}^{X})\mathrm{d}W_{s}\rangle\] Figure 3.5: Estimation of PoC rate for equation (1.1)-(1.2) under (3.6) using SSM (2.11)-(2.13) with fixed stepsize \(h=10^{-3}\), \(T=1\) and number of particles \(N\in\{40,80,160,320,640,1280,2560\}\). In all figures the reference rate \(0.5\) and the upper bound rate from Theorem 2.9 are displayed.
\[\|\Gamma[\mathbf{g}]\|_{[0,T_{0}],q} \leq\sup_{t\in[0,T_{0}]}\bigg{(}\sup_{x\in\mathbb{R}^{d}}\frac{|(f* \mu_{t}^{g})(x)|+|(f*\mu_{t}^{g})(x)|}{1+|x|^{q+1}}\bigg{)}\] \[\leq C\Big{(}1+\sup_{t\in[0,T_{0}]}\mathbb{E}\big{[}|X_{t}^{g}|^{q+ 1}\big{]}\Big{)}\] \[\leq C(q,\mathbb{E}[|X_{0}|^{q+1}])+Ce^{CT_{0}}\int_{0}^{T_{0}}\Big{(} [b(s,0,\delta_{0})]^{q+1}+|u(0,\delta_{0})|^{q+1}+|\mathbf{g}_{1}(s,0)|^{q+1}\] \[+|\sigma(s,0,\delta_{0})|^{q+1}+|\mathbf{g}_{2}(s,0)|^{q+1}\Big{)} \mathrm{d}s\] \[\leq C(q,\mathbb{E}[|X_{0}|^{q+1}])+Ce^{CT_{0}}\bigg{(}(T_{0}K)^{ q+1}+\int_{0}^{T_{0}}\Big{(}[b(s,0,\delta_{0})]^{q+1}+|u(0,\delta_{0})|^{q+1}\] \[+|\sigma(s,0,\delta_{0})|^{q+1}\Big{)}\mathrm{d}s\bigg{)}\] \[\leq K,\] for a sufficiently small \(T_{0}>0\) and the choice \(K=C(q,\mathbb{E}[|X_{0}|^{q+1}])\). It remains to show that, for \(\mathbf{g}_{1},\mathbf{g}_{2}\in E\), we have \[\|\Gamma[\mathbf{g}_{1}]-\Gamma[\mathbf{g}_{2}]\|_{[0,T_{0}],q}\leq c\|\mathbf{g}_{1}-\bm {g}_{2}\|_{[0,T_{0}],q},\] for \(c\in(0,1)\) and \(T_{0}\) possibly even smaller than chosen above. This would show the existence of a solution with finite \(m\)-th moments on \([0,T_{0}]\). Note that \[\|\Gamma[\mathbf{g}_{1}]-\Gamma[\mathbf{g}_{2}]\|_{[0,T_{0}],q} \leq\sup_{t\in[0,T_{0}]}\bigg{(}\sup_{x\in\mathbb{R}^{d}}\frac{|( f*\mu_{t}^{\mathbf{g}_{1}})(x)-(f*\mu_{t}^{\mathbf{g}_{2}})(x)|+|(f_{*}*\mu_{t}^{\mathbf{g}_{ 1}})(x)-(f_{\sigma}*\mu_{t}^{\mathbf{g}_{2}})(x)|}{1+|x|^{q+1}}\bigg{)}\] \[\leq C\sup_{t\in[0,T_{0}]}\bigg{(}\sup_{x\in\mathbb{R}^{d}}\frac{ \mathbb{E}\left[|X_{t}^{\mathbf{g}_{1}}-X_{t}^{\mathbf{g}_{2}}|(1+|x|^{q+1}\big{(}1+|X _{t}^{\mathbf{g}_{1}}|^{q}+|X_{t}^{\mathbf{g}_{2}}|^{q}\big{)}\right]}{1+|x|^{q+1}} \bigg{)}\] \[\leq C\sup_{t\in[0,T_{0}]}\mathbb{E}\left[|X_{t}^{\mathbf{g}_{1}}-X_{ t}^{\mathbf{g}_{2}}|\big{(}1+|X_{t}^{\mathbf{g}_{1}}|^{q}+|X_{t}^{\mathbf{g}_{2}}|^{q} \big{)}\right]\] \[\leq C\Big{(}\sup_{t\in[0,T_{0}]}\mathbb{E}[|X_{t}^{\mathbf{g}_{1}}-X _{t}^{\mathbf{g}_{2}}|^{2}]\Big{)}^{1/2}\Big{(}\sup_{t\in[0,T_{0}]}\mathbb{E} \left[\big{(}1+|X_{t}^{\mathbf{g}_{1}}|^{q}+|X_{t}^{\mathbf{g}_{2}}|^{q}\big{)}^{2} \right]\Big{)}^{1/2}\] \[\leq Ce^{CT_{0}}\sqrt{T_{0}}\|\mathbf{g}_{1}-\mathbf{g}_{2}\|_{[0,T_{0}],q }\left(1+\sup_{t\in[0,T_{0}]}\mathbb{E}\left[|X_{t}^{\mathbf{g}_{1}}|^{2q+2}+|X_{t }^{\mathbf{g}_{2}}|^{2q+2}\right]\right).\] Performing similar calculations as above for the moments of \(X_{t}^{\mathbf{g}_{1}}\) and \(X_{t}^{\mathbf{g}_{2}}\), which exist up to order \(m>2q+2\) by assumption, allows to deduce that \(T_{0}\) can indeed be chosen small enough such that \(\Gamma\) maps \(E\) onto \(E\) and is a contraction operator when restricted to \(E\). Since, we have established a-priori \(L^{p}\)-moment bounds, for \(p\in[2,m]\), (which substitutes for [3, Proposition 3.13]), we can repeat the arguments from above to establish the existence of a solution to an arbitrary time interval \([0,T]\), see [3] for details. ### Proof of Theorem 2.7 : Exponential contraction and the ergodic property Proof of Theorem 2.7.: We prove the statements by the order they were stated. Proof of statement 1.: Consider two solutions \(X,Y\) of (1.1), driven by the same Brownian motion but with different initial conditions \(X_{0}\sim\mu,Y_{0}\sim\nu\), \(\mu,\nu\in\mathcal{P}_{\ell}(\mathbb{R}^{d}),\ \ell>2q+2\). Let \(\bar{X},\bar{Y}\) be independent copies of \(X,Y\) respectively. By the wellposedness result these processes have finite moments up to order \(\ell\). 
We now establish an exponential contraction statement: For all \(t\in[0,\infty)\), using Ito's formula and taking expectations, by Remark 2.4, we derive the following estimate \[e^{-\rho_{1}t}\mathbb{E}\big{[}|X_{t}-Y_{t}|^{2}\big{]} =\mathbb{E}\big{[}|X_{0}-Y_{0}|^{2}\big{]}+2\int_{0}^{t}e^{-\rho _{1}s}\mathbb{E}\big{[}\langle X_{s}-Y_{s},v(X_{s},\mu_{s}^{X})-v(Y_{s},\mu_{ s}^{Y})\rangle\big{]}\mathrm{d}s\] \[\quad+\int_{0}^{t}e^{-\rho_{1}s}\mathbb{E}\big{[}2\langle X_{s}-Y _{s},b(s,X_{s},\mu_{s}^{X})-b(s,Y_{s},\mu_{s}^{Y})\rangle+|\overline{\sigma}(s,X_{s},\mu_{s}^{X})-\overline{\sigma}(s,Y_{s},\mu_{s}^{Y})|^{2}\big{]}\mathrm{d}s\] \[\quad+\int_{0}^{t}(-\rho_{1})e^{-\rho_{1}s}\mathbb{E}\big{[}|X_{s }-Y_{s}|^{2}\big{]}\mathrm{d}s\] \[\leq\mathbb{E}\big{[}|X_{0}-Y_{0}|^{2}\big{]}+\int_{0}^{t}e^{- \rho_{1}s}\mathbb{E}\big{[}2\langle X_{s}-Y_{s},b(s,X_{s},\mu_{s}^{X})-b(s,Y_{s },\mu_{s}^{Y})\rangle-\rho_{1}|X_{s}-Y_{s}|^{2}\big{]}\mathrm{d}s\] \[\quad+2\int_{0}^{t}e^{-\rho_{1}s}\mathbb{E}\big{[}\langle X_{s}-Y _{s},f(X_{s}-\bar{X}_{s})-f(Y_{s}-\bar{Y}_{s})\rangle+|f_{\sigma}(X_{s}-\bar{X} _{s})-f_{\sigma}(Y_{s}-\bar{Y}_{s})|^{2}\big{]}\mathrm{d}s\] \[\quad+2\int_{0}^{t}e^{-\rho_{1}s}\mathbb{E}\Big{[}\langle X_{s}-Y _{s},u(X_{s},\mu_{s}^{X})-u(Y_{s},\mu_{s}^{Y})\rangle+|\sigma(s,X_{s},\mu_{s}^{ X})-\sigma(s,Y_{s},\mu_{s}^{Y})|^{2}\Big{]}\mathrm{d}s\] \[\leq\mathbb{E}\big{[}|X_{0}-Y_{0}|^{2}\big{]}+\big{(}L_{(b)}^{(2)}+L_ {(b)}^{(3)}+2L_{(u\sigma)}^{(1)}+2L_{(u\sigma)}^{(2)}+4L_{(f)}^{(1),+}-\rho_{1} \big{)}\int_{0}^{t}e^{-\rho_{1}s}\mathbb{E}\big{[}|X_{s}-Y_{s}|^{2}\big{]} \mathrm{d}s\] \[=\mathbb{E}\big{[}|X_{0}-Y_{0}|^{2}\big{]},\] where we used Lemma A.2. Using the properties of the Wasserstein metric, we obtain \[\big{(}W^{(2)}(P_{0,t}^{*}\mu,P_{0,t}^{*}\nu)\big{)}^{2}=\big{(}W^{ (2)}(\mu_{t}^{X},\mu_{t}^{Y})\big{)}^{2}\leq\mathbb{E}\big{[}|X_{t}-Y_{t}|^{2} \big{]} \leq\mathbb{E}\big{[}|X_{0}-Y_{0}|^{2}\big{]}e^{\rho_{1}t}\] \[\leq e^{\rho_{1}t}\big{(}W^{(2)}(\mu,\nu)\big{)}^{2}, \tag{4.2}\] where in the last inequality we took the infimum on both sides over all couplings between \(\mu\) and \(\nu\). This concludes the proof of the first statement. _Proof of statement 2._ Let the corresponding assumptions hold and let \(X_{0}\sim\mu\), \(\mu\in\mathcal{P}_{\ell}(\mathbb{R}^{d})\) with \(2q+2<\ell\leq m\) be given. From (4.1), for some positive constant \(C\) depending on \(\ell\), \(L^{(1)}_{(bu\sigma)}\) and \(\sup_{t}|b(t,0,\delta_{0})|\), for \(\rho_{2,\ell}\neq 0\), we have \[e^{-\rho_{2,\ell}t} \mathbb{E}\big{[}|X_{t}|^{\ell}\big{]}\leq\mathbb{E}\big{[}|X_{0 }|^{\ell}\big{]}+\int_{0}^{t}\ell e^{-\rho_{2,\ell}s}\mathbb{E}\big{[}|X_{s}|^ {\ell-2}\big{(}\langle X_{s},f(X_{s}-\bar{X}_{s})\rangle+(\ell-1)|f_{\sigma}(X _{s}-\bar{X}_{s})\rangle\big{]}\big{)}\mathbb{d}s\] \[+C\int_{0}^{t}e^{-\rho_{2,\ell}s}\mathrm{d}s+\big{(}\tfrac{\ell- 2}{\ell}+L^{(2)}_{(bu\sigma)}\ell+L^{(3)}_{(bu\sigma)}\ell-\rho_{2,\ell}\big{)} \int_{0}^{t}e^{-\rho_{2,\ell}s}\mathbb{E}\big{[}|X_{s}|^{\ell}\big{]}\mathrm{d }s\leq\mathbb{E}\big{[}|X_{0}|^{\ell}\big{]}+\frac{C}{\rho_{2,\ell}}(1-e^{- \rho_{2,\ell}t}),\] where we used Assumption \((\mathbf{A}^{f},\ \mathbf{A}^{f_{\sigma}})\), (2.4) and Young's inequality. 
Similarly, for \(\rho_{2,\ell}=0\), we have \[\mathbb{E}\big{[}|X_{t}|^{\ell}\big{]}\leq\mathbb{E}\big{[}|X_{0}|^{\ell} \big{]}+Ct.\] Using the properties of the Wasserstein metric we have \[\big{(}W^{(\ell)}(P_{0,t}^{*}\mu,\delta_{0})\big{)}^{\ell} =\big{(}W^{(\ell)}(\mu_{t}^{X},\delta_{0})\big{)}^{\ell}\leq \mathbb{E}\big{[}|X_{t}|^{\ell}\big{]}\] \[\leq\mathbb{E}\big{[}|X_{0}|^{\ell}\big{]}e^{\rho_{2,\ell}t}+ \frac{C}{\rho_{2,\ell}}(e^{\rho_{2,\ell}t}-1)\mathbb{1}_{\rho_{2,\ell}\neq 0} +Ct\mathbb{1}_{\rho_{2,\ell}=0}\] \[\leq e^{\rho_{2,\ell}t}\big{(}W^{(\ell)}(\mu,\delta_{0})\big{)}^{ \ell}+\frac{C}{\rho_{2,\ell}}(e^{\rho_{2,\ell}t}-1)\mathbb{1}_{\rho_{2,\ell} \neq 0}+Ct\mathbb{1}_{\rho_{2,\ell}=0}. \tag{4.3}\] _Proof of statement 3._ In the previous two statements we worked on the finite time interval \([0,T]\) and this statement extends the work to \([0,\infty)\). We also emphasize that the reason why we work with \(\mathcal{P}_{2\ell-2}\) instead of \(\mathcal{P}_{\ell}\) with \(1+m/2\geq\ell>2q+2\) will become apparent later in the proof. Let \(X_{0}\sim\mu_{0},Y_{0}\sim\nu_{0}\) with \(\mu_{0},\nu_{0}\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\) be given. From Theorem 2.5 and the flow property on \(\mathcal{P}_{\ell}(\mathbb{R}^{d})\) of (1.1) described by the semigroup operator \((P_{s,t}^{*})\) (defined above Theorem 2.7), we extend \((\mu_{t}^{X})_{t},(\mu_{t}^{Y})_{t}\) for \(t\geq 0\) (e.g., via patching up solutions inductively over intervals \([nT,(n+1)T]\) for \(n\in\mathbb{N}\)). Further, since \(\rho_{1}<0\), we have a contraction in (4.2) and hence \(\lim_{t\to\infty}W^{(2)}(\mu_{t}^{X},\mu_{t}^{Y})=0\). By using \(\rho_{2,2\ell-2}<0\), we have \(\sup_{t\geq 0}W^{(2\ell-2)}(\mu_{t}^{X},\delta_{0})<\infty\), which guarantees that \(\mu_{t}^{X}\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\) for all \(t\geq 0\). The main proof follows via a shift-coupling argument and the properties shown so far under \(\rho_{1},\rho_{2,2\ell-2}<0\), but with a critical additional element regarding establishing contraction and higher order moments for the candidate invariant measure so that the wellposedness result applies. We start by showing that \((P_{0,t}^{*}\nu_{0})_{t\geq 0}\) is a Cauchy-sequence in \((\mathcal{P}_{2}(\mathbb{R}^{d}),W^{(2)})\), and use this result to show that \((P_{0,t}^{*}\nu_{0})_{t\geq 0}\) is also Cauchy-sequence in \((\mathcal{P}_{\ell}(\mathbb{R}^{d}),W^{(\ell)})\) for a given \(\nu_{0}\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\). These arguments suffice to first find a candidate invariant distribution and then to characterize it as an ergodic limit (see below). _Using the \(W^{(2)}\)-contraction._ Given \(\nu_{0}\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\), from (2.5) with \(\rho_{1}<0\), we have exponential contraction and hence for any \(0\leq s<t<\infty\) \[W^{(2)}\big{(}P_{0,t}^{*}\nu_{0},P_{0,t+s}^{*}\nu_{0}\big{)}=W^{(2)}\big{(}P_{0,t}^{*}\nu_{0},P_{0,t}^{*}\big{(}P_{0,s}^{*}\nu_{0}\big{)}\big{)}\leq e^{\rho_ {1}t/2}W^{(2)}\big{(}\nu_{0},P_{0,s}^{*}\nu_{0}\big{)},\] where we used the semigroup property that \(P_{s,t}^{*}=P_{0,t-s}^{*}\) (since \(b,\sigma\) are independent of \(t\); see [43, 52, 69]). 
_The bounded orbit argument._ From (2.6) with \(\rho_{2,2\ell-2}<0\) and \(m\geq 2\ell-2>4q+2\), we have via the triangle inequality \[\sup_{t\geq 0} \big{(}W^{(2\ell-2)}\big{(}P_{0,t}^{*}\nu_{0},\nu_{0}\big{)}\big{)} ^{2\ell-2}\leq C\Big{(}\big{(}W^{(2\ell-2)}\big{(}\nu_{0},\delta_{0}\big{)} \big{)}^{2\ell-2}+\sup_{t\geq 0}\big{(}W^{(2\ell-2)}\big{(}P_{0,t}^{*}\nu_{0},\delta_{0} \big{)}\big{)}^{2\ell-2}\Big{)}\] \[\leq C\Big{(}\big{(}W^{(2\ell-2)}\big{(}\nu_{0},\delta_{0}\big{)} \big{)}^{2\ell-2}-\sup_{t\geq 0}\Big{(}e^{\rho_{2,2\ell-2}t}\big{(}W^{(2\ell-2)} \big{(}\nu_{0},\delta_{0}\big{)}\big{)}^{2\ell-2}\Big{)}+\sup_{t\geq 0}\frac{1}{\rho_{2,2\ell-2}}(e^{\rho_{2,2 \ell-2}t}-1)\Big{)}\] \[\leq C\Big{(}\big{(}W^{(2\ell-2)}\big{(}\nu_{0},\delta_{0}\big{)} \big{)}^{2\ell-2}-\frac{1}{\rho_{2,2\ell-2}}\Big{)}<\infty. \tag{4.4}\] In other words, the orbit of \(t\mapsto P_{0,t}^{*}\nu_{0}\) remains within a sufficiently large \(W^{(2\ell-2)}\)-ball, which also shows the finiteness of \(\sup_{t\geq 0}W^{(2)}\big{(}P_{0,t}^{*}\nu_{0},\nu_{0}\big{)}\). _A \(W^{(2)}\)-Cauchy-sequence and the completeness argument._ Combining the two previous elements we have \[\lim_{s\to\infty}W^{(2)}\big{(}P_{0,t}^{*}\nu_{0},P_{0,t+s}^{*}\nu_{ 0}\big{)} =\lim_{s\to\infty}W^{(2)}\big{(}P_{0,t}^{*}\nu_{0},P_{0,t}^{*}(P_{0,s }^{*}\nu_{0})\big{)}\] \[\leq e^{\rho_{1}t/2}\lim_{s\to\infty}W^{(2)}\big{(}\nu_{0},P_{0,s }^{*}\nu_{0}\big{)}\leq Ce^{\rho_{1}t/2}.\] This shows the sequence to be Cauchy and since \((\mathcal{P}_{2}(\mathbb{R}^{d}),W^{(2)})\) is complete, there exists a limiting measure \(\bar{\mu}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) to the sequence, i.e., we have \[\lim_{t\to\infty}W^{(2)}(P_{0,t}^{*}\nu_{0},\bar{\mu})=0.\] _The candidate invariant measure \(\bar{\mu}\) has sufficiently high moments._ The current issue with \(\bar{\mu}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) is that we cannot guarantee, via Theorem 2.5, that \(P_{0,t}^{*}\bar{\mu}\) has meaning (although we have convergence in \(\mathcal{P}_{2}(\mathbb{R}^{d})\)). Thus, we need to show that \((P_{0,t}^{*}\nu_{0})_{t\geq 0}\) also has the Cauchy-sequence property in \((\mathcal{P}_{\ell}(\mathbb{R}^{d}),W^{(\ell)})\) so that \(\bar{\mu}\in\mathcal{P}_{\ell}(\mathbb{R}^{d})\). Set \(X_{0}\sim\nu_{0}\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\), \(Y_{0}\sim P_{0,s}^{*}\nu_{0}\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\) for \(s\geq 0\), then for any \(t\geq 0\) we have via Cauchy-Schwarz inequality \[\mathbb{E}\big{[}|X_{t}-Y_{t}|^{\ell}\big{]} =\mathbb{E}\big{[}|X_{t}-Y_{t}|\;|X_{t}-Y_{t}|^{\ell-1}\big{]} \leq\sqrt{\mathbb{E}\big{[}|X_{t}-Y_{t}|^{2}\big{]}\mathbb{E}\big{[}|X_{t}-Y_{ t}|^{2\ell-2}\big{]}}\leq Ce^{\rho_{1}t/2},\] where \(C\) is uniformly bounded in \(t\) and depends on \(\nu_{0},\ P_{0,s}^{*}\nu_{0}\) due to (4.3) and (4.4). Therefore, \[\big{(}W^{(\ell)}(P_{0,t}^{*}\nu_{0},P_{0,t+s}^{*}\nu_{0})\big{)} ^{\ell}\leq\mathbb{E}\big{[}|X_{t}-Y_{t}|^{\ell}\big{]}\leq Ce^{\rho_{1}t/2}.\] We are then able to recognize \((P_{0,t}^{*}\nu_{0})_{t\geq 0}\) as a \(W^{(\ell)}\) Cauchy-sequence in \(\mathcal{P}_{\ell}(\mathbb{R}^{d})\), and by completeness of the space \((\mathcal{P}_{\ell}(\mathbb{R}^{d}),W^{(\ell)})\) we conclude that the sequence converges to \(\bar{\mu}\in\mathcal{P}_{\ell}(\mathbb{R}^{d})\). _Invariance argument._ To show the invariance property, it suffices to argue in \(W^{(2)}\). 
From here, using [68, Lemma 4.2], we obtain for any \(t\geq 0\) that \[W^{(2)}(P_{0,t}\bar{\mu},\bar{\mu})\leq\liminf_{s\to\infty}W^{(2)} \big{(}P_{0,t}^{*}(P_{0,s}^{*}\nu_{0}),\bar{\mu}\big{)}=0.\] We then conclude that \(\bar{\mu}\) is an invariant measure. _The ergodicity property of the system._ The contraction inequality (2.5) with \(\rho_{1}<0\) yields the exponential ergodicity of the invariant measure \(\bar{\mu}\) in the following sense, \[W^{(2)}\big{(}P_{0,t}^{*}\nu_{0},\bar{\mu}\big{)} =\lim_{s\to\infty}W^{(2)}\big{(}P_{0,t}^{*}\nu_{0},P_{0,t}^{*}(P_ {0,s}^{*}\nu_{0})\big{)}\] \[\leq e^{\rho_{1}t/2}\lim_{s\to\infty}W^{(2)}\big{(}\nu_{0},P_{0,s }^{*}\nu_{0}\big{)}=e^{\rho_{1}t/2}W^{(2)}\big{(}\nu_{0},\bar{\mu}\big{)}.\] Via a straightforward application of the same arguments as above, we have \[\text{for any }\nu_{0}\in\mathcal{P}_{2\ell-2}(\mathbb{R}^{d})\qquad\lim_{t\to \infty}W^{(\ell)}(P_{0,t}^{*}\nu_{0},\bar{\mu})=0.\] ### Proof of Theorem 2.9: Propagation of chaos Proof.: Due to Remark 2.8 and conditions \((\mathbf{A}^{u},\ \mathbf{A}^{g},\mathbf{A}^{f},\ \mathbf{A}^{f_{\pi}})\), we observe that the drift and diffusion of the interacting particle system (viewed as an SDE in \(\mathbb{R}^{Nd}\)) satisfy a monotonicity condition; hence the results in [49, 54, 59] guarantee the existence of a unique solution to the particle system. Critically, the wellposedness results there do not yield moment estimates that are independent of \(N\), as we interpreted the particle system as one single SDE in \(\mathbb{R}^{Nd}\). In the next step, we prove moment bounds independent of \(N\). Now, by Ito's formula, Assumption 2.1, Remark 2.4 and Jensen's inequality, we have, for all \(t\in[0,T],\ i\in[1,N]\), \(2\leq p\leq m\), \[\mathbb{E}\big{[}|X_{t}^{i,N}|^{p}\big{]}=\mathbb{E}\big{[}|X_{0} ^{i,N}|^{p}\big{]}+p\mathbb{E}\Big{[}\int_{0}^{t}|X_{s}^{i,N}|^{p-2}\langle X _{s}^{i,N},v(X_{s}^{i,N},\mu_{s}^{X,N})+b(s,X_{s}^{i,N},\mu_{s}^{X,N})\rangle \Big{]}\mathrm{d}s\] \[\qquad+p\mathbb{E}\Big{[}\int_{0}^{t}|X_{s}^{i,N}|^{p-2}\langle X _{s}^{i,N},\overline{\sigma}(s,X_{s}^{i,N},\mu_{s}^{X,N})\mathrm{d}W_{s}^{i} \rangle\Big{]}+\tfrac{p(p-1)}{2}\mathbb{E}\Big{[}\int_{0}^{t}|X_{s}^{i,N}|^{p- 2}|\overline{\sigma}(s,X_{s}^{i,N},\mu_{s}^{X,N})|^{2}\Big{]}\mathrm{d}s\] \[\leq\mathbb{E}\big{[}|X_{0}^{i,N}|^{p}\big{]}+C\int_{0}^{t}\mathbb{ E}\big{[}|X_{s}^{i,N}|^{p}\big{]}\mathrm{d}s+p\mathbb{E}\Big{[}\int_{0}^{t}|X_{s}^{i,N}|^{p- 2}\big{(}\langle X_{s}^{i,N},v(X_{s}^{i,N},\mu_{s}^{X,N})\rangle+(p-1)|\overline{ \sigma}(s,X_{s}^{i,N},\mu_{s}^{X,N})|^{2}\big{)}\Big{]}\mathrm{d}s\] \[\qquad+p\int_{0}^{t}\mathbb{E}\Big{[}|X_{s}^{i,N}|^{p-2}\big{(} \langle X_{s}^{i,N},\frac{1}{N}\sum_{j=1}^{N}f(X_{s}^{i,N}-X_{s}^{j,N})\rangle+(p -1)\frac{1}{N}\sum_{j=1}^{N}|f_{\sigma}(X_{s}^{i,N}-X_{s}^{j,N})|^{2}\big{)} \Big{]}\mathrm{d}s+CT\] \[\leq\mathbb{E}\big{[}|X_{0}^{i,N}|^{p}\big{]}+\int_{0}^{t}\frac{p}{N} \sum_{j=1}^{N}\mathbb{E}\Big{[}|X_{s}^{i,N}|^{p-2}\big{(}\tfrac{1}{2}\langle X_{ s}^{i,N}-X_{s}^{j,N},f(X_{s}^{i,N}-X_{s}^{j,N})\rangle+(p-1)\big{|}f_{\sigma}(X_{s}^{i,N }-X_{s}^{j,N})\big{|}^{2}\big{)}\Big{]}\mathrm{d}s\] \[\quad+CT+C\int_{0}^{t}\mathbb{E}\big{[}|X_{s}^{i,N}|^{p}\big{]} \mathrm{d}s+\int_{0}^{t}\frac{p}{4N}\sum_{j=1}^{N}\mathbb{E}\Big{[}(|X_{s}^{i, N}|^{p-2}-|X_{s}^{j,N}|^{p-2})\langle X_{s}^{i,N}+X_{s}^{j,N},f(X_{s}^{i,N}-X_{s}^{j,N}) \rangle\Big{]}\mathrm{d}s\] \[\leq\mathbb{E}\big{[}|X_{0}^{i,N}|^{p}\big{]}+C\int_{0}^{T} \mathbb{E}\big{[}|X_{s}^{i,N}|^{p}\big{]}\mathrm{d}s+CT.\] Taking supremum 
over \(i\) and \(t\), shows the claim using Gronwall's lemma; Jensen's inequality yields the estimate for \(1\leq p<2\). The estimate (2.9) is then a consequence of [59, Proposition 2.1]. ### Proof of Lemma 2.13: Stochastic \(C\)-Stability The proof shown in this section is an extension of the results for classical SDEs in [10] to the particle system considered in this paper. Proof.: For every \(n\in\llbracket 0,M\rrbracket\), we denote the difference of the two particles by \[e_{n}^{i,N}:=X_{n}^{i,N}-\hat{X}_{n}^{i,N}.\] By the orthogonality of the conditional expectation it holds \[\mathbb{E}\big{[}|e_{n}^{i,N}|^{2}\big{]}=\mathbb{E}\Big{[}\left| \mathbb{E}\big{[}e_{n}^{i,N}\mid\mathcal{F}_{t_{n-1}}\big{]}\right|^{2}\Big{]} +\mathbb{E}\Big{[}\left|e_{n}^{i,N}-\mathbb{E}\big{[}e_{n}^{i,N}\mid\mathcal{ F}_{t_{n-1}}\big{]}\right|^{2}\Big{]}. \tag{4.5}\] The term \(e_{n}^{i,N}\) can be expressed as follows \[e_{n}^{i,N}=X_{n}^{i,N}+\Psi_{i}(X_{n-1}^{i,N},\mu_{n-1}^{X,N},t_{n-1},h)-\Psi _{i}(X_{n-1}^{i,N},\mu_{n-1}^{X,N},t_{n-1},h)-\hat{X}_{n}^{i,N}.\] Thus, for the first term in (4.5), it follows from the inequality \((a+b)^{2}=a^{2}+2ab+b^{2}\leq\big{(}1+h^{-1}\big{)}\,a^{2}+(1+h)\,b^{2}\) that, we have \[\mathbb{E}\Big{[}\left|\mathbb{E}\big{[}e_{n}^{i,N}\mid\mathcal{ F}_{t_{n-1}}\big{]}\right|^{2}\Big{]}\leq (1+\tfrac{1}{h})\mathbb{E}\Big{[}\left|\mathbb{E}\big{[}X_{n}^{i,N }-\Psi_{i}(X_{n-1}^{i,N},\mu_{n-1}^{X,N},t_{n-1},h)\mid\mathcal{F}_{t_{n-1}} \big{]}\right|^{2}\Big{]}\] \[+(1+h)\mathbb{E}\Big{[}\left|\mathbb{E}\big{[}\Psi_{i}(X_{n-1}^{ i,N},\mu_{n-1}^{X,N},t_{n-1},h)-\hat{X}_{n}^{i,N}\mid\mathcal{F}_{t_{n-1}} \big{]}\right|^{2}\Big{]}. \tag{4.6}\] Similarly, for the second term in (4.5), choose \(\eta\) such that \(1<\eta\leq(m-1)\) in Assumption 2.1, we have \[\mathbb{E}\Big{[}\Big{|}e_{n}^{i,N}-\mathbb{E}\big{[}e_{n}^{i,N} |\mathcal{F}_{t_{n-1}}\big{]}\Big{]}^{2}\Big{]}\] \[\leq (1+\tfrac{1}{\eta-1})\mathbb{E}\Big{[}\Big{|}\big{(}\mathrm{d}- \mathbb{E}[\cdot\mid\mathcal{F}_{t_{n-1}}]\big{)}\Big{(}X_{n}^{i,N}-\Psi_{i}(X _{n-1}^{i,N},\mu_{n-1}^{X,N},t_{n-1},h)\Big{)}\Big{|}^{2}\Big{]}\] \[+\eta\,\mathbb{E}\Big{[}\Big{|}\big{(}\mathrm{d}-\mathbb{E}[\cdot \mid\mathcal{F}_{t_{n-1}}]\big{)}\Big{(}\Psi_{i}(X_{n-1}^{i,N},\mu_{n-1}^{X,N},t _{n-1},h)-\hat{X}_{n}^{i,N}\Big{)}\Big{|}^{2}\Big{]}. 
\tag{4.7}\] Using the fact \(\hat{X}_{n}^{i,N}=\Psi_{i}(\hat{X}_{n-1}^{i,N},\hat{\mu}_{n-1}^{X,N},t_{n-1},h)\), and the \(C\)-stability result for the terms (4.6), (4.7), we further estimate (4.5) by \[\mathbb{E}\big{[}|e_{n}^{i,N}|^{2}\big{]}\ \leq(1+\tfrac{1}{h})\mathbb{E}\Big{[}\left|\mathbb{E} \big{[}X_{n}^{i,N}-\Psi_{i}(X_{n-1}^{i,N},\mu_{n-1}^{X,N},t_{n-1},h)\mid \mathcal{F}_{t_{n-1}}\big{]}\right|^{2}\Big{]}\] \[+(1+\tfrac{1}{\eta-1})\mathbb{E}\Big{[}\left|\big{(}\mathrm{d}- \mathbb{E}\left[\cdot\mid\mathcal{F}_{t_{n-1}}\right]\big{)}\left(X_{n}^{i,N}- \Psi_{i}(X_{n-1}^{i,N},\mu_{n-1}^{X,N},t_{n-1},h)\right)\right|^{2}\Big{]}\] \[+(1+Ch)\mathbb{E}\big{[}|e_{n-1}^{i,N}|^{2}\big{]}+Ch\mathbb{E} \big{[}|W^{(2)}(\hat{\mu}_{n-1}^{X,N},\mu_{n-1}^{X,N})|^{2}\big{]}.\] Using the fact that the particles are identically distributed \[\mathbb{E}\big{[}|W^{(2)}(\hat{\mu}_{n-1}^{X,N},\mu_{n-1}^{X,N})|^{2}\big{]} \leq\frac{1}{N}\sum_{j=1}^{N}\mathbb{E}[|e_{n-1}^{j,N}|^{2}]=\mathbb{E}\big{[}|e_ {n-1}^{i,N}|^{2}\big{]}.\] By induction, with \(C_{\eta}=1+(\eta-1)^{-1}\), we have \[\sup_{n\in\llbracket 0,M\rrbracket}\mathbb{E}\big{[}\big{|}X_{n}^{i,N }-\hat{X}_{n}^{i,N}\big{|}^{2}\big{]} \leq\mathbb{E}\big{[}|\hat{X}_{0}^{i,N}-\xi^{i}|^{2}\big{]}\] \[\quad+C_{\eta}\sum_{k=1}^{M}\mathbb{E}\Big{[}\Big{|}\left( \mathrm{id}-\mathbb{E}\left[\cdot\mid\mathcal{F}_{t_{k-1}}\right]\right)\big{(}X _{k}^{i,N}-\Psi_{i}(X_{k-1}^{i,N},\mu_{k-1}^{X,N},t_{k-1},h)\big{|}\Big{|}^{2} \,\Big{]}\] \[\quad+Ch\sum_{k=1}^{M}\mathbb{E}\big{[}|X_{k}^{i,N}-\hat{X}_{k}^{i,N}|^{2}\big{]}+\frac{Ch}{N}\sum_{k=1}^{M}\sum_{j=1}^{N}\mathbb{E}\big{[}|X_{k} ^{j,N}-\hat{X}_{k}^{j,N}|^{2}\big{]}.\] Taking supremum over \(i\in\llbracket 1,N\rrbracket\) and applying the discrete Gronwall's Lemma yields the result. ### Proof of Theorem 2.14 Proof.: Using Definition 2.11, Definition 2.12 and the result in Lemma 2.13, we obtain \[\sup_{n\in\llbracket 0,M\rrbracket}\sup_{i\in\llbracket 1,N \rrbracket}\mathbb{E}\big{[}\big{|}X_{n}^{i,N}-\hat{X}_{n}^{i,N}|^{2}\big{]} \leq\mathrm{e}^{CT}\Bigg{[}\mathbb{E}\big{[}|X_{0}^{i,N}-\hat{X}_{0}^{i,N}|^{ 2}\big{]}\] \[\quad+\sum_{k=1}^{M}\sup_{i\in\llbracket 1,N\rrbracket}\bigg{(} \big{(}1+h^{-1}\big{)}\,\mathbb{E}\Big{[}\Big{|}\mathbb{E}\big{[}X_{k}^{i,N} -\Psi_{i}\left(X_{k-1}^{i,N},\mu_{k-1}^{X,N},t_{k-1},h\right)\mid\mathcal{F}_{ t_{k-1}}\big{]}\big{|}^{2}\,\Big{]}\] \[\quad\quad\quad+C_{\eta}\,\,\mathbb{E}\Big{[}\Big{|}\big{(} \mathrm{id}-\mathbb{E}\left[\cdot\mid\mathcal{F}_{t_{k-1}}\right]\big{)}\big{(} X_{k}^{i,N}-\Psi_{i}(X_{k-1}^{i,N},\mu_{k-1}^{X,N},t_{k-1},h)\big{)}\Big{|}^{2}\, \Big{]}\bigg{)}\Bigg{]}\Bigg{]}\] \[\leq C\mathrm{e}^{CT}\,\sum_{k=1}^{M}\Big{(}(1+h^{-1})h^{2+2 \gamma}+C_{\eta}h^{1+2\gamma}\Big{)}\leq Ch^{2\gamma},\] where in the second last estimate we used \(Mh=T\). ### Proof of Theorem 2.18: Convergence of the SSM scheme #### 4.6.1 The SSM is \(C\)-stable We first need to prove (2.10), i.e., \(\hat{X}_{n+1}^{i,N}\in L^{2}(\Omega,\mathcal{F}_{t_{n}+h},\mathbb{P};\mathbb{R }^{d})\) for all \(n\in\llbracket 0,M-1\rrbracket\) and \(i\in\llbracket 1,N\rrbracket\) given \(\hat{X}_{n}^{i,N}\in L^{2}\big{(}\Omega,\mathcal{F}_{t_{n}},\mathbb{P}; \mathbb{R}^{d}\big{)}\), where \(\hat{X}^{i,N}\) is constructed by the SSM scheme defined in (2.12) and (2.13). **Proposition 4.1** (Second order moment bounds of SSM).: _Let the setting of Theorem 2.18 hold. 
Then there exists a constant \(C>0\) independent of \(h,N,M\) such that_ \[\sup_{i\in\llbracket 1,N\rrbracket}\sup_{n\in\llbracket 0,M\rrbracket} \mathbb{E}\big{[}|\hat{X}_{n}^{i,N}|^{2}\big{]}+\sup_{i\in\llbracket 1,N \rrbracket}\sup_{n\in\llbracket 0,M-1\rrbracket}\mathbb{E}\big{[}|Y_{n}^{i,*,N}|^{2} \big{]}\leq C\big{(}1+\mathbb{E}\big{[}|\hat{X}_{0}^{N}|^{2}\big{]}\big{)}.\] Proof.: The proof is similar to [23, Section 4.1]. For \(i\in\llbracket 1,N\rrbracket\), \(n\in\llbracket 0,M-1\rrbracket\), by Assumption 2.1, Proposition 4.3, and the particles are identically distributed, we have \[\mathbb{E}\big{[}1+|Y_{n}^{i,*,N}|^{2}\big{]}\leq(1+Ch)\mathbb{E} \big{[}1+|\hat{X}_{n}^{i,N}|^{2}\big{]}. \tag{4.8}\] From (2.12) and Jensen's inequality, we have \[|Y_{n}^{i,*,N}|^{2} =\big{\langle}Y_{n}^{i,*,N},\hat{X}_{n}^{i,N}+hv(Y_{n}^{i,*,N},\hat {\mu}_{n}^{Y,N})\big{\rangle}\] \[\Rightarrow |Y_{n}^{i,*,N}|^{2} \leq|\hat{X}_{n}^{i,N}|^{2}+2h\big{\langle}Y_{n}^{i,*,N},v(Y_{n} ^{i,*,N},\hat{\mu}_{n}^{Y,N})\big{\rangle}. \tag{4.9}\] Also, from (2.13) and using the result above, we have \[|\hat{X}_{n+1}^{i,N}|^{2}=\big{|}Y_{n}^{i,*,N}+b(t_{n},Y_{n}^{i,*,N},\hat{\mu}_ {n}^{Y,N})h+\overline{\sigma}(t_{n},Y_{n}^{i,*,N},\hat{\mu}_{n}^{Y,N})\Delta W _{n}^{i}\big{|}^{2}.\] Taking expectation on both sides, by Jensen's inequality, (4.8), (4.9), Assumption 2.1 and Remark 2.4, we have \[\mathbb{E}\big{[}1+|\hat{X}_{n+1}^{i,N}|^{2}\big{]}\leq(1+Ch)\mathbb{E}\big{[} 1+|\hat{X}_{n}^{i,N}|^{2}\big{]}+h\mathbb{E}\Big{[}2\big{\langle}Y_{n}^{i,*,N},v(Y _{n}^{i,*,N},\hat{\mu}_{n}^{Y,N})\big{\rangle}+|\overline{\sigma}(t_{n},Y_{n}^{i,*, N},\hat{\mu}_{n}^{Y,N})|^{2}\Big{]}\] \[\leq(1+Ch)\mathbb{E}\big{[}1+|\hat{X}_{n}^{i,N}|^{2}\big{]}+Ch \mathbb{E}\big{[}|W^{(2)}(\hat{\mu}_{n}^{X,N},\hat{\mu}_{n}^{Y,N})|^{2}\big{]}.\] Due to Remark 2.4 and Remark 2.8, we observe \[|Y_{n}^{i,X,N}-Y_{n}^{i,Z,N}|^{2}\] \[\quad=\langle Y_{n}^{i,X,N}-Y_{n}^{i,Z,N},\hat{X}_{n}^{i,N}-\hat{ Z}_{n}^{i,N}+v(Y_{n}^{i,X,N},\hat{\mu}_{n}^{Y,X,N})h-v(Y_{n}^{i,Z,N},\hat{\mu}_{n}^{Y,Z,N })h\rangle\] \[\quad\leq\frac{1}{2}\big{(}|Y_{n}^{i,X,N}-Y_{n}^{i,Z,N}|^{2}+|\hat {X}_{n}^{i,N}-\hat{Z}_{n}^{i,N}|^{2}\big{)}\] \[\quad\quad+h\langle Y_{n}^{i,X,N}-Y_{n}^{i,Z,N},v(Y_{n}^{i,X,N}, \hat{\mu}_{n}^{Y,X,N})-v(Y_{n}^{i,Z,N},\hat{\mu}_{n}^{Y,Z,N})\rangle\] \[\quad\Rightarrow \leq|\hat{X}_{n}^{i,N}-\hat{Z}_{n}^{i,N}|^{2}+2h\langle Y_{n}^{i, X,N}-Y_{n}^{i,Z,N},v(Y_{n}^{i,X,N},\hat{\mu}_{n}^{Y,X,N})-v(Y_{n}^{i,Z,N},\hat{ \mu}_{n}^{Y,Z,N})\rangle\] \[\quad\leq|\hat{X}_{n}^{i,N}-\hat{Z}_{n}^{i,N}|^{2}+2h\langle Y_{n }^{i,X,N}-Y_{n}^{i,Z,N},\frac{1}{N}\sum_{j}^{N}\big{(}f(Y_{n}^{i,X,N}-Y_{n}^{ j,X,N})-f(Y_{n}^{i,Z,N}-Y_{n}^{j,Z,N})\big{)}\rangle\] \[\quad\quad+2h\langle Y_{n}^{i,X,N}-Y_{n}^{i,Z,N},u(Y_{n}^{i,X,N}, \hat{\mu}_{n}^{Y,X,N})-u(Y_{n}^{i,Z,N},\hat{\mu}_{n}^{Y,Z,N})\rangle. 
\tag{4.12}\] For the second component (4.11), by Jensen's inequality, we have \[\mathbb{E}\Big{[}\Big{|}(\mathrm{id}-\mathbb{E}\left[\cdot\mid\mathcal{F}_{t_{n}}\right])\left(\Psi_{i}(\hat{X}_{n}^{i,N},\hat{\mu}_{n}^{X,N},t_{n},h)-\Psi_{i}(\hat{Z}_{n}^{i,N},\hat{\mu}_{n}^{Z,N},t_{n},h)\right)\Big{|}^{2}\Big{]}\] \[=\mathbb{E}\big{[}|\overline{\sigma}(t_{n},Y_{n}^{i,X,N},\hat{\mu}_{n}^{Y,X,N})\Delta W_{n}^{i}-\overline{\sigma}(t_{n},Y_{n}^{i,Z,N},\hat{\mu}_{n}^{Y,Z,N})\Delta W_{n}^{i}|^{2}\big{]}\] \[\leq 2h\mathbb{E}\Big{[}|\sigma(t_{n},Y_{n}^{i,X,N},\hat{\mu}_{n}^{Y,X,N})-\sigma(t_{n},Y_{n}^{i,Z,N},\hat{\mu}_{n}^{Y,Z,N})|^{2}+\frac{1}{N}\sum_{j=1}^{N}|f_{\sigma}(Y_{n}^{i,X,N}-Y_{n}^{j,X,N})-f_{\sigma}(Y_{n}^{i,Z,N}-Y_{n}^{j,Z,N})|^{2}\Big{]}.\] From Assumption 2.1 and Remark 2.4, we derive, for some \(\eta>1\), \[\mathbb{E}\Big{[}\langle Y_{n}^{i,X,N}-Y_{n}^{i,Z,N},\frac{1}{N}\sum_{j=1}^{N}\big{(}f(Y_{n}^{i,X,N}-Y_{n}^{j,X,N})-f(Y_{n}^{i,Z,N}-Y_{n}^{j,Z,N})\big{)}\rangle\Big{]}\] \[\quad+\eta\mathbb{E}\Big{[}\frac{1}{N}\sum_{j=1}^{N}|f_{\sigma}(Y_{n}^{i,X,N}-Y_{n}^{j,X,N})-f_{\sigma}(Y_{n}^{i,Z,N}-Y_{n}^{j,Z,N})|^{2}\Big{]}\] \[=\frac{1}{2N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathbb{E}\Big{[}\langle(Y_{n}^{i,X,N}-Y_{n}^{j,X,N})-(Y_{n}^{i,Z,N}-Y_{n}^{j,Z,N}),f(Y_{n}^{i,X,N}-Y_{n}^{j,X,N})-f(Y_{n}^{i,Z,N}-Y_{n}^{j,Z,N})\rangle\Big{]}\] \[\quad+\frac{\eta}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathbb{E}\Big{[}\big{|}f_{\sigma}(Y_{n}^{i,X,N}-Y_{n}^{j,X,N})-f_{\sigma}(Y_{n}^{i,Z,N}-Y_{n}^{j,Z,N})\big{|}^{2}\Big{]}\] \[\leq\frac{1}{2N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathbb{E}\Big{[}L_{(f)}^{(1)}\big{|}(Y_{n}^{i,X,N}-Y_{n}^{j,X,N})-(Y_{n}^{i,Z,N}-Y_{n}^{j,Z,N})\big{|}^{2}\Big{]}\leq 2L_{(f)}^{(1),+}\mathbb{E}\big{[}|Y_{n}^{i,X,N}-Y_{n}^{i,Z,N}|^{2}\big{]}. \tag{4.13}\] Collecting the above estimates and using Remark 2.4, we have \[\mathbb{E}\Big{[}\Big{|}\mathbb{E}\big{[}\Psi_{i}(\hat{X}_{n}^{i,N},\hat{\mu}_{n}^{X,N},t_{n},h)-\Psi_{i}(\hat{Z}_{n}^{i,N},\hat{\mu}_{n}^{Z,N},t_{n},h)\mid\mathcal{F}_{t_{n}}\big{]}\Big{|}^{2}\Big{]}\] \[\quad+\eta\,\mathbb{E}\Big{[}\Big{|}(\mathrm{id}-\mathbb{E}\left[\cdot\mid\mathcal{F}_{t_{n}}\right])\left(\Psi_{i}(\hat{X}_{n}^{i,N},\hat{\mu}_{n}^{X,N},t_{n},h)-\Psi_{i}(\hat{Z}_{n}^{i,N},\hat{\mu}_{n}^{Z,N},t_{n},h)\right)\Big{|}^{2}\Big{]}\] \[\leq(1+Ch)\,\mathbb{E}\big{[}|\hat{X}_{n}^{i,N}-\hat{Z}_{n}^{i,N}|^{2}\big{]}+Ch\,\mathbb{E}\big{[}|W^{(2)}(\hat{\mu}_{n}^{X,N},\hat{\mu}_{n}^{Z,N})|^{2}\big{]},\] i.e., the one-step map of the SSM satisfies the \(C\)-stability estimate of Definition 2.12. **Proposition 4.3** (Summation relationship).: _Let Assumption 2.1 hold and choose \(h\) as in (2.14). Then there exists a constant \(C>0\) such that, for all \(n\in[\![0,M-1]\!]\),_ \[\frac{1}{N}\sum_{j=1}^{N}|Y_{n}^{j,\star,N}|^{2}\leq Ch+(1+Ch)\ \frac{1}{N}\sum_{j=1}^{N}|\hat{X}_{n}^{j,N}|^{2}. \tag{4.17}\] Proof.: See [23, Proposition 4.4].
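A bound that recurs in the stability estimates of this section (for instance in the proofs of Lemma 2.13 and Proposition 4.1) is \(\mathbb{E}[|W^{(2)}(\hat{\mu}_{n}^{X,N},\hat{\mu}_{n}^{Z,N})|^{2}]\leq\frac{1}{N}\sum_{j=1}^{N}\mathbb{E}[|\hat{X}_{n}^{j,N}-\hat{Z}_{n}^{j,N}|^{2}]\), which holds because pairing the particles by their index is one admissible coupling of the two empirical measures. The following minimal sketch (an illustration with arbitrarily chosen sample data, not the implementation behind the experiments of Section 3) checks the pathwise version of this bound in one dimension, where the exact \(W^{(2)}\) distance between two \(N\)-atom empirical measures is attained by matching order statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Two coupled particle clouds in d = 1: Z is a small perturbation of X.
X = rng.normal(2.0, 4.0, size=N)
Z = X + 0.1 * rng.normal(size=N)

# Exact squared W^(2) distance between the two empirical measures:
# in one dimension it is attained by matching order statistics.
w2_sq = np.mean((np.sort(X) - np.sort(Z)) ** 2)

# The index-wise (synchronous) coupling used in the proofs is just one
# admissible coupling, hence it upper-bounds the squared W^(2) distance.
index_coupling = np.mean((X - Z) ** 2)

assert w2_sq <= index_coupling + 1e-12
print(f"W2^2 = {w2_sq:.6f} <= index-coupling bound = {index_coupling:.6f}")
```

This is why particle-wise mean-square errors control the empirical-measure errors throughout the proofs, without any Wasserstein distance having to be computed explicitly.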
Now, we state the following moment relationship for the first step of the SSM. **Proposition 4.4** (Moment relationship).: _Let Assumption 2.1 hold and choose \(h\) as in (2.14), then there exist a constant \(C>0\) independent of \(N\), such that for all \(i\in[\![1,N]\!],\ n\in[\![0,M]\!],\ p\geq 1\) we have_ \[\mathbb{E}\big{[}|Y_{n}^{i,\star,N}|^{2p}\big{]}\leq C\Big{(}\frac{1}{N}\sum_ {j=1}^{N}\mathbb{E}[|X_{n}^{i,N}-X_{n}^{j,N}|^{2p}]+\mathbb{E}\Big{[}\Big{|} \frac{1}{N}\sum_{j=1}^{N}(1+|X_{n}^{j,N}|^{2})\Big{|}^{p}\Big{]}+1\Big{)}. \tag{4.18}\] Proof.: By Young's inequality and Jensen's inequality \[\mathbb{E}\big{[}|Y_{n}^{i,\star,N}|^{2p}\big{]} \leq\mathbb{E}\Big{[}\Big{|}\frac{1}{N}\sum_{j=1}^{N}\Big{(}2|Y_ {n}^{i,\star,N}-Y_{n}^{j,\star,N}|^{2}+2|Y_{n}^{j,\star,N}|^{2}\Big{)}\Big{|} ^{p}\Big{]}\] \[\leq\frac{4^{p}}{N}\sum_{j=1}^{N}\mathbb{E}\big{[}|Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N}|^{2p}\big{]}+4^{p}\mathbb{E}\Big{[}\Big{|}\frac{1}{N} \sum_{j=1}^{N}|Y_{n}^{j,\star,N}|^{2}\Big{|}^{p}\Big{]}.\] Combining Propositions 4.2 and 4.3 allows to conclude the claim. The main goal of this section is to prove that \(X_{t}^{i,N}\) defined by (2.7) satisfies for all \(t\in[0,T]\), \(i\in[\![1,N]\!]\) the following estimates with \(\gamma=1/2\). \[\mathbb{E}\Big{[}\left|\mathbb{E}\big{[}X_{t+h}^{i,N}-\Psi_{i}(X_ {t}^{i,N},\mu_{t}^{X,N},t,h)\mid\mathcal{F}_{t}\big{]}\right|^{2}\Big{]} \leq Ch^{2\gamma+2}, \tag{4.19}\] \[\mathbb{E}\Big{[}\left|(\mathrm{id}-\mathbb{E}\!\left[\cdot\mid \mathcal{F}_{t}\right])(X_{t+h}^{i,N}-\Psi_{i}(X_{t}^{i,N},\mu_{t}^{X,N},t,h)) \right|^{2}\Big{]} \leq Ch^{2\gamma+1}. \tag{4.20}\] Proof of statement 2 in Theorem 2.18.: Recall (2.7) and the SSM given in (2.11)-(2.13). Then, we introduce the following quantities, for all \(t\in[0,T]\), \(i\in[\![1,N]\!]\), \[X_{t+h}^{i,N}=X_{t}^{i,N}+\int_{t}^{t+h}\Big{(}v(X_{s}^{i,N},\mu_ {s}^{X,N})+b(s,X_{s}^{i,N},\mu_{s}^{X,N})\Big{)}\mathrm{d}s+\int_{t}^{t+h} \overline{\sigma}(s,X_{s}^{i,N},\mu_{s}^{X,N})\mathrm{d}W_{s}^{i}, \tag{4.21}\] \[Y_{t}^{i,N}=X_{t}^{i,N}+v(Y_{t}^{i,N},\mu_{t}^{Y,N})h,\qquad\mu_ {t}^{Y,N}(\mathrm{d}x):=\frac{1}{N}\sum_{j=1}^{N}\delta_{Y_{t}^{j,N}}( \mathrm{d}x),\] (4.22) \[\Psi_{i}(X_{t}^{i,N},\mu_{t}^{X,N},t,h)=X_{t}^{i,N}+\int_{t}^{t+h }\Big{(}v(Y_{t}^{i,N},\mu_{t}^{Y,N})+b(t,Y_{t}^{i,N},\mu_{t}^{Y,N})\Big{)} \mathrm{d}s+\int_{t}^{t+h}\overline{\sigma}(t,Y_{t}^{i,N},\mu_{t}^{Y,N}) \mathrm{d}W_{s}^{i},\] where the last equation is the integration form for the one-step map of SSM. 
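To make the one-step map \(\Psi_{i}\) concrete, the following sketch applies the SSM (2.11)-(2.13) to the one-dimensional example (3.5), Case 1. It is an illustration only (the solver choice, particle number and step size are arbitrary and not those used for the experiments): the inner implicit equation is solved for the whole particle cloud with a generic nonlinear solver, after which the remaining drift and the diffusion are added explicitly.

```python
import numpy as np
from scipy.optimize import fsolve

# Coefficients of Example (3.5), Case 1, in d = 1.
f = lambda r: -r**3                 # convolution kernel in the drift
f_sigma = lambda r: r**2            # convolution kernel in the diffusion
u = lambda y: -0.25 * y**3          # super-linear non-interaction drift part

def v(Y):
    # v(y_i, mu_N^Y) = u(y_i) + (1/N) sum_j f(y_i - y_j), for the whole cloud
    return u(Y) + np.mean(f(Y[:, None] - Y[None, :]), axis=1)

def sigma_bar(Y):
    # sigma(y_i) + (1/N) sum_j f_sigma(y_i - y_j), with sigma(y) = y + y^2/4
    return Y + 0.25 * Y**2 + np.mean(f_sigma(Y[:, None] - Y[None, :]), axis=1)

def ssm_step(X, h, dW):
    """One SSM step (2.12)-(2.13): implicit inner step, then explicit update."""
    Y = fsolve(lambda y: y - X - h * v(y), X)   # solve Y = X + h v(Y, mu_N^Y)
    return Y + h * Y + sigma_bar(Y) * dW        # b(t, y, mu) = y in (3.5)

rng = np.random.default_rng(1)
N, h, T = 50, 1e-2, 1.0
X = rng.normal(1.0, 1.0, size=N)                # X_0 ~ N(1, 1), as in Figure 3.4
for n in range(int(T / h)):
    X = ssm_step(X, h, rng.normal(0.0, np.sqrt(h), size=N))
print("empirical mean and std at T:", X.mean(), X.std())
```

A Newton-type iteration could equally be used for the inner solve; the essential point is that the super-linear part \(v\) is treated implicitly, while \(b\) and \(\overline{\sigma}\) are evaluated explicitly at the intermediate state \(Y\).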
Therefore, the first term (4.19) can be estimated by Jensen's inequality \[\mathbb{E}\Big{[}\left|\mathbb{E}\big{[}X_{t+h}^{i,N}-\Psi_{i}(X _{t}^{i,N},\mu_{t}^{X,N},t,h)\mid\mathcal{F}_{t}\big{]}\right|^{2}\Big{]} \tag{4.23}\] \[\leq 2h\int_{t}^{t+h}\mathbb{E}\big{[}|v(X_{s}^{i,N},\mu_{s}^{X,N})-v (Y_{t}^{i,N},\mu_{t}^{Y,N})|^{2}\big{]}\ \mathrm{d}s\] (4.24) \[+2h\int_{t}^{t+h}\mathbb{E}\big{[}|b(s,X_{s}^{i,N},\mu_{s}^{X,N})- b(t,Y_{t}^{i,N},\mu_{t}^{Y,N})|^{2}\big{]}\ \mathrm{d}s.\] For the second term (4.20), we get \[\mathbb{E}\Big{[}\left|\left(\mathrm{id}-\mathbb{E}\!\left[\cdot\mid \mathcal{F}_{t}\right]\right)(X_{t+h}^{i,N}-\Psi_{i}(X_{t}^{i,N},\mu_{t}^{X,N},t,h))\right|^{2}\Big{]} \tag{4.25}\] \[\leq C\int_{t}^{t+h}\mathbb{E}\big{[}|\overline{\sigma}(s,X_{s}^{i,N}, \mu_{s}^{X,N})-\overline{\sigma}(t,Y_{t}^{i,N},\mu_{t}^{Y,N})|^{2}\big{]}\; \mathrm{d}s.\] By Young's inequality and Jensen's inequality, Assumption 2.1 and Proposition 4.2, for \(s\in[t,t+h]\), we have \[|X_{s}^{i,N}-Y_{t}^{i,N}|^{2} \leq 2|X_{s}^{i,N}-X_{t}^{i,N}|^{2}+2|X_{t}^{i,N}-Y_{t}^{i,N}|^{2},\] \[|X_{t}^{i,N}-Y_{t}^{i,N}|^{2} =|v(Y_{t}^{i,N},\mu_{t}^{Y,N})h|^{2}\leq\frac{2h^{2}}{N}\sum_{j=1} ^{N}|f(Y_{t}^{i,N}-Y_{t}^{j,N})|^{2}+2h^{2}|u(Y_{t}^{i,N},\mu_{t}^{Y,N})|^{2}\] \[\leq\frac{Ch^{2}}{N}\sum_{j=1}^{N}\Big{(}1+|Y_{t}^{i,N}-Y_{t}^{j, N}|^{2q+2}\Big{)}+Ch^{2}\Big{(}1+|Y_{t}^{i,N}|^{2q+2}+\frac{1}{N}\sum_{j=1}^{N}|Y_{ t}^{j,N}|^{2}\Big{)}\] \[\leq\frac{Ch^{2}}{N}\sum_{j=1}^{N}\Big{(}1+|X_{t}^{i,N}-X_{t}^{j, N}|^{2q+2}\Big{)}+Ch^{2}\Big{(}1+|Y_{t}^{i,N}|^{2q+2}+\frac{1}{N}\sum_{j=1}^{N}|Y_{ t}^{j,N}|^{2}\Big{)}.\] Similarly, we have \[|X_{s}^{i,N}-Y_{t}^{i,N}|^{4} \leq 16|X_{s}^{i,N}-X_{t}^{i,N}|^{4}+16|X_{t}^{i,N}-Y_{t}^{i,N}|^{4},\] \[|X_{t}^{i,N}-Y_{t}^{i,N}|^{4} \leq Ch^{4}\Big{(}1+|Y_{t}^{i,N}|^{4q+4}+\frac{1}{N}\sum_{j=1}^{ N}|Y_{t}^{j,N}|^{4}\Big{)}+\frac{Ch^{4}}{N}\sum_{j=1}^{N}\Big{(}1+|X_{t}^{i,N}-X_{t }^{j,N}|^{4q+4}\Big{)}.\] Using the moment stability of \(X^{i,N}\) (note \(m>4q+4>\max\{2(q+1),4\}\)) and Jensen's inequality, we get \[\frac{Ch^{2}}{N}\sum_{j=1}^{N}\mathbb{E}\Big{[}\Big{(}1+|X_{t}^{i,N}-X_{t}^{j, N}|^{2q+2}\Big{)}\Big{]}\leq Ch^{2},\quad\frac{Ch^{4}}{N}\sum_{j=1}^{N}\mathbb{E} \Big{[}\Big{|}\Big{(}1+|X_{t}^{i,N}-X_{t}^{j,N}|^{2q+2}\Big{)}\Big{|}^{2}\Big{]} \leq Ch^{4}.\] By (4.21) and another application of Jensen's inequality \[\mathbb{E}\big{[}|X_{s}^{i,N}-X_{t}^{i,N}|^{2}\big{]}\leq Ch\int_{t}^{\ast}\mathbb{E}\big{[}|v(X_{u}^{i,N},\mu_{u}^{X,N})+b(u,X_{u} ^{i,N},\mu_{u}^{X,N})|^{2}\big{]}\mathrm{d}u\] \[+C\int_{t}^{\ast}\mathbb{E}\big{[}|\overline{\sigma}(u,X_{u}^{i,N},\mu_{u}^{X,N})|^{2}\big{]}\mathrm{d}u\;\leq Ch.\] Similarly, we have \[\mathbb{E}\big{[}|X_{s}^{i,N}-X_{t}^{i,N}|^{4}\big{]}\leq Ch^{2}.\] Using the above results and we have sufficient moment bounds for \(Y_{t}^{i,N}\) from Proposition 4.4, we conclude that \[\mathbb{E}\big{[}|X_{s}^{i,N}-Y_{t}^{i,N}|^{2}\big{]} \leq Ch,\quad\mathbb{E}\big{[}|X_{s}^{i,N}-Y_{t}^{i,N}|^{4}\big{]} \leq Ch^{2},\] \[\mathbb{E}\big{[}|W^{(2)}(\mu_{s}^{X,N},\mu_{t}^{Y,N})|^{2}\big{]} \leq\frac{1}{N}\sum_{j=1}^{N}\mathbb{E}\big{[}|X_{s}^{j,N}-Y_{t}^{j,N}|^{2} \big{]}\leq Ch.\] Thus, for the term (4.24), taking Assumption 2.1 into account, following the arguments in [23, Section 4.2], Jensen's inequality, Cauchy-Schwarz inequality and Young's inequality yield \[\mathbb{E}\big{[}|v(X_{s}^{i,N},\mu_{s}^{X,N})-v(Y_{t}^{i,N},\mu_ {t}^{Y,N})|^{2}\big{]}\] \[\leq C\sqrt{\mathbb{E}\big{[}1+|X_{s}^{i,N}|^{4q}+|Y_{t}^{i,N}|^{ 
4q}\big{]}\mathbb{E}\big{[}|X_{s}^{i,N}-Y_{t}^{i,N}|^{4}\big{]}}+C\mathbb{E} \big{[}|X_{s}^{i,N}-Y_{t}^{i,N}|^{2}\big{]}\leq Ch.\] Also, from Assumption 2.1, we have \[\mathbb{E}\big{[}|b(s,X_{s}^{i,N},\mu_{s}^{X,N})-b(t,Y_{t}^{i,N}, \mu_{t}^{Y,N})|^{2}\big{]}\] \[\leq C\Big{(}h+\mathbb{E}\big{[}|X_{s}^{i,N}-Y_{t}^{i,N}|^{2} \big{]}+\mathbb{E}\big{[}|W^{(2)}(\mu_{s}^{X,N},\mu_{t}^{Y,N})|^{2}\big{]} \big{)}\leq Ch,\] and similarly, from Jensen's inequality and Remark 2.4, we have \[\mathbb{E}\big{[}|\overline{\sigma}(s,X_{s}^{i,N},\mu_{s}^{X,N})- \overline{\sigma}(t,Y_{t}^{i,N},\mu_{t}^{Y,N})|^{2}\big{]}\] \[\leq C\mathbb{E}\Big{[}h+\big{(}1+|X_{s}^{i,N}|^{2q}+|Y_{t}^{i,N}| ^{2q}\big{)}|X_{s}^{i,N}-Y_{t}^{i,N}|^{2}+\frac{1}{N}\sum_{j=1}^{N}|X_{s}^{j,N}-Y_ {t}^{j,N}|^{2q+2}\Big{]}\leq Ch.\] Substituting the results above back to (4.23) and (4.25), we have \[\mathbb{E}\Big{[}\left|\left[\mathbb{E}\big{[}X_{t+h}^{i,N}-\Psi_{i}(X _{t}^{i,N},\mu_{t}^{X,N},t,h)\mid\mathcal{F}_{t}\big{]}\right]^{2}\right| \leq Ch\int_{t}^{t+h}h\mathrm{d}s\leq Ch^{3},\] \[\mathbb{E}\Big{[}\left|\left(\mathrm{id}-\mathbb{E}\left[\cdot \mid\mathcal{F}_{t}\right]\right)(X_{t+h}^{i,N}-\Psi_{i}(X_{t}^{i,N},\mu_{t}^{ X,N},t,h))\right|^{2}\Big{]} \leq C\int_{t}^{t+h}h\mathrm{d}s\leq Ch^{2}.\] #### 4.6.3 Proof of convergence for the SSM scheme Proof of statement 3 in Theorem 2.18.: At last, we will prove the third statement in Theorem 2.18. By combining the first two statements and Theorem 2.14, we first have \[\sup_{n\in[0,M]}\sup_{i\in[1,N]}\mathbb{E}\big{[}|\,X_{n}^{i,N}-\hat{X}_{n}^{i,N}|^{2}\big{]}\leq Ch. \tag{4.26}\] Now, we extend the strong convergence rate to the continuous time version of the SSM, which has not been discussed in [10]. In order to extend the result above to the continuous extension of the SSM, we consider, for all \(n\in[\![0,M-1]\!]\), \(i\in[\![1,N]\!]\), \(r\in[0,h]\), \[|X_{t_{n}+r}^{i,N}-\hat{X}_{t_{n}+r}^{i,N}|^{2}= \Big{|}X_{t_{n}}^{i,N}-\hat{X}_{n}^{i,N}+\int_{t_{n}}^{t_{n}+r} \big{(}v(X_{s}^{i,N},\mu_{s}^{X,N})-v(Y_{n}^{i,N},\mu_{n}^{Y,N})\big{)} \mathrm{d}s \tag{4.27}\] \[+\int_{t_{n}}^{t_{n}+r}\big{(}b(s,X_{s}^{i,N},\mu_{s}^{X,N})-b(t_{ n},Y_{n}^{i,N},\mu_{n}^{Y,N})\big{)}\mathrm{d}s\] (4.28) \[+\int_{t_{n}}^{t_{n}+r}\big{(}\overline{g}(s,X_{s}^{i,N},\mu_{s}^ {X,N})-\overline{g}(t_{n},Y_{n}^{i,N},\mu_{n}^{Y,N})\big{)}\mathrm{d}W_{s}^{i}\] (4.29) \[+\int_{t_{n}}^{t_{n}+r}\big{(}v(Y_{n}^{i,N},\mu_{n}^{Y,N})-v(Y_{n} ^{i,*,N},\hat{\mu}_{n}^{Y,N})\big{)}\mathrm{d}s\] \[+\int_{t_{n}}^{t_{n}+r}\big{(}b(t_{n},Y_{n}^{i,N},\mu_{n}^{Y,N})-b (t_{n},Y_{n}^{i,*,N},\hat{\mu}_{n}^{Y,N})\big{)}\mathrm{d}s\] \[+\int_{t_{n}}^{t_{n}+r}\big{(}\overline{g}(t_{n},Y_{n}^{i,N},\mu_ {n}^{Y,N})-\overline{g}(t_{n},Y_{n}^{i,*,N},\hat{\mu}_{n}^{Y,N})\big{)}\mathrm{d }W_{s}^{i}\big{|}^{2},\] where \(Y_{n}^{i,N}=Y_{t_{n}}^{i,N}\), \(\mu_{n}^{Y,N}=\mu_{t_{n}}^{Y,N}\) are defined in (4.22). 
Taking expectation on both sides and using Jensen's inequality, we derive \[\mathbb{E}\big{[}|X_{t_{n}+r}^{i,N}-\hat{X}_{t_{n}+r}^{i,N}|^{2} \big{]}\leq C\mathbb{E}\Big{[}\Big{[}\big{(}X_{t_{n}}^{i,N}+v(Y_{n}^{i,N},\mu_ {n}^{Y,N})r+b(t_{n},Y_{n}^{i,N},\mu_{n}^{Y,N})r+\overline{g}(t_{n},Y_{n}^{i,N},\mu_{n}^{Y,N})\Delta W_{n,r}^{i}\Big{)}\] \[-\big{(}\hat{X}_{n}^{i,N}+v(Y_{n}^{i,*,N},\hat{\mu}_{n}^{Y,N})+b(t _{n},Y_{n}^{i,*,N},\hat{\mu}_{n}^{Y,N})r+\overline{g}(t_{n},Y_{n}^{i,*,N},\hat {\mu}_{n}^{Y,N})\Delta W_{n,r}^{i}\big{)}\Big{|}^{2}\Big{]}+Ch,\] where \(\Delta W_{n,r}^{i}=W_{t_{n}+r}^{i}-W_{t_{n}}^{i}\) and we remark that the integral terms in (4.27)-(4.29) can be analyzed using the results in Section 4.6.2. We now consider the following differences: From (2.12) and following similar calculations to [23, Section 4.2], we have \[\mathbb{E}\big{[}\big{|}\big{(}X_{t_{n}}^{i,N}+v(Y_{n}^{i,N},\mu_ {n}^{Y,N})r\big{)}-\big{(}\hat{X}_{n}^{i,N}+v(Y_{n}^{i,*,N},\hat{\mu}_{n}^{Y,N} )r\big{)}^{2}\big{]}\] \[\qquad=\mathbb{E}\big{[}\big{(}X_{t_{n}}^{i,N}-\hat{X}_{n}^{i,N} \big{)}+r\Delta V_{n}^{Y},\big{(}Y_{n}^{i,N}-Y_{n}^{i,*,N}\big{)}-(h-r)\Delta V _{n}^{Y}\big{)}\big{]}\] \[\qquad\leq\mathbb{E}\big{[}X_{t_{n}}^{i,N}-\hat{X}_{n}^{i,N}|^{2} \big{]}\frac{2\eta_{n}-\mathbb{E}}{\mathbb{E}\big{[}|Y_{n}^{i,N}-Y_{n}^{i,*,N}|^ {2}\big{]}}\frac{r}{2h}+\mathbb{E}\big{[}\big{(}Y_{n}^{i,N}-Y_{n}^{i,*,N}, \Delta V_{n}^{Y}\big{)}\big{]}r,\] where \(\Delta V_{n}^{Y}=v(Y_{n}^{i,N},\mu_{n}^{Y,N})-v(Y_{n}^{i,*,N},\hat{\mu}_{n}^{Y,N})\). By Jensen's inequality and the results in Section 4.6.1, we conclude that for all \(n\in[\![0,M-1]\!]\), \(i\in[\![1,N]\!]\), \(r\in[0,h]\), we have \[\mathbb{E}\big{[}|X_{t_{n}+r}^{i,N}-\hat{X}_{t_{n}+r}^{i,N}|^{2} \big{]}\leq Ch+C\mathbb{E}\big{[}|X_{t_{n}}^{i,N}-\hat{X}_{n}^{i,N}|^{2}\big{]} ### Proof of Theorem 2.19: Mean-square contractivity for the SSM Proof of Theorem 2.19.: Using the notations of Theorem 2.19 and Section 4.6.1, and recalling the results in (4.12) and (4.13), for all \(i\in\llbracket 1,N\rrbracket\), \(n\in\llbracket 0,M-1\rrbracket\), we have \[\mathbb{E}\big{[}\,|Y_{n}^{i,X,N}-Y_{n}^{i,Z,N}|^{2}\big{]}\] \[\quad\leq\frac{2h}{N}\sum_{j=1}^{N}\mathbb{E}\big{[}\,(Y_{n}^{i,X,N}-Y_{n}^{i,Z,N},f(Y_{n}^{i,X,N}-Y_{n}^{j,X,N})-f(Y_{n}^{i,Z,N}-Y_{n}^{j,Z,N}) )\big{]}\] \[\qquad\quad+\mathbb{E}\big{[}|\hat{X}_{n}^{i,N}-\hat{Z}_{n}^{i,N}| ^{2}\big{]}+2h\mathbb{E}\big{[}(Y_{n}^{i,X,N}-Y_{n}^{i,Z,N},u(Y_{n}^{i,X,N}, \hat{\mu}_{n}^{Y,X,N})-u(Y_{n}^{i,Z,N},\hat{\mu}_{n}^{Y,Z,N}))\big{]}\] \[\quad\leq|\hat{X}_{n}^{i,N}-\hat{Z}_{n}^{i,N}|^{2}+h(4L_{(f)}^{(1),+}+2L_{(u)}^{(1)}+2L_{(u)}^{(2)})\mathbb{E}\big{[}\,|Y_{n}^{i,X,N}-Y_{n}^{i,Z,N}|^{2}\big{]}\] \[\Rightarrow\mathbb{E}\big{[}\,|Y_{n}^{i,X,N}-Y_{n}^{i,Z,N}|^{2} \big{]}\leq\mathbb{E}\big{[}\,|\hat{X}_{n}^{i,N}-\hat{Z}_{n}^{i,N}|^{2}\big{]} \,\frac{1}{1-h(4L_{(f)}^{(1),+}+2L_{(u)}^{(1)}+2L_{(u)}^{(2)})}. 
\tag{4.30}\] Next, we consider \[\mathbb{E}\big{[}\,|\hat{X}_{n+1}^{i,N}-\hat{Z}_{n+1}^{i,N}|^{2} \big{]}=\mathbb{E}\Big{[}\,|Y_{n}^{i,X,N}+b(t_{n},Y_{n}^{i,X,N},\hat{\mu}_{n}^{ Y,X,N})h+\overline{\sigma}(t_{n},Y_{n}^{i,X,N},\hat{\mu}_{n}^{Y,X,N})\Delta W _{n}^{i}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-Y_{n}^{i,Z,N}-b (t_{n},Y_{n}^{i,Z,N},\hat{\mu}_{n}^{Y,Z,N})h-\overline{\sigma}(t_{n},Y_{n}^{i,Z,N},\hat{\mu}_{n}^{Y,Z,N})\Delta W_{n}^{i}|^{2}\Big{]}\] \[\quad=\mathbb{E}\big{[}\,|Y_{n}^{i,X,N}-Y_{n}^{i,Z,N}|^{2}+\big{|} \overline{\sigma}(t_{n},Y_{n}^{i,X,N},\hat{\mu}_{n}^{Y,X,N})-\overline{\sigma }(t_{n},Y_{n}^{i,Z,N},\hat{\mu}_{n}^{Y,Z,N})\big{|}^{2}h\big{]}\] \[\qquad\quad+h^{2}\mathbb{E}\big{[}\,|b(t_{n},Y_{n}^{i,X,N},\hat{ \mu}_{n}^{Y,X,N})-b(t_{n},Y_{n}^{i,Z,N},\hat{\mu}_{n}^{Y,Z,N})|^{2}\big{]}\] \[\quad\quad\quad+2h\mathbb{E}\big{[}\,\,\langle Y_{n}^{i,X,N}-Y_{n }^{i,Z,N},b(t_{n},Y_{n}^{i,X,N},\hat{\mu}_{n}^{Y,X,N})-b(t_{n},Y_{n}^{i,Z,N}, \hat{\mu}_{n}^{Y,Z,N})\big{]}\] \[\quad\leq\mathbb{E}\big{[}\,|\hat{X}_{n}^{i,N}-\hat{Z}_{n}^{i,N}| ^{2}\big{]}+\mathbb{E}\big{[}\,|Y_{n}^{i,X,N}-Y_{n}^{i,Z,N}|^{2}\big{]}\big{(}h (4L_{(f)}^{(1),+}+2L_{(u\sigma)}^{(1)}+2L_{(u\sigma)}^{(2)}+2L_{(b)}^{(2)}+2L_ {(b)}^{(3)}+2L_{(b)}^{(1)}h^{2}\big{)},\] where in the last inequality we used the results above, (4.13) and Cauchy-Schwarz inequality. Substituting (4.30) into the last inequality yields the result. ## Appendix A Properties of the convolved drift term after integration **Lemma A.1**.: _Let \((\mathbf{A}^{f},\mathbf{A}^{f_{\sigma}})\) in Assumption 2.1 hold. Then it holds for any \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) and \(m>2\)_ \[\int_{\mathbb{R}^{d}}\big{(}\langle x,(f*\mu)(x)\rangle+(m-1)|(f_ {\sigma}*\mu)(x)|^{2}\big{)}\mu(\mathrm{d}x) =\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\big{(}\langle x,f(x-y) \rangle+(m-1)|f_{\sigma}(x-y)|^{2}\big{)}\mu(\mathrm{d}x)\mu(\mathrm{d}y)\] \[\leq L_{(f)}^{(1)}\big{(}\mu(|\cdot|^{2})-|\mu(\mathrm{d}d)|^{2} \big{)}=L_{(f)}^{(1)}\mathsf{var}_{\mu},\] _where \(\mu(|\cdot|^{2}):=\int_{\mathbb{R}^{d}}|x|^{2}\mu(\mathrm{d}x)\), \(\mu(\mathrm{d}d):=\int_{\mathbb{R}^{d}}x\mu(\mathrm{d}x)\) and \(\mathsf{var}_{\mu}=\mu(|\cdot|^{2})-|\mu(\mathrm{d}d)|^{2}\)._ Proof.: Using \(f(0)=f_{\sigma}(0)=0\) and that \(f\) is an odd function we have \[\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\big{(}\langle x,f(x-y) \rangle+(m-1)|f_{\sigma}(x-y)|^{2}\big{)}\mu(\mathrm{d}x)\mu(\mathrm{d}y)\] \[\quad=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\frac{1}{2}\big{(} \langle x-y,f(x-y)\rangle+2(m-1)|f_{\sigma}(x-y)|^{2}\big{)}\mu(\mathrm{d}x) \mu(\mathrm{d}y)\] \[\quad\leq\frac{1}{2}L_{(f)}^{(1)}\int_{\mathbb{R}^{d}}\int_{ \mathbb{R}^{d}}|x-y|^{2}\mu(\mathrm{d}x)\mu(\mathrm{d}y)=\frac{1}{2}L_{(f)}^{(1 )}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\big{(}|x|^{2}-2\,\langle x,y \rangle+|y|^{2}\big{)}\mu(\mathrm{d}x)\mu(\mathrm{d}y)\] \[\quad=\frac{1}{2}L_{(f)}^{(1)}\left(2\mu(|\cdot|^{2})-2\int_{ \mathbb{R}^{d}}x\mu(\mathrm{d}x)\int_{\mathbb{R}^{d}}y\mu(\mathrm{d}y)\right)= L_{(f)}^{(1)}\big{(}\mu(|\cdot|^{2})-|\int_{\mathbb{R}^{d}}x\mu( \mathrm{d}x)|^{2}\big{)}=L_{(f)}^{(1)}\text{var}_{\mu},\] where for the inequality we used the monotonicity condition on the convolution kernels and the symmetry of the double integration in \(\mu\). **Lemma A.2**.: _Let \(f\) and \(f_{\sigma}\) satisfy conditions \((\mathbf{A}^{f},\ \mathbf{A}^{f_{\sigma}})\) of Assumption 2.1. Set \(L_{(f)}^{(1),+}=\max\{0,L_{(f)}^{(1)}\}\). 
Then, we have_ \[\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\Big{(}\langle x-y,(f* \mu)(x)-(f*\nu\,)(y)\rangle+(m-1)\big{|}(f_{\sigma}*\mu)(x)-(f_{\sigma}*\nu)(y )\big{|}^{2}\Big{)}\mu(\mathrm{d}x)\nu(\mathrm{d}y)\] \[\leq 2L_{(f)}^{(1),+}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}|x-y|^ {2}\mu(\mathrm{d}x)\nu(\mathrm{d}y).\] Proof.: For any \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), we compute \[\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\langle x-y,(f*\mu)(x)-(f* \nu)(y)\rangle\mu(\mathrm{d}x)\nu(\mathrm{d}y)\] \[=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \langle x-y,f(x-x^{\prime})-f(y-y^{\prime})\rangle\mu(\mathrm{d}x^{\prime})\nu (\mathrm{d}y^{\prime})\mu(\mathrm{d}x)\nu(\mathrm{d}y)\] \[=\frac{1}{2}\bigg{[}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \int_{\mathbb{R}^{d}}\langle x-y,f(x-x^{\prime})-f(y-y^{\prime})\rangle\mu( \mathrm{d}x^{\prime})\nu(\mathrm{d}y^{\prime})\mu(\mathrm{d}x)\nu(\mathrm{d}y)\] \[\qquad-\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R} ^{d}}\langle x^{\prime}-y^{\prime},f(x-x^{\prime})-f(y-y^{\prime})\rangle\mu (\mathrm{d}x)\nu(\mathrm{d}y)\mu(\mathrm{d}x^{\prime})\nu(\mathrm{d}y^{\prime}) \bigg{]}\] \[=\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\int_{ \mathbb{R}^{d}}\langle(x-x^{\prime})-(y-y^{\prime}),f(x-x^{\prime})-f(y-y^{ \prime})\rangle\mu(\mathrm{d}x)\nu(\mathrm{d}y)\mu(\mathrm{d}x^{\prime})\nu( \mathrm{d}y^{\prime}),\] and thus, \[\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\Big{(}\langle x-y,(f* \mu)(x)-(f*\nu)(y)\rangle+(m-1)\big{|}(f_{\sigma}*\mu)(x)-(f_{\sigma}*\nu)(y) \big{|}^{2}\Big{)}\mu(\mathrm{d}x)\nu(\mathrm{d}y)\] \[=\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\int_{ \mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\Big{(}\langle(x-x^{\prime})-(y-y^{\prime }),f(x-x^{\prime})-f(y-y^{\prime})\rangle\] \[\qquad\qquad\qquad\qquad\qquad+2(m-1)|f_{\sigma}(x-x^{\prime})-f _{\sigma}(y-y^{\prime})|^{2}\Big{)}\mu(\mathrm{d}x^{\prime})\nu(\mathrm{d}y^{ \prime})\mu(\mathrm{d}x)\nu(\mathrm{d}y)\] \[\leq\frac{1}{2}L^{(1)}_{(f)}\int_{\mathbb{R}^{d}}\int_{\mathbb{R }^{d}}\int_{\mathbb{R}^{d}}\big{|}(x-x^{\prime})-(y-y^{\prime})\big{|}^{2}\mu( \mathrm{d}x^{\prime})\nu(\mathrm{d}y^{\prime})\mu(\mathrm{d}x)\nu(\mathrm{d}y)\] \[\leq 2L^{(1),+}_{(f)}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}|x-y| ^{2}\mu(\mathrm{d}x)\nu(\mathrm{d}y).\]
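The heart of Lemmas A.1 and A.2 is the symmetrization of the double integral, which only uses that \(f\) is odd with \(f(0)=0\). For an empirical measure the symmetrization identity is exact and can be checked directly; the short sketch below (an illustration with arbitrary sample data, not part of the paper's code) verifies it for the cubic kernel \(f(x)=-x^{3}\) used in Section 3.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=500)          # atoms of an empirical measure mu_N
f = lambda r: -r**3               # an odd kernel with f(0) = 0, as in Section 3

D = x[:, None] - x[None, :]       # pairwise differences x_i - x_j

# int int <x, f(x - y)> mu(dx) mu(dy), evaluated for mu = mu_N
lhs = np.mean(x[:, None] * f(D))

# symmetrized form: (1/2) int int <x - y, f(x - y)> mu(dx) mu(dy)
rhs = 0.5 * np.mean(D * f(D))

assert np.isclose(lhs, rhs)       # exact up to floating-point round-off
print(lhs, rhs)
```

Since \(\langle r,f(r)\rangle=-|r|^{4}\leq 0\) for this kernel, the symmetrized form also makes the sign of the interaction term explicit, which is precisely how it enters the moment estimates of Section 4.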
2303.09420
The jump effect of a general eccentric cylinder rolling on a ramp
Interesting phenomena occur when an eccentric rigid body rolls on an inclined or horizontal plane. For example, a variety of motions between rolling and sliding are exhibited until suddenly a jump occurs. We provide a detailed theoretical description of the jump effect for a general eccentric cylinder. Before the jump, when the cylinder moves along the ramp, we can assume a pure rolling motion. However, it turns out that when the cylinder reaches its jumping position, both the normal and static frictional forces approach zero. Thus, it seems that there will no longer be sufficient force to maintain rolling without slip. In order to have a jump without slipping, we prove that the parameters that characterize the dynamic behavior of the cylinder must belong to some restricted region.
E. Aldo Arroyo, M. Aparicio Alcalde
2023-03-16T15:54:01Z
http://arxiv.org/abs/2303.09420v2
# The jump effect of a general eccentric cylinder rolling on a ramp ###### Abstract Interesting phenomena occur when an eccentric rigid body rolls on an inclined or horizontal plane. For example, a variety of motions between rolling and sliding are exhibited until suddenly a jump occurs. We provide a detailed theoretical description of the jump effect for a general eccentric cylinder. Before the jump, when the cylinder moves along the ramp, we can assume a pure rolling motion. However, it turns out that when the cylinder reaches its jumping position, both the normal and static frictional forces approach zero. Thus, it seems that there will no longer be sufficient force to maintain rolling without slip. In order to have a jump without slipping, we prove that the parameters that characterize the dynamic behavior of the cylinder must belong to some restricted region. ## 1 Introduction The physical system consists of a rigid body, such as hoops, wheels, disks, and spheres, whose center of mass is located at a distance \(d\neq 0\) from the geometric center and rolls on a horizontal or inclined plane with friction. This system has interesting and unexpected dynamic behavior, which has attracted the attention of the community. In Fig. 1, a general configuration of the physical system is exhibited, where we can identify an inclined ramp making an angle \(\alpha\) with the horizontal, and a cylinder of radius \(R\) with its center of mass located at a distance \(d\) from the geometric center. The cylinder rolls over the ramp, and its motion is tracked by the angle \(\theta\). In the case of a horizontal plane (i.e., \(\alpha=0^{\circ}\)), there is literature on the dynamics of a particular cylinder. This cylinder consists of a very thin, massless cylindrical shell with a point mass stuck to its surface. The motion of this cylinder has been studied and debated by several authors [1, 2]. When the point mass attached to the cylinder is initially at the highest point of the cylinder, and the system is released from rest (i.e., the initial conditions are such that \(\theta_{0}=0\) and \(\dot{\theta}_{0}=0\)), Tokieda [2] showed that the cylinder must jump at \(\theta=90^{\circ}\). However, subsequent articles have shown that the assumption made by the author, that the jump happens immediately after the pure rolling motion, is incorrect. For instance, it has been shown that before the jump, there must be sliding [3, 4, 5]. Despite this fact, Pritchett [4] obtained numerical and experimental results (for a hula hoop with a stuck point mass) showing that the jump happens around \(\theta=90^{\circ}\), even when considering slippage. Moreover, to test the jump at this angle of \(90^{\circ}\), Theron [6] claimed that the elasticity of the hula hoop must be considered. As observed, despite the physical system being simple, the motion of the cylinder turns out to be, in general, complex. Further analysis has shown that a variety of these motions can include self-induced jumping motions [7, 8, 9, 10, 11], as well as multiple transitions back and forth from rolling to slipping [12, 13, 14, 15]. In the case of an inclined plane (i.e., \(\alpha\neq 0\)), we have found a few models that consider this case. One of them studied an eccentric wheel [6], and the other studied an eccentric disk [16]. These works hypothesize that the wheel (or disk) must slip before jumping. 
Other references [17, 18, 19, 20] have studied the equations of motion for this cylinder by means of the Lagrangian formulation or a cumbersome torque analysis. However, these works did not determine the position where the cylinder jumps. In all the references we have found (with the exception of [21]), it was assumed that the necessary condition for the jump is when the normal force \(F_{y}\) (acting perpendicularly from the ramp over the cylinder) vanishes at the instant of the hop. Let us remark that a different jump condition will be used in this work, and this condition turns out to be equivalent to that of [21]. As argued in reference [21], the equation that the angle \(\theta\) and the angular velocity \(\dot{\theta}\) must satisfy at the position where the cylinder jumps are given by \[-g\cos\alpha+d\,\dot{\theta}^{2}\cos\theta=0. \tag{1}\] Assuming pure rolling motion, which implies conservation of energy, and using the initial conditions \(\theta_{0}=0\) and \(\dot{\theta}_{0}=0\), it was possible to find an angle \(\theta_{J}\) that is a solution of Eq. (1). Consequently, the position \(l_{J}=R\theta_{J}\) along the inclined plane where the cylinder jumps can be determined. However, there is a subtlety in the pure rolling assumption. For a given value of the coefficient of static friction \(\mu_{s}\) between the cylinder and inclined plane, before the cylinder reaches the position determined by the angle \(\theta_{J}\), it may happen that the static friction exceeds its maximum value. This would imply that the cylinder slips before jumping. Figure 1: Schematic configuration of the physical system showing a cylinder rolling down a ramp of angle \(\alpha\). The position of the geometric center \(C\), and the center of mass \(CM\) of the cylinder with respect to point \(O\) are given by the vectors \(\vec{l}+\vec{b}\) and \(\vec{l}+\vec{b}+\vec{d}\), respectively. The vector \(\vec{F}\) represents the force acting at the contact point between the cylinder and the ramp, and \(M\vec{g}\) is the cylinder’s weight. Therefore, aside from the jump condition issue, we remark that there is an important open question: Are there cases where an eccentric rigid body rolling on an inclined (or horizontal) plane jumps just after pure rolling motion? As we are going to see, the answer to this question is not trivial and depends on the values of \(\mu_{s}\), the angle \(\alpha\), and the moment of inertia of the rigid body considered. A general body that encompasses all types of eccentric bodies having a cylindrical shape will be referred to as a general eccentric cylinder. It turns out that the dynamic characteristics of this cylinder can be parameterized using two parameters, \(\chi\) and \(k_{m}\). By exploring this generality that we consider in our work, we prove that there are (quite common) situations in which the jump occurs immediately after sliding roll, and there are (less common) situations where the jump occurs immediately after pure rolling. We also prove that when the jump occurs immediately after sliding roll, we have \(F_{y}=0\), and in the cases where the jump occurs immediately after pure rolling, we have \(F_{y}>0\). Therefore, it would be correct to consider \(F_{y}=0\) as a necessary condition for the jump whenever there is sliding rolling immediately before the jump, as occurs in many studies in the literature, but it would be incorrect to state the same when the jump occurs immediately after pure rolling or in general cases. This paper is organized as follows. 
In section 2, we will describe the physical system to be analyzed and derive the corresponding equations of motion. Then we will propose a model for the general eccentric cylinder. In section 3, we determine the region where the parameters that characterize the dynamical behavior of the general eccentric cylinder must belong in order to have a jump without slip, for any value of \(\alpha\) and \(\mu_{s}\). In section 4, we will provide a summary and suggest further directions for exploration. ## 2 Description of the system and derivation of the equations of motion In this section, we will define the physical system to be studied and derive the corresponding equations of motion. The mechanical system consists of a general eccentric cylinder rolling down an inclined ramp, as shown in Fig. 1. The term eccentric means that the center of mass is located at a distance of \(d\) from the geometric center. By general, we mean that the mass distribution within the cylinder is arbitrary, with the only restriction being that this distribution is invariant under translations along the \(z\)-axis. Throughout the motion, we will assume that the principal axis of the cylinder passing through point \(C\) always remains parallel to the \(z\)-axis. Note that when the cylinder's height is very small compared to the radius, the cylinder represents a disk, and if the mass distribution is concentrated on the edge, the cylinder becomes a hoop or wheel. In this sense, we can say that this general eccentric cylinder can encompass all types of eccentric bodies that have cylindrical shape. In Fig. 1, we also show the coordinate system \(xy\), where the origin is set at the point \(O\) located at the top of the ramp. The \(x\)-axis and \(y\)-axis are parallel and perpendicular to the ramp, respectively. The position of the center of mass with respect to the geometric center of the cylinder is given by the vector \[\vec{d}=d\sin\theta\,\hat{i}+d\cos\theta\,\hat{j}\,, \tag{2}\] where \(\theta\) is the angle between the vector \(\vec{d}\) and the \(y\)-axis. Using the coordinate system \(xy\), which is an inertial reference frame, we can write the position of the center of mass as follows \[\vec{r}_{CM}=\vec{l}+\vec{b}+\vec{d}=(l+d\sin\theta)\hat{i}+(b+d\cos\theta) \hat{j}\,. \tag{3}\] In the case where the cylinder remains in contact with the ramp, the vector \(\vec{b}\) is given by \(\vec{b}=R\,\hat{j}\), where \(R\) is the radius of the cylinder. While in the case where the cylinder loses contact with the ramp and flies, we have \(|\vec{b}|>R\). The two equations that are used to determine the dynamics of this rigid body are: 1. Newton's second law for the motion of the center of mass: \[\vec{F}_{T}=M\ddot{\vec{r}}_{CM}\,,\] (4) where the subscript \(T\) means that we are considering the total force acting on the cylinder; and 2. Newton's second law for rotations: \[\vec{\tau}_{T}=I_{CM}\,\ddot{\vec{\theta}}\,,\] (5) where \(\ddot{\vec{\theta}}\) is defined as \(\ddot{\vec{\theta}}=-\ddot{\theta}\,\hat{k}\), and the total torque \(\vec{\tau}_{T}\) and moment of inertia \(I_{CM}\) are taken around the center of mass. From the configuration of the system shown in Fig. 1, we can write Eq. (5) as follows: \[-(\vec{b}+\vec{d})\times\vec{F}=-I_{CM}\ddot{\vec{\theta}}\,\hat{k}\,. \tag{6}\] Since the total force acting on the cylinder is given by \(\vec{F}_{T}=\vec{F}+M\vec{g}\), using Eqs. (3) and (4), we obtain \[\vec{F}=M(\ddot{\vec{l}}+\ddot{\vec{b}}+\ddot{\vec{d}}-\vec{g})\,. 
\tag{7}\] Let us write the force \(\vec{F}\) in terms of its components in the directions of the \(x\) and \(y\)-axis \[\vec{F}=F_{x}\hat{i}+F_{y}\hat{j}\,, \tag{8}\] note that these components \(F_{x}\) and \(F_{y}\) are the friction and normal force, respectively. Now substituting Eqs. (3) and (8) into Eq. (7), we get \[F_{x}=M\ddot{l}+Md\,\frac{d^{2}}{dt^{2}}(\sin\theta)-Mg\sin\alpha\,,\] \[F_{y}=M\ddot{b}+Md\,\frac{d^{2}}{dt^{2}}(\cos\theta)+Mg\cos\alpha\,. \tag{9}\] Using Eqs. (2), (8) and writing \(\vec{b}=b\hat{j}\), from Eq. (6) we obtain \[I_{CM}\ddot{\theta}=d\sin\theta\,F_{y}-(b+d\cos\theta)F_{x}\,. \tag{10}\] The equations (9) and (10) will be useful in analyzing three possible types of motion for the cylinder: pure rolling, rolling with slipping, and flight motion. For each type of motion, additional relations between the forces and positions must be established. In the next two subsections, we will study the equations in the case of pure rolling and flight motion. ### Equations in the case of pure rolling motion Since the cylinder is in contact with the ramp, we have \(b=R\), which means that \(b\) is constant and, therefore, \(\ddot{b}=0\). Moreover, due to the pure rolling condition, we also have \(l=l_{0}+R(\theta-\theta_{0})\), and consequently \(\ddot{l}=R\,\ddot{\theta}\). The moments of inertia \(I_{C}\) and \(I_{CM}\) with respect to the geometric center \(C\) and the center of mass \(CM\) of the cylinder are related by the equation \(I_{C}=I_{CM}+Md^{2}\). Substituting these equations into Eqs. (9) and (10), we can derive the following nonlinear second-order differential equation \[\ddot{\theta}\left(\frac{I_{C}}{MR^{2}}+1+2\chi\cos\theta\right)-\chi\dot{\theta}^{2}\sin\theta-\frac{g}{R}(\chi\sin(\alpha+\theta)+\sin\alpha)=0\,, \tag{11}\] where we have defined the following dimensionless parameter \(\chi\) as follows \[\chi=\frac{d}{R}\,. \tag{12}\] ### Equations in the case of flight motion In the case of flight motion, the cylinder has no contact with the ramp, therefore the contact force is null, i.e., \(F_{x}=F_{y}=0\). From Eq. (10) it is straightforward to show that the angular velocity \(\dot{\theta}\) is constant. And from Eq. (9) we obtain: \[\ddot{l} =d\,\dot{\theta}^{2}\sin\theta+g\sin\alpha\,,\] \[\ddot{b} =d\,\dot{\theta}^{2}\cos\theta-g\cos\alpha\,. \tag{13}\] These equations are consistent with the common knowledge about the free-fall motion of a rigid body, where the center of mass performs a parabolic trajectory and the angular velocity is constant. ### Transition from rolling to flight motion In order to understand the condition for the transition from rolling to flight motion, we will perform the following analysis. Before the jump, the cylinder stays in contact with the ramp, so \(b(t)=R\), \(\dot{b}(t)=\ddot{b}(t)=0\), and the values of \(\theta(t)\), \(\dot{\theta}(t)\), and \(\ddot{\theta}(t)\) are related by means of Eq. (10). At the moment of the jump, the values of \(b(t)\), \(\theta(t)\), \(\dot{b}(t)\), and \(\dot{\theta}(t)\) change continuously. The continuity of \(\dot{b}(t)\) and \(\dot{\theta}(t)\) is supported by the fact that there are no additional external forces acting on the cylinder at the instant of the jump that would change the cylinder's momentum. After the jump, the cylinder performs a flight motion where the relations in Eq. (13) are valid. Due to the continuities mentioned above, the values of \(b=R\) and \(\dot{b}=0\) are the initial conditions for \(b(t)\) in the flight motion. Therefore, when the value of \(\ddot{b}\) given by Eq.
(13) starts to be greater than zero, it implies that the values of \(\dot{b}\) and \(b\) start to increase. It is interesting to note that this increase in the values of \(\dot{b}\) and \(b\) happens because we assumed that the cylinder is not attached to the ramp. Consequently, at the point where the transition from rolling to flight motion occurs, we must have \(\ddot{b}=0\) in Eq. (13). This condition means 1: Footnote 1: This jump condition was obtained in Ref. [21] using an alternative approach. \[d\,\dot{\theta}^{2}\cos\theta-g\cos\alpha=0\,. \tag{14}\] Before the jump, in order to understand the behavior of \(\ddot{b}\) as defined in Eq. (13), we need to track the value of \(\dot{\theta}\). The motion of the cylinder starts with \(\dot{\theta}=0\), and due to the action of gravity, \(\dot{\theta}\) increases. Therefore, at the beginning, \(\ddot{b}=-g\cos\alpha<0\), and subsequently, \(\ddot{b}\) increases because of the contribution of \(\dot{\theta}^{2}\) (with some oscillation due to the factor \(\cos\theta\)). This implies that at some future instant, when \(\ddot{b}\) approaches zero, i.e., when Eq. (14) is satisfied, we reach the moment when the cylinder jumps. Let us comment that in other works [5, 6, 16], the jump condition has been given by \(F_{y}=0\). Namely, the jump happens at the point where the normal force vanishes. However, these works consider the hypothesis that the jump is not possible from pure rolling, which means that the cylinder must slide before jumping. Although the main focus of our work is not on the slip case, using equations (9) and (10), together with the relation \(F_{x}=\sigma\mu_{k}F_{y}\), where \(\mu_{k}\) is the coefficient of kinetic friction, and \(\sigma=-1\) in case \(\dot{l}>R\dot{\theta}\) (i.e., skidding motion) and \(\sigma=+1\) in case \(\dot{l}<R\dot{\theta}\) (i.e., spinning motion), we can show that the normal force \(F_{y}\) is given by \[F_{y}=\frac{M\,I_{CM}\left(d\,\dot{\theta}^{2}\cos\theta-g\cos\alpha\right)}{Md \left(\sigma\,\mu_{k}\sin\theta(d\cos\theta+R)-d\sin^{2}\theta\right)-I_{CM}}\,. \tag{15}\] From Eq. (15), we observe that the jump condition, as given by Eq. (14), clearly implies that the normal force \(F_{y}\) vanishes. It should be noted that the expression for the normal force, as given by Eq. (15), is only valid when there is slippage. If the jump occurs from pure rolling motion, employing Eq. (14), we will show that the normal force does not necessarily vanish. ### Scale invariance of the dynamics and a model for the general eccentric cylinder Considering a general mass distribution inside the cylinder, with the only restriction being that this distribution is invariant under translations along the cylinder's principal axis, such that the center of mass does not coincide with the cylinder's geometric center, in this subsection we analyze the equations of motion of this general eccentric cylinder. It turns out that the equations of motion have scale invariance. By using this invariance, we can find common characteristics between two different cylinders in a way that the dynamics of both cylinders are equivalent. These common characteristics between two different cylinders can be parameterized using two independent parameters. We also propose a simplified model of the eccentric cylinder where the two independent parameters have a simple geometric and mass distribution interpretation. We prove that this simplified model is dynamically equivalent to any general cylinder considered in our study. 
Let us consider a Cartesian coordinate system fixed to the cylinder, such that the \(z\)-axis coincides with the principal axis, and the coordinate origin is located at the cylinder's geometric center \(C\). The center of mass \(CM\) and the cylinder's moment of inertia with respect to the \(z\)-axis are computed as \(\vec{r}_{CM}=\frac{1}{M}\int dxdydz\,\vec{r}\rho(x,y)\) and \(I_{C}=\int dxdydz\,(x^{2}+y^{2})\rho(x,y)\), respectively, where \(\rho(x,y)\) is the mass density bounded by the cylindrical surface. Since the mass distribution is invariant under translations along the \(z\)-axis, the density \(\rho(x,y)\) does not depend on \(z\). These specifications provide two general implications: 1. The \(CM\) can be located at any distance \(d\) from the cylinder's geometric center \(C\), with this distance restricted to the interval \(d\in[0,R]\). For example, the particular case where \(d=R\) occurs when the entire cylinder's mass is distributed along a line on the cylindrical lateral surface that is parallel to the \(z\)-axis. 2. For fixed values of \(M\), \(R\), and \(d\), the moment of inertia \(I_{CM}\) (or equivalently \(I_{C}\), thanks to the relation \(I_{C}=I_{CM}+Md^{2}\)), has minimum and maximum values, where these bounding values are: 1. _The minimum:_ this value corresponds to the case where the whole cylinder's mass is distributed along a line (parallel to the \(z\)-axis) that passes through the \(CM\), therefore in this case we have: \(I_{CM}=0\) or \(I_{C}=Md^{2}\). 2. _The maximum:_ this value corresponds to the case where the whole cylinder's mass is distributed on the cylinder's lateral surface, so in this case we have: \(I_{CM}=MR^{2}-Md^{2}\) or \(I_{C}=MR^{2}\). Given a value of \(I_{C}\) such that \(Md^{2}\leq I_{C}\leq MR^{2}\), there are a variety of possibilities for the mass distribution \(\rho(x,y)\) that yield the same value of \(I_{C}\). Since, for fixed values of \(M\), \(R\), \(d\) and \(g\), the equations of motion, given by Eqs. (9) and (10), depend only on \(I_{C}\), the cylinder's dynamical behavior does not depend on specific details of the mass distribution \(\rho(x,y)\). From Eqs. (9) and (10), we can write the following equation \[\tilde{I}_{CM}\frac{d^{2}\tilde{\theta}}{d\tilde{t}^{2}}=\chi\sin\tilde{\theta}\,\left(\frac{d^{2}\tilde{b}}{d\tilde{t}^{2}}+\chi\frac{d^{2}\cos\tilde{\theta}}{d\tilde{t}^{2}}+\cos\alpha\right)-(\tilde{b}+\chi\cos\tilde{\theta})\,\left(\frac{d^{2}\tilde{l}}{d\tilde{t}^{2}}+\chi\frac{d^{2}\sin\tilde{\theta}}{d\tilde{t}^{2}}-\sin\alpha\right)\,, \tag{16}\] where we have defined the dimensionless quantities \(\tilde{I}_{CM}=\frac{I_{CM}}{MR^{2}}\), \(\tilde{t}=\sqrt{\frac{g}{R}}\,t\), \(\tilde{\theta}=\theta\), \(\tilde{b}=b/R\), \(\tilde{l}=l/R\) and \(\chi\) is given by Eq. (12). The functions \(\tilde{\theta}\), \(\tilde{b}\), and \(\tilde{l}\) are related to the solutions \(\theta\), \(b\), and \(l\) of the equations of motion. More explicitly, these relations are given by \(\theta(t)=\tilde{\theta}(\sqrt{\frac{g}{R}}\,t)\), \(b(t)=R\tilde{b}(\sqrt{\frac{g}{R}}\,t)\), and \(l(t)=R\tilde{l}(\sqrt{\frac{g}{R}}\,t)\). At this point, we are ready to analyze the dependence of the cylinder's dynamics with respect to \(M\) and \(R\). Consider two cylinders, one with mass and radius given by the set \((M,R)\) and the other by \((M^{\prime},R^{\prime})\). If \(\chi\) and \(\tilde{I}_{CM}\) are the same for both cylinders, then due to Eq. (16), they share the same solutions \(\tilde{\theta}\), \(\tilde{b}\), and \(\tilde{l}\).
Therefore, the sets of functions \((\theta^{\prime},b^{\prime},l^{\prime})\) and \((\theta,b,l)\) are related by \(\theta^{\prime}=\theta\), \(b^{\prime}=\frac{R^{\prime}}{R}b\), \(l^{\prime}=\frac{R^{\prime}}{R}l\). Namely, \(\theta\) is equal for both sets, while \(b\) and \(l\) do not depend on \(M\) and scale by a factor proportional to \(R\). Similarly, since \(t=\sqrt{\frac{R}{g}}\tilde{t}\), the time of the events that happen throughout the motion (like the time when the jump happens) scales by a factor proportional to the square root of \(R\). From the above observations, and defining the scale factor \(\lambda=\frac{R^{\prime}}{R}\), we conclude that the equations of motion are invariant under the scale transformation \(\theta^{\prime}=\theta\), \(b^{\prime}=\lambda\,b\), \(l^{\prime}=\lambda\,l\), \(t^{\prime}=\sqrt{\lambda}\,t\) provided that \(\chi\) and \(\tilde{I}_{CM}\) are the same for the two different sets of values \((M,R)\) and \((M^{\prime},R^{\prime})\). Since \(\chi=\chi^{\prime}\), we have that \(d^{\prime}=\lambda\,d\). Furthermore, from \(\tilde{I}_{CM}=\tilde{I}_{CM}^{\prime}\), namely \(\frac{I_{CM}}{MR^{2}}=\frac{I_{CM}^{\prime}}{M^{\prime}R^{\prime 2}}\), or \(\frac{I_{C}}{MR^{2}}=\frac{I_{C}^{\prime}}{M^{\prime}R^{\prime 2}}\), and using \(I_{C}=\int dxdydz\,(x^{2}+y^{2})\rho(x,y)\), we get the relation \[\int dxdydz\,\left(\left(\frac{x}{R}\right)^{2}+\left(\frac{y}{R}\right)^{2}\right)\frac{\rho(x,y)}{M}=\int dx^{\prime}dy^{\prime}dz^{\prime}\,\left(\left(\frac{x^{\prime}}{R^{\prime}}\right)^{2}+\left(\frac{y^{\prime}}{R^{\prime}}\right)^{2}\right)\frac{\rho^{\prime}(x^{\prime},y^{\prime})}{M^{\prime}}\,,\] which means that under the transformation \((x^{\prime},y^{\prime},z^{\prime})=(\lambda x,\lambda y,\lambda z)\), and the change of mass from \(M\) to \(M^{\prime}\), we have that \(\frac{\rho(x,y)}{M}dxdydz=\frac{\rho^{\prime}(x^{\prime},y^{\prime})}{M^{\prime}}dx^{\prime}dy^{\prime}dz^{\prime}\), or \(\frac{dm}{M}=\frac{dm^{\prime}}{M^{\prime}}\), that is, the mass densities (divided by the total mass) of the cylinders are related by \(\frac{\rho(x,y)}{M}=\lambda^{3}\,\frac{\rho^{\prime}(x^{\prime},y^{\prime})}{M^{\prime}}\). A summary of the previous analysis is as follows: given two different cylinders, we use Cartesian coordinate systems fixed to each of them to calculate their centers of mass and moments of inertia. Let \((x^{\prime},y^{\prime},z^{\prime})\) be the coordinate system fixed to one cylinder, and \((x,y,z)\) the one fixed to the other. If these coordinate systems are related by the scale transformation \((x^{\prime},y^{\prime},z^{\prime})=(\lambda x,\lambda y,\lambda z)\), and the mass densities \(\rho(x,y)\) and \(\rho^{\prime}(x^{\prime},y^{\prime})\) are such that \(\frac{\rho(x,y)}{M}=\lambda^{3}\,\frac{\rho^{\prime}(x^{\prime},y^{\prime})}{M^{\prime}}\), then this condition guarantees that \(\chi=\chi^{\prime}\) and \(\tilde{I}_{CM}=\tilde{I}_{CM}^{\prime}\), and hence the dynamic behavior of the two cylinders is equivalent. Since the cylinder's dynamics is basically ruled by the two parameters \(\chi\) and \(\tilde{I}_{CM}\), we will propose a particular construction of a cylinder which will be characterized by two other parameters, where one of them can be chosen as being the parameter \(\chi\), and the second one will be related to the parameter \(\tilde{I}_{CM}\). In Fig.
2, we show a cross section of our cylinder model, which consists of a thin cylindrical shell of mass \(m_{C}\) with uniform mass distribution, attached to a mass line \(m_{P}\), parallel to the cylinder's principal axis. Besides the theoretical importance of this cylinder model that will be used in the presentation of our main results, this model could also be used for an experimental study of the jump effect. Let us argue that this cylinder model encompasses all possible cases of general eccentric cylinders characterized by the parameters \(\chi\) and \(\tilde{I}_{CM}\). Note that the mass of the cylinder model is given by \(M=m_{C}+m_{P}\). Aside the parameter \(\chi=d/R\), where \(R\) is the radius of the thin cylindrical shell, we introduce the parameter \(k_{m}=m_{C}/M\). For example, some particular cases are: Figure 2: Cross section of a thin cylindrical shell of mass \(m_{C}\) plus a mass line \(m_{P}\) parallel to the cylinder’s principal axis. * One where \(CM\) coincides with \(C\), so we have \(d=0\) (namely \(\chi=0\)), this case happens when the mass line \(m_{P}\) passes through \(C\). The parameter \(k_{m}\) controls the different possibilities for the values of the moment of inertia. * One where \(CM\) is over the border of the cylinder, it means \(d=R\), this case happens when \(\chi=1\) and \(k_{m}=0\). Through the relation that defines the \(CM\), we can show that the distance \(s\) between the \(CM\) and the mass line \(m_{P}\), is given by \(s=\frac{k_{m}}{1-k_{m}}\chi R\). Then the moment of inertia \(I_{C}\) is computed as follows \[I_{C}=m_{C}R^{2}+m_{P}(d+s)^{2}=m_{C}R^{2}+\frac{M^{2}}{m_{P}}d^{ 2}\,,\] \[\Rightarrow\frac{I_{C}}{MR^{2}}=k_{m}+\frac{\chi^{2}}{1-k_{m}}\,. \tag{17}\] Since \(Md^{2}\leq I_{C}\leq MR^{2}\), we have that: \(\chi^{2}\leq\frac{I_{C}}{MR^{2}}\leq 1\), and therefore \(0\leq\chi\leq 1\). Then from Eq. (17), we have that \(0\leq k_{m}\leq 1-\chi\). Moreover, according to Eq. (17) for the different values of \(\chi\) and \(k_{m}\) in the former intervals, we cover all the possible values for \(\frac{I_{C}}{MR^{2}}\). From Eq. (17), we have that \(\tilde{I}_{CM}=\frac{I_{CM}}{MR^{2}}=k_{m}+\frac{k_{m}\chi^{2}}{1-k_{m}}\), where this parameter \(\tilde{I}_{CM}\) appears in Eq. (16), which explicitly exhibits the independence of the dynamics in relation to the mass \(M\) and the radius \(R\) when \(\chi\) and \(k_{m}\) are fixed. ## 3 Conditions for a slip-free transition from pure rolling to flight motion As mentioned in the previous section, the cylinder whole motion is basically composed of three types of particular motions: pure rolling, rolling plus slipping and flight motion. From results of numerical simulation (work in progress [22]), we have observed that depending on the initial conditions, the values of the parameters \(\chi\), \(k_{m}\) and the ramp inclination \(\alpha\), the cylinder performs a variety of interesting motions. The most common situation corresponds to the case where the cylinder initially performs a pure rolling motion, then the motion is alternated between pure rolling and rolling plus slipping, until at some point the cylinder loses contact with the ramp and jumps. In this section, we will study the following sequence of motions for the cylinder and the conditions required to have such a sequence. 
\[\theta=\theta_{0} \Rightarrow\text{initial position},\] \[\theta_{0}<\theta<\theta_{J} \Rightarrow\text{pure rolling motion},\] \[\theta=\theta_{J} \Rightarrow\text{jump position},\] \[\theta>\theta_{J} \Rightarrow\text{flight motion}.\] Note that the transition from pure rolling to flight motion happens at the point where \(\theta=\theta_{J}\). The subscript \(J\) means that we are considering the value of the quantities at the instant of the jump. From here to the rest of the paper, we will use the following acronym JARM to mean: jump after a pure rolling motion. As discussed in section 2, at the point where the cylinder jumps, the angle \(\theta\) and the angular velocity \(\dot{\theta}\) must satisfy Eq. (14). We have denoted by \(\theta_{J}\) the angle that is a solution of Eq. (14). Therefore from this equation, we can write \[\dot{\theta}_{J}^{2}=\frac{g\cos\alpha}{d\cos\theta_{J}}. \tag{18}\] Substituting Eq. (18) into the equation of motion Eq. (11), we obtain \[\ddot{\theta}_{J}=\frac{g(1+\chi\cos\theta_{J})\sec\theta_{J}\sin(\alpha+\theta_{ J})}{R\left(k_{m}+\frac{\chi^{2}}{1-k_{m}}+1+2\chi\cos\theta_{J}\right)}. \tag{19}\] In order to have a JARM, since by definition the cylinder does not slip, the force of friction between the cylinder and the ramp should be static, and \[\left|\frac{F_{x}}{F_{y}}\right|\leq\mu_{s}\,, \tag{20}\] where \(F_{x}\) and \(F_{y}\) are the friction and normal force, respectively. Remember that the definition of these forces as given in Eqs. (9) are valid for general rolling motions (pure rolling or rolling plus slipping). In order to restrict our analysis to the pure rolling motion, we should substitute the equations \(\ddot{b}=0\) and \(\ddot{l}=R\,\ddot{\theta}\), into Eqs. (9); after such substitution, we obtain \[F_{x} = MR\big{(}-\frac{g}{R}\sin\alpha-\chi\dot{\theta}^{2}\sin\theta+ (1+\chi\cos\theta)\ddot{\theta}\,\big{)}\,,\] \[F_{y} = MR\big{(}\frac{g}{R}\cos\alpha-\chi\dot{\theta}^{2}\cos\theta- \chi\ddot{\theta}\sin\theta\big{)}\,. \tag{21}\] It is interesting to note that at the beginning of the cylinder's motion, the kinetic energy is low (or zero if the cylinder is left from rest), and the \(CM\) of the cylinder is in a higher position so that this initial configuration allows the cylinder to roll down. Clearly at the beginning, \(F_{y}\) is positive because it is dominated by the term \(Mg\cos\alpha\), subsequently the values of \(\dot{\theta}\) and \(\ddot{\theta}\) grow and so the value of the normal force approaches to zero and at some point becomes negative. Since the cylinder is not attached to the ramp, negative values of the normal force are not physical allowed in our study. Before the normal force becomes negative (when it is positive and approaching zero), the inequality given by Eq. (20) is satisfied. However, at some point, this inequality will no longer hold, which means that the cylinder will begin to slip. Numerical inspection [22] has revealed that the point where inequality (20) is no longer valid is located in proximity to two other points: the point where the normal force becomes zero and the point where the jump condition is satisfied (as given by Eq. (14)). Based on these observations, and in order to have a JARM in the case of pure rolling motion, we will analyze the following condition for the normal force: \[F_{y}>0. \tag{22}\] Since this condition is less restrictive than the condition given by Eq. (20), it is clear that we will need to complement Eq. (22) with Eq. (20). 
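The interplay between the equation of motion (11), the jump condition (14), and the no-slip requirement (20) is easy to probe numerically. The sketch below is only illustrative (the parameter values are arbitrary assumptions, not results of this work): it integrates Eq. (11) with SciPy, stops at the first instant where Eq. (14) is fulfilled, and monitors the friction-to-normal-force ratio of Eqs. (21) along the way; if this ratio exceeds \(\mu_{s}\) before the event, the pure rolling assumption breaks down and a JARM is excluded for those parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed values, not taken from the paper)
g, R, M = 9.81, 0.1, 1.0                      # gravity [m/s^2], radius [m], mass [kg]
alpha = np.radians(30.0)                      # ramp inclination
chi, k_m = 0.6, 0.1                           # chi = d/R and k_m = m_C/M of the cylinder model
IC_over_MR2 = k_m + chi**2 / (1.0 - k_m)      # I_C/(M R^2), Eq. (17)
d = chi * R

def rhs(t, y):
    """Pure-rolling equation of motion, Eq. (11), written as a first-order system."""
    th, om = y
    num = chi * om**2 * np.sin(th) + (g / R) * (chi * np.sin(alpha + th) + np.sin(alpha))
    den = IC_over_MR2 + 1.0 + 2.0 * chi * np.cos(th)
    return [om, num / den]

def jump(t, y):
    """Left-hand side of the jump condition, Eq. (14); integration stops at its first zero."""
    th, om = y
    return d * om**2 * np.cos(th) - g * np.cos(alpha)
jump.terminal, jump.direction = True, 1

sol = solve_ivp(rhs, [0.0, 30.0], [0.0, 0.0], events=jump, max_step=1e-3, rtol=1e-9)

# Static friction and normal force during pure rolling, Eqs. (21);
# a diverging ratio |F_x/F_y| signals that slipping must occur before the jump.
th, om = sol.y
dom = np.array([rhs(0.0, y)[1] for y in sol.y.T])
F_x = M * R * (-(g / R) * np.sin(alpha) - chi * om**2 * np.sin(th) + (1.0 + chi * np.cos(th)) * dom)
F_y = M * R * ((g / R) * np.cos(alpha) - chi * om**2 * np.cos(th) - chi * dom * np.sin(th))

if sol.t_events[0].size > 0:
    print(f"jump condition met at theta_J = {np.degrees(th[-1]):.2f} deg")
print(f"max |F_x/F_y| along the rolling phase: {np.max(np.abs(F_x / F_y)):.3f}")
```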
### Conditions for a JARM independent of the initial conditions In order to perform a general analysis of the values and signs of the friction and normal force at the jump point, namely when \(\theta=\theta_{J}\), we substitute Eqs. (18) and (19) into Eqs. (21), so that we obtain \[F_{x,J}=-\frac{gM\left(k_{m}+\frac{\chi^{2}}{1-k_{m}}-\chi^{2}\cos^{2}\theta_{J}\right)\sec\theta_{J}\sin(\alpha+\theta_{J})}{k_{m}+\frac{\chi^{2}}{1-k_{m}}+1+2\chi\cos\theta_{J}}\,, \tag{23}\] \[F_{y,J}=-\frac{gM\chi(1+\chi\cos\theta_{J})\tan\theta_{J}\sin(\alpha+\theta_{J})}{k_{m}+\frac{\chi^{2}}{1-k_{m}}+1+2\chi\cos\theta_{J}}\,. \tag{24}\] Regarding the result of the normal force \(F_{y,J}\) from equation (24), we note that \(F_{y,J}\) is not necessarily equal to zero. In the case where \(F_{y,J}>0\), and since for \(\theta>\theta_{J}\) we have \(F_{y}=0\), it is clear that the normal force changes discontinuously from a non-vanishing to zero value at the point where the cylinder jumps. Note that some terms in Eqs. (23) and (24) are positive; these are: \(k_{m}+\frac{\chi^{2}}{1-k_{m}}+1+2\chi\cos\theta_{J}>0\); and \(1+\chi\cos\theta_{J}>0\). Also from Eq. (18) we have: \(\cos\theta_{J}>0\); and thus \(k_{m}+\frac{\chi^{2}}{1-k_{m}}-\chi^{2}\cos^{2}\theta_{J}>0\). After consideration of the positivity condition of these terms, employing Eqs. (23) and (24), we get \[sign(F_{x,J})=-sign(\sin(\alpha+\theta_{J})), \tag{25}\] \[sign(F_{y,J})=-sign(\sin\theta_{J}\sin(\alpha+\theta_{J})). \tag{26}\] A subtle issue that can be observed from equation (26) is that the normal force at the jump point could even be negative. Physically, this would mean that before the cylinder jumps, the normal force could have been zero. Therefore, the cylinder may have slipped before jumping, implying that the pure rolling assumption is no longer valid. In what follows, we will address this issue in more detail. As mentioned before, in order to have a JARM, the condition that the normal force should be positive needs to be complemented with Eq. (20). Therefore, let us start our analysis by searching for possible allowed values of \(\theta_{J}\) through Eq. (26), so that the condition given by Eq. (22) is satisfied. In that sense, it is not difficult to show that \[2\pi n-\alpha<\theta_{J}<2\pi n,\ \ \ n=1,2,3,\ldots \tag{27}\] From Fig. 3(a), we can see that the angle \(\alpha+\theta\) is measured between \(\vec{d}\) and the vertical line. Therefore, from this geometrical configuration of the angles, we can easily prove that the whole shadowed regions (gray and green) correspond to the regions where \(\cos\theta>0\). Now from Eq. (26), it is not difficult to see that in the gray region \(F_{y,J}<0\), and in the green region \(F_{y,J}>0\). Therefore, the only domain where a JARM could happen (due to the condition in Eq. (22)) is when the \(CM\) is inside the green region; in the other regions a JARM is precluded. Notice that the green region agrees with the interval for \(\theta_{J}\) given by Eq. (27), where the integer number \(n\) is interpreted as the counting of the full turns completed by the cylinder. Finally, in the green region we can check that \(F_{x,J}<0\) (this result is obtained from Eq. (25)), which means that the static friction force over the cylinder points along the ramp, in the uphill direction. More restrictions for the possible allowed values of \(\theta_{J}\) can be obtained from the condition given by Eq. (20). So, substituting Eqs. (23) and (24) into Eq.
(20) we obtain \[\mu(\chi,k_{m},\theta_{J})\equiv\frac{\frac{-k_{m}^{2}+k_{m}+\chi^{2}}{(k_{m}-1) \chi^{2}}+\cos^{2}\theta_{J}}{\sin\theta_{J}\,\left(\frac{1}{\chi}+\cos\theta_ {J}\right)}\leq\mu_{s}\,. \tag{28}\] In the Fig. 3b, a typical plot of the function \(\mu(\chi,k_{m},\theta_{J})\) is shown. By fixing the value of \(\mu_{s}\), and using the inequality given in Eq. (28), we can calculate the region to which \(\theta_{J}\) belongs. It is worth noting that the domain of \(\theta_{J}\) as shown in Fig. 3b is bigger than the one defined by Eq. (27), however it is important to keep in mind that the lower bound for \(\theta_{J}\) must be greater than or equal to \(2\pi n-\alpha\). We also note that due to the usual periodic property of the trigonometric functions that appear in \(\mu(\chi,k_{m},\theta_{J})\), the plot of this function is the same for any integer value \(n\). After some analysis of the Fig. 3b, we conclude that to have a JARM: 1. There is a minimum value \(\mu_{s-min}\) for the coefficient of static friction. 2. For each \(\mu_{s}>\mu_{s-min}\), we have that \(max\{2\pi n-\alpha,\theta_{J-min}\}<\theta_{J}<\theta_{J-max}\); where \(\theta_{J-min}\) and \(\theta_{J-max}\) are identified as follows, when \(\mu_{s}>\mu_{s-min}\) and \(\mu_{s}\approx\mu_{s-min}\), there are two solutions to the equation \(\mu(\chi,k_{m},\theta_{J})=\mu_{s}\), these solutions are precisely \(\theta_{J-min}\) and \(\theta_{J-max}\), while for values of \(\mu_{s}\) such that \(\mu_{s}\gg\mu_{s-min}\), there is only one solution that is \(\theta_{J-max}\). 3. Since there is a maximum value for \(\theta_{J}\), which was denoted by \(\theta_{J-max}\), then there is a minimum value for \(\alpha\), denoted by \(\alpha_{min}\) which satisfies \(\alpha_{min}=2\pi n-\theta_{J-max}\), this is true because \(\theta_{J}\) satisfies the inequality given in Eq. (27). As a specific application of the general results presented above, let us consider the case where \(\alpha=0\), which corresponds to the horizontal plane. From Eq. (27), we can see that the only valid value for the angle \(\theta_{J}\) in this case is \(\theta_{J}=2\pi n\). However, substituting this value into the left-hand side of Eq. (28) results in a divergence, indicating that to maintain roll without slipping, the coefficient of static friction \(\mu_{s}\) must approach infinity. Since such a value is physically impossible, we can conclude that the no-slip condition must be violated before the cylinder jumps, regardless of the values of \(\chi\) and \(k_{m}\). In the following discussion, we will consider the case where \(\alpha\neq 0\). ### Conditions for a JARM using initial conditions Our study in the previous subsection is interesting because it proves that there are restrictions for the possible allowed values of \(\alpha\), \(\mu_{s}\) and \(\theta_{J}\). Also note that to derive our previous results, we have not used any particular initial condition. In this subsection, we consider the following quite standard initial conditions: the cylinder is left from rest \(\dot{\theta}_{0}=0\) at the top of the ramp, with \(\theta_{0}=0\). As we are going to see, the setting of these initial conditions will further restrict the existence of a JARM. Using the conservation of energy, which is valid for pure rolling motion, together with the initial conditions \(\dot{\theta}_{0}=0\) and \(\theta_{0}=0\), from the jump condition given by Eq. 
(14), we can show that the angle \(\theta_{J}\) at the jump point is the root of the function: \[J(\theta)=-1+\frac{\left(\chi\cos\alpha-\chi\cos(\alpha+\theta)+\theta\sin \alpha\right)\cos\theta}{(2\pi\gamma+\cos\theta-1)\cos\alpha}, \tag{29}\] where the parameter \(\gamma\) is defined by \[\gamma=\frac{k_{m}^{2}+2k_{m}\chi-\chi^{2}-2\chi-1}{4\pi(k_{m}-1)\chi}. \tag{30}\] Let us provide conditions for the existence of roots \(\theta_{J}\) of the function \(J(\theta)\) with the restriction given by equation (27). It turns out that to guarantee the existence of a root \(\theta_{J}\in[2\pi n-\alpha,2\pi n]\) of the function \(J(\theta)\), we must have that \[J(2\pi n-\alpha) =\Big{(}\frac{\chi\cos\alpha+(2\pi n-\alpha)\sin\alpha-\chi}{ \cos\alpha+2\pi\gamma-1}-1\Big{)}<0,\quad\text{and}, \tag{31}\] \[J(2\pi n) =\Big{(}\frac{n\tan\alpha}{\gamma}-1\Big{)}>0. \tag{32}\] Note that through the equation: \(J(\theta_{J})=0\), it should be possible to express \(\theta_{J}\) as a function of the parameters \(\alpha\), \(\chi\), and \(k_{m}\), namely \[\theta_{J}=\theta_{n,J}(\alpha,\chi,k_{m}), \tag{33}\] where the subscript \(n\) explicitly indicates that \(\theta_{J}\) belongs to the interval \([2\pi n-\alpha,2\pi n]\). Using this equation (33), we can write the inequality (28) as follows \[\frac{\frac{-k_{m}^{2}+k_{m}+\chi^{2}}{(k_{m}-1)\chi^{2}}+\cos^{2}\big{(} \theta_{n,J}(\alpha,\chi,k_{m})\big{)}}{\sin\big{(}\theta_{n,J}(\alpha,\chi,k _{m})\big{)}\left[\frac{1}{\chi}+\cos\big{(}\theta_{n,J}(\alpha,\chi,k_{m}) \big{)}\right]}\leq\mu_{s}\,. \tag{34}\] We could think that for given values of \(n\), \(\alpha\), \(\chi\), \(k_{m}\) and \(\mu_{s}\) such that inequalities (31), (32) and (34) are true, it would be enough to guarantee that the cylinder will jump without first having slipped. However, note that the above analysis has been performed at the point where the cylinder jumps, namely the inequalities (31), (32) and (34) do not necessarily guarantee pure rolling motion for values of the angle \(\theta\) such that \(\theta<\theta_{J}\). Therefore we will need to impose more restrictions. Since extra conditions will come from the analysis of inequalities of the type given in (22) and (20), we need to write \(F_{x}\) and \(F_{y}\) for generic values of \(\theta\) such that \(\theta<\theta_{J}\). These components of the force are given by Eqs. (21). For the initial conditions \(\dot{\theta}_{0}=0\) and \(\theta_{0}=0\), using the equation of motion (11), and the conservation of the energy, we can express the angular velocity \(\dot{\theta}\) and the angular acceleration \(\ddot{\theta}\) in terms of the angle \(\theta\), so that the components of the force given in Eqs. 
(21) can be written as functions that depend explicitly on \(\theta\) \[F_{x}(\theta)= -\frac{gM}{4\chi(2\pi\gamma+\cos\theta-1)^{2}}\Big{[}\sin\alpha \big{(}16\pi^{2}\gamma^{2}\chi-16\pi\gamma\chi-4\pi\gamma-\chi^{2}+3\chi+2\] \[+2\pi\gamma\chi^{2}+2(-1+(-3+6\pi\gamma)\chi)\cos\theta+\chi\cos (2\theta)-2\theta\sin\theta-4\theta\chi\sin\theta\] \[+8\pi\gamma\theta\chi\sin\theta+\theta\chi\sin(2\theta)\big{)}+ \chi(2\cos\alpha(-1-2\chi+4\pi\gamma\chi+\chi\cos\theta)\sin\theta\] \[-(-2+4\pi\gamma+\chi)\sin(\alpha+\theta)-\chi((-3+6\pi\gamma)\sin (\alpha+2\theta)+\sin(\alpha+3\theta))\Big{]}, \tag{35}\] \[F_{y}(\theta)= \frac{gM}{4(2\pi\gamma+\cos\theta-1)^{2}}\Big{[}\cos\alpha\big{(} 6-16\pi\gamma+16\pi^{2}\gamma^{2}-4\chi+2\pi\gamma\chi\] \[+(-8-8\pi\gamma(-2+\chi)+7\chi)\cos\theta+(2-4\chi+6\pi\gamma\chi )\cos(2\theta)+\chi\cos(3\theta)\big{)}\] \[-\sin\alpha(3\theta+4(-1+2\pi\gamma)\theta\cos\theta+\theta\cos(2 \theta)-2\sin\theta+4\pi\gamma\sin\theta\] \[+3\chi\sin\theta+\sin(2\theta)-3\chi\sin(2\theta)+6\pi\gamma\chi \sin(2\theta)+\chi\sin(3\theta))\Big{]}. \tag{36}\] In order to guarantee a pure rolling motion throughout the entire path from \(\theta=0\) to the point where the cylinder jumps \(\theta=\theta_{J}\), for any value of \(\theta\) such that \(0<\theta<\theta_{J}\), the following inequalities must be satisfied \[F_{y}(\theta)>0,\quad\text{and}, \tag{37}\] \[\frac{|F_{x}(\theta)|}{F_{y}(\theta)}\leq\mu_{s}. \tag{38}\] Let us summarize the main result of this subsection. We have shown that using the initial conditions \(\theta_{0}=0\) and \(\dot{\theta}_{0}=0\), to have a JARM the parameters \(n\), \(\alpha\), \(\chi\), \(k_{m}\) and \(\mu_{s}\) need to be chosen such that the inequalities (31), (32), (34), (37) and (38) are satisfied. Therefore, this result imposes nontrivial restrictions on the possible allowed values of the parameters that appear in the equations of the problem. Regions in the parameter space (\(\alpha\),\(\chi\),\(k_{m}\)) for fixed values of \(n\) and \(\mu_{s}\) In order to have a JARM for the initial conditions \(\theta_{0}=0\) and \(\dot{\theta}_{0}=0\), in this subsection, we are going to show the region where the parameters must belong. Since essentially we have five parameters \(n\), \(\mu_{s}\), \(\alpha\), \(\chi\), and \(k_{m}\) to visualize the region defined by the inequalities (31), (32), (34), (37) and (38), we will need to fix at least two parameters so that the remaining three parameters can be visualized in a three-dimensional space. By fixing the value of the parameters \(n\) and \(\mu_{s}\), in Fig. 4 we present the regions in the parameter space \((\alpha,\chi,k_{m})\) where the inequalities mentioned in the previous paragraph are fulfilled, namely if we choose any set of parameters \(\alpha\), \(\chi\), and \(k_{m}\) that belong to these regions, we guarantee the occurrence of a JARM. Comparing Fig. 4(b) and Fig. 4(a), we see that the regions with \(\mu_{s}=1\) are bigger than the regions with \(\mu_{s}=0.7\). This result makes sense since for a larger static coefficient of friction, we expect that the cylinder has more chances to maintain pure rolling motion. For a fixed value of \(\mu_{s}\), we can also compare the regions obtained with different values of \(n\). For instance, from Fig. 4(a), for the value of \(\mu_{s}=0.7\), we observe that as the values of \(n\) increase, the corresponding regions are getting smaller. In general, for any value of \(\mu_{s}\), this pattern was observed. 
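The regions just described can also be mapped with a direct numerical scan of the jump-point conditions. The following sketch is illustrative only (the grid, the value of \(\mu_{s}\) and the scanned ranges are arbitrary choices): it locates \(\theta_{J}\) as the root of Eq. (29) on \([2\pi n-\alpha,2\pi n]\) and then tests the inequalities (31), (32) and (34); for a complete confirmation of a JARM, the path conditions (37) and (38) would still have to be verified for \(0<\theta<\theta_{J}\).

```python
import numpy as np
from scipy.optimize import brentq

def gamma(chi, km):                                        # Eq. (30)
    return (km**2 + 2*km*chi - chi**2 - 2*chi - 1) / (4*np.pi*(km - 1)*chi)

def J(theta, alpha, chi, km):                              # Eq. (29)
    num = (chi*np.cos(alpha) - chi*np.cos(alpha + theta) + theta*np.sin(alpha)) * np.cos(theta)
    return -1.0 + num / ((2*np.pi*gamma(chi, km) + np.cos(theta) - 1.0) * np.cos(alpha))

def mu(theta, chi, km):                                    # Eq. (28)
    num = (-km**2 + km + chi**2) / ((km - 1.0)*chi**2) + np.cos(theta)**2
    return num / (np.sin(theta) * (1.0/chi + np.cos(theta)))

def jarm_at_jump_point(alpha, chi, km, mu_s, n=1):
    """Inequalities (31), (32) and (34) for theta_0 = 0, dtheta_0 = 0;
    the path conditions (37)-(38) still have to be checked separately."""
    lo, hi = 2*np.pi*n - alpha, 2*np.pi*n
    if not (J(lo, alpha, chi, km) < 0.0 and J(hi, alpha, chi, km) > 0.0):
        return False
    theta_J = brentq(J, lo, hi, args=(alpha, chi, km))     # root of Eq. (29) in [2*pi*n - alpha, 2*pi*n]
    return mu(theta_J, chi, km) <= mu_s

# Coarse scan of the (alpha, chi, k_m) space; mu_s and the grid are illustrative choices.
mu_s = 0.7
for alpha in np.radians([20.0, 40.0, 60.0, 80.0]):
    admissible = [(chi, km)
                  for chi in np.linspace(0.05, 0.95, 19)
                  for km in np.linspace(0.0, 0.95, 20)
                  if km < 1.0 - chi and jarm_at_jump_point(alpha, chi, km, mu_s)]
    print(f"alpha = {np.degrees(alpha):.0f} deg: {len(admissible)} admissible (chi, k_m) pairs")
```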
The physical interpretation of this result is as follows, first let us remember that \(n\) represents the number of turns performed by the cylinder, so for a given value of \(\mu_{s}\), since \(n=1\) is the lowest possible value for \(n\), the greatest chance of having a JARM occurs before the cylinder completes a full turn, and the chance decreases every time we increase the value of \(n\). Regarding the inclination of the ramp given by the angle \(\alpha\) (where \(\pi/2\) is its maximum value), we observed that, inside the regions shown in Fig. 4, large values of \(\alpha\) are in correspondence with small values of \(\chi\). This type of correspondence makes sense since for a high inclination of the ramp, in order to avoid a slipping, the normal force should not oscillate to much, and that happens when the value of \(\chi\) is small. When the inclination of the ramp gets closer to the value of \(\pi/2\), we notice that the regions defined by the values of \(\chi\) and \(k_{m}\) becomes smaller, where in the limit case \(\alpha\rightarrow\pi/2\) the parameters \(\chi\) and \(k_{m}\) vanish. This is evidenced by the fact that when \(\alpha\) is close to \(\pi/2\), the region where a JARM occurs has the shape of a wedge where its vertex is given by the point \(\alpha=\pi/2\), \(\chi=0\) and \(k_{m}=0\). In order to interpret this result, let us remember that inside the region where a JARM happens, large values of \(\alpha\) are in correspondence with small values of \(\chi\), namely when the \(CM\) is close to the geometric center \(C\) of the cylinder, there are less oscillations of the normal force, which implies in less chances to have a slip. We conclude that high values of \(\alpha\) are allowed provided that the parameters \(\chi\) and \(k_{m}\) are small enough. Therefore, an unexpected result of having a JARM is obtained in the limit case where \(\alpha\rightarrow\pi/2\), \(\chi\to 0\) and \(k_{m}\to 0\). ## 4 Summary and discussion For a given value of the coefficient of static friction \(\mu_{s}\), we have shown that a general eccentric cylinder performs a jump starting from pure rolling motion, provided that the angle \(\alpha\), and the parameters \(\chi\) and \(k_{m}\) that characterize the cylinder, belong to a restricted region. If these parameters do not belong to the aforementioned region, the cylinder has to perform another type of motion, such as slipping with rolling, before the jump. In a future paper [22], we will analyze these other varieties of motion using our general cylinder. Another important issue that can be explored is related to the initial conditions. We have presented a general discussion about the existence of JARM that is independent of the initial conditions. As a result of this analysis, we show that the value of the angle \(\theta_{J}\) is restricted to some interval. To fix the value of \(\theta_{J}\), some particular initial conditions are needed. Therefore, we have used the somewhat standard initial conditions, \(\theta_{0}=0\) and \(\dot{\theta}_{0}=0\). It will be an interesting and non-trivial problem to analyze how the regions shown in Fig. 4 change when other initial conditions are used. For example, we can see if these regions increase or decrease, which would physically imply a greater or lesser chance of having JARM. What would be the optimal initial conditions that allow a greater chance of having JARM? 
Finally, since only the slipping motion before the jump has been observed in experiments and theoretical studies carried out to date, it has been a common conclusion that the no-slip conditions must be violated before the jump. Indeed, when \(\alpha=0\), on general grounds, we have definitely shown that this last conclusion is true. However, in the case where \(\alpha\neq 0\), there is a chance to have JARM. Therefore, to empirically test the occurrence of JARM, it would be important to set up an experiment that takes into account appropriate values for \(\mu_{s}\) and the parameters \(\chi\) and \(k_{m}\). ## Acknowledgements We would like to thank Dominique Sugny, Gabriela and Pavao Mardesic for useful discussions.
2310.09518
Instruction Tuning with Human Curriculum
In this work, we (1) introduce Curriculum Instruction Tuning, (2) explore the potential advantages of employing diverse curriculum strategies, and (3) delineate a synthetic instruction-response generation framework that complements our theoretical approach. Distinct from existing instruction tuning datasets, our generation pipeline is systematically structured to emulate the sequential and orderly characteristics of human learning. Additionally, we describe a methodology for generating instruction-response datasets that extensively span the various stages of human education, from middle school through the graduate level, utilizing educational subject catalogs. Before training, we meticulously organize the instruction data to ensure that questions escalate in difficulty regarding (A) the subject matter and (B) the intricacy of the instructions. The findings of our study reveal that substantial improvements in performance can be achieved through the mere application of curriculum ordering to instruction data (achieving gains of +4.76 on TruthfulQA, +2.98 on MMLU, +2.8 on OpenbookQA, and +1.28 on ARC-hard) compared to random shuffling. This enhancement is achieved without incurring additional computational expenses. Through comprehensive experimentation, we observe that the advantages of our proposed method are consistently evident across nine benchmarks.
Bruce W. Lee, Hyunsoo Cho, Kang Min Yoo
2023-10-14T07:16:08Z
http://arxiv.org/abs/2310.09518v4
# Instruction Tuning with Human Curriculum ###### Abstract Recent work for instruction tuning in large language models (LLMs) mainly addresses the training of maximally diverse instruction-response pairs while overlooking the learning aspect. This paper explores the potential benefits of applying a structured cognitive learning approach to instruction tuning in contemporary LLMs. In contrast to the existing approach of randomizing the order of instruction sets, we propose a highly structured synthetic dataset that mimics the progressive and organized nature of human education. We curate our dataset by aligning it with educational frameworks, incorporating meta information including its topic and cognitive rigor level for each sample. Our dataset covers comprehensive fine-grained topics spanning diverse educational stages (from middle school to graduate school) with various questions for each topic to enhance conceptual depth using Bloom's taxonomy--a classification framework distinguishing various levels of human cognition for each concept. The results demonstrate that this cognitively rigorous training approach yields significant performance enhancements -- \(+3.06\) on the MMLU benchmark and an additional \(+1.28\) on AI2 Reasoning Challenge (ARC) -- compared to conventional randomized training, all while avoiding additional computational costs. This research highlights the potential of leveraging human learning principles to enhance the capabilities of language models in comprehending and responding to complex instructions and tasks. ## 1 Introduction In contemporary times, state-of-the-art instruction-following models like ChatGPT and GPT-4 (OpenAI, 2023) have drawn attention owing to their unparalleled proficiency and versatility. A notable advancement over previous generation large language models (LLMs), like GPT-3 (Brown et al., 2020), is their impressive capability to adeptly comprehend and act upon human instructions, where this _alignment_ is attributed to the additional instruction tuning process (Wei et al., 2021). As these models continue to display progress, numerous research studies have offered many intriguing insights on instruction tuning through their endeavors to make models follow more complex instructions and enhance performance across a broad spectrum of tasks. For instance, various studies emphasize the significant influence of instruction data quality (Touvron et al., 2023; Zhou et al., 2023) and the incorporation of diverse instruction formats (Wang et al., 2023; Xu et al., 2023) on overall performance. Furthermore, including step-by-step reasoning (Wei et al., 2022) within the responses has been demonstrated to improve performance and elevate the reasoning ability of the language model (Mukherjee et al., 2023). While recent research has offered valuable insights into optimizing data formats, exploring how to curate and train such data in a more grounded, trackable manner remains elusive, often relying on randomized or undirected diversity as the prevailing norm. Specifically, ensuring efficiency in the instruction fine-tuning process is of utmost importance, as extended instruction fine-tuning can erode the inherent capability of the LLM, e.g., through an alignment tax (Askell et al., 2021; Ouyang et al., 2022), or can favor memorization over generalization.
Meanwhile, since the architectures of neural network innately emulates the human brain (Han et al., 2021), adopting a learning process analogous to human education -- a highly organized approach, progressively refined and empirically proven effective over centuries -- constitutes a logically coherent and methodologically robust learning strategy for the machine as well (Bengio et al., 2009). While many studies within the realm of curriculum learning have demonstrated the efficacy of this hypothesis in reaching faster convergence and finding better local minima, these investigations have predominantly offered a nuanced _micro view_, mostly confined to a specific task. To draw an educational analogy, such studies are akin to observing how students behave when learning a particular subject within the vast curricula. Venturing beyond the niche perspective, our study aims to explore a comprehensive, holistic viewpoint on curriculum learning in the knowledge domain. Specifically, we conceptualize the language model as a high school student about to progressively acquire intellectual knowledge from educational institutions such as schools and universities over the coming decades. And attempt to guide the student by the fundamental principle of learning _from simple to complex_(Sweller, 1988; Bloom et al., 1956) based on two primary distinct dimensions: (1) Educational Stage: sequentially mastering elementary to intricate concepts and (2) Cognitive Hierarchy: gradually deepening the understanding of each concept. For instance, in mathematics, humans initiate the learning process with the fundamental concept of addition, gradually progressing to more complex concepts like subtraction and multiplication by exploiting previously learned concepts to ease the learning (Bengio et al., 2009). Furthermore, when humans learn multiplication, the initial stage usually involves rote memorization of the _times tables_, progressively deepening the comprehension of the concept to the extent where we expand its application to real-world situations. This cognitive process enables the human intellect to traverse diverse fields, aligning _massively multi-domain knowledge_. To systematically explore the potential merits of the interplay between educational curriculum and human cognitive process, we curated a massive synthetic knowledge instruction dataset and its training method called Corgi (Cognitively rigorous instructions). As illustrated in Figure 1, we initially establish a continuous progression across educational stages by integrating concrete educational frameworks provided by international secondary education curricula (i.e., Cambridge IGCSE) and a combination of several university catalogs. Subsequently, using a teacher model like ChatGPT, we extracted various topics covered in every course at each educational level. Based on the learning objectives in Bloom's taxonomy (Bloom et al., 1956), we crafted a comprehensive set of questions for each topic, with varying degrees of cognitive level. A standout feature of our dataset is its rich meta-information for each data point, facilitating the generation of coherent and contextually meaningful training data sequences. We found compelling empirical evidence from Corgi that our cognitive progressive training inspired by the human curriculum yields significant advantages over randomized training. Notably, when Corgi is subjected to random training, its performance is comparable to other instruction Figure 1: Overview of our educational framework. 
We create a dataset based on a continuum from secondary school to grad school, extracting multiple concepts from each course. For every concept, we formulate 19 questions of varied cognitive levels using Bloom’s taxonomy. datasets such as WizardLM (Xu et al., 2023) and Vicuna (Chiang et al., 2023). However, by simply optimizing the sequence of learning data, we observed a roughly 3 points improvement in the knowledge benchmark (i.e., MMLU), surpassing both WizardLM and Vicuna with a considerably smaller dataset size (66K). Moreover, this improvement is not limited to the knowledge domain and extends beyond the broader benchmarks, including +1.73 in commonsense reasoning benchmarks (i.e., OpenBookQA, ARC, PIQA, CommonsenseQA) and +2.37 in language understanding (i.e., HellaSwag, Lambada). ## 2 Corgi Corgi is a structured educational model that mimics the educational journey of a student. In this section, we delve into the detailed process of constructing our dataset and efficient training method inspired by the human knowledge acquisition process. ### Dataset Construction The primary objectives of our dataset are: (1) to encompass the full coverage of knowledge students acquire through their curriculum and (2) to store detailed meta information for each data, enabling the formation of meaningful order. However, constructing such a broad scope of knowledge dataset from scratch can be prohibitively costly or nearly impossible. To overcome this hurdle, we propose an automatic approach to generate synthetic data by utilizing a teacher language model (i.e., ChatGPT). Furthermore, we also utilize real-world educational curricula, such as university catalogs and the Cambridge IGCSE curriculum (refer to Appendix C for more information), as a foundational source when generating synthetic datasets. These curricula cover 45 distinct subjects and provide rich metadata, including educational stage (i.e., secondary, undergraduate, or graduate), subject (e.g., biology, math, etc.), course, and syllabus (i.e., course description), ensuring a broad spectrum of knowledge coverage as well. At a high level, the process of constructing our instruction dataset consists of three steps. (See Appendix B for a graphical illustration with examples.) **Step 1. Generate Concepts**. This step aims to extract multiple essential academic concepts for each course based on its syllabus. However, the initial syllabus often contains unnecessary details, such as administrative jargon and scheduling, with limited content about the actual coverage of the course. Accordingly, we employ a specialized refinement prompt to convert these descriptions into more substantive, textbook-like variants. Using these enriched versions as a source, we extract fine-grained academically meaningful concepts through a concept-generation prompt (specific prompts are stipulated in Appendix E). To achieve maximal diversity and distinction among the selected concepts, we harvested an extensive array of fine-grained concepts and subsequently eliminated any redundancies. Specifically, we employed semantic deduplication utilizing a cosine similarity threshold of 0.67 using the sentence-transformers library (Reimers and Gurevych, 2019) model _all-MiniLM-L12-v2_. As a result, we amassed a total of 5.6K fine-grained concepts in 1.8K courses in 45 subjects. **Step 2. Collecting Instructions**. 
On top of previously collected concepts, we generate actual instruction data based on a systematic educational learning object called Bloom's taxonomy (Bloom et al., 1956; Krathwohl, 2002), which serves as a seminal guide for many educators. This taxonomy is a hierarchical arrangement of six cognitive processes that can be visualized as a pyramid. The lower-order layers consist of relatively simple thinking skills (i.e., Remember, Understand, and Apply), and the upper layers represent more complex cognitive processes (i.e., Analyze, Evaluate, and Create). The progression through these levels ensures that learners gather information and learn how to use, analyze, and even create original knowledge. Exploiting this concept, we produce diverse data for a single concept by giving a detailed object from each cognitive level as instructions to a teacher language model during data generation. Namely, we first build a pre-defined 19 plug-and-play templates leveraging the definition and objectives of the three lower cognitive hierarchies: Remember, Understand, and Apply, as outlined in the original paper (Bloom et al., 1956). (Appendix D summarizes the actual templates with corresponding original definitions.) We focus solely on these three levels because the higher cognitive levels often produce questions with no clear answers and contain biased or subjective content. Utilizing these modular templates and 5.6K concepts from the previous step, we produce 107K cognitive hierarchy datasets. Each query incorporates a random system message (see Appendix E) to elicit comprehensive explanations or rationale for the answer following previous work (Mukherjee et al., 2023). **Step 3. Filter Data**. It is important to note that our dataset is synthetic and relies heavily on the teacher language model. This innate dependence occasionally results in inconsistency in the question-answer pairs, which could drastically degrade the performance (Touvron et al., 2023; Zhou et al., 2023). To ensure the quality of our dataset, we employ a third-party tool, Contriever (Izacard et al., 2022), to filter out low-quality data. For each data instance, we gather three distinct passages sourced from Wikipedia, comprising a precise span of 256 words. We then assess the relevance between experts and a question using a retrieval-checking prompt, and only those that meet the relevance criteria are included in the final dataset. We also applied some basic string-match rules to remove refusal data containing particular text sequences, like 'As an AI assistant...'. ### Curriculum Instruction Tuning In sync with our richly annotated dataset, which embodies meta-details such as subject, course, concept, and cognitive hierarchy, we introduce a rigorous cognitive training method to inject knowledge from the dataset efficiently. The primary philosophy of our training paradigm is to gradually step towards a genuine understanding of various concepts by following the hierarchical progression in Bloom's taxonomy. When only a single concept is to be learned, one can linearly follow this hierarchy. Yet, as the breadth of knowledge increases, as in our case, there are numerous design choices in determining how to assort these multiple concepts efficiently. One straightforward way is blocking, which stacks each hierarchical block for each subject. (See Figure 2.) 
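Because every sample in the dataset carries its subject, concept, and Bloom-level metadata, such an arrangement can be materialized with a simple sort. The minimal sketch below is illustrative only (the field names and toy records are hypothetical, not the released data format): blocking amounts to grouping by subject and ordering by cognitive level within each subject.

```python
# Minimal sketch of the "blocking" arrangement: stack all of a subject's samples
# together, ordered by Bloom level inside each subject (field names are hypothetical).
BLOOM_RANK = {"remember": 0, "understand": 1, "apply": 2}

def blocking_order(samples):
    """samples: list of dicts carrying 'subject' and 'bloom_level' metadata."""
    return sorted(samples, key=lambda s: (s["subject"], BLOOM_RANK[s["bloom_level"]]))

toy = [
    {"subject": "math",    "bloom_level": "apply",      "instruction": "..."},
    {"subject": "biology", "bloom_level": "remember",   "instruction": "..."},
    {"subject": "math",    "bloom_level": "remember",   "instruction": "..."},
    {"subject": "biology", "bloom_level": "understand", "instruction": "..."},
]
for s in blocking_order(toy):
    print(s["subject"], s["bloom_level"])
```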
However, numerous studies suggest that interleaving practice, a strategy of mixing different topics, is more helpful to students to incorporate existing knowledge and skills with new ones. Specifically, interleaving helps mitigate the risk of cognitive decay (Luo et al., 2023b), a notable drawback of blocking where previously learned concepts are set aside for long periods. Intriguingly, this phenomenon is also the case in machine learning and is commonly known as catastrophic forgetting (McCloskey & Cohen, 1989). To make the best of the two worlds, our training curriculum traverses a global progression of the cognitive load from Bloom's taxonomy while interleaving different subjects to reinforce retention and understanding. As discussed in the subsequent sections, the proposed arrangement displays superiority on various benchmarks compared to other alternatives, revealing tendencies similar to reference experiments on humans (Taylor & Rohrer, 2010). ## 3 Experiments ### Setup This section assesses the performance of Corgi with other open-sourced models across various knowledge-related benchmarks closely aligned with our data domain. We highlight here the most important components of our experimental setup. Figure 2: A comparison of two training sequences. Small blocks (e.g., H1, M1) stand for fine-grained concepts per subject. _Blocking_ naively stacks hierarchical blocks per subject, while _interleaving_ cyclically revisits each subject, adhering to the cognitive hierarchy from Bloom’s taxonomy. **Baselines.** We adopt LLaMA 2 13B models as the primary backbone in the following main experiment. We subsequently instruction-tuned 5 epochs on our dataset, both curriculum-based and non-curriculum-based (naive stacking - blocking) approaches, to take a closer analysis of our framework on two dimensions: the data-centric and curriculum-centric aspects. We selected Vicuna v1.5 (Chiang et al., 2023) and WizardLM v1.2 (Xu et al., 2023) for other competing baselines. These models are also instruction-tuned on LLaMA 2 with different data collection paradigms. Specifically, Vicuna sources a diverse array of real-world user queries from a publicly accessible Chat-GPT prompt-sharing platform, while WizardLM utilizes an innovative method termed _Evol-Instruct_, which generates synthetic instructions by formulating progressively challenging questions. **Benchmarks.** We evaluated the aforementioned baselines across six different benchmarks: MMLU, ARC, PIQA, CommonsenseQA, OpenbookQA, and HellaSwag1. Among these benchmarks, MMLU is closely aligned with our data since MMLU assesses the extensive coverage of educational content, spanning from secondary school to graduate levels, across diverse subjects. Footnote 1: The detailed descriptions and references of each dataset are stipulated in Appendix A. ### Results Table 1 reports the performance of Corgi and other competing methods on 6 benchmarks, where Corgi generally outperforms others with a considerably smaller dataset size. Our observations indicate that interleaving, which involves a global progression of cognitive difficulty while revisiting diverse subjects, consistently outperforms blocking, which simply stacks subjects on top of one another in a straightforward manner. Overall, the order in which one presents learning material during instruction tuning can make a big difference in the final performance. 
When one employs a suitable curriculum, it can improve performance on most major benchmarks, including knowledge, commonsense reasoning, and language understanding (this is further evidenced in Figure 4). In our experiments, Corgi demonstrated notable improvements when subjected to our interleaved curriculum training (\(\Delta\)MMLU \(+0.64\)). ## 4 Analysis ### Analysis on Curriculum When training towards multi-domain knowledge, there is more than one way to give structure to the overall instruction tuning process. In this section, we conduct a comparative analysis of various curricula with additional training strategies. From our experiments, we verified two intriguing observations: **1. Not all curricula guarantee transferability to machine training** and **2. Global curricula give large benefits, while local curricula can mislead.** We separate various curricula into two branches, global curricula and local curricula, based on their progression of conceptual and cognitive complexity. To illustrate, the **interleaving** strategy _globally_ steps the cognitive load according to Bloom's taxonomy, whereas the **blocking** strategy _locally_ advances from lower to higher cognitive loads, emphasizing the internal organization of concepts within a subject (Gibbons, 2002; Vygotsky, 1978). Building on the previously introduced strategies, Figure 3 presents two additional sorting strategies, also motivated by educational paradigms: **Clustering** is similar to blocking but differs in that it facilitates the "deep learning" (Warburton, 2003) of a concept while ignoring the intra-subject dependency of concepts. **Spiral** is designed to revisit subjects and concepts at fluctuating cognitive load levels in a repetitive manner (Masters & Gibbs, 2007). In Figure 4, we further establish that the final performance of an LLM can be significantly impacted by the order in which one presents instruction tuning data. However, this does not mean that any educational science-inspired structured learning paradigm benefits instruction tuning. Depending on the global batch size, the number of difficulty levels available per concept, and the number of concepts per subject (or any other large semantic category), it is very likely that most local progressions or structures are destroyed when employing a larger global batch size. This results in biased training batches. Such biased training batches are even worse than naive random shuffling, because they often display a more volatile learning trajectory. Figure 4: **Local curriculum diminishes performance improvement. The figure shows a macroscopic, averaged performance comparison of several benchmark improvements with respect to the base model (LLaMA 2 13B) performance. _World Knowledge:_ MMLU, TruthfulQA, TriviaQA, _Commonsense Reasoning:_ OpenBookQA, ARC, PIQA, CommonsenseQA, _Language Understanding:_ HellaSwag, and Lambada. A full breakdown of this chart is given in Appendix H.** Figure 3: (Continued from Figure 2) **More examples of local progressions**. A comparison of clustering and spiral training sequences. The _clustering_ stacks hierarchical blocks for each concept, while the _spiral_ cyclically revisits each concept and alternates cognitive difficulty from Bloom’s taxonomy.
This assertion is substantiated by Figure 5, which shows how a global curriculum, which maintains structure under most larger batch sizes while ensuring that all subjects are covered in every training batch, successfully pushes performance above the random shuffling baseline. Another noteworthy observation is that the impact of the curriculum extends beyond our target domain (i.e., knowledge), and often improves reasoning ability. Recent studies have demonstrated that models trained with specific datasets often experience performance degradation when extrapolated beyond that domain. Specifically, Wang et al. (2023) reports that many recent instruction tuning datasets like Supernatural Instructions (Wang et al., 2022) seem to show a performance trade-off between benchmarks, such as MMLU and ARC, of which the latter additionally requires reasoning ability to derive correct answers. While we observe a similar tendency in Vicuna, WizardLM, and randomly trained Corgi -- all show mixed results on MMLU, ARC, OpenBookQA, or HellaSwag -- our curriculum-based Corgi notably stands apart and does not suffer from this trade-off. ### Ablation study on LLaMA 1 In this section, we conduct ablation experiments on LLaMA 1 to analyze the impact of specific components. As displayed in Figure 6, our dataset demonstrates scalability, showing better performance as the data quantity grows. Moreover, our data filtering scheme yields superior performance with a smaller volume of data, which aligns with previous research (Zhou et al., 2023; Touvron et al., 2023) emphasizing the significance of data quality. Another key observation is that the **negative impacts of noisy data become more pronounced as the performance gap between the teacher and student models narrows**. For instance, in Figure 6, we can clearly see that models like Vicuna, WizardLM, and Corgi consistently show significant performance improvements across various benchmarks when trained with randomized data from LLaMA 1. However, the situation changes when we move to LLaMA 2, even with additional training on a larger dataset. The gains start to diminish and, in some cases, reverse. Recent literature has proposed data filtering as a viable solution to mitigate this phenomenon, as demonstrated by studies such as Alpagasus (Chen et al., 2023), TEGIT (Chen et al., 2023), and InstructionGPT-4 (Wei et al., 2023). Our observations align with this trend as well. Filtering out poor-quality data points yields significant benefits across different data sizes in LLaMA 1 (e.g., \(\Delta\) MMLU +1.7: 107K \(\xrightarrow{\text{filters}}\) 66K; \(\Delta\) MMLU +1.9: 60K \(\xrightarrow{\text{filters}}\) 37K; \(\Delta\) MMLU +1.7: 30K \(\xrightarrow{\text{filters}}\) 15K). However, our research suggests that employing a curriculum-based training approach can be a promising solution as well. This approach demonstrates robust and resilient benefits over randomized training when dealing with noisy training datasets (Wu et al., 2020).
More specifically, we observe that several benchmarks, which initially show decreased performance after randomly shuffled instruction tuning, exhibit substantial performance improvements after curriculum-based instruction tuning (\(\Delta\)MMLU: \(-0.31\rightarrow+2.75\), \(\Delta\)PIQA: \(-0.55\rightarrow+1.14\), \(\Delta\)HellaSwag: \(-1.49\rightarrow+2.18\)). ## 5 Background **Cognitively understanding human learning processes.** One of the basic questions facing educators has always been "Where do we begin to improve human thinking?" (Houghton, 1997). Among diverse learning theories, Bloom's Taxonomy (Bloom et al., 1956) is a well-cited approach, categorizing learning processes into six hierarchical stages, ranging from simple to complex and concrete to abstract: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating (Krathwohl, 2002). Its effectiveness spans diverse subjects, from Math to Political Sciences (Shorser, 1999; Dickie, 1994; Su et al., 2004; Mulcare & Shweded, 2017). Cognitive Load Theory underscores the significance of managing mental exertion during learning. The theory originated in the 1980s and underwent substantial development and expansion in the 1990s, serving as a major theory for classroom instructional design (Paas et al., 2003; Sweller et al., 1998). With the rise of e-learning in the 2000s, the theory was again widely applied to designing effective instructional strategies (Kirschner et al., 2009; Kalyuga, 2007; Grunwald & Corsbie-Massay, 2006). A major effort was devoted to finding strategies to manage cognitive load in a remote setup where learners communicate with teachers through pre-made instructions. Notably, Clark et al. (2011) suggests that learning content should be delivered in alternating (diverse) formats but with sequentially increasing difficulty. **Benefiting neural networks with human learning processes.** Machine learning can benefit from adopting human-centric approaches. Curriculum learning, for instance, stands as a research area that arranges training data in a meaningful sequence, showcasing its potential to expedite convergence while enhancing generalization (Bengio et al., 2009; Saglietti et al., 2022; Wang et al., 2021; Xu et al., 2020; Yang et al., 2019; Shi et al., 2015; Krueger & Dayan, 2009; Elman, 1993) -- an attribute of great value for fine-tuning LLMs. This synthesis of human cognition and machine algorithms remains a compelling area of study (Han et al., 2021; Shiffrin & Mitchell, 2023; Dasgupta et al., 2022). **Instruction tuning on LLMs.** This refers to optimizing pre-trained models to handle diverse natural language inquiries (Wang et al., 2023b). Methods often involve supervised learning from instruction-response pairs (Taori et al., 2023; Longpre et al., 2023; Li et al., 2023; Chen et al., 2023b; Li et al., 2023a).
Consequently, the methodology for generating or collecting this instruction data plays a significant role in the LLM's final performance (Lu et al., 2023; Wang et al., 2023a; Wan et al., 2023). While some research has focused on enhancing general capabilities like reasoning or knowledge (Mukherjee et al., 2023; Lee et al., 2023; Wei et al., 2023b; Ghosal et al., 2023), other work has focused on instruction tuning for domain-specific use cases (Qin et al., 2023; Xie et al., 2023; Muennighoff et al., 2023; Li et al., 2023b; Luo et al., 2023a). Though instruction-tuning research has made remarkable progress, cognitively motivated work remains rare, with some exceptions (Itzhak et al., 2023; Yu et al., 2023; Gao et al., 2023). ## 6 Conclusion We introduce Corgi, a novel methodology for instruction tuning of large language models that employs a structured, pedagogically inspired dataset. Our methodology not only surpasses existing baselines on both reasoning and knowledge-based tasks but also achieves this without escalating computational demands. Moreover, the observed efficacy of interleaved sorting and two-tier filtering underlines the crucial role of structured, high-quality data in model performance. Collectively, these findings illuminate the potential of leveraging educational paradigms to elevate the capabilities of machine learning models.
2310.11357
A Pseudo-likelihood Approach to Under-5 Mortality Estimation
Accurate and precise estimates of the under-5 mortality rate (U5MR) are an important health summary for countries. However, full survival curves allow us to better understand the pattern of mortality in children under five. Modern demographic methods for estimating a full mortality schedule for children have been developed for countries with good vital registration and reliable census data, but perform poorly in many low- and middle-income countries (LMICs). In these countries, the need to utilize nationally representative surveys to estimate the U5MR requires additional care to mitigate potential biases in survey data, acknowledge the survey design, and handle the usual characteristics of survival data, for example, censoring and truncation. In this paper, we develop parametric and non-parametric pseudo-likelihood approaches to estimating child mortality across calendar time from complex survey data. We show that the parametric approach is particularly useful in scenarios where data are sparse and parsimonious models allow efficient estimation. We compare a variety of parametric models to two existing methods for obtaining a full survival curve for children under the age of 5, and argue that a parametric pseudo-likelihood approach is advantageous in LMICs. We apply our proposed approaches to survey data from four LMICs.
Taylor Okonek, Katherine Wilson, Jon Wakefield
2023-10-17T15:45:02Z
http://arxiv.org/abs/2310.11357v2
# A Pseudo-likelihood Approach to Under-5 Mortality Estimation ###### Abstract Accurate and precise estimates of under-5 mortality rates (U5MR) are an important health summary for countries. Full survival curves are additionally of interest to better understand the pattern of mortality in children under 5. Modern demographic methods for estimating a full mortality schedule for children have been developed for countries with good vital registration and reliable census data, but perform poorly in many low- and middle-income countries. In these countries, the need to utilize nationally representative surveys to estimate U5MR requires additional statistical care to mitigate potential biases in survey data, acknowledge the survey design, and handle aspects of survival data (i.e., censoring and truncation). In this paper, we develop parametric and non-parametric pseudo-likelihood approaches to estimating under-5 mortality across time from complex survey data. We argue that the parametric approach is particularly useful in scenarios where data are sparse and estimation may require stronger assumptions. The nonparametric approach provides an aid to model validation. We compare a variety of parametric models to three existing methods for obtaining a full survival curve for children under the age of 5, and argue that a parametric pseudo-likelihood approach is advantageous in low- and middle-income countries. We apply our proposed approaches to survey data from Burkina Faso, Malawi, Senegal, and Namibia. All code for fitting the models described in this paper is available in the R package pssst. **Keywords:** Survival analysis, Pseudo-likelihood, Age patterns of mortality, Under-5 mortality, Neonatal mortality **Significance Statement:** We propose a novel approach to estimating full age patterns of mortality for children under the age of 5 across time using continuous, parametric survival curves. We frame the demographic distinction of period versus cohort estimates of mortality in a survival analysis framework using period as a time-varying covariate, and draw on methods from the survey statistics literature to construct an approach designed to accurately estimate child mortality in low- and middle-income countries, where demographic surveys make up the majority of information available on child deaths. We detail the improvements our approach makes over existing demographic methods in a low- and middle-income country setting with regard to theoretical properties, and apply our approach to four countries, demonstrating the improved accuracy of our method with certain parametric model choices. ## 1 Introduction Estimates of child mortality rates for specific age groups at a national and subnational level provide important information on the health of a country and inform targeted public health interventions. Historically, estimates of interest have been the neonatal mortality rate (NMR: probability of dying before age 1 month), the infant mortality rate (IMR: probability of dying before age 1 year), and the under-5 mortality rate (U5MR: probability of dying before the age of 5). While these summaries give us a rough picture of the pattern of mortality under the age of 5, they do not constitute a _complete_ pattern of mortality before the age of 5. 
As such, producing a full, continuous survival curve for children under the age of 5 is of interest for informing targeted interventions (Verhulst et al., 2022; Guillot et al., 2022), as well as for better quantifying the differences in mortality patterns between countries. Modern demographic methods for estimating a full mortality schedule for children under the age of 5 have been developed in a high-income country setting that assumes vital registration information is readily available (Guillot et al., 2022; Eilerts et al., 2021; Verhulst et al., 2022). One such method is the log-quad model (Guillot et al., 2022), which uses the Human Mortality Database (HMD) (Barbieri et al., 2015) to obtain a continuous curve quantifying the relationship between age and the (log) probability of dying before a given age. This approach uses the HMD to obtain parameter values, which are plugged into the log-quad model's formula to obtain full, continuous curves. Guillot et al. (2022) note that the patterns of mortality that are estimated from the model are importantly different from the observed data in low- and middle-income countries (LMICs). Eilerts et al. (2021) and Verhulst et al. (2022) note that sub-Saharan African and south Asian countries typically observe higher levels of child mortality rates (CMR: the probability of dying between ages 1 and 5 given survival to age 1) for a given IMR when compared to high-income countries. Verhulst et al. (2022) call this a "very late" pattern of under-5 mortality. Another popular method that makes use of HMD life tables is the Singular Value Decomposition (SVD) approach in Clark (2019). Here, the information from HMD life tables is compressed into three or four principal components that summarize observed full mortality schedules over an entire lifetime. Although used specifically with the HMD in Clark (2019) and intended to produce all-age mortality schedules at a yearly scale, this approach can be used more generally with other life tables (see Alexander et al. (2017), for example) or to produce child mortality estimates in continuous time under the assumption of constant mortality hazards within years. In addition to different patterns of under-5 mortality in LMICs compared to high-income countries, the data sources available in LMICs typically differ from those in high-income countries. In most high-income countries, vital registration and reliable census information are readily available, hence the mortality data are more granular and potentially subject to fewer biases than those in LMICs. In countries without vital registration data or reliable census information, we instead rely on nationally representative surveys. In many LMICs, the Demographic and Health Surveys (DHS) are considered the most reliable source of information for such outcomes, and they are conducted with reasonably high frequency (the aim is every 5 years). Survey-weighted estimates of health outcomes with variance estimates that account for the survey design are preferred when there is enough data to obtain such estimates with high precision. As noted in Hill (1995), Lawn et al. (2008), and Guillot et al. (2022), surveys such as the DHS may be subject to biases in addition to other data limitations. One example of bias is age-heaping, where more children are recorded as having died at particular ages than is truly the case. In DHS surveys, this often occurs at age 12 months (see Appendix A.2 for examples).
Additionally, the ages at death of some children are not observed exactly (i.e., censored). This, combined with the need to appropriately account for survey weights and potential biases from age-heaping, forms a set of statistical modeling challenges that are unique to surveys in LMICs; these challenges have not yet been addressed simultaneously in the literature. An additional challenge specific to U5MR estimation is distinguishing between cohort and period estimates of mortality. When estimating U5MR, we typically want to obtain period-specific estimates rather than cohort-specific estimates, as the most recent cohort-specific estimates of U5MR we could obtain will always be five years in the past. Period estimates are for "synthetic" children, where we envisage a cohort of children that live their first five years of life in a single time period. This is opposed to a real cohort of children who are born in one time period and move through time (periods) as they age. The concept of synthetic children allows us to provide estimates of demographic indicators such as life expectancy or U5MR that are a reasonable summary of the current state of the mortality pattern. In practice, when estimating a demographic indicator for synthetic people, we consider what a real person would contribute to each period _as though_ they were a synthetic person. As detailed in Section 2, in a survival analysis framework this corresponds to treating time period as a time-varying covariate. While existing methods have made use of this approach in a discrete survival setting (Mercer et al., 2015), none have _explicitly_ formulated the problem as that of a time-varying covariate in continuous time. In this paper we aim to (1) reframe the production of period estimates for under-5 mortality rates in LMICs using continuous survival models for mortality with a time-varying covariate representation, accounting for potential censoring, and (2) propose a pseudo-likelihood estimate of full mortality schedules for children under the age of 5 in LMICs that takes full advantage of the granularity of the data available while accounting for both the survey design and potential biases in the surveys. Rather than assume a model based on data from high-income countries, we instead work with DHS data directly to obtain either a parametric or a nonparametric estimate of the survival curve in LMICs at a national level. These methods are easily extended to alternative parametric distributions as well as subnational models. ## 2 Survival framework To begin, we define some notation that is common in the demography and statistics literature, and is used throughout this paper. Mortality is typically estimated as either a rate or a probability. The under-5 mortality _rate_ (U5MR) is the _probability_ that a child dies before the age of 5. Three probabilities that are often of demographic interest are: U5MR, the probability of dying before the age of 5 years; IMR, the probability of dying before the age of 1 year; and NMR, the probability of dying in the first month of life. Let \(X\) be survival time. We denote the probability that a child died between the ages of \(n\) and \(x+n\), given that they survived until at least age \(n\), as \({}_{x}q_{n}\,=\,\Pr(X<x+n\mid X>n)\). With age given in months, as we do throughout, we therefore denote U5MR as \({}_{60}q_{0}\), IMR as \({}_{12}q_{0}\), and NMR as \({}_{1}q_{0}\). We treat mortality as a time-to-event outcome in a survival framework.
In this framework our estimand of interest is the survival curve \(S(x)\), or the probability of surviving to at least age \(x\). We can directly translate quantities \({}_{x}q_{0}\) to a survival curve via \(S(x)\,=\,1-{}_{x}q_{0}\), and \({}_{x}q_{n}\) can also be computed from conditional probabilities. An important distinction in demography, that again has a survival analysis flavor, is period versus cohort estimates. The age-period-cohort distinction is subtle but well-documented (see Carstensen (2007), for example). Importantly, the subset of data used to estimate cohort and period estimates consequently differs. In Figure 1, we illustrate this difference. For simplicity, we assume all children are born on January first of a given year. We see that the data used to obtain a cohort estimate of U5MR for the cohort born in 2000 consists of only children born in the year 2000. Note that we will always be five years behind schedule in terms of estimate production because we need to observe the full, first five years of a cohort before calculating cohort U5MR. The data used to obtain a period estimate of U5MR for the year 2004 contains data from five distinct cohorts: cohorts 2000, 2001, 2002, 2003, and 2004, as seen on the right-hand side of Figure 1. Of note, when obtaining cohort estimates, both age and time are synonymous, whereas when obtaining period estimates, age and time are distinct. This is because the age of a synthetic child is not directly tied to time as we observe it. Therefore, we let \({}_{x}q_{n,p}\) vary by period \(p\) in addition to the age range \([n,x+n)\). For example, we may write the probability a child dies between the ages of 1 and 2 in the year \(2001\) as \({}_{12}q_{12,2001}\), where the deaths that inform this estimate must come from the cohort of children born in \(2000\) who survive until at least age 1. It is important to note that when computing period estimates, some of the data will be subject to left-truncation. In general, left-truncation, also known as late entry, occurs if survival time is less than left-truncation time and thus no information is available on the subject. If not dealt with, left-truncation can induce selection bias. Figure 1: Left-hand side: Potential lifespans of observations used to obtain a cohort estimate of U5MR for the cohort born in 2000. Right-hand side: Potential lifespans of observations used to obtain a period estimate of U5MR for the year 2004. All children are assumed to be born on January first of a given year. Horizontal lines indicate the potential lifespans of children up to January 1st, 2005. In our applications, left-truncation occurs when an individual is not at risk of dying in a specific age band, within a specific time period because they die before the period begins. Truncation accounts for the potential bias that would be introduced into our estimate from the individuals who were born in the earlier cohorts, yet died before our time period of interest. As an example, in Figure 1 (right panel), all individuals born at the beginning of the year 2000 would be subject to left-truncation at age 4 when computing their contribution to the period U5MR estimate for 2004.
If we compute the period estimate of \({}_{60}q_{0,2004}\) using five, discrete values, \({}_{12}q_{48,2004}\), \({}_{12}q_{36,2004}\), \({}_{12}q_{24,2004}\), \({}_{12}q_{12,2004}\), \({}_{12}q_{0,2004}\), for each cohort born in 2000 through 2004, respectively, left-truncation is dealt with implicitly through the conditional probability structure of \({}_{n}q_{x}\), as we will see in the discrete hazards approach (Allison, 2014; Mercer et al., 2015) described in Section 3.4. If we are interested in obtaining estimates of U5MR for multiple periods across time, this truncation structure can be incorporated into a model by treating period as a time-varying covariate. This is done implicitly in a discrete hazards approach, but can be done explicitly in the pseudo-likelihood approach we propose, which allows us to use continuous survival models for age. The discretely-categorized variable, period, is treated as a covariate that changes _through_ time, and is simply an indicator for which synthetic cohort we are considering. A final piece of the survival framework is how we deal with censored observations. Not all children die before 5 years of age. These children will be right-censored, and therefore will not contribute any time at risk after the age of 5 to a statistical model. Children who die in a time period later than the period in which they were born also contribute right-censored observations to those earlier time periods. A survival framework also allows us to deal with interval censoring, where we know only that an event has occurred for an individual between two ages. DHS surveys contain daily observed death dates for children who died before the age of 1 month, monthly, interval-censored observations for children who died between 1 month and 24 months (e.g., we may observe only that a child died between the ages of 2 and 3 months), and yearly, interval-censored observations for children who died after 2 years of age, with some exceptions (rarely, the DHS records more detailed information for particular children). Interval censoring can be appropriately addressed by discretely categorizing observations, as is done in all of the existing approaches described in Section 3, but can also be addressed in a continuous survival framework as we propose. ## 3 Methods Historically, survival methods have been extended to the survey setting, in the context of the Cox proportional hazards model (Binder, 1992; Lin, 2000; Breslow and Wellner, 2007, 2008). Such methods address the survey aspect of the data via a pseudo-likelihood approach to estimation (Binder, 1983), in which we weight each individual's likelihood contribution by their sampling weight, and maximize the pseudo-likelihood to give weighted (pseudo) MLEs. The variance of the estimates is computed via sandwich estimation. A brief description of the general approach to pseudo-likelihood estimation described in Binder (1983) is given in Appendix B. Bootstrap and jackknife procedures have been developed for variance estimation for various complex survey designs, including the two-stage, stratified cluster design common to DHS surveys. To obtain bootstrapped variance estimates, \(n_{h}-1\) clusters are sampled with replacement within stratum \(h\), where \(n_{h}\) is the number of clusters in stratum \(h\) (Rao and Wu, 1988). Pointwise confidence intervals may then be constructed using percentiles of the bootstrap samples. A jackknife procedure for the same setting is described in Pedersen and Liu (2012).
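The cluster-resampling step just described is easy to prototype outside of any particular package. The sketch below is a simplified illustration only: within each stratum it resamples \(n_{h}-1\) clusters with replacement and recomputes a weighted death proportion, but it omits the weight rescaling of the full Rao and Wu (1988) procedure, and the column names (stratum, cluster, weight, died) are assumptions rather than DHS variable names.

```python
# Simplified sketch of a stratified cluster bootstrap (Rao-Wu style, without
# the weight-rescaling step) for a survey-weighted estimator.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def weighted_death_rate(df: pd.DataFrame) -> float:
    # Weighted proportion of children who died (a stand-in for any estimator).
    return float(np.average(df["died"], weights=df["weight"]))

def rao_wu_bootstrap(df: pd.DataFrame, n_boot: int = 200) -> np.ndarray:
    cluster_lists = df.groupby("stratum")["cluster"].unique()
    estimates = []
    for _ in range(n_boot):
        pieces = []
        for stratum, clusters in cluster_lists.items():
            n_h = len(clusters)
            chosen = rng.choice(clusters, size=max(n_h - 1, 1), replace=True)
            for c in chosen:
                pieces.append(df[(df["stratum"] == stratum) & (df["cluster"] == c)])
        estimates.append(weighted_death_rate(pd.concat(pieces)))
    return np.array(estimates)

# Tiny synthetic example: 2 strata, 3 clusters per stratum, 20 children each.
toy = pd.DataFrame({
    "stratum": np.repeat([1, 2], 60),
    "cluster": np.repeat(np.arange(6), 20),
    "weight": rng.uniform(0.5, 2.0, 120),
    "died": rng.binomial(1, 0.08, 120),
})
boot = rao_wu_bootstrap(toy)
print(np.percentile(boot, [2.5, 97.5]))  # pointwise percentile interval
```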
For the remainder of this section, we describe two approaches for estimating continuous survival curves for synthetic children across multiple time periods. Both methods are based on pseudo-likelihood; one is parametric, and one is nonparametric. Existing approaches are described in Section 3.3 onward. ### Nonparametric approach The classic and most popular nonparametric estimate of a survival curve is the Kaplan-Meier estimator (Kaplan and Meier, 1958). Let \(t_{i}\) be a time when at least one event (death) occurred, \(d_{i}\) be the number of events that occurred at time \(t_{i}\), and \(n_{i}\) be the number of children who have not had an event or been censored up to time \(t_{i}\). Then the Kaplan-Meier estimator of the survival curve at time \(t\) is \[\hat{S}(t)=\prod_{i:t_{i}\leq t}\left(1-\frac{d_{i}}{n_{i}}\right).\] Under noninformative censoring, the Kaplan-Meier estimator is the nonparametric maximum likelihood estimator (NPMLE) of the survival curve. However, the Kaplan-Meier estimator, in its simplest form, is unsuitable for interval-censored data. A generalization of the Kaplan-Meier estimator to arbitrarily truncated and censored observations is the Turnbull estimator (Turnbull, 1976). In Appendix C.1 we introduce notation for the Turnbull estimator and describe the estimator alongside an example for our motivating application. Incorporating survey weights into the Turnbull estimator is straightforward. This extension to the Turnbull estimator is detailed in Appendix C.1, and produces a pseudo-NPMLE for arbitrarily truncated and censored data with survey weights. Groeneboom and Wellner (1992) note that, compared to the Kaplan-Meier estimator, the Turnbull estimator has less appealing asymptotics. The estimator converges pointwise (i.e., at a fixed value \(t\)) at a rate of \(n^{1/3}\) to a non-Gaussian distribution. Obtaining valid confidence bands for the Turnbull estimator remains an open statistical question. Though some have recommended using a bootstrap procedure for variance estimation (see Sun (2001), for example), the coverage of these procedures is not well justified (and therefore not necessarily correct) due to the rate of convergence and non-Gaussian asymptotics. Although the bootstrap is not well-justified for the Turnbull estimator, we do use a bootstrap procedure appropriate for a two-stage, stratified sampling design from Rao and Wu (1988) to assist with model comparison in our application. The procedure is described in Section 3. In our model comparison approach, we treat the Turnbull estimator as a baseline estimate of the survival curve, and aim to determine whether a given parametric model is "reasonably" close to the Turnbull estimator. Obtaining some measure of uncertainty for the Turnbull estimator facilitates this comparison. As a well-justified variance estimator is not available for the Turnbull estimator, we do not recommend using the Turnbull estimator for _official_ estimates of full mortality schedules for children under the age of 5 in LMICs. It is especially important to accurately quantify the uncertainty of estimates in scenarios where the data do not come from a census or other vital registration source. The Turnbull estimator does, however, describe the survival function, and provide a useful reference when assessing how well a parametric distribution summarizes the pattern of U5MR in LMICs, as its point estimates do not rely on parametric assumptions. ### Parametric approach Suppose we have children \(i=1,\ldots,n\).
Let, * \(p\,=\,1,\ldots,P\): consecutive time periods, which may be single years or combinations of years * \(l_{p}\): length of period \(p\), measured in the same units as age of child * \(y_{p}\): date at the start of time period \(p\) * \(b_{i}\): child's date of birth * \(t_{i}\): child's age at right-censoring or age at death * \(I_{i}\): an indicator that child \(i\) is interval-censored. If \(I_{i}\,=\,1\), child \(i\) is interval-censored. If \(I_{i}=0\), child \(i\) is right-censored or has an exact death time * \(t_{0i}\): child's age at beginning of interval censoring, if child is interval censored * \(t_{1i}\): child's age at end of interval censoring, if child is interval censored * \(a_{pi}=y_{p}-b_{i}\): the age the child would be at \(y_{p}\) * \(E_{i}\): an indicator that child \(i\)'s death is exactly observed. If \(E_{i}\ =\ 1\), then \(I_{i}\ =\ 0\), and if \(E_{i}=0\), then \(I_{i}\) could be \(0\) or \(1\) * \(\tilde{p}_{i}\): if \(E_{i}=1\), the period in which that child died * \(U_{x_{i}}(p)\ =\ \{p:a_{pi}>-l_{p},a_{pi}<x_{i}\}\). \(U_{x_{i}}(p)\) is the set of periods for which child \(i\) is alive and at risk of dying, where \(x_{i}\) is one of \(t_{i}\), \(t_{0i}\), or \(t_{1i}\) where appropriate Let \(F\) denote the CDF for the specified parametric distribution, and \(H\) the corresponding cumulative hazard function. The likelihood for all individuals in our dataset across all time periods can be written as \[L(\mathbf{\theta}) =\prod_{i=1}^{n}L_{i}(\mathbf{\theta})\] \[=\prod_{i=1}^{n}\left[1-F_{\mathbf{\theta},i}(t_{i})\right]^{1-I_{i} }\left[F_{\mathbf{\theta},i}(t_{1i})-F_{\mathbf{\theta},i}(t_{0i})\right]^{I_{i}}[f_{ \mathbf{\theta},i}(t_{i})]^{E_{i}},\] \[=\prod_{i=1}^{n}\underbrace{\left[\exp(-H_{\mathbf{\theta},i}(t_{i}) )\right]^{1-I_{i}}}_{right-censored}\underbrace{\left[\exp(-H_{\mathbf{\theta},i} (t_{0i}))-\exp(-H_{\mathbf{\theta},i}(t_{1i}))\right]^{I_{i}}}_{interval-censored} \underbrace{\left[\exp(-H_{\mathbf{\theta},i}(t_{i}))h_{\mathbf{\theta},\tilde{p}_{i} }(t_{i})\right]^{E_{i}}}_{exact},\] where \[H_{\mathbf{\theta},i}(x_{i})=\sum_{U_{x_{i}}(p)}\int_{\max\{a_{p_{i}},0\}}^{\min\{ x_{i},a_{p_{i}}+l_{p}\}}h_{\mathbf{\theta},p}(u)du,\] and \(h_{\mathbf{\theta},p}(u)\) is a period-specific hazard function for a specified distribution. To obtain survey-weighted estimates, we obtain pseudo-MLEs (Binder, 1983) of the distribution-specific parameters by maximizing the sum of log likelihood contributions for each individual observation multiplied by their survey weights. To obtain finite population variance estimates, we use a trick where we treat our estimator as a weighted total, and use R's survey package. The details of this calculation are given in Appendix C.2. Many common parametric distributions used in survival analysis are currently implemented in the R package pssst, including: exponential, Weibull, piecewise exponential with arbitrary cutpoints, generalized Gamma, lognormal, Gompertz, and the distribution characterized by the exponentially-truncated power shifted family of hazards defined in Scholey (2019). The proposed parametric method may be used with any parametric form of period-specific hazard, and so can be readily extended. 
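To make the estimation recipe above concrete, the following is a minimal sketch, not the pssst implementation, of a survey-weighted parametric pseudo-likelihood fit. For brevity it assumes a single period and a Weibull hazard, handles right- and interval-censored ages, and maximizes the weighted log-likelihood with a general-purpose optimizer; exactly observed deaths, the period-specific splitting of the cumulative hazard, and the finite-population variance calculation are all omitted.

```python
# Sketch of a weighted (pseudo-likelihood) Weibull fit to censored child ages.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_pseudo_loglik(log_par, t0, t1, right_cens, w):
    shape, scale = np.exp(log_par)                    # optimize on the log scale
    S = lambda t: weibull_min.sf(t, shape, scale=scale)
    ll = np.where(right_cens,
                  np.log(S(t0)),                      # right-censored at t0
                  np.log(S(t0) - S(t1) + 1e-12))      # death in [t0, t1)
    return -np.sum(w * ll)

# Toy data (ages in months): children right-censored at 60 months or
# interval-censored to the month of death; w plays the role of survey weights.
rng = np.random.default_rng(1)
n = 400
ages = weibull_min.rvs(0.4, scale=900, size=n, random_state=rng)
right_cens = ages >= 60
t0 = np.where(right_cens, 60.0, np.floor(ages))
t1 = np.where(right_cens, np.inf, np.floor(ages) + 1)
w = rng.uniform(0.5, 2.0, n)

fit = minimize(neg_pseudo_loglik, x0=np.log([0.5, 500.0]),
               args=(t0, t1, right_cens, w), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
print("estimated U5MR:", 1 - weibull_min.sf(60, shape_hat, scale=scale_hat))
```

In the full approach, the cumulative hazard for each child would be accumulated over the periods in \(U_{x_{i}}(p)\) rather than coming from a single period-invariant hazard.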
As our proposed methodology focuses on providing a continuous, age-specific mortality curve for children under the age of 5, we focus on three existing methods that can provide this, modulo a few assumptions: the log-quad model (Guillot et al., 2022), the discrete hazards model (Li et al., 2019; Wu et al., 2021), and the SVD model (Clark, 2019). The latter two require the assumption that the discrete hazards \({}_{n}q_{x}\) estimated for each \(x\) are constant within the interval \([x,x+n)\) in order to obtain a full survival curve. Of note, Scholey (2019) recently proposed a continuous, parametric approach model for infant mortality. There are similarities between it and our proposed approach, notably the use of a continuous hazard to assist in defining a survival curve for children. The method differs in its focus on the pattern of infant mortality as opposed to U5MR, the use of daily observed deaths from a high-income country which removes the need to account for interval-censored observations, and the use of data that does not come from a survey and therefore does not need to account for the survey design. The methods described in Scholey (2019) serve as high income country analogues to our proposed methods, and we consider one family of hazards they deem best-fitting to U.S. data in our proposed methodology. ### Log-quad model The log-quad model described in Guillot et al. (2022) builds on the log-quad approach in Wilmoth et al. (2012), and can provide an estimate of a continuous survival curve from ages 0 to 5 using only an observed or previously estimated \({}_{60}\hat{q}_{0}\). Other optional inputs to the log-quad model include values \({}_{x}q_{0}\) for different ages \(x\). Following Clark (2019), we call the model "empirical" because the coefficients input to the model are not estimated during the modeling process, but instead are computed beforehand using data from the Under-5 Mortality Database (U5MD) (Guillot et al., 2022). The model specifies \[\log(_{x}q_{0})=a_{x}+b_{x}\log(_{60}\hat{q}_{0})+c_{x}\log(_{60}\hat{q}_{0})^{2 }+v_{x}k,\] where \(x\) takes on one of the 22 values {\(7d\), \(14d\), \(21d\), \(28d\), \(2m\), \(3m\), \(4m\), \(5m\), \(6m\), \(7m\), \(8m\), \(9m\), \(10m\), \(11m\), \(12m\), \(15m\), \(18m\), \(21m\), \(2y\), \(3y\), \(4y\), \(5y\)}. The age-specific coefficients {\(a_{x},b_{x},c_{x},v_{x}\)} are provided in the U5MD, \({}_{60}\hat{q}_{0}\) is input to the model as a fixed covariate, and the parameter \(k\) is an optional parameter describing whether the age pattern of mortality is "early" or "late." By early, we mean that NMR and IMR are higher than what is usually observed when compared to U5MR, and by late we mean that NMR and IMR are lower than what is usually observed when compared to U5MR, based on the patterns of mortality before the age of 5 in countries with highly reliable child mortality data, such as those included in the U5MD. When all 22 possible values for \(x\) are supplied to the model, Guillot et al. (2022) propose an uncertainty band around the estimated survival curve. Details of the uncertainty band calculation can be found in Appendix C.3. Multiple follow-up papers (e.g. Eilerts et al. (2021); Verhulst et al. (2022)), as well as Guillot et al. (2022), note that the log-quad model is generally unsuitable for use in LMICs, or in countries with (broadly) early or late patterns of child mortality. 
This is unsurprising given that the coefficients in the U5MD are estimated from high-income countries which likely have differing health care systems, and structural and programmatic support for decreasing child mortality. Guillot et al. (2022) also note that there are known biases in the data sources available in LMICs. One of these issues, age-heaping, can be addressed by excluding data. In DHS surveys especially, age heaping typically occurs at age 12 months. Rather than input all \(22\) possible age groups into the model for estimating the \(k\) parameter, the user may instead leave out a range of ages (Guillot et al. (2022) suggest 9 to 21 months) that they believe covers the ages where data is heaped. Note that this is distinct from treating deaths between the ages of 9 and 21 months as interval censored. The rationale for this approach is that in removing those deaths, the estimated curve will essentially smooth over any age heaping that occurs. A downside to this approach is that it involves throwing away useful information about the pattern of U5MR. While the log-quad model can address age-heaping, it has additional characteristics that may be unsuitable in LMICs. Due to its formulation, the log-quad model's prediction of U5MR is identical to the value of U5MR input as a covariate _with zero uncertainty_ (when \(x=5y\), the age-specific coefficients from the model are estimated as \(\{a_{x},b_{x},c_{x},v_{x}\}\,=\,\{0,1,0,0\}\)). In settings with reliable data, this may be a reasonable (even desirable) property. However, in LMICs where U5MR is estimated with considerable uncertainty, we do not necessarily want our predicted value of U5MR to align perfectly with a point estimate, but rather to lie within a range of reasonable values defined by the confidence interval for U5MR. ### Discrete hazards approach The discrete hazards approach described in Allison (2014) (as well as Mercer et al. (2015); Li et al. (2019); Wu et al. (2021)) formulates child mortality data in an explicit survival framework. This framework is currently used by the UN and DHS for estimating subnational U5MR in LMICs (Li et al., 2019; Wu et al., 2021). The discrete hazards model splits the time before the age of 60 months into \(J\) discrete intervals \([x_{1},x_{2})\),\([x_{2},x_{3})\),..., \([x_{J},x_{J+1})\) where \(x_{j+1}=x_{j}+n_{j}\), \(x_{1}=0\). Then U5MR can be computed as \[{}_{60}q_{0}=1-\prod_{j=1}^{J}(1-{}_{x_{j}}q_{n_{j}}). \tag{1}\] Mercer et al. (2015) divide the first 60 months of life for individuals into six intervals, \(J\,=\,6\): \([0,1)\), \([1,12)\), \([12,24)\), \([24,36)\), \([36,48)\), \([48,60)\), where \((x_{1},\ldots,x_{6})\,=\,\,(0,1,12,24,36,48)\), \((n_{1},\ldots,n_{6})\,=\,\,(1,11,12,12,12,12)\). Data is tabulated into binomial counts indexed by age group \(j\), and potentially indexed by time period \(p\) as well, where the number of observations corresponds to the number of deaths observed in that age group and time period, and the number at risk \(N_{jp}\) corresponds to the number of children alive in that age group and time period. Note that by construction of the age intervals, we can also estimate NMR and IMR from this model. Mercer et al. (2015) then fit a logistic regression model, \[y_{jp}\mid N_{jp},\eta_{jp} \sim\text{Binomial}(N_{jp},\,n_{j}q_{x_{j},p}),\] \[\text{logit}(_{n_{j}}q_{x_{j},p}) =\beta_{jp},\] where \(\beta_{jp}\) is an age-period specific intercept. 
Pseudo-MLEs of \(\beta_{jp}\) are obtained by fitting this model in R's survey package, using the svyglm() function. By writing the likelihood as a product of binomial likelihoods, we can use the pseudo-MLEs estimated from the logistic regression model to construct estimates of \({}_{60}q_{0}\) using Equation (1). Although the binomial likelihood does not reflect the exact data generating mechanism, many sampling schemes in LMICs (including that used by the DHS) allow data to be aggregated to binomial counts by cluster. The discrete hazards approach assumes a constant hazard within the specified age groups. Therefore, while we can estimate a full survival curve for children under the age of 5, we know its shape will not be realistic, as the probability of survival should change smoothly with age rather than make discrete jumps. To obtain a more continuous survival curve, we could have 60 age groups for each 1-month breakdown in the discrete hazards approach. There is a balance here between flexibility and parsimony: the model fitted with more age groups better reflects the underlying smooth changes in hazard, but each hazard estimate is less precise than we might get fitting a more parsimonious model (if that model is appropriate). Differently than the log-quad model, age-heaping can be handled in the discrete hazards model by construction of the age intervals. For example, one could consider age intervals \((J=7)\): \([0,1)\), \([1,9)\), \([9,21)\), \([21,24)\), \([24,36)\), \([36,48)\), \([48,60)\), where we group deaths recorded between the ages of 9 and 21 months into a single age group. Additional notes on the discrete hazards model in conjunction with DHS surveys are in Appendix C.4. ### SVD approach A third approach for estimating a full mortality schedule across an entire lifetime is Singular Value Decomposition (SVD), as described in Clark (2019, 2015), and Alexander et al. (2017) among others. This approach is intended to provide full mortality schedules separately for binarized sex (male/female) rather than a full mortality schedule for both binarized sexes combined. We point the reader to Clark (2019) for an in-depth review of older demographic methods using SVDs and the related method of principal components analysis. The general idea of the SVD approach to modeling full mortality schedules is to incorporate external summaries of observed demographic patterns. These summaries are typically left singular values (LSVs) from a singular value decomposition of historic life tables from the HMD. Clark (2019) suggest that four LSVs are typically enough to accurately capture a full mortality schedule across all ages for a variety of countries. To maintain consistency with the demographic literature, we will hereafter refer to a model that takes advantage of SVDs, such as that in Clark (2019), as an SVD model. Of note, as the life tables available in the HMD are calculated for 1 year period by 1 year age groups at the finest level, the SVD models in Clark (2019) only predict mortality schedules at that level. As a consequence, these models may not accurately estimate NMR, or any metric that is not defined on a yearly scale. The modeling steps described in Clark (2019) that are relevant to our applications in LMICs, where only an estimate \({}_{60}\hat{q}_{0}\) is available for each sex rather than both \({}_{60}\hat{q}_{0}\) and \({}_{540}\hat{q}_{180}\), are detailed in Appendix C.5. 
One approach to fitting the SVD model for estimating U5MR in LMICs would be to follow these steps, using a survey-weighted estimate for \({}_{60}q_{0,z}\), then to average the resulting curves over sex using population-level sex weights to obtain an estimate for both sexes combined. A second approach is to use the general structure of the SVD method in a model similar to the discrete hazards approach, and this is what we propose in Appendix C.5.1. This approach allows for straightforward uncertainty quantification and can provide an estimate for males and females combined, without a secondary aggregation step. ## 4 Application We apply our proposed, parametric pseudo-likelihood approach to child mortality data from Burkina Faso, Malawi, Senegal, and Namibia. We chose single DHS surveys from each of these countries, and used the proposed approach to obtain continuous survival curves for the time periods \([2000,2005)\) and \([2005,2010)\) to demonstrate the ability of our approach to produce period estimates throughout time. The data used in the application is described in detail in Section 4.1, and all parametric models considered are noted in Section 4.2. We additionally fit a survey-weighted version of the Turnbull estimator, with bootstrapped confidence bands, to validate the parametric approaches, as described in Section 4.3. We further compare our approach to estimates from the log-quad model using all 22 age inputs (calculated from the Turnbull estimate), the discrete hazards model from Mercer et al. (2015), and the proposed SVD approach described in Section C.5.1. For all parametric approaches, we estimate the survival curves in each time period, uncertainty bands surrounding each survival curve (95% confidence bands based on finite population variances for all approaches other than log-quad, and the derived uncertainty band from Guillot et al. (2022) for the log-quad approach), and estimates of NMR, IMR, and U5MR from these survival curves. We note that the uncertainty band for the log-quad model does not have a clear interpretation, and point readers to the derivation in the Supplement of Guillot et al. (2022) for details. ### Data All data used in our application comes from the Demographic and Health Surveys (DHS) programme. The DHS programme is one of the largest producers of surveys in LMICs, covering many health indicators, including child mortality. Child death data is collected by interviewing mothers and asking them the birth and death dates of all children they have had. We treat deaths prior to one month as exact, and interval-censored afterwards with the interval given as a single month or a single year depending on when the child died (see Section 2). It has previously been noted that DHS surveys are subject to potential biases that may negatively impact the resulting estimates of child mortality (Hill, 1995; Lawn et al., 2008; Guillot et al., 2022). The main concern for estimates of mortality under the age of 5 years is age-heaping at age 12 months, where more children are recorded as having died at 12 months than would otherwise be expected. Lawn et al. (2008) additionally note that age-heaping in DHS surveys may occur at 7 days, 14 days, and 1 month. In our application, we address age-heaping at 12 months by interval-censoring all observations recorded as having died between 6 and 18 months over that entire 12-month period \([6,18)\).
We chose this window to capture a wide range of potential age-heaping surrounding 12 months, but other windows could instead be chosen, depending on assumptions about when age-heaping occurred. In aggregating our data over these 12 months, we will lose some precision in our estimate of the survival curve but should decrease bias. We emphasize that the benefit of this straightforward approach to addressing age-heaping is that the assumptions involved _are made clear_, in this case, that the only age-heaping in our data occurs between 6 and 18 months. Incorporating additional assumptions about where age-heaping occurs is straightforward; we include additional intervals surrounding the dates where age-heaping is thought to occur (for example, ages 3-10 days for age-heaping at 7 days). Additional details relating to DHS survey design can be found in Appendix A.1. ### Parametric Models The parametric distributions considered for our proposed approach are listed in Table 1. The exponentially-truncated shifted-power (ETSP) family of hazards we consider is slightly different than that considered in Scholey (2019), as we set \(c=0\) as opposed to estimating it via profile likelihood. In Scholey (2019)'s applications, \(c\) was estimated to be very close to zero, typically around \(6\times 10^{-4}\). The generalized Gamma distribution is parametrized as in the flexsurv package in R, as it is more numerically stable than the original parameterization (Prentice, 1974). ### Model Validation To assist with model validation, we fit a survey-weighted version of the Turnbull estimator, with bootstrapped confidence bands, to provide a guideline for how well each of the parametric distributions is able to capture the underlying survival curve in each time period. This is treated as a reasonable reference point for the underlying survival curve as it is free of parametric assumptions. However, despite our use of bootstrapped CIs there are no well-justified variance estimates for the Turnbull estimator (Section 3.1), making our comparisons to the Turnbull estimator only crudely calibrated. Let a sample \(k\) from the bootstrapped distribution of the Turnbull estimate at age \(x\) be denoted \(\tilde{\theta}_{x}^{(k)}\), and a sample \(k\) from the asymptotic distribution of the parametric survival curve at age \(x\) be denoted \(\hat{\theta}_{x}^{(k)}\). We obtain \(k=1,\ldots,500\) samples, and compute \(\hat{\theta}_{x}^{(k)}-\tilde{\theta}_{x}^{(k)}\) to obtain samples from the empirical distribution of the difference between the Turnbull and parametric distribution at a given age \(x\). We calculate the proportion of uncertainty intervals derived from \(\hat{\theta}_{x}^{(k)}-\tilde{\theta}_{x}^{(k)}\) at ages \(x\) that contain \(0\) as a rough estimate of how closely each parametric model aligns with the Turnbull estimate. 
_This is not a formal hypothesis test_, but rather a means of assessing how close the parametric estimate is to the Turnbull estimate while accounting for uncertainty in _both_ estimates. \begin{table} \begin{tabular}{l|c|c} Distribution & Characterization & Parameters \\ \hline Exponential & \(f(x)=\beta e^{-\beta x}\) & \(\beta\) \\ Piecewise Exponential & \(f(x)=\beta_{0}e^{-\beta_{0}x}I[x<1]+\beta_{1}e^{-\beta_{1}x}I[1\leq x<12]+\beta_{2}e^{-\beta_{2}x}I[x\geq 12]\) & \(\beta_{0},\beta_{1},\beta_{2}\) \\ Weibull & \(f(x)=\beta k(\beta x)^{k-1}e^{-(\beta x)^{k}}\) & \(\beta,k\) \\ Generalized Gamma & \(f(x)=\frac{|Q|(Q^{-2})^{Q^{-2}}}{\sigma x\Gamma(Q^{-2})}e^{Q^{-2}(Q\omega-e^{Q\omega})}\) & \(Q,\sigma,\omega\) \\ Lognormal & \(f(x)=\frac{1}{x\sigma\sqrt{2\pi}}e^{-\frac{1}{2\sigma^{2}}(\log(x)-\mu)^{2}}\) & \(\sigma,\mu\) \\ Gompertz & \(f(x)=\beta ke^{k+\beta x-ke^{\beta x}}\) & \(\beta,k\) \\ Exponentially-truncated & \(h(x)=a(x+c)^{-p}e^{-bx}\) & \(a,b,c,p\) \\ shifted power (ETSP)* & & \\ \end{tabular} \end{table} Table 1: Parametric distributions considered and their characterizations in terms of a probability density function \(f(x)\) or hazard \(h(x)\), with relationship \(h(x)=f(x)/(1-F(x))\). The ETSP hazard as described in Scholey (2019) contains four parameters, but in our applications we set \(c=0\). ## 5 Results In this section, we display a subset of results from the application of the seven parametric models, log-quad model, proposed SVD approach, and discrete hazards model to DHS data from Burkina Faso, Malawi, Senegal, and Namibia. Additional results can be found in Appendix D, with comparisons to models where the data is not adjusted for age-heaping in Appendix E. In the top row of Figure 2 we display the fitted survival curves for the Weibull model in both time periods for Malawi, and compare them to the Turnbull estimator, log-quad model, proposed SVD approach, and discrete hazards model. Compared to the Turnbull estimator, the Weibull model tends to estimate higher survivorship at younger ages, and lower survivorship at older ages. In the bottom row of Figure 2, we display the same comparison but for the lognormal model. The lognormal model captures the sharp increase in mortality within the first 12 months of life more accurately than the Weibull model. In Figure 3 we compare estimated lognormal survival curves across all countries in our application and both time periods. Figure 2: Top: Estimated Weibull survival curves for time periods \([2000,2005)\) (left) and \([2005,2010)\) (right) for Malawi. Bottom: Estimated lognormal survival curves for time periods \([2000,2005)\) (left) and \([2005,2010)\) (right) for Malawi. Showing just summary measures of mortality (NMR, IMR, U5MR), we see the same patterns in Figure 4. The Weibull model in each time period underestimates NMR, and overestimates U5MR, relative to the Turnbull estimator, particularly for the period \([2000,2005)\). In contrast, the lognormal model confidence intervals cover NMR, IMR, and U5MR in both time periods, with the exception of IMR in \([2005,2010)\) where only the Weibull model captures the Turnbull estimate. As seen in Table 2, the differences between the Weibull estimates and Turnbull estimates capture zero for 41.7% and 60.7% of ages where the Turnbull estimate is defined, prior to age 60 months, for \([2000,2005)\) and \([2005,2010)\), respectively. In contrast, the differences between the lognormal estimates and Turnbull estimates capture zero for 91.7% and 77.4% of ages.
This aligns with the visualizations to suggest that the lognormal model is a better parametric fit for the mortality curve for children under the age of 5 than the Weibull model. Figure 3: Estimated lognormal survival curves for time periods \([2000,2005)\) (left) and \([2005,2010)\) (right) for Burkina Faso, Malawi, Namibia, and Senegal. The proposed SVD approach performs adequately in all countries in terms of capturing IMR and U5MR, but underestimates NMR in all time periods and countries. This makes sense, as the proposed SVD approach assumes a constant hazard between ages 0 and 12 months. Hence, unless mortality is decreasing linearly in the first year of life, the SVD approach will always underestimate NMR; if not linear, the survival curve will be convex. If finer scale life tables were available (for example, monthly life tables), this might be ameliorated. Figure 4: Estimates of NMR, IMR, and U5MR for Malawi in periods \([2000,2005)\) (top) and \([2005,2010)\) (bottom). Turnbull point estimates are denoted by vertical black lines, with dashed vertical black lines denoting the 95% uncertainty interval estimated from the bootstrap samples. Horizontal error bars are blue if the interval captures the Turnbull point estimate, or red if the interval does not capture the Turnbull point estimate. All 95% confidence intervals are based on finite population variances, with the exception of the log-quad model where uncertainty is calculated as in Guillot et al. (2022). The log-quad approach performs adequately in general as well, with a few key caveats. First, U5MR is assumed to be estimated with no uncertainty. This is not a desirable property of this approach since our estimates of U5MR that are input to the log-quad model are themselves estimated with uncertainty. Second, we note that the uncertainty bands around the log-quad point estimates are in general much wider than the confidence bands for the parametric models. The confidence bands surrounding the parametric models may be interpreted at each age \(x\) with a 95% confidence interval interpretation based on resampling observations from the finite population, whereas the uncertainty surrounding the log-quad model does not have as straightforward of an interpretation. Furthermore, out of all of the analyses conducted, only the log-quad models for Namibia (both time periods) provided estimates and confidence bands that would be considered reasonable by Guillot et al. (2022). All other countries either had estimated values for certain parameters outside the range suggested by Guillot et al. (2022), or an increasing hazard by age in the uncertainty interval computed, which is unrealistic. In general, the discrete hazards approach performed well, though perhaps not sufficiently better than some of the proposed parametric models (such as lognormal or piecewise exponential) to justify the need for six parameters in estimating the survival curve. 
Furthermore, assuming a constant hazard over certain age intervals is not necessarily an assumption we wish to make, as it is unrealistic even at a fine scale of age groups. Additional comments on the results of the application can be found in Appendix D.

\begin{table} \begin{tabular}{l|c|r|r|r|r|r|r|r} Country & Period & Weibull & Piecewise & Generalized & \multirow{2}{*}{Lognormal} & \multirow{2}{*}{Gompertz} & \multirow{2}{*}{ETSP} & Discrete \\ & & & Exponential & Gamma & & & & Hazards \\ \hline \multirow{2}{*}{Burkina Faso} & [2000, 2005) & 17 & 37 & **93** & 54 & 12 & 34 & 48 \\ \cline{2-9} & [2005, 2010) & 14 & 39 & **71** & 66 & 8 & 60 & 60 \\ \hline \multirow{2}{*}{Malawi} & [2000, 2005) & 42 & **76** & **95** & **92** & 18 & **94** & **73** \\ \cline{2-9} & [2005, 2010) & 61 & 63 & **82** & **77** & 12 & **80** & 70 \\ \hline \multirow{2}{*}{Senegal} & [2000, 2005) & 30 & **73** & **85** & **83** & 17 & **84** & **72** \\ \cline{2-9} & [2005, 2010) & 36 & **73** & 64 & **85** & 12 & **91** & **72** \\ \hline \multirow{2}{*}{Namibia} & [2000, 2005) & 56 & **86** & **100** & **100** & 23 & **99** & **86** \\ \cline{2-9} & [2005, 2010) & 53 & **86** & **100** & **100** & 27 & **99** & **85** \\ \end{tabular} \end{table} Table 2: Model validation results. Percentage of samples (out of 500) from \(\hat{\theta}-\tilde{\theta}\) that contain 0 for all parametric models, countries, and periods. Results that contain more than 70% of samples noted in bold.

## 6 Discussion

Our application suggests that there are potentially very large differences in model fit between parametric distributions, with the Weibull and Gompertz distributions generally providing the worst fit compared to the Turnbull estimator, in terms of capturing the survival curve under the age of 5. In general, the lognormal model seems to fit the countries in our application reasonably well. We note that two of the three-parameter models we compared, the piecewise exponential and ETSP models, also adequately captured the survival curve provided by the Turnbull estimator, though the piecewise exponential model has the undesirable property of assuming constant hazards within prespecified age groups and the ETSP model is computationally challenging to fit. We conclude that for our application, the lognormal model outperforms other parametric models in terms of the ability to capture the point estimate provided by the Turnbull estimator while only requiring two parameters to define the survival curve. The benefits of a parametric approach to under-5 mortality estimation, and in particular to estimating the full survival curve for children under the age of 5, are many. As laid out in Scholey (2019), correctly specified parametric assumptions about the shape of mortality may greatly assist estimation of the survival curve under the age of 5 in situations with little data. This becomes particularly relevant in a small area setting, where often little data are available at small administrative regions (Wakefield et al., 2020). As such, the methods proposed in this paper may serve as a guideline, or as prior information in a Bayesian setting, for small area estimation problems of child mortality when a full survival curve is desired. Further benefits of a continuous, parametric approach involve interpretability and parsimony.
The Heligman-Pollard model (Heligman and Pollard, 1980), a well-known parametric, demographic model for mortality estimation, provides informative interpretations of the parameters involved in the model, and the same is true of the models we propose. Of course, we rely on the assumption that the parametric distribution used is _correctly_ specified, which likely is not the case. In fact, it is likely that there is _no_ parametric distribution that can perfectly capture the age trend in mortality for every country. However, especially in scenarios with little data, _reasonable_ parametric assumptions may still be useful. Hence it is important to observe and test these parametric models in settings with more data, such as the national setting we use in our application. A meaningful question is: Is the fit of a continuous parametric model better than that of the 6-parameter discrete hazards model currently used by the UN IGME and DHS? When comparing to the Turnbull estimator, the lognormal model does outperform the 6-parameter discrete hazards model in terms of our model performance metric (see Table 2). Limitations of our approach to extend the Turnbull estimator include the lack of a well-justified variance estimate. As previously noted, a variance estimate is not readily available due to the non-Gaussian, cube-root asymptotics, and a bootstrap estimate of the variance is not applicable for similar reasons. More work needs to be done before comparisons between the nonparametric and parametric approaches (and model validation procedures) can be made with some degree of calibration. In conclusion, we have provided a method for obtaining a complete, continuous survival curve for children under the age of 5 using assumed parametric models. Our method enables estimation using interval-censored, left-truncated observations, as is required for period estimates of mortality from DHS data. Furthermore, aspects of survey design, which are particularly relevant in LMICs, may be directly incorporated into our modeling framework to provide design-consistent estimates of mortality with finite population variances. All software for fitting the models proposed in this paper is available in the R package pssst at [https://github.com/pssst](https://github.com/pssst).
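To make the summary measures discussed in Section 5 concrete, the following minimal Python sketch evaluates NMR, IMR, and U5MR as cumulative mortality by 1, 12, and 60 months from lognormal and Weibull survival curves parameterized as in Table 1. The parameter values are hypothetical placeholders chosen only to illustrate the computation; they are not fitted estimates from the DHS data.

```python
import numpy as np
from scipy.stats import norm

# Illustrative (not fitted) parameter values; ages are measured in months.
mu, sigma = 16.0, 8.5       # lognormal
beta, k = 1e-5, 0.3         # Weibull, parameterized as in Table 1

def S_lognormal(x, mu, sigma):
    # survival function 1 - F(x) of the lognormal distribution
    return 1.0 - norm.cdf((np.log(x) - mu) / sigma)

def S_weibull(x, beta, k):
    # survival function exp(-(beta * x)^k) of the Weibull distribution
    return np.exp(-(beta * x) ** k)

def summary_measures(S, *params):
    # NMR, IMR, U5MR taken as cumulative mortality by 1, 12 and 60 months
    ages = np.array([1.0, 12.0, 60.0])
    return dict(zip(["NMR", "IMR", "U5MR"], 1.0 - S(ages, *params)))

print("lognormal:", summary_measures(S_lognormal, mu, sigma))
print("Weibull:  ", summary_measures(S_weibull, beta, k))
```

Any other distribution in Table 1 can be swapped in by supplying its survival function in the same way.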
2302.11146
A unified treatment of the redshift, the Doppler effect, and the time dilation in general relativity
We present a unified treatment of the gravitational and cosmological redshift, the Doppler effect due to the moving observer or light source, and the time dilation in the gravitational field in the framework of general relativity. The primary purpose of this paper is to extend the description of Narlikar (1994) on the unified approach towards the redshifts and the Doppler effect in a more generalized form, with the help of the four facts extracted from the comprehensive review article by Ellis (1971). We apply it to the cases of moving observer or light source in the gravitational field and obtain the Doppler effect term, in addition to the standard gravitational or cosmological redshift. The secondary purpose is to explicitly show that the time dilation of a moving clock in the gravitational field can also be understood within the same framework of the unified treatment. We examine the time dilation of the moving clock on geodesic in the gravitational field. We also derive the time dilation of the moving clock on elliptical orbit, based on the same unified treatment. The tertiary purpose is to show that we can understand special-relativistic effects without using the Lorentz transformation. We derive the special-relativistic formulae such as the Doppler effect and aberration of light, the kinetic time dilation, and the Lorentz contraction in the general-relativistic framework.
Masumi Kasai
2023-02-22T04:57:27Z
http://arxiv.org/abs/2302.11146v4
A unified treatment of the redshift, the Doppler effect, and the time dilation in general relativity and its applications ###### Abstract We present a unified treatment of the gravitational and cosmological redshift, the Doppler effect due to the moving observer or light source, and the time dilation in the gravitational field in the framework of general relativity. We apply it to the cases of moving observer or light source in the gravitational field and obtain the Doppler effect formula, in addition to the standard gravitational or cosmological redshift. In particular, the longitudinal and the transverse Doppler effects are explicitly given which hold in fully general-relativistic situations. We also examine the time dilation of the moving clock on geodesic in the gravitational field. We confirm that the ratio of the elapsed times \(\Delta\bar{T}\) of the moving clock on circular orbit with radius \(r\) and \(\Delta T_{1}\) of the observer at rest \(r=r_{1}\) is \(\Delta\bar{T}/\Delta T_{1}=\sqrt{1-\frac{3}{2}\frac{r_{g}}{r}}\Big{/}\sqrt{1- \frac{r_{g}}{r_{1}}}\), where \(r_{g}\) is the Schwarzschild radius, which exactly holds without approximation. We also derive the time dilation of the moving clock on elliptical orbit with the semi-major axis \(a\). The ratio of the elapsed times, after the time average per cycle, is \(\langle\Delta\bar{T}\rangle/\Delta T_{1}\simeq\sqrt{1-\frac{3}{2}\frac{r_{g}} {a}}\Big{/}\sqrt{1-\frac{r_{g}}{r_{1}}}\), which holds up to the first order of \(r_{g}\). E00, E01 ## 1 Introduction The Doppler effect of light is usually explained using the Lorentz transformation in special relativity. In the presence of gravity, however, the Lorentz transformation does not hold. We present a unified treatment of the gravitational and cosmological redshift, the Doppler effect due to the moving observer or source, and the time dilation in the gravitational field in the framework of general relativity. The unified treatment is simply based on the following two principles: 1. Light obeys the null geodesic equation, i.e., the propagation 4-vector \(k^{\mu}\) satisfies \(k^{\mu}_{\ ;\nu}k^{\nu}=0,\ \ k_{\mu}k^{\mu}=0\). 2. The frequency of light measured by an observer with 4-velocity \(u^{\mu}\) is \(\omega=-k_{\mu}u^{\mu}\). We apply it to the cases of moving observer or light source in the gravitational field, and investigate the Doppler effect due to the moving observer or source. We also investigate the time dilation of a moving clock on geodesic in the gravitational field. We examine the time dilation of the the moving clocks on radial orbit, circular orbit, and non-circular elliptical orbit. We use the unit \(c=1\). ## 2 Basic principles and equations We briefly summarize the basic principles and equations for light ray observation in the gravitational field. Most of them are described in [1]. ### The null geodesic equation for light rays Let us define the propagation 4-vector \(k^{\mu}\). The light rays whose tangent vector is \(k^{\mu}\) are null geodesics [2]: \[k^{\mu} \equiv \frac{dx^{\mu}}{dv}, \tag{2.1}\] \[k^{\mu}_{\ ;\nu}k^{\nu} = 0\,,\] (2.2) \[k_{\mu}k^{\mu} = 0\,, \tag{2.3}\] where \(v\) is an affine parameter along the null geodesic. It is sometimes more convenient to use the geodesic equation for the covariant components \(k_{\mu}=g_{\mu\nu}k^{\nu}\). From \(k_{\mu;\nu}k^{\nu}=0\), we obtain \[\frac{dk_{\mu}}{dv}=\frac{1}{2}g_{\alpha\beta,\mu}k^{\alpha}k^{\beta}\,. 
\tag{2.4}\] It is convenient in the following sense: if the metric does not depend on some coordinate, say, \(x^{0}\), (2.4) immediately shows us the existence of the conserved quantity: \[\mbox{if}\ \ g_{\alpha\beta,0} = 0\,, \tag{2.5}\] \[\mbox{then}\ \ \frac{dk_{0}}{dv} = \frac{1}{2}g_{\alpha\beta,0}k^{\alpha}k^{\beta}=0\,,\] (2.6) \[\therefore\ \ k_{0} = \mbox{const.} \tag{2.7}\] ### The geodesic equation for observers Let us define the tangent 4-vector \(u^{\mu}\) of the world line of an observer. If the observer is moving in the gravitational field without any other forces except gravity, \(u^{\mu}\) obeys the geodesic equation: \[u_{\mu} = g_{\mu\nu}u^{\nu}=g_{\mu\nu}\frac{dx^{\nu}}{d\tau}\,, \tag{2.8}\] \[\frac{du_{\mu}}{d\tau} = \frac{1}{2}g_{\alpha\beta,\mu}u^{\alpha}u^{\beta}\,,\] (2.9) \[u_{\mu}u^{\mu} = -1\,, \tag{2.10}\] where \(\tau\) is the proper time as an affine parameter along the geodesic. ### The composition rule of 4-velocities Let us consider two observers \(A\) and \(B\), whose 4-velocities are \(u^{\mu}_{A}\equiv u^{\mu}\) and \(u^{\mu}_{B}\equiv\bar{u}^{\mu}\) respectively. They are at the same point in the spacetime and observer \(B\) is moving from observer \(A\) with relative velocity \(V\). We can write the following composition rule [3]: \[\bar{u}^{\mu} = \frac{u^{\mu}+Ve^{\mu}}{\sqrt{1-V^{2}}}\,, \tag{2.11}\] \[e_{\mu}e^{\mu} = 1\,,\] (2.12) \[e_{\mu}u^{\mu} = 0\,, \tag{2.13}\] where the unit space-like vector \(e^{\mu}\) represents the direction of observer \(B\)'s motion in the observer \(A\)'s rest frame. A simple proof of the composition rule is given in Appendix A. The inverse relation of the composition rule, which is based on the observer \(B\)'s rest frame, is \[u^{\mu} = \frac{\bar{u}^{\mu}-V\bar{e}^{\mu}}{\sqrt{1-V^{2}}}\,, \tag{2.14}\] \[\bar{e}_{\mu}\bar{e}^{\mu} = 1\,,\] (2.15) \[\bar{e}_{\mu}\bar{u}^{\mu} = 0\,, \tag{2.16}\] where \(\bar{e}^{\mu}\) represents the direction of observer \(A\)'s motion in the observer \(B\)'s rest frame. Actually, (2.14) shows that observer \(A\) is moving in the direction \(-\bar{e}^{\mu}\) with relative velocity \(V\) in the observer \(B\)'s rest frame. Using (2.11) and (2.14) and eliminating \(\bar{u}^{\mu}\), we obtain \[\bar{e}^{\mu} = \frac{e^{\mu}+Vu^{\mu}}{\sqrt{1-V^{2}}}\,. \tag{2.17}\] In the same way, we can also obtain \[e^{\mu} = \frac{\bar{e}^{\mu}-V\bar{u}^{\mu}}{\sqrt{1-V^{2}}}\,. \tag{2.18}\] ### The Lorentz factor From (2.11) and (2.17), we can directly calculate the Lorentz factor \(\gamma\): \[\gamma\equiv\frac{1}{\sqrt{1-V^{2}}}=-u_{\mu}\bar{u}^{\mu}=e_{\mu}\bar{e}^{\mu }\,. \tag{2.19}\] The Lorentz factor is the invariant 4-scalar which is calculated from the inner product of the 4-vectors. ### The decomposition of the propagation 4-vector of light We introduce the following decomposition of the propagation 4-vector \(k^{\mu}\) of light with respect to the observer's 4-velocity. Consider the observer \(A\) with 4-velocity \(u^{\mu}\). Using \(u^{\mu}\), \(k^{\mu}\) is decomposed into [4] \[k^{\mu} = \omega\left(u^{\mu}+\gamma^{\mu}\right)\,, \tag{2.20}\] \[\gamma_{\mu}\gamma^{\mu} = 1\,,\] (2.21) \[\gamma_{\mu}u^{\mu} = 0\,, \tag{2.22}\] where \[\omega\equiv-k_{\mu}u^{\mu} \tag{2.23}\] is the frequency measured by observer \(A\), and the space-like unit vector \(\gamma^{\mu}\) represents the direction of light in the observer \(A\)'s rest frame. 
The decomposition can also be made with respect to the observer \(B\)'s 4-velocity \(\bar{u}^{\mu}\): \[k^{\mu} = \bar{\omega}\left(\bar{u}^{\mu}+\bar{\gamma}^{\mu}\right)\,, \tag{2.24}\] \[\bar{\gamma}_{\mu}\bar{\gamma}^{\mu} = 1\,,\] (2.25) \[\bar{\gamma}_{\mu}\bar{u}^{\mu} = 0\,, \tag{2.26}\] where \[\bar{\omega}\equiv-k_{\mu}\bar{u}^{\mu} \tag{2.27}\] is the frequency of the same light denoted by \(k^{\mu}\) measured by the moving observer \(B\), and \(\bar{\gamma}^{\mu}\) represents the direction of light in the observer \(B\)'s rest frame. ### The Doppler effect and the aberration of light The relation between \(\omega\) of (2.23) and \(\bar{\omega}\) of (2.27) can be obtained in the following way. Using (2.14) and (2.24), \[\omega = -k_{\mu}u^{\mu} \tag{2.28}\] \[= -\bar{\omega}\left(\bar{u}_{\mu}+\bar{\gamma}_{\mu}\right)\frac{ \bar{u}^{\mu}-V\bar{e}^{\mu}}{\sqrt{1-V^{2}}}\] (2.29) \[= \bar{\omega}\frac{1+V\cos\bar{\theta}}{\sqrt{1-V^{2}}}\,, \tag{2.30}\] where \(\cos\bar{\theta}\equiv\bar{\gamma}_{\mu}\bar{e}^{\mu}\), namely, \(\bar{\theta}\) represents the angle between the direction of motion and the direction of light propagation in observer \(B\)'s rest frame. The Doppler formula for the moving observer is then \[\bar{\omega}=\omega\frac{\sqrt{1-V^{2}}}{1-V\cos\bar{\vartheta}}\,, \tag{2.31}\] where \(\bar{\vartheta}\equiv\pi-\bar{\theta}\) is the angle of incidence in the observer \(B\)'s rest frame. Actually, \(\bar{\vartheta}\) is the angle between \(\bar{e}^{\mu}\) and the direction of the source \(-\bar{\gamma}^{\mu}\). The formula for the observer \(A\) at rest can also be obtained in the following way. Using (2.11) and (2.20), \[\bar{\omega} = -k_{\mu}\bar{u}^{\mu} \tag{2.32}\] \[= -\omega\left(u^{\mu}+\gamma^{\mu}\right)\frac{u^{\mu}+Ve^{\mu}}{ \sqrt{1-V^{2}}}\] (2.33) \[= \omega\frac{1-V\cos\theta}{\sqrt{1-V^{2}}}\,, \tag{2.34}\] where \(\cos\theta\equiv\gamma_{\mu}e^{\mu}\) and \(\theta\) represents the angle between the direction of motion and the direction of light propagation in observer \(A\)'s rest frame. The Doppler formula for the moving source is then \[\omega=\bar{\omega}\frac{\sqrt{1-V^{2}}}{1+V\cos\vartheta}\,, \tag{2.35}\] where \(\vartheta\equiv\pi-\theta\) is the angle of incidence in observer \(A\)'s rest frame. Using (2.31) and (2.35) and eliminating \(\bar{\omega}/\omega\), we obtain the formula for the aberration of light: \[\cos\bar{\vartheta}=\frac{\cos\vartheta+V}{1+V\cos\vartheta}\,. \tag{2.36}\] A useful inequality for \(0<\vartheta<\pi\), \(0<\bar{\vartheta}<\pi\) is \[\cos\bar{\vartheta}-\cos\vartheta = \frac{V\sin^{2}\vartheta}{1+V\cos\vartheta}>0\,, \tag{2.37}\] \[\therefore \bar{\vartheta} < \vartheta\,. \tag{2.38}\] The formulae (2.31), (2.35), and (2.36) look quite well known and seem nothing new. However, we emphasize that all quantities \(\omega,\bar{\omega},\vartheta,\bar{\vartheta}\), and \(V\) are defined as the invariant 4-scalars. The formulae hold in any coordinate systems in any spacetime, in general relativity as well as in special relativity. ## 3 The gravitational redshift and the Doppler effect in the Schwarzschild spacetime ### The metric of the Schwarzschild spacetime The metric of the Schwarzschild spacetime is \[ds^{2}=-\left(1-\frac{r_{g}}{r}\right)dt^{2}+\frac{dr^{2}}{1-\frac{r_{g}}{r}}+ r^{2}(d\theta^{2}+\sin^{2}\theta\,d\phi^{2})\,, \tag{3.1}\] where \(r_{g}\equiv 2GM\) is the Schwarzschild radius. 
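Before these formulas are specialized to the Schwarzschild geometry below, a short numerical sanity check is possible: since (2.31), (2.35), and (2.36) are stated in terms of invariant 4-scalars, they must be mutually consistent for any \(V\) and \(\vartheta\). The sketch below verifies this; the values of \(V\) and \(\vartheta\) are arbitrary test inputs, not quantities taken from the text.

```python
import numpy as np

# Consistency check of the frame-independent formulas (2.31), (2.35) and (2.36).
for V in (0.1, 0.5, 0.9):
    for vartheta in np.linspace(0.1, 3.0, 7):          # angle of incidence in A's rest frame
        cos_bar = (np.cos(vartheta) + V) / (1.0 + V * np.cos(vartheta))   # aberration (2.36)
        obs = np.sqrt(1.0 - V**2) / (1.0 - V * cos_bar)                   # (2.31): bar-omega over omega
        src = np.sqrt(1.0 - V**2) / (1.0 + V * np.cos(vartheta))          # (2.35): omega over bar-omega
        assert np.isclose(obs * src, 1.0)              # the two Doppler formulas agree
        assert np.arccos(cos_bar) < vartheta           # aberration inequality (2.38)
print("(2.31), (2.35) and (2.36) are mutually consistent for all test values")
```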
### The solution of the null geodesic equation Thanks to the spherically symmetric property of the Schwarzschild spacetime, we may consider the orbit of light to be confined to the equatorial plane. Then, \[x^{2} = \theta=\frac{\pi}{2}\,, \tag{3.2}\] \[k^{2} = \frac{dx^{2}}{dv}=0\,. \tag{3.3}\] The Schwarzschild metric (3.1) does not depend on \(x^{0}=t\), \(x^{3}=\phi\), then the following conserved quantities are immediately obtained from (2.4). \[k_{0} = \mbox{const.}\equiv-\omega_{c}\,,\ \ \therefore\ \ k^{0}=\frac{\omega_{c}}{1-\frac{r_{g}}{r}}\,, \tag{3.4}\] \[k_{3} = \mbox{const.}\equiv l\,,\ \ \therefore\ \ k^{3}=\frac{l}{r^{2}}\,. \tag{3.5}\] Finally, the null condition \(k_{\mu}k^{\mu}=0\) provides the equation for \(k^{1}\) as follows: \[\left(k^{1}\right)^{2}=\left(\frac{dr}{dv}\right)^{2}=\omega_{c}^{2}-\left(1-\frac{r_{g}}{r}\right)\frac{l^{2}}{r^{2}}\,. \tag{3.6}\] Particularly for radially propagating light, setting \(l=0\) yields \[k_{\mu}=\left(k_{0},k_{1},0,0\right)=\left(-\omega_{c},\pm\frac{\omega_{c}}{1-\frac{r_{g}}{r}},0,0\right)\,. \tag{3.7}\] ### The 4-velocity of the observer at rest If an observer is at rest in the Schwarzschild spacetime, the 4-velocity is written as \[u^{\mu}=\left(\frac{1}{\sqrt{1-\frac{r_{g}}{r}}},0,0,0\right)\,. \tag{3.8}\] ### The 4-velocity of the moving observer on the geodesic In order to distinguish a moving observer from an observer at rest, we express the 4-velocity of the moving observer with a bar (\(\bar{}\)), as \(\bar{u}^{\mu}\). If the observer is moving in the gravitational field without any other force except gravity, it obeys the geodesic equation (2.9). Again, we may consider the trajectory to be confined to the equatorial plane. Then, \[x^{2} = \theta=\frac{\pi}{2}\,, \tag{3.9}\] \[\bar{u}^{2} = \frac{dx^{2}}{d\tau}=0\,, \tag{3.10}\] and the following conserved quantities are immediately obtained from (2.9) \[\bar{u}_{0} = \mbox{const.}\equiv-\epsilon\,,\;\;\therefore\;\;\;\bar{u}^{0}=\frac{\epsilon}{1-\frac{r_{g}}{r}}\,, \tag{3.11}\] \[\bar{u}_{3} = \mbox{const.}\equiv\ell\,,\;\;\therefore\;\;\;\bar{u}^{3}=\frac{\ell}{r^{2}}\,. \tag{3.12}\] The condition \(\bar{u}_{\mu}\bar{u}^{\mu}=-1\) provides the equation for \(\bar{u}^{1}\) as \[\left(\bar{u}^{1}\right)^{2}=\left(\frac{dr}{d\tau}\right)^{2}=\epsilon^{2}-\left(1-\frac{r_{g}}{r}\right)\left(1+\frac{\ell^{2}}{r^{2}}\right)\,. \tag{3.13}\] #### 3.4.1 The 4-velocity of the moving observer on the radial geodesic. If the observer is moving in the radial direction, \(\ell=0\). Then, (3.13) is \[\left(\bar{u}^{1}\right)^{2}=\epsilon^{2}-\left(1-\frac{r_{g}}{r}\right)\,. \tag{3.14}\] As the initial condition, we impose \(\bar{u}^{1}=0\) at \(r=r_{i}\). Then, \[\epsilon=\sqrt{1-\frac{r_{g}}{r_{i}}}\,. \tag{3.15}\] Therefore, the 4-velocity \(\bar{u}^{\mu}\) of the moving observer on the radial geodesic is \[\bar{u}^{\mu}=\left(\bar{u}^{0},\bar{u}^{1},0,0\right)=\left(\frac{\sqrt{1-\frac{r_{g}}{r_{i}}}}{1-\frac{r_{g}}{r}},-\sqrt{\frac{r_{g}}{r}-\frac{r_{g}}{r_{i}}},0,0\right)\,. \tag{3.16}\] For later use, we calculate the Lorentz factor of the radial geodesic motion: \[\gamma\equiv\frac{1}{\sqrt{1-V^{2}}}=-u_{\mu}(r)\bar{u}^{\mu}(r)=\sqrt{\frac{1-\frac{r_{g}}{r_{i}}}{1-\frac{r_{g}}{r}}}\,. \tag{3.17}\] #### 3.4.2 The 4-velocity of the moving observer on the circular geodesic. For circular motion, \(r=\mathrm{const.}\), (3.13) reads \[\epsilon^{2}=\left(1-\frac{r_{g}}{r}\right)\left(1+\frac{\ell^{2}}{r^{2}}\right)\,.
\tag{3.18}\] Differentiating it with \(r\), we obtain \[\frac{r_{g}}{r}-\frac{\ell^{2}}{r^{2}}\left(2-3\frac{r_{g}}{r}\right)=0\,. \tag{3.19}\] Solving the simultaneous equations (3.18) and (3.19), we obtain \[\frac{\ell}{r} = \sqrt{\frac{\frac{1}{2}\frac{r_{g}}{r}}{1-\frac{3}{2}\frac{r_{g}} {r}}}\,, \tag{3.20}\] \[\epsilon = \frac{1-\frac{r_{g}}{r}}{\sqrt{1-\frac{3}{2}\frac{r_{g}}{r}}}\,. \tag{3.21}\] Therefore, the 4-velocity \(\bar{u}^{\mu}\) of the moving observer on the circular geodesic is \[\bar{u}^{\mu}=(\bar{u}^{0},0,0,\bar{u}^{3})=\left(\frac{1}{\sqrt{1-\frac{3}{ 2}\frac{r_{g}}{r}}},0,0,\frac{1}{r}\frac{\sqrt{\frac{1}{2}\frac{r_{g}}{r}}}{ \sqrt{1-\frac{3}{2}\frac{r_{g}}{r}}}\right)\,. \tag{3.22}\] For later use, we calculate the Lorentz factor of the circular geodesic motion: \[\gamma\equiv\frac{1}{\sqrt{1-V^{2}}} = -u_{\mu}(r)\bar{u}^{\mu}(r) \tag{3.23}\] \[= \frac{\sqrt{1-\frac{r_{g}}{r}}}{\sqrt{1-\frac{3}{2}\frac{r_{g}}{ r}}}\,. \tag{3.24}\] #### 3.4.3 The 4-velocity of the moving observer on the non-circular geodesic. For non-circular motion, the geodesic equation cannot be solved analytically. As long as the geodesic motion is on the bound orbit, however, there must be the maximum and minimum values of \(r\), \(r_{\mathrm{max}}\) and \(r_{\mathrm{min}}\) at the points of \(\frac{dr}{d\tau}=0\). Then, from (3.13), \[\epsilon^{2}-\left(1-\frac{r_{g}}{r_{\mathrm{max}}}\right)\left( 1+\frac{\ell^{2}}{r_{\mathrm{max}}^{2}}\right) = 0\,, \tag{3.25}\] \[\epsilon^{2}-\left(1-\frac{r_{g}}{r_{\mathrm{min}}}\right)\left( 1+\frac{\ell^{2}}{r_{\mathrm{min}}^{2}}\right) = 0\,. \tag{3.26}\] For later use, we solve these equations with respect to \(\epsilon\) up to the linear order of \(r_{g}\): \[\epsilon\simeq 1-\frac{r_{g}}{2(r_{\mathrm{max}}+r_{\mathrm{min}})}=1-\frac{r_{ g}}{4a}\,, \tag{3.27}\] where \[a\equiv\frac{r_{\mathrm{max}}+r_{\mathrm{min}}}{2} \tag{3.28}\] corresponds to the semi-major axis in the Newtonian theory. ### The gravitational redshift (or blueshift) Consider a light source at \(r=r_{1}\) and an observer at \(r=r_{2}\). Both source and observer are at rest. The source emits light denoted by \(k_{\mu}\) at \(r_{1}\), and it is received by the observer at rest at \(r=r_{2}\). Using (3.7) and (3.8), the frequency at the source is \[\omega_{1}=-k_{\mu}(r_{1})u^{\mu}(r_{1})=\frac{\omega_{c}}{\sqrt{1-\frac{r_{s} }{r_{1}}}}\,. \tag{3.29}\] The frequency of the same light denoted by \(k_{\mu}\), received at \(r=r_{2}\) by the observer at rest, is \[\omega_{2}=-k_{\mu}(r_{2})u^{\mu}(r_{2})=\frac{\omega_{c}}{\sqrt{1-\frac{r_{s }}{r_{2}}}}\,. \tag{3.30}\] The ratio is \[\frac{\omega_{2}}{\omega_{1}}=\frac{\sqrt{1-\frac{r_{s}}{r_{1}}}}{\sqrt{1- \frac{r_{s}}{r_{2}}}}\,\left\{\begin{array}{l}<1\,\,\,\,\,(\mbox{for $r_{g}<r_{1}<r_{2}$})\\ >1\,\,\,\,\,(\mbox{for $r_{g}<r_{2}<r_{1}$})\end{array}\right.\,. \tag{3.31}\] Both source and observer are at rest, and there is no relative motion. Still, the observed frequency at \(r=r_{2}\) is different from that of the source at \(r=r_{1}\). For \(r_{g}<r_{1}<r_{2}\), the observed frequency is smaller and it is called the gravitational redshift. On the other hand, for \(r_{g}<r_{2}<r_{1}\), it is called the gravitational blueshift. ### The Doppler effect due to moving observer on the radial geodesic #### 3.6.1 The case of the observer approaching to the source. Consider a light source at rest at \(r=r_{1}\). 
The source emits light radially outward denoted by \(k_{\mu}\) at \(r_{1}\), and it is received by the observer in radial geodesic motion with \(\bar{u}^{\mu}\) at \(r=r_{2}>r_{1}\). From (3.7), \(k_{\mu}\) for outgoing light is \[k_{\mu}=\left(-\omega_{c},+\frac{\omega_{c}}{1-\frac{r_{s}}{r}},0,0\right)\,. \tag{3.32}\] Using (3.16), the frequency \(\bar{\omega}_{2}\) observed by the moving observer at \(r=r_{2}\) is \[\bar{\omega}_{2} = -k_{\mu}(r_{2})\bar{u}^{\mu}(r_{2}) \tag{3.33}\] \[= \frac{\omega_{c}}{1-\frac{r_{s}}{r_{2}}}\left(\sqrt{1-\frac{r_{g} }{r_{i}}}+\sqrt{\frac{r_{g}}{r_{2}}-\frac{r_{g}}{r_{i}}}\right)\,, \tag{3.34}\] where we have assumed \(r_{i}>r_{2}\). The ratio of the source and the observed frequencies can be divided into two parts: \[\frac{\bar{\omega}_{2}}{\omega_{1}} = \frac{\omega_{2}}{\omega_{1}}\cdot\frac{\bar{\omega}_{2}}{\omega_ {2}} \tag{3.35}\] \[= \frac{\sqrt{1-\frac{r_{s}}{r_{1}}}}{\sqrt{1-\frac{r_{s}}{r_{2}}}} \cdot\frac{1}{\sqrt{1-\frac{r_{s}}{r_{2}}}}\left(\sqrt{1-\frac{r_{g}}{r_{i}}} +\sqrt{\frac{r_{g}}{r_{2}}-\frac{r_{g}}{r_{i}}}\right)\] (3.36) \[> \frac{\sqrt{1-\frac{r_{s}}{r_{1}}}}{\sqrt{1-\frac{r_{g}}{r_{2}}} }\,, \tag{3.37}\] where the first part is the gravitational redshift, and the second part denotes the Doppler effect due to the observer's motion towards the source. #### 3.6.2 The case of the observer moving away from the source. Consider a light source at rest at \(r=r_{1}\). The source emits light radially inward denoted by \(k_{\mu}\) at \(r_{1}\), and it is received by the observer in radial geodesic motion with \(\bar{u}^{\mu}\) at \(r=r_{2}<r_{1}\). From (3.7), \(k_{\mu}\) for ingoing light is \[k_{\mu}=\left(-\omega_{c},-\frac{\omega_{c}}{1-\frac{r_{g}}{r}},0,0\right)\,. \tag{3.38}\] Using (3.16), the frequency \(\bar{\omega}_{2}\) observed by the moving observer at \(r=r_{2}\) is \[\bar{\omega}_{2} = -k_{\mu}(r_{2})\bar{u}^{\mu}(r_{2}) \tag{3.39}\] \[= \frac{\omega_{c}}{1-\frac{r_{g}}{r_{2}}}\left(\sqrt{1-\frac{r_{g} }{r_{i}}}-\sqrt{\frac{r_{g}}{r_{2}}-\frac{r_{g}}{r_{i}}}\right)\,, \tag{3.40}\] where we have assumed \(r_{i}>r_{2}\). The ratio of the frequencies is divided into two parts: \[\frac{\bar{\omega}_{2}}{\omega_{1}} = \frac{\omega_{2}}{\omega_{1}}\cdot\frac{\bar{\omega}_{2}}{\omega _{2}} \tag{3.41}\] \[= \frac{\sqrt{1-\frac{r_{g}}{r_{1}}}}{\sqrt{1-\frac{r_{g}}{r_{2}}} }\cdot\frac{1}{\sqrt{1-\frac{r_{g}}{r_{2}}}}\left(\sqrt{1-\frac{r_{g}}{r_{i}} }-\sqrt{\frac{r_{g}}{r_{2}}-\frac{r_{g}}{r_{i}}}\right)\] (3.42) \[< \frac{\sqrt{1-\frac{r_{g}}{r_{1}}}}{\sqrt{1-\frac{r_{g}}{r_{2}}} }\,, \tag{3.43}\] where the first part is the gravitational blueshift, and the second part denotes the Doppler effect due to the moving observer away from the source. ### The Doppler effect due to the moving source on the radial geodesic #### 3.7.1 The case of the source approaching to the observer. Consider two sources. One is at rest at \(r=r_{1}\) and the emitted light is denoted by \(k_{\mu}\) of (3.38). The other is moving with 4-velocity (3.16) towards the observer and emits light inward, which is denoted by \(k^{\prime}_{\mu}\) as follows: \[k^{\prime}_{\mu}=\left(-\omega^{\prime}_{c},-\frac{\omega^{\prime}_{c}}{1- \frac{r_{g}}{r}},0,0\right)\,. 
\tag{3.44}\] The frequencies observed in the source rest frames, respectively at \(r=r_{1}\), are \[\omega_{1} = -k_{\mu}(r_{1})u^{\mu}(r_{1})=\frac{\omega_{c}}{\sqrt{1-\frac{r_ {g}}{r_{1}}}}\,, \tag{3.45}\] \[\bar{\omega}^{\prime}_{1} = -k^{\prime}_{\mu}(r_{1})\bar{u}^{\mu}(r_{1})=\frac{\omega^{\prime }_{c}}{1-\frac{r_{g}}{r_{1}}}\left(\sqrt{1-\frac{r_{g}}{r_{i}}}-\sqrt{\frac{r_ {g}}{r_{1}}-\frac{r_{g}}{r_{i}}}\right) \tag{3.46}\] We assume the two frequencies are the same at \(r=r_{1}\). Namely, we start with the lights with the same frequency, irrespective of the motion of the sources. Then, \[\omega_{1} \equiv \bar{\omega}^{\prime}_{1}\,, \tag{3.47}\] \[\therefore\ \ \omega^{\prime}_{c} = \frac{\omega_{c}}{\sqrt{1-\frac{r_{g}}{r_{1}}}}\left(\sqrt{1- \frac{r_{g}}{r_{i}}}+\sqrt{\frac{r_{g}}{r_{1}}-\frac{r_{g}}{r_{i}}}\right)\,. \tag{3.48}\] The sources emit lights radially inward, and are received by the observer at rest at \(r=r_{2}<r_{1}\). The observed frequencies at \(r=r_{2}\) are \[\omega_{2} = -k_{\mu}(r_{2})u^{\mu}(r_{2})=\frac{\omega_{c}}{\sqrt{1-\frac{r_{ \sigma}}{r_{2}}}}\,, \tag{3.49}\] \[\omega_{2}^{\prime} = -k_{\mu}^{\prime}(r_{2})u^{\mu}(r_{2})=\frac{\omega_{c}^{\prime} }{\sqrt{1-\frac{r_{\sigma}}{r_{2}}}}\,. \tag{3.50}\] The ratio of the source and the observed frequencies of the light emitted from the moving source is \[\frac{\omega_{2}^{\prime}}{\bar{\omega}_{1}^{\prime}} = \frac{\omega_{2}^{\prime}}{\omega_{1}} \tag{3.51}\] \[= \frac{\omega_{2}}{\omega_{1}}\cdot\frac{\omega_{2}^{\prime}}{ \omega_{2}}=\frac{\omega_{2}}{\omega_{1}}\cdot\frac{\omega_{c}^{\prime}}{ \omega_{c}}\] (3.52) \[= \frac{\sqrt{1-\frac{r_{\sigma}}{r_{1}}}}{\sqrt{1-\frac{r_{ \sigma}}{r_{2}}}}\cdot\frac{1}{\sqrt{1-\frac{r_{\sigma}}{r_{1}}}}\left(\sqrt{ 1-\frac{r_{\sigma}}{r_{1}}}+\sqrt{\frac{r_{\sigma}}{r_{1}}-\frac{r_{\sigma}} {r_{i}}}\right)\] (3.53) \[> \frac{\sqrt{1-\frac{r_{\sigma}}{r_{1}}}}{\sqrt{1-\frac{r_{\sigma} }{r_{2}}}}\,. \tag{3.54}\] The ratio is divided into two parts: the first part is the gravitational blueshift, and the second part denotes the Doppler effect due to the moving source towards the observer. #### 3.7.2 The case of the source moving away from the observer Consider two sources. One is at rest at \(r=r_{1}\) and the emitted light is denoted by \(k_{\mu}\) of (3.32). The other is moving with 4-velocity (3.16) away from the observer, and emits light outward which is denoted by \(k_{\mu}^{\prime}\) as follows: \[k_{\mu}^{\prime}=\left(-\omega_{c}^{\prime},+\frac{\omega_{c}^{ \prime}}{1-\frac{r_{\sigma}}{r}},0,0\right)\,. \tag{3.55}\] The frequencies observed in the source rest frames, respectively at \(r=r_{1}\), are \[\omega_{1} = -k_{\mu}(r_{1})u^{\mu}(r_{1})=\frac{\omega_{c}}{\sqrt{1-\frac{r_ {\sigma}}{r_{1}}}}\,, \tag{3.56}\] \[\bar{\omega}_{1}^{\prime} = -k_{\mu}^{\prime}(r_{1})\bar{u}^{\mu}(r_{1})=\frac{\omega_{c}^{ \prime}}{1-\frac{r_{\sigma}}{r_{1}}}\left(\sqrt{1-\frac{r_{\sigma}}{r_{i}}}+ \sqrt{\frac{r_{\sigma}}{r_{1}}-\frac{r_{\sigma}}{r_{i}}}\right)\,. \tag{3.57}\] We assume the two frequencies are the same at \(r=r_{1}\). Then, \[\omega_{1} \equiv \bar{\omega}_{1}^{\prime}\,, \tag{3.58}\] \[\therefore\ \ \omega_{c}^{\prime} = \frac{\omega_{c}}{\sqrt{1-\frac{r_{\sigma}}{r_{1}}}}\left(\sqrt{ 1-\frac{r_{\sigma}}{r_{i}}}-\sqrt{\frac{r_{\sigma}}{r_{1}}-\frac{r_{\sigma}}{r _{i}}}\right)\,. \tag{3.59}\] The sources emit lights radially outward, and are received by the observer at rest at \(r=r_{2}>r_{1}\). 
The observed frequencies at \(r=r_{2}\) are \[\omega_{2} = -k_{\mu}(r_{2})u^{\mu}(r_{2})=\frac{\omega_{c}}{\sqrt{1-\frac{r_{ \mu}}{r_{2}}}}\,, \tag{3.60}\] \[\omega_{2}^{\prime} = -k_{\mu}^{\prime}(r_{2})u^{\mu}(r_{2})=\frac{\omega_{c}^{\prime}} {\sqrt{1-\frac{r_{\mu}}{r_{2}}}}\,. \tag{3.61}\] The ratio of the source and the observed frequencies of the light emitted from the moving source is \[\frac{\omega_{2}^{\prime}}{\bar{\omega}_{1}^{\prime}} = \frac{\omega_{2}^{\prime}}{\omega_{1}} \tag{3.62}\] \[= \frac{\omega_{2}}{\omega_{1}}\cdot\frac{\omega_{2}^{\prime}}{ \omega_{2}}=\frac{\omega_{2}}{\omega_{1}}\cdot\frac{\omega_{c}^{\prime}}{ \omega_{c}}\] (3.63) \[= \frac{\sqrt{1-\frac{r_{g}}{r_{1}}}}{\sqrt{1-\frac{r_{g}}{r_{2}}} }\cdot\frac{1}{\sqrt{1-\frac{r_{g}}{r_{1}}}}\left(\sqrt{1-\frac{r_{g}}{r_{i}} }-\sqrt{\frac{r_{g}}{r_{1}}-\frac{r_{g}}{r_{i}}}\right)\] (3.64) \[< \frac{\sqrt{1-\frac{r_{g}}{r_{1}}}}{\sqrt{1-\frac{r_{g}}{r_{2}}}}\,. \tag{3.65}\] The ratio is divided into two parts: the first part is the gravitational redshift, and the second part is the Doppler effect due to the moving source away from the observer. ### The transverse Doppler effect due to moving source on the circular geodesic Let us consider two light sources. One is at rest at \(r=r_{1}\) and emits light radially with \(k_{\mu}\) of (3.32). The other is in circular geodesic motion of radius \(r_{1}\) with 4-velocity \(\bar{u}^{\mu}\) of (3.22), and it also emits light with \(\tilde{k}_{\mu}\) of (3.55). The frequencies at \(r=r_{1}\) in the source rest frames are \[\omega_{1} = -k_{\mu}(r_{1})u^{\mu}(r_{1})=\frac{\omega_{c}}{\sqrt{1-\frac{r_ {g}}{r_{1}}}}\,, \tag{3.66}\] \[\bar{\omega}_{1}^{\prime} = -k_{\mu}^{\prime}(r_{1})\bar{u}^{\mu}(r_{1})=\frac{\omega_{c}^{ \prime}}{\sqrt{1-\frac{3}{2}\frac{r_{g}}{r_{1}}}}\,. \tag{3.67}\] We assume the two frequencies are the same at \(r=r_{1}\). Then, \[\omega_{1} \equiv \bar{\omega}_{1}^{\prime}\,, \tag{3.68}\] \[\therefore\ \ \omega_{c}^{\prime} = \omega_{c}\frac{\sqrt{1-\frac{3}{2}\frac{r_{g}}{r_{1}}}}{\sqrt{1- \frac{r_{g}}{r_{1}}}}\,. \tag{3.69}\] The observer at rest at \(r=r_{2}\) receives the lights. For the sake of simplicity, here we only consider the case \(r_{1}<r_{2}\). The observed frequencies are \[\omega_{2} = -k_{\mu}u^{\mu}(r_{2})=\frac{\omega_{c}}{\sqrt{1-\frac{r_{s}}{r_{2 }}}}\,, \tag{3.70}\] \[\omega_{2}^{\prime} = -k_{\mu}^{\prime}u^{\mu}(r_{2})=\frac{\omega_{c}^{\prime}}{\sqrt{ 1-\frac{r_{s}}{r_{2}}}}\,. \tag{3.71}\] The ratio of the frequencies of light emitted from the source in circular geodesic motion is \[\frac{\omega_{2}^{\prime}}{\bar{\omega}_{1}^{\prime}} = \frac{\omega_{2}^{\prime}}{\omega_{1}} \tag{3.72}\] \[= \frac{\omega_{2}}{\omega_{1}}\cdot\frac{\omega_{2}^{\prime}}{ \omega_{2}}=\frac{\omega_{2}}{\omega_{1}}\cdot\frac{\omega_{c}^{\prime}}{ \omega_{c}}\] (3.73) \[= \frac{\sqrt{1-\frac{r_{s}}{r_{1}}}}{\sqrt{1-\frac{r_{s}}{r_{2}}} }\cdot\frac{\sqrt{1-\frac{3}{2}\frac{r_{s}}{r_{1}}}}{\sqrt{1-\frac{r_{s}}{r_ {1}}}}\] (3.74) \[< \frac{\sqrt{1-\frac{r_{s}}{r_{1}}}}{\sqrt{1-\frac{r_{s}}{r_{2}}} }\,. \tag{3.75}\] The ratio is divided into two parts. The first part is the gravitational redshift as usual. For the observer at rest, the direction of the source's circular motion \(e^{\mu}\) is perpendicular to the radial direction of the emitted light \(\gamma^{\mu}\), hence \(\cos\vartheta=0\). Therefore, the second part can be regarded as the transverse Doppler effect due to the moving source. 
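The decomposition above can be checked numerically. Assuming illustrative radii \(r_{1}=4r_{g}\) and \(r_{2}=10r_{g}\) (these values are arbitrary and not taken from the text), the sketch below confirms that the second factor in (3.74) coincides with a transverse Doppler factor \(\sqrt{1-V^{2}}\), where \(V\) is obtained from the Lorentz factor (3.24); this anticipates the compatibility argument of the next subsection.

```python
import numpy as np

# Numerical check of (3.74): gravitational redshift (3.31) times a transverse
# Doppler factor sqrt(1 - V^2), with V obtained from the Lorentz factor (3.24).
r_g, r1, r2 = 1.0, 4.0, 10.0                                   # illustrative values, r1 < r2

grav = np.sqrt(1 - r_g / r1) / np.sqrt(1 - r_g / r2)           # gravitational redshift (3.31)
gamma = np.sqrt(1 - r_g / r1) / np.sqrt(1 - 1.5 * r_g / r1)    # Lorentz factor (3.24)
V = np.sqrt(1 - 1 / gamma**2)                                  # orbital speed seen by the static observer
combined = np.sqrt(1 - 1.5 * r_g / r1) / np.sqrt(1 - r_g / r2) # right-hand side of (3.74)

assert np.isclose(grav * np.sqrt(1 - V**2), combined)
print(f"omega_2'/omega_1' = {combined:.6f} at r_1 = 4 r_g, r_2 = 10 r_g (V = {V:.3f})")
```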
### Compatibility with the Doppler formula We already derived the formula for the Doppler effect in Sec. 2.6 for the relative velocity \(V\) between the source and the observer. On the other hand, we used the solutions of the geodesic equation in this section. Then the frequencies are expressed as the functions of \(r\), and \(V\) does not appear explicitly. Here we examine the compatibility with the Doppler formula and the results obtained in the above Sec. 3.6-3.8. #### 3.9.1 The cases of the longitudinal Doppler effect. The Doppler term for the the observer approaching radially to the source at \(r=r_{2}\) is derived in (3.36): \[\frac{\bar{\omega}_{2}}{\omega_{2}}=\frac{1}{\sqrt{1-\frac{r_{s}}{r_{2}}}} \left(\sqrt{1-\frac{r_{g}}{r_{i}}}+\sqrt{\frac{r_{g}}{r_{2}}-\frac{r_{g}}{r_{ i}}}\right)\,. \tag{3.76}\] Using the Doppler formula (2.31) and setting \(\bar{\vartheta}=0\), the Doppler effect is \[\frac{\bar{\omega}}{\omega}=\sqrt{\frac{1+V}{1-V}}\,. \tag{3.77}\] The explicit form of \(V\) in terms of \(r_{2}\) can be obtained from the definition of the Lorentz factor (3.17). \[\gamma\equiv\frac{1}{\sqrt{1-V^{2}}} = -u_{\mu}(r_{2})\bar{u}^{\mu}(r_{2}) \tag{3.78}\] \[= \sqrt{\frac{1-\frac{r_{g}}{r_{i}}}{1-\frac{r_{g}}{r_{2}}}}\,,\] (3.79) \[\therefore V = \sqrt{\frac{\frac{r_{g}}{r_{2}}}{1-\frac{r_{g}}{r_{i}}}{r_{i}}}\,. \tag{3.80}\] Inserting (3.80) into (3.77), we can obtain the right-hand-side of (3.76). Therefore, the longitudinal Doppler effect (3.76) is compatible with the Doppler formula (3.77). In the same way, we can also show the longitudinal Doppler effects (3.42), (3.53), and (3.64) are compatible with the Doppler formula. #### 3.9.2 The case of the transverse Doppler effect. The transverse Doppler effect due to the moving source is derived in (3.74): \[\frac{\omega_{c}^{\prime}}{\omega_{c}}=\frac{\sqrt{1-\frac{3}{2}\frac{r_{g}}{ r_{1}}}}{\sqrt{1-\frac{r_{g}}{r_{1}}}}\,. \tag{3.81}\] Using the Doppler formula (2.35) and setting \(\vartheta=\pi/2\), the transverse Doppler effect is \[\frac{\omega}{\tilde{\omega}}=\sqrt{1-V^{2}}\,. \tag{3.82}\] The explicit form of \(V\) in terms of \(r_{1}\) can be obtained from the definition of the Lorentz factor (3.24). \[\gamma\equiv\frac{1}{\sqrt{1-V^{2}}} = -u_{\mu}(r_{1})\bar{u}^{\mu}(r_{1}) \tag{3.83}\] \[= \frac{\sqrt{1-\frac{r_{g}}{r_{1}}}}{\sqrt{1-\frac{3}{2}\frac{r_{ g}}{r_{1}}}}\,. \tag{3.84}\] Inserting (3.84) into (3.82), it is apparent that the transverse Doppler effect (3.81) is compatible with the Doppler formula (3.82). ## 4 The time dilation in the Schwarzschild spacetime ### The gravitational time dilation Consider two clocks at rest at different positions in the gravitational field. The elapsed time \(\Delta T\) of clock is defined to be inversely proportional to the frequency \(\omega\) of a particular light or electromagnetic wave observed in the clock's rest frame: \[\Delta T\propto\frac{1}{\omega}\,. \tag{4.1}\] Then, the ratio of the elapsed times of the clocks at rest at \(r_{1}\), and \(r_{2}\) is \[\frac{\Delta T_{1}}{\Delta T_{2}}=\frac{\omega_{2}}{\omega_{1}}=\frac{\sqrt{1 -\frac{r_{g}}{r_{1}}}}{\sqrt{1-\frac{r_{g}}{r_{2}}}}\,. \tag{4.2}\] Therefore, \(\Delta T_{1}<\Delta T_{2}\) for \(r_{g}<r_{1}<r_{2}\). Clocks in the strong gravitational gravitational field (i.e., near the strong gravitational source) tick slowly. This is called the gravitational time dilation. ### The kinetic time dilation Consider two clocks at the same point in the gravitational field. 
Assume that clock \(A\) is at rest with \(u^{\mu}\) and ticks \(\Delta T\), and the other clock \(B\) is moving with \(\bar{u}^{\mu}\) of (2.11) relative to clock \(A\), ticking \(\Delta\bar{T}\). As is explained in Appendix B.2, the ratio of the elapsed times is \[\frac{\Delta\bar{T}}{\Delta T}=\sqrt{1-V^{2}}=\frac{1}{-u_{\mu}\bar{u}^{\mu}}\,. \tag{4.3}\] Thus \(\Delta\bar{T}<\Delta T\): clocks in motion tick slowly. This is called the kinetic (or special relativistic) time dilation. ### Time dilation of the moving clock on the radial orbit Assume that clock \(A\) is at rest at \(r=r_{i}\) and ticking \(\Delta T_{i}\), and clock \(B\) is moving on the radial geodesic with the initial condition \(\bar{u}^{1}=0\) at \(r=r_{i}\), ticking \(\Delta\bar{T}\) at \(r\). The ratio of the elapsed times is \[\frac{\Delta\bar{T}}{\Delta T_{i}} = \frac{\Delta T}{\Delta T_{i}}\cdot\frac{\Delta\bar{T}}{\Delta T}\,, \tag{4.4}\] \[= \frac{\sqrt{1-\frac{r_{g}}{r}}}{\sqrt{1-\frac{r_{g}}{r_{i}}}}\cdot\frac{1}{-u_{\mu}(r)\bar{u}^{\mu}(r)}\,, \tag{4.5}\] \[= \frac{1-\frac{r_{g}}{r}}{1-\frac{r_{g}}{r_{i}}}\,, \tag{4.6}\] where we have used (3.17). ### Time dilation of the moving clock on the circular orbit Assume that clock \(A\) is at rest at \(r=r_{1}\) and ticking \(\Delta T_{1}\), and clock \(B\) is moving on the circular orbit of radius \(r\), ticking \(\Delta\bar{T}\). The ratio of the elapsed times is \[\frac{\Delta\bar{T}}{\Delta T_{1}} = \frac{\Delta T}{\Delta T_{1}}\cdot\frac{\Delta\bar{T}}{\Delta T}\,, \tag{4.7}\] \[= \frac{\sqrt{1-\frac{r_{g}}{r}}}{\sqrt{1-\frac{r_{g}}{r_{1}}}}\cdot\frac{1}{-u_{\mu}(r)\bar{u}^{\mu}(r)}\,, \tag{4.8}\] \[= \frac{\sqrt{1-\frac{3}{2}\frac{r_{g}}{r}}}{\sqrt{1-\frac{r_{g}}{r_{1}}}}\,, \tag{4.9}\] where we have used (3.24). The time dilation formula (4.9) looks quite well known and seems nothing new. However, we would like to point out that this equation exactly holds without approximation in general relativity. Just for reference, a conventional derivation based on the Newtonian analogy is given in Appendix C. ### Time dilation of the moving clock on the elliptical orbit Assume that clock \(A\) is at rest at \(r=r_{1}\) and ticking \(\Delta T_{1}\), and clock \(B\) is moving on the non-circular bound orbit, ticking \(\Delta\bar{T}\) at \(r\), which changes along the orbit. The Lorentz factor is \[\gamma=-u_{\mu}(r)\bar{u}^{\mu}(r)=\frac{\epsilon}{\sqrt{1-\frac{r_{g}}{r}}}\,. \tag{4.10}\] Then the ratio of the elapsed times is \[\frac{\Delta\bar{T}}{\Delta T_{1}} = \frac{\Delta T}{\Delta T_{1}}\cdot\frac{\Delta\bar{T}}{\Delta T}\,, \tag{4.11}\] \[= \frac{\sqrt{1-\frac{r_{g}}{r}}}{\sqrt{1-\frac{r_{g}}{r_{1}}}}\cdot\frac{1}{-u_{\mu}(r)\bar{u}^{\mu}(r)}\,, \tag{4.12}\] \[= \frac{1-\frac{r_{g}}{r}}{\epsilon\sqrt{1-\frac{r_{g}}{r_{1}}}}\,. \tag{4.13}\] For non-circular motion, the geodesic equation cannot be solved analytically. Using (3.27), the following expression is valid up to the linear order of \(r_{g}\): \[\frac{\Delta\bar{T}}{\Delta T_{1}}\simeq\frac{1}{\sqrt{1-\frac{r_{g}}{r_{1}}}}\left(1+\frac{r_{g}}{4a}-\frac{r_{g}}{r}\right)\,. \tag{4.14}\] For non-circular bound orbits, \(r\) changes along the orbit. Up to the linear order of \(r_{g}\), however, we can treat the orbit as a Newtonian ellipse. Then, the time average of \(1/r\) over one cycle of the elliptical orbit is \[\left\langle\frac{1}{r}\right\rangle=\frac{1}{a}\,, \tag{4.15}\] where \(a\) is the semi-major axis of the elliptical orbit.
Using this result, the ratio of the elapsed times, after the time average per cycle, is \[\frac{\left\langle\Delta\bar{T}\right\rangle}{\Delta T_{1}} \simeq \frac{1}{\sqrt{1-\frac{r_{g}}{r_{1}}}}\left(1+\frac{r_{g}}{4a}- \left\langle\frac{r_{g}}{r}\right\rangle\right) \tag{4.16}\] \[\simeq \frac{\sqrt{1-\frac{3}{2}\frac{r_{g}}{a}}}{\sqrt{1-\frac{r_{g}}{r _{1}}}}\,, \tag{4.17}\] which is valid up to the linear order of \(r_{g}\). The time dilation of the moving clock on elliptical orbit, after averaging per cycle, depends only on the semi-major axis, irrespective of the eccentricity. It is also quite impressive to compare this result (4.17) with (4.9). Replacing the circular radius \(r\) in (4.9) with the semi-major axis \(a\) reproduces the result (4.17). ## 5 The gravitational redshift and the Doppler effect in the expanding universe ### The metric of the expanding universe The Friedmann-Lemaitre-Robertson-Walker (FLRW) metric which describes the expanding universe is \[ds^{2}=-a^{2}(\eta)d\eta^{2}+a^{2}(\eta)\Big{(}d\chi^{2}+\sigma^{2}(\chi)\left( d\theta^{2}+\sin^{2}\theta\,d\phi^{2}\right)\Big{)}\,, \tag{5.1}\] where we use the conformal time coordinate \(\eta\), and \[\sigma(\chi)=\left\{\begin{array}{ll}\frac{\sin(\sqrt{k}\chi)}{\sqrt{k}}&(k>0) \,,\\ \chi&(k=0)\,,\\ \frac{\sinh\left(\sqrt{|k|}\chi\right)}{\sqrt{|k|}}&(k<0)\,,\end{array}\right. \tag{5.2}\] and \(k\) is the curvature constant of the homogeneous and isotropic space. ### The solution of the null geodesic equation Although the metric depends on \(x^{0}=\eta\), we can show \(k_{0}\) is constant: \[\frac{dk_{0}}{dv} = \frac{1}{2}g_{\alpha\beta,0}k^{\alpha}k^{\beta}=\frac{1}{a}\frac {da}{d\eta}g_{\alpha\beta}k^{\alpha}k^{\beta}=0\,, \tag{5.3}\] \[\therefore\;\;k_{0} = \mbox{const.}\equiv-\omega_{c}\,, \tag{5.4}\] where the null condition \(g_{\alpha\beta}k^{\alpha}k^{\beta}=0\) is used. Because of the homogeneous and isotropic property of the FLRW universe, it is sufficient to consider radially propagating null geodesics. Then, \(k^{2}=k^{3}=0\), and the null condition provides the relation \(k^{1}=\pm k^{0}\). Finally, \(k_{\mu}\) for the radially propagating light in the FLRW universe is \[k_{\mu}=\left(-\omega_{c},\pm\omega_{c},0,0\right). \tag{5.5}\] ### The 4-velocity of the comoving observer The spatial components of the comoving observer's 4-velocity vanish, \(u^{i}=0\) by the definition. Using the condition \(u_{\mu}u^{\mu}=-1\), we obtain \[u^{\mu}=\left(\frac{1}{a},0,0,0\right)\,. \tag{5.6}\] ### The 4-velocity of the moving observer with peculiar velocity The 4-velocity of the moving observer \(\bar{u}^{\mu}\) with peculiar velocity \(V\) relative to the comoving observer is expressed by (2.11) or (2.14). ### The cosmological redshift The frequency of light \(\omega\) observed by the comoving observer is, from (5.5) and (5.6), \[\omega\equiv-k_{\mu}u^{\mu}=\frac{\omega_{c}}{a}\,, \tag{5.7}\] which depends only on the time through the scale factor \(a(\eta)\). The ratio of the frequencies at emission \(\omega_{e}\) and at observation \(\omega_{0}\) is \[\frac{\omega_{o}}{\omega_{e}}=\frac{a(\eta_{e})}{a(\eta_{0})}<1\;\mbox{ for }\eta_{e}<\eta_{0}\,. \tag{5.8}\] The redshift \(z\) of a source is defined in terms of the wavelengths, which are inversely proportional to the frequencies, by, \[1+z\equiv\frac{\lambda_{o}}{\lambda_{e}}=\frac{\omega_{e}}{\omega_{o}}=\frac{ a(\eta_{0})}{a(\eta_{e})}>1\,. \tag{5.9}\] This is the well-known formula for the cosmological redshift. 
### The Doppler effect due to the moving observer The redshift \(\bar{z}\) measured by the moving observer with \(\bar{u}^{\mu}\) is, with the help of (2.31), \[1+\bar{z} \equiv \frac{\omega_{e}}{\bar{\omega}_{o}} \tag{5.10}\] \[= \frac{\omega_{e}}{\omega_{o}}\cdot\frac{\omega_{o}}{\bar{\omega}_ {o}}\] (5.11) \[= (1+z)\cdot\frac{1-V\cos\bar{\vartheta}}{\sqrt{1-V^{2}}}\,, \tag{5.12}\] where, as explained in Sec. 2.6, \(\bar{\vartheta}\) is the angle of incidence in the moving observer's rest frame. The dipole anisotropy, which is proportional to \(\cos\bar{\vartheta}\), naturally appears as the Doppler effect due to the observer's peculiar motion. Note that the amplitude for the longitudinal Doppler effect when \(\bar{\vartheta}=0\) is \(V/\sqrt{1-V^{2}}\). We also observe the transverse Doppler effect \[1+\bar{z}=(1+z)\cdot\frac{1}{\sqrt{1-V^{2}}}>1+z\quad\mbox{for $\bar{\vartheta}= \frac{\pi}{2}$}\,. \tag{5.13}\] ## 6 Conclusion We have presented a unified treatment of the gravitational and cosmological redshift, the Doppler effect due to the moving observer or source, and the time dilation in the gravitational field in the framework of general relativity. The unified treatment is simply based on the following two principles: 1. Light obeys the null geodesic equation, i.e., \(k^{\mu}\,_{;\nu}k^{\nu}=0,\ k_{\mu}k^{\mu}=0\). 2. The frequency of light measured by an observer with 4-velocity \(u^{\mu}\) is \(\omega=-k_{\mu}u^{\mu}\). We have applied it to the cases of moving observer or light source in the gravitational field, and obtained the Doppler effect formula with the velocity \(V\) of the observer or the source, in addition to the standard gravitational or cosmological redshift. In particular, the longitudinal and the transverse Doppler effects have explicitly been given which hold in fully general-relativistic situations. We have also examined the time dilation of the moving clock in the gravitational field. We have confirmed that the ratio of the elapsed times \(\Delta\bar{T}\) of the moving clock on circular orbit with radius \(r\) and \(\Delta T_{1}\) of the observer at rest \(r=r_{1}\) is \[\frac{\Delta\bar{T}}{\Delta T_{1}}=\frac{\sqrt{1-\frac{3}{2}\frac{r_{s}}{r}} }{\sqrt{1-\frac{r_{s}}{r_{1}}}}\,,\] which exactly holds without approximation. We have also derived the time dilation of the moving clock on the elliptical orbit with the semi-major axis \(a\). The ratio of the elapsed times, after the time average per cycle, is \[\frac{\langle\Delta\bar{T}\rangle}{\Delta T_{1}}\simeq\frac{\sqrt{1-\frac{3} {2}\frac{r_{s}}{a}}}{\sqrt{1-\frac{r_{s}}{r_{1}}}}\,,\] which holds up to the first order of \(r_{g}\). We have applied our unified treatment to the cosmological redshift and obtained the Doppler effect formulae which exactly hold in the general relativistic framework. We have observed the existence of the transverse Doppler effect due to the observer's peculiar motion in the expanding universe. Needless to say, the unified treatment presented in this paper can also be applied to the special relativistic cases. It means that the special relativistic effects can also be understood without the Lorentz transformation, which are summarized in Appendix B for reader's convenience.
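As a small numerical complement to the orbit average (4.15) underlying the elliptical-orbit result (4.17), the following sketch verifies \(\langle 1/r\rangle=1/a\) on a Keplerian ellipse parameterized by the eccentric anomaly; to the first order of \(r_{g}\) this Newtonian description of the orbit is sufficient, and the values of \(a\) and \(e\) are arbitrary test inputs.

```python
import numpy as np

# Check of the orbit average <1/r> = 1/a used in (4.15): for a Keplerian ellipse
# r = a(1 - e cos E), proper/coordinate time advances as dt proportional to
# (1 - e cos E) dE, so the time-weighted average of 1/r over one cycle is 1/a.
a, e = 1.0, 0.6                              # arbitrary semi-major axis and eccentricity
E = np.linspace(0.0, 2.0 * np.pi, 100001)    # eccentric anomaly over one cycle
r = a * (1.0 - e * np.cos(E))
dt_weight = 1.0 - e * np.cos(E)              # dt/dE up to a constant factor

time_avg_inv_r = np.average(1.0 / r, weights=dt_weight)
assert np.isclose(time_avg_inv_r, 1.0 / a)
print(time_avg_inv_r)                        # equals 1/a, so (4.17) follows from (4.14) after averaging
```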
2303.06713
Uniqueness for the Dafermos regularization viscous wave fan profiles for Riemann solutions of scalar hyperbolic conservation laws
We prove the uniqueness of solutions to the Dafermos regularization viscous wave fan profiles for Riemann solutions of scalar hyperbolic conservation laws. We emphasize that our results are not restricted to the small self-similar viscosity regime. We rely on suitable adaptations of Serrin's sweeping principle and the sliding method from the qualitative theory of semilinear elliptic PDEs. In order to illustrate the delicacy of our result, we prove the existence of an unbounded solution in the case of Burgers equation. Lastly, we can combine aspects of these results in order to give a precise description of the Dafermos regularization of rarefaction waves of Burgers equation.
Christos Sourdis
2023-03-12T17:34:28Z
http://arxiv.org/abs/2303.06713v2
Uniqueness for the Dafermos Regularization Viscous Wave Fan Profiles for Riemann Solutions of Scalar Hyperbolic Conservation Laws ###### Abstract. We prove the uniqueness of solutions to the Dafermos regularization viscous wave fan profiles for Riemann solutions of scalar hyperbolic conservation laws. We emphasize that our results are not restricted to the small self-similar viscosity regime. We rely on suitable adaptations of Serrin's sweeping principle and the sliding method from the qualitative theory of semilinear elliptic PDEs. In order to illustrate the delicacy of our result, we prove the existence of an unbounded solution in the case of Burgers equation. Lastly, we can combine aspects of these results in order to give a precise description of the Dafermos regularization of rarefaction waves of Burgers equation. ## 1. Introduction The Riemann problem for a single conservation law in a single space variable is to determine a self-similar (generally weak) solution \(U\) of \[U_{t}+f(U)_{x}=0,\ \ x\in\mathbb{R},\ t>0, \tag{1}\] with the initial condition \[U(x,0)=\left\{\begin{array}{ll}u_{L}&\mbox{for $x<0$},\\ u_{R}&\mbox{for $x>0$},\end{array}\right. \tag{2}\] where \(f\in C^{1}\) (see for instance [7, Ch. IX] or [15, Ch. 16]). A solution is self-similar if \[U(x,t)=u(\xi),\ \xi=x/t. \tag{3}\] The corresponding Dafermos regularization [6, 8, 16] is \[U_{t}+f(U)_{x}=\varepsilon tU_{xx},\ \ x\in\mathbb{R},\ t>0, \tag{4}\] together with (2), where \(\varepsilon>0\) (typically small). Self-similar solutions of (4) with respect to the scaling in (3) represent viscous wave fan profiles for Riemann solutions. The corresponding ODE problem for \(u\) is \[\varepsilon u_{\xi\xi}=\left(f^{\prime}(u)-\xi\right)u_{\xi},\ \xi\in\mathbb{R};\ u(- \infty)=u_{L},\ u(+\infty)=u_{R}. \tag{5}\] Throughout this paper, solutions to the above ODE are understood in the classical sense (i.e. \(u\in C^{2}(\mathbb{R})\)). Existence of a solution to (5) is known to hold for any \(\varepsilon>0\) by a fixed point argument (see [6, 17]) and such solutions are monotone. 
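Although the results of this paper are obtained analytically, the profile problem (5) is straightforward to approximate numerically, which can be helpful for visualizing the monotone viscous wave fan profiles just mentioned. The following sketch treats the Burgers flux \(f(u)=u^{2}/2\) on a truncated interval using SciPy's collocation boundary value solver; the values of \(\varepsilon\), \(u_{L}\), \(u_{R}\), and the truncation length are arbitrary illustrative choices, not quantities taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Viscous wave fan profile (5) for the Burgers flux f(u) = u^2/2, i.e. f'(u) = u,
# with the boundary states imposed at the ends of a truncated interval [-L, L].
eps, u_L, u_R, L = 0.5, 1.0, -1.0, 8.0       # illustrative choices (shock case u_L > u_R)

def rhs(xi, y):
    # y[0] = u, y[1] = u'; the ODE is eps * u'' = (f'(u) - xi) * u'
    return np.vstack([y[1], (y[0] - xi) * y[1] / eps])

def bc(ya, yb):
    # approximate u(-infty) = u_L and u(+infty) = u_R by conditions at -L and L
    return np.array([ya[0] - u_L, yb[0] - u_R])

xi = np.linspace(-L, L, 201)
y_guess = np.vstack([np.interp(xi, [-L, L], [u_L, u_R]),
                     np.full_like(xi, (u_R - u_L) / (2 * L))])
sol = solve_bvp(rhs, bc, xi, y_guess, max_nodes=20000)
print(sol.status, sol.message)
print("u(0) =", sol.sol(0.0)[0])             # profile value at xi = 0
```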
We first establish the following proposition, which is of independent interest and will be of use in the subsequent Proposition 2 for the study of the singular perturbation problem (6). **Proposition 1**.: _There exists a unique, increasing solution to the ODE in (6) with \(\varepsilon=1\), i.e._ \[u_{\xi\xi}=(u-\xi)u_{\xi},\ \xi\in\mathbb{R}, \tag{7}\] _such that_ \[\max\{0,\xi\}<u(\xi)<\max\{0,\xi\}+Ce^{-c|\xi|}\ \text{in}\ \mathbb{R}, \tag{8}\] _for some constants \(c,C>0\). Actually, the constant \(c\) in the exponent can be taken equal to \(1\) (see Remark 2 for more details)._ Armed with the above proposition, we can employ once more the sliding method in order to establish the following result which refines [10, Thm. 3.1].
**Proposition 2**.: _The solution of (6) with \(u_{L}<u_{R}\) satisfies_ \[u_{\varepsilon}(\xi)=\sqrt{\varepsilon}\text{U}\!\left(\frac{\xi-u_{L}}{ \sqrt{\varepsilon}}\right)+u_{L}+\sqrt{\varepsilon}O\left(e^{-1/\sqrt{ \varepsilon}}\right),\ \xi\leq\frac{u_{L}+u_{R}}{2},\] _and_ \[u_{\varepsilon}(u_{L}+u_{R}-\xi)+u_{\varepsilon}(\xi)=u_{L}+u_{R},\ \xi\in \mathbb{R}, \tag{9}\] _where U is as in Proposition 1, and the (normalized) remainder \(e^{1/\sqrt{\varepsilon}}O\left(e^{-1/\sqrt{\varepsilon}}\right)\) is uniformly bounded in \(\mathbb{R}\) as \(\varepsilon\to 0\)._ The solutions provided by the above proposition converge to _rarefaction waves_ for (1)-(2) as \(\varepsilon\to 0\). On the other hand, if \(u_{L}>u_{R}\) then (1)-(2) admits the following _shock wave_ solution: \[u^{*}(\xi)=\left\{\begin{array}{ll}u_{L},&\xi\leq(u_{L}+u_{R})/2,\\ u_{R},&\xi>(u_{L}+u_{R})/2.\end{array}\right.\] The corresponding solutions to (5) can be constructed using well known arguments of geometric singular perturbation theory (see for example [9]). We point out that in that case there is no loss of normal hyperbolicity as opposed to the case in Proposition 2 (see [10]). Our method of proof of Theorem 1 is based on Serrin's sweeping principle (see [11, 12]) and the famous sliding method (see for instance [3, 5] and the references therein) from the qualitative theory of elliptic PDEs. In the same spirit, the proof of Proposition 1 is based on the method of super and subsolutions. Finally, the proof of Proposition 2 is highly motivated by the sliding argument in the proof of Theorem 1. The rest of the paper is devoted to the proofs of the above results and to some related remarks. ## 2. Proofs of the main results ### Proof of Theorem 1 Proof.: As we have already mentioned, existence of a solution to (5) is known to hold for any \(\varepsilon>0\). Moreover, by the classical uniqueness result for the initial value problem associated to the first order linear ODE for \(u_{\xi}\), it follows that solutions of (5) are strictly monotone unless they are identically constant. We will distinguish the following three cases. \(\underline{u_{L}=u_{R}}.\) As we have just mentioned, in that case the only solution is the constant one. \(\underline{u_{L}<u_{R}}.\) Let \(u,v\) be two solutions of (5). We will adapt the famous sliding method (see for instance [5, Thm. 1]) in order to show that \(u\geq v\). Once this is established, the desired uniqueness property will follow simply by exchanging the roles of \(u\) and \(v\). We note that in the problem at hand, in contrast to [5] and related references where the sliding method is employed, the aforementioned monotonicity property \[u_{\xi},\ v_{\xi}>0,\ \xi\in\mathbb{R}, \tag{10}\] will play an important role throughout the proof. Let us consider the translations \[u_{\lambda}(\xi)=u(\xi+\lambda),\ \ \lambda\geq 0. \tag{11}\] We note that, thanks to (10), \(u_{\lambda}\) is strictly increasing with respect to \(\lambda\geq 0\). The main observation is that, for \(\lambda>0\), we have \[\varepsilon(u_{\lambda})_{\xi\xi}<\left(f^{\prime}(u_{\lambda})-\xi\right)(u _{\lambda})_{\xi},\ \xi\in\mathbb{R};\ u_{\lambda}(-\infty)=u_{L},\ u_{\lambda}(+\infty)=u_{R}, \tag{12}\] i.e., \(u_{\lambda}\) is a strict supersolution of (5). 
Indeed, setting \(\xi+\lambda\) with \(\lambda>0\) in place of \(\xi\) in (5) gives \[\varepsilon u_{\xi\xi}(\xi+\lambda)=\left(f^{\prime}\left(u(\xi+\lambda) \right)-\xi-\lambda\right)u_{\xi}(\xi+\lambda)\stackrel{{\eqref{eq:10}}}{{<}} \left(f^{\prime}\left(u(\xi+\lambda)\right)-\xi\right)u_{\xi}(\xi+\lambda),\] and the desired relation (12) follows readily. To start the sliding process, we will show that there exists a large \(\Lambda>0\) such that \[u_{\Lambda}>v\ \text{in}\ \mathbb{R}. \tag{13}\] To this end, we first note that, given any \(M>0\), by the asymptotic behaviour of \(u\) from (5) there clearly exists a \(\Lambda(M)>0\) sufficiently large such that \[u_{\lambda}>v,\ \xi\in[-M,M],\ \text{if}\ \lambda\geq\Lambda(M). \tag{14}\] Then, using (5) for \(v\) and (12), we will infer that the above strict inequality continues to hold for \(|\xi|>M\), provided that \(M\) is chosen sufficiently large. Indeed, from the aforementioned relations, we see that \[\psi=u_{\lambda}-v\] satisfies \[\varepsilon\psi_{\xi\xi}<\left(f^{\prime}(u_{\lambda})-\xi\right)\psi_{\xi}+v_{ \xi}\left(f^{\prime}(u_{\lambda})-f^{\prime}(v)\right),\ \xi\in\mathbb{R}.\] We can equivalently rewrite the above relation as \[L(\psi)=\varepsilon\psi_{\xi\xi}-\left(f^{\prime}(u_{\lambda})-\xi\right)\psi_ {\xi}-v_{\xi}Q(\xi)\psi<0,\ \xi\in\mathbb{R}, \tag{15}\] where \[Q(\xi)=\left\{\begin{array}{ll}\frac{f^{\prime}(u_{\lambda}(\xi))-f^{\prime }(v(\xi))}{u_{\lambda}(\xi)-v(\xi)}&\mbox{if $u_{\lambda}(\xi)\neq v(\xi)$},\\ 0&\mbox{if $u_{\lambda}(\xi)=v(\xi)$}.\end{array}\right.\] Since \(f^{\prime}\) is Lipschitz continuous on \([u_{L},u_{R}]\), we have that \(Q\) is bounded in absolute value by the Lipschitz constant of \(f^{\prime}\) over \([u_{L},u_{R}]\). Summarizing, we have \[L(\psi)<0\ \mbox{if $|\xi|>M$};\ \psi(\pm M)>0,\ \psi\to 0\ \mbox{super-exponentially fast as $|\xi|\to\infty$}, \tag{16}\] (for the last property we refer to (3.9) in [6]). We will show that if \(M>0\) is sufficiently large then \[\psi>0\ \mbox{for $|\xi|>M$} \tag{17}\] by a maximum principle type argument, despite of the fact that \(Q\) may be sign-changing. To this end, by (1.6) in [4], it suffices to prove that there exists a \(g\in C^{2}\) such that \[L(g)\leq 0\ \mbox{if $|\xi|>M$},\ g>0\ \mbox{if $|\xi|\geq M$ and }\lim_{|\xi|\to\infty}\frac{\psi}{g}=0. \tag{18}\] It turns out that \[g(\xi)=e^{-|\xi|}\] satisfies (18), provided that \(M>0\) is chosen sufficiently large. Indeed, by virtue of the asymptotic behaviour of \(\psi\) from (16), it remains to verify the first property in (18). We will do this only for \(\xi>M\) as the case \(\xi<-M\) can be treated similarly. A simple calculation shows that \[L(g)=\left(\varepsilon+f^{\prime}(u_{\lambda})-\xi-v_{\xi}Q(\xi)\right)e^{- \xi}<0\ \mbox{if $\xi>M$},\] provided that \(M\) is chosen sufficiently large (since \(f^{\prime}(u_{\lambda}),v_{\xi},Q\) are bounded). In fact, let us from now on choose \[M=1+\varepsilon+\|f^{\prime}\|_{L^{\infty}(u_{L},u_{R})}+\|v_{\xi}\|_{L^{ \infty}(\mathbb{R})}\|f^{\prime}\|_{C^{0,1}(u_{L},u_{R})},\] which clearly satisfies the above properties. The desired relation (13) follows at once by combining (14) and (17). We can now define \(\lambda_{0}\geq 0\) by \[\lambda_{0}=\inf\left\{\lambda\in(0,\Lambda]\ :\ u_{\lambda}\geq v\ \mbox{in $ \mathbb{R}$}\right\}. \tag{19}\] Obviously, by continuity, we have \[u_{\lambda_{0}}\geq v\ \mbox{in $\mathbb{R}$}. \tag{20}\] We will establish that \(\lambda_{0}=0\). 
To this end, arguing by contradiction, let us suppose that \(\lambda_{0}>0\). If \[\min_{[-M,M]}(u_{\lambda_{0}}-v)=0,\] in light of (20), we can easily reach a contradiction by applying the strong maximum principle (see for instance [11, Thm. 2.8.4]) in the strict differential inequality (15) (with \(\lambda=\lambda_{0}\)). It remains to consider the case where \[\min_{[-M,M]}(u_{\lambda_{0}}-v)>0.\] However, since \(u_{\lambda}\) is continuous with respect to \(\lambda\), the above relation implies that there exists some small \(\epsilon\in(0,\lambda_{0})\) such that \[\min_{[-M,M]}(u_{\lambda_{0}-\epsilon}-v)>0.\] In turn, by the maximum principle as before we get \[u_{\lambda_{0}-\epsilon}>v\text{ in }\mathbb{R},\] which contradicts the definition of \(\lambda_{0}\). We thus conclude that \(\lambda_{0}=0\). Hence, we infer from (20) that \(v\leq u\) as desired. \(\underline{u_{L}>u_{R}}.\) The proof is analogous to the previous case, in the sense that it is based on a continuity argument and the maximum principle. However, here solutions satisfy \[u_{\xi}<0,\ \xi\in\mathbb{R}, \tag{21}\] and we cannot use the previous sliding argument since we have the 'wrong' inequality in (12). Instead, we will employ Serrin's sweeping principle (see [11, Thm. 2.3.5] or [12]). More precisely, we will'sweep' with the family \[u_{\lambda}(\xi)=u(\xi-2K\lambda)+\lambda,\ \ \lambda\geq 0, \tag{22}\] where \(K>0\) denotes the Lipschitz constant of \(f^{\prime}\), i.e. \[|f^{\prime}(u_{1})-f^{\prime}(u_{2})|\leq K|u_{1}-u_{2}|,\ u_{1},u_{2}\in[u_{R },u_{L}]. \tag{23}\] We note in passing that (22) is motivated by Remark 3 below for Burgers equation. We claim that \(u_{\lambda}\) is a strict supersolution to (5) for \(\lambda>0\), that is \[\varepsilon(u_{\lambda})_{\xi\xi}<\left(f^{\prime}(u_{\lambda})-\xi\right)(u _{\lambda})_{\xi},\ \xi\in\mathbb{R};\ u_{\lambda}(-\infty)=u_{L}+\lambda,\ u_{\lambda}(+\infty)=u_ {R}+\lambda. \tag{24}\] Indeed, setting \(\xi-2K\lambda\) with \(\lambda>0\) in place of \(\xi\) in (5), we find \[\varepsilon u_{\xi\xi}(\xi-2K\lambda) = \left[f^{\prime}\left(u(\xi-2K\lambda)\right)+2K\lambda-\xi\right] u_{\xi}(\xi-2K\lambda)\] \[\text{via }(\ref{eq:1}),(\ref{eq:23}): \leq \left[f^{\prime}\left(u(\xi-2K\lambda)+\lambda\right)-K\lambda+2K \lambda-\xi\right]u_{\xi}(\xi-2K\lambda)\] \[\text{again thanks to }(\ref{eq:1}): < \left[f^{\prime}\left(u(\xi-2K\lambda)+\lambda\right)-\xi\right]u_ {\xi}(\xi-2K\lambda),\] and the desired relation (24) follows at once. Armed with the above information, the proof proceeds along the same lines as in the previous case. We just point out that, thanks to the asymptotic behaviour of \(u_{\lambda}\) from (24), the continuity argument is actually simpler and there is no need for a splitting of the real line here. We omit the details. **Remark 1**.: _It is easy to see that the proof of Theorem 1 in the case where \(u_{L}<u_{R}\) applies under the weaker assumption that \(f\in C^{1}(u_{L},u_{R})\), provided that \(f\) is convex near \(u_{L}\) and \(u_{R}\). 
Indeed, the last assumption implies that \(Q(\xi)\geq 0\) if \(|\xi|\) is sufficiently large, and one can plainly take \(g\equiv 1\)._ _If \(u_{L}>u_{R}\) and \(f\in C^{1}(u_{R},u_{L})\) is concave in \((u_{R},u_{L})\), then one can show that there is uniqueness as in the last part of the proof of Theorem 1 by noting that_ \[u_{\lambda}=u+\lambda,\ \lambda\geq 0,\] _is a family of supersolutions to (5)._ **Proof of Proposition 1**.: Proof.: The function \[\underline{u}(\xi)=\max\{0,\xi\}\] provides a weak subsolution to (7) (see for example [2]). Indeed, \(\underline{u}\) satisfies (7) for \(\xi\neq 0\) and \(\underline{u}_{\xi}(0^{-})\leq\underline{u}_{\xi}(0^{+})\). We claim that \[\bar{u}(\xi)=\left\{\begin{array}{ll}\int_{-\infty}^{\xi}e^{-\frac{t^{2}}{2}} dt,&\xi\leq 0,\\ \\ \xi+\int_{-\infty}^{0}e^{-\frac{t^{2}}{2}}dt,&0<\xi\leq 1,\\ \\ \xi+\left(\int_{-\infty}^{0}e^{-\frac{t^{2}}{2}}dt\right)e^{-\frac{\xi-1}{L}},& \xi>1,\end{array}\right.\] is a weak supersolution to (7), provided that \(L>0\) is chosen sufficiently large. Indeed, letting \(I=\int_{-\infty}^{0}e^{-t^{2}/2}dt\), we have \[-\bar{u}_{\xi\xi}+(\bar{u}-\xi)\bar{u}_{\xi}=\xi e^{-\frac{\xi^{2}}{2}}+(\bar{u}-\xi)e^{-\frac{\xi^{2} }{2}}=\bar{u}e^{-\frac{\xi^{2}}{2}}>0,\ \xi<0;\] \[-\bar{u}_{\xi\xi}+(\bar{u}-\xi)\bar{u}_{\xi}=\int_{-\infty}^{0}e^{-\frac{t^{2 }}{2}}dt>0,\ 0<\xi<1;\] \[-\bar{u}_{\xi\xi}+(\bar{u}-\xi)\bar{u}_{\xi} = -\frac{I}{L^{2}}e^{-\frac{\xi-1}{L}}+Ie^{-\frac{\xi-1}{L}}\left(1 -\frac{I}{L}e^{-\frac{\xi-1}{L}}\right)\] \[= Ie^{-\frac{\xi-1}{L}}\left(-\frac{1}{L^{2}}+1-\frac{I}{L}e^{- \frac{\xi-1}{L}}\right)>0,\] \(\xi>1\), provided that \(L\) is chosen sufficiently large. Moreover, note that \[\bar{u}_{\xi}(0^{-})=\bar{u}_{\xi}(0^{+})\ \mbox{and}\ \bar{u}_{\xi}(1^{-}) \geq\bar{u}_{\xi}(1^{+}).\] The claim that \(\bar{u}\) is a weak supersolution to (7) now follows directly from [2]. Since \(\underline{u}<\bar{u}\), the existence of a solution to (7) such that \(\underline{u}<u<\bar{u}\) follows from [1, 2, 12] (these references consider the case of a bounded domain, nevertheless we can conclude by a standard limiting argument). Clearly, this solution satisfies all the desired properties. We point out that its uniqueness can be shown by a sliding argument as in Theorem 1. **Remark 2**.: _It was observed in [10] that (7) has the following first integral:_ \[H=\frac{1}{2}(u-\xi)^{2}-(u_{\xi}-1)+\ln|u_{\xi}|.\] _Therefore, the solution U provided by Proposition 1 satisfies_ \[\frac{1}{2}(u-\xi)^{2}=(u_{\xi}-1)-\ln u_{\xi},\ \xi\in\mathbb{R}. \tag{25}\] _By Taylor's expansion, we have_ \[v-1-\ln v=\frac{(v-1)^{2}}{2}+O\left((v-1)^{3}\right)\ \text{as}\ v\to 1,\] _where we have employed once more Landau's notation for the remainder. Hence, we deduce from (25) that_ \[\lim_{\xi\to+\infty}\frac{u-\xi}{u_{\xi}-1}=-1,\] _(keep in mind that \(0<U_{\xi}<1\) by the convexity of \(U\)). In turn, by L'Hopital's rule we obtain that_ \[\lim_{\xi\to+\infty}\frac{\ln(u-\xi)}{\xi}=-1.\] _We thus conclude that the constant \(c\) in the exponent in (8) can actually be taken equal to \(1\) as \(\xi\to+\infty\). On the other side, it follows from the proof of Proposition 1 that the corresponding decay rate as \(\xi\to-\infty\) is super-exponentially fast._ **Remark 3**.: _Assume that \(u\) satisfies the ODE in (6). Then_ \[u_{\lambda}(\xi)=u(\xi-\lambda)+\lambda,\ \ \lambda\in\mathbb{R},\] _solves the same ODE. In other words, the ODE in (6) is invariant under the above transformation. 
To see this, note that setting \(\xi-\lambda\) in place of \(\xi\) in (6) yields_ \[\varepsilon u_{\xi\xi}(\xi-\lambda)=\left(u(\xi-\lambda)+\lambda-\xi\right)u_ {\xi}(\xi-\lambda),\] _and the desired conclusion follows at once._ ### Proof of Proposition 2 Proof.: It follows from the uniqueness property in Theorem 1 and [10, Prop. 2.7] that the solution to (6) satisfies the odd symmetry property (9). Therefore, problem (6) reduces to \[\varepsilon u_{\xi\xi}=(u-\xi)u_{\xi},\ \xi<\frac{u_{L}+u_{R}}{2};\ u(-\infty)=u_{L },\ u\left(\frac{u_{L}+u_{R}}{2}\right)=\frac{u_{L}+u_{R}}{2}. \tag{26}\] It is easy to see that \[v_{\varepsilon}(\xi)=\sqrt{\varepsilon}\mathtt{U}\left(\frac{\xi-u_{L}}{ \sqrt{\varepsilon}}\right)+u_{L} \tag{27}\] satisfies \[\varepsilon v_{\xi\xi}=(v-\xi)v_{\xi},\ \xi<\frac{u_{L}+u_{R}}{2}\ (\text{see also Remark \ref{lem:2})};\] \[v(-\infty)=u_{L},\ v\left(\frac{u_{L}+u_{R}}{2}\right)=\frac{u_{L}+u_{R}}{2} +\sqrt{\varepsilon}O\left(e^{-1/\sqrt{\varepsilon}}\right),\ \text{as}\ \varepsilon\to 0.\] As in the proof of Theorem 1, since \((v_{\varepsilon,\lambda})_{\xi}>0\), we find that \[v_{\varepsilon,\lambda}(\xi)=v_{\varepsilon}(\xi+\lambda) \tag{28}\] is a (strict) subsolution to the ODE in (26) if \(\lambda<0\) and a (strict) supersolution if \(\lambda>0\). Furthermore, \(v_{\varepsilon,\lambda}\) is strictly increasing with respect to \(\lambda\). Moreover, thanks to (8), we have \[v_{\varepsilon,\lambda}(-\infty)=u_{L},\ v_{\varepsilon,\lambda}\left(\frac{ u_{L}+u_{R}}{2}\right)=\frac{u_{L}+u_{R}}{2}+\lambda+\sqrt{\varepsilon}O \left(e^{-1/\sqrt{\varepsilon}}\right)\ \text{as}\ \varepsilon\to 0.\] Hence, as in Theorem 1 or Proposition 2, the solution of (26) must satisfy \[v_{\varepsilon,-\lambda_{\varepsilon}}<u<v_{\varepsilon,\lambda_{\varepsilon} },\ \xi\in\mathbb{R},\ \text{where}\ 0<\lambda_{\varepsilon}<\sqrt{\varepsilon}O\left(e^{-1/\sqrt{ \varepsilon}}\right)\ \text{as}\ \varepsilon\to 0.\] The assertion of the proposition now follows readily from (27), (28) and the fact that \(\mathtt{U}_{\xi}\) is bounded (in fact, \(0<\mathtt{U}_{\xi}<1\) holds due to the convexity of \(\mathtt{U}\)).
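The profile constructed in Proposition 1 (and rescaled in Proposition 2) can also be explored numerically. The following is a minimal sketch, not part of the argument above: it truncates the real line to a finite interval, imposes the asymptotics \(u\approx\max\{0,\xi\}\) as approximate boundary conditions, and solves the ODE (7) with SciPy's solve_bvp. The interval half-width, grid size, and the sample points printed at the end are arbitrary illustrative choices.

```python
# Minimal numerical sketch for Proposition 1: solve u'' = (u - xi) u' on a
# truncated interval with the approximate boundary conditions u(-A) = 0 and
# u(A) = A, then inspect the excess u(xi) - max{0, xi}.
# Assumes NumPy and SciPy are available; A and the grid are illustrative.
import numpy as np
from scipy.integrate import solve_bvp

A = 12.0

def rhs(xi, y):
    # y[0] = u, y[1] = u'
    return np.vstack([y[1], (y[0] - xi) * y[1]])

def bc(ya, yb):
    # u(-A) ~ 0 and u(A) ~ A approximate the asymptotics u ~ max{0, xi}
    return np.array([ya[0], yb[0] - A])

xi = np.linspace(-A, A, 801)
guess = np.vstack([np.maximum(0.0, xi), 0.5 * (1.0 + np.tanh(xi))])
sol = solve_bvp(rhs, bc, xi, guess, max_nodes=100000)

print("solver converged:", sol.status == 0)
for x in (-4.0, -1.0, 0.0, 1.0, 4.0, 8.0):
    u = float(sol.sol(x)[0])
    print(f"xi = {x:5.1f}   u - max(0, xi) = {u - max(0.0, x):.3e}")
```

If the solver converges, the printed excess \(u(\xi)-\max\{0,\xi\}\) should be positive, largest near \(\xi=0\), and decay in both directions, in line with the bound (8) and the decay rates discussed in Remark 2.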
2305.05194
The square of every subcubic planar graph of girth at least 6 is 7-choosable
The square of a graph $G$, denoted $G^2$, has the same vertex set as $G$ and has an edge between two vertices if the distance between them in $G$ is at most $2$. Thomassen (2018) and Hartke, Jahanbekam and Thomas (2016) proved that $\chi(G^2) \leq 7$ if $G$ is a subcubic planar graph. A natural question is whether $\chi_{\ell}(G^2) \leq 7$ or not if $G$ is a subcubic planar graph. Cranston and Kim (2008) showed that $\chi_{\ell}(G^2) \leq 7$ if $G$ is a subcubic planar graph of girth at least 7. We prove that $\chi_{\ell}(G^2) \leq 7$ if $G$ is a subcubic planar graph of girth at least 6.
Seog-Jin Kim, Xiaopan Lian
2023-05-09T06:08:47Z
http://arxiv.org/abs/2305.05194v1
# The square of every subcubic planar graph of girth at least \(6\) is \(7\)-choosable ###### Abstract The square of a graph \(G\), denoted \(G^{2}\), has the same vertex set as \(G\) and has an edge between two vertices if the distance between them in \(G\) is at most \(2\). Thomassen [15] and Hartke, Jahanbekam and Thomas [7] proved that \(\chi(G^{2})\leq 7\) if \(G\) is a subcubic planar graph. A natural question is whether \(\chi_{\ell}(G^{2})\leq 7\) or not if \(G\) is a subcubic planar graph. It was shown in [5] that \(\chi_{\ell}(G^{2})\leq 7\) if \(G\) is a subcubic planar graph of girth at least \(7\). We prove that \(\chi_{\ell}(G^{2})\leq 7\) if \(G\) is a subcubic planar graph of girth at least \(6\). ## 1 Introduction The _square_ of a graph \(G\), denoted \(G^{2}\), has the same vertex set as \(G\) and has an edge between two vertices if the distance between them in \(G\) is at most \(2\). We say a graph \(G\) is _subcubic_ if \(\Delta(G)\leq 3\), where \(\Delta(G)\) is the maximum degree in \(G\). The _girth_ of \(G\), denoted \(g(G)\), is the length of a smallest cycle in \(G\). Let \(\chi(G)\) be the chromatic number of a graph \(G\). Wegner [16] posed the following conjecture. **Conjecture 1**.: _[_16_]_ _Let \(G\) be a planar graph. The chromatic number \(\chi(G^{2})\) of \(G^{2}\) is at most 7 if \(\Delta(G)=3\), at most \(\Delta(G)+5\) if \(4\leq\Delta(G)\leq 7\), and at most \(\lfloor\frac{3\Delta(G)}{2}\rfloor\) if \(\Delta(G)\geq 8\)._ Conjecture 1 is still wide open. The only case for which a tight bound is known is \(\Delta(G)=3\). Thomassen [15] showed that \(\chi(G^{2})\leq 7\) if \(G\) is a planar graph with \(\Delta(G)=3\), which implies that Conjecture 1 is true for \(\Delta(G)=3\). Conjecture 1 for \(\Delta(G)=3\) is also confirmed by Hartke, Jahanbekam and Thomas [7]. The proof in [15] relies on a detailed structural analysis, and the proof in [7] uses a discharging argument with extensive computer case-checking. Many results were obtained with conditions on \(\Delta(G)\). Bousquet, Deschamps, Meyer and Pierron [3] showed that \(\chi(G^{2})\leq 12\) if \(G\) is a planar graph with \(\Delta(G)\leq 4\), and Hou, Jin, Miao, and Zhao [11] showed that \(\chi(G^{2})\leq 18\) if \(G\) is a planar graph with \(\Delta(G)\leq 5\). Also, Bousquet, Deschamps, Meyer and Pierron [2] showed that \(\chi(G^{2})\leq 2\Delta(G)+7\) if \(G\) is a planar graph with \(6\leq\Delta(G)\leq 31\). For general \(\Delta(G)\), the best known upper bound is \(\chi(G^{2})\leq\lceil\frac{5\Delta(G)}{3}\rceil+78\), due to Molloy and Salavatipour [14]. On the other hand, Havet, van den Heuvel, McDiarmid, and Reed [10] proved that Conjecture 1 holds asymptotically. A detailed account of the study of Wegner's conjecture can be found in [4]. A list assignment for a graph is a function \(L\) that assigns each vertex a list of available colors. The graph is \(L\)-colorable if it has a proper coloring \(f\) such that \(f(v)\in L(v)\) for all \(v\). A graph \(G\) is called \(k\)-choosable if \(G\) is \(L\)-colorable whenever all lists have size \(k\). The list chromatic number \(\chi_{\ell}(G)\) is the minimum \(k\) such that \(G\) is \(k\)-choosable. Since it was known in [15] that \(\chi(G^{2})\leq 7\) if \(G\) is a subcubic planar graph, the following natural question was raised in [5].
**Question 2**.: _[_5_]_ _Is it true that \(\chi_{\ell}(G^{2})\leq 7\) if \(G\) is a subcubic planar graph?_ For a general upper bound on \(\chi_{\ell}(G^{2})\) for a subcubic graph \(G\), Cranston and Kim [5] proved that \(\chi_{\ell}(G^{2})\leq 8\) if \(G\) is a connected graph (not necessarily planar) with \(\Delta(G)=3\) and if \(G\) is not the Petersen graph. Cranston and Kim [5] also proved that \(\chi_{\ell}(G^{2})\leq 7\) if \(G\) is a subcubic planar graph with \(g(G)\geq 7\). For a subcubic planar graph \(G\) and for \(k\in\{4,5,6,7,8\}\), the best girth conditions known to imply that \(\chi_{\ell}(G^{2})\leq k\) are as follows ([4]). \[\begin{array}{c|cccccc}\chi_{\ell}(G^{2})\leq&\mid&8&7&6&5&4\\ \hline g(G)\geq&\mid&3&7&9&13&24\end{array} \tag{1}\] Cranston and Kim [5] and Havet [9] showed that \(\chi_{\ell}(G^{2})\leq 6\) if \(G\) is a subcubic planar graph with \(g(G)\geq 9\). Havet [9] showed that \(\chi_{\ell}(G^{2})\leq 5\) if \(G\) is a subcubic planar graph with \(g(G)\geq 13\). Borodin, Ivanova, and Neustroeva [1] and Havet [9] showed that \(\chi_{\ell}(G^{2})\leq 4\) if \(G\) is a subcubic planar graph with \(g(G)\geq 24\). In this paper, we consider subcubic planar graphs. To deduce that \(\chi_{\ell}(G^{2})\leq 7\), we improve the hypothesis \(g(G)\geq 7\), shown in table (1), to \(g(G)\geq 6\). We prove the following main theorem. **Theorem 3**.: _If \(G\) is a subcubic planar graph with girth at least 6, then \(\chi_{\ell}(G^{2})\leq 7\)._ Note that motivated by the List Total Coloring Conjecture, the following interesting conjecture was proposed in [13]. **List Square Coloring Conjecture**[13] For every graph \(G\), we have \(\chi_{\ell}(G^{2})=\chi(G^{2})\). Thus Question 2 was naturally asked in [5]. However, the List Square Coloring Conjecture was disproved in [12]. Note that a positive result for the List Square Coloring Conjecture for a special class of graphs is still interesting. It was conjectured in [10] that \(\chi_{\ell}(G^{2})=\chi(G^{2})\) if \(G\) is a planar graph. However, Hasanvand [8] recently proved that there exists a cubic claw-free planar graph \(G\) such that \(\chi(G^{2})=4<\chi_{\ell}(G^{2})\). **Remark 4**.: In [5], Cranston and Kim showed that if \(G\) is a minimal counterexample to Theorem 3, then the maximum average degree \((mad(G))\) is at least \(\frac{14}{5}\), and then showed that the girth of \(G\) is at most 6, which leads to a contradiction. However, applying the maximum average degree of \(G\) to prove Theorem 3 is not helpful here, since \(G\) is subcubic. We use a recoloring method to prove the main lemma (Lemma 5). The proof of Lemma 5 proceeds as follows. If \(G\) is a minimal counterexample to Theorem 3 and \(G\) has a 6-cycle \(C\) containing a 2-vertex \(u\), then we obtain a proper subgraph \(H=G-u\) by removing the 2-vertex \(u\). Next, we consider a coloring on \(H^{2}\), uncolor the vertices on the 6-cycle \(C\), and then recolor the vertices on \(C\) to obtain a proper coloring of \(G^{2}\), which is a contradiction. ## 2 Proof of Theorem 3 In this section, let \(G\) be a minimal counterexample to Theorem 3. This means that for any proper subgraph \(H\) of \(G\), \(\chi_{\ell}(H^{2})\leq 7\), but \(\chi_{\ell}(G^{2})>7\). A vertex of degree \(k\) is called a \(k\)-vertex. First, we prove the following main lemma. **Lemma 5**.: \(G\) _has no 6-cycle which contains a 2-vertex._ Proof.: Suppose that \(G\) has a 6-cycle \(C\) which contains a 2-vertex.
We denote \(V(C)=\{v_{1},v_{2},v_{3},v_{4},v_{5},v_{6}\}\) where \(v_{6}\) is the 2-vertex (see Figure 1). Let \(L\) be a list assignment with lists of size 7 for each vertex in \(G\). Let \(H=G-v_{6}\). Then since \(G\) is a minimal counterexample to Theorem 3, the square of \(H\) has a proper coloring \(\phi\) such that \(\phi(v)\in L(v)\) for each vertex \(v\in V(H)\). If \(\phi(v_{1})\neq\phi(v_{5})\), then since \(v_{6}\) has only 6 neighbors in \(G^{2}\), we can complete a proper coloring for \(G^{2}\) by coloring \(v_{6}\) by a color in \(L(v_{6})\). This is a contradiction since \(G\) is a counterexample to Theorem 3. Therefore, we can assume that \(\phi(v_{1})=\phi(v_{5})=\alpha\) for some color \(\alpha\). Further, we assume that \[\phi(v_{2})=a,\ \phi(v_{3})=b,\ \phi(v_{4})=c. \tag{2}\] Note that \(|\{a,b,c,\alpha\}|=4\) since \(\phi\) is a proper coloring of the square of \(H\). Now uncolor the vertices in \(V(C)\setminus\{v_{6}\}=\{v_{1},v_{2},v_{3},v_{4},v_{5}\}\), and investigate the color lists which are available at each vertex \(v\) in \(V(C)\). For each \(v\in V(C)\), let \(C(v)\) be the color list which is available at \(v\). If we denote by \(|C(v)|\) the number of available colors after uncoloring the vertices of \(V(C)\), then we have the following information. \[|C(v_{1})|\geq 3,\ |C(v_{2})|\geq 2,\ |C(v_{3})|\geq 2,|C(v_{4})|\geq 2,|C(v_{5})|\geq 3,|C(v_{6})|\geq 5.\] First, to investigate \(C(v_{1})\) and \(C(v_{5})\), we claim that the following holds. **Claim 6**.: _By the coloring \(\phi\) on the square of \(H\), we have that_ \[L(v_{1}) =\{\phi(y):y\in V(G)\cap V(H)\text{ and }v_{1}y\in E(G^{2})\},\] \[L(v_{5}) =\{\phi(y):y\in V(G)\cap V(H)\text{ and }v_{5}y\in E(G^{2})\}.\] Suppose to the contrary that \(L(v_{1})\neq\{\phi(y):y\in V(G)\cap V(H)\) and \(v_{1}y\in E(G^{2})\}\). Let \(\gamma\in L(v_{1})\setminus\{\phi(y):y\in V(G)\cap V(H)\) and \(v_{1}y\in E(G^{2})\}\). Then we recolor vertex \(v_{1}\) by \(\gamma\) and greedily color vertex \(v_{6}\). This contradicts the fact that \(\chi_{\ell}(G^{2})>7\). Therefore, \(L(v_{1})=\{\phi(y):y\in V(G)\cap V(H)\) and \(v_{1}y\in E(G^{2})\}\). By the same argument, we can show that \(L(v_{5})=\{\phi(y):y\in V(G)\cap V(H)\) and \(v_{5}y\in E(G^{2})\}\). Hence Claim 6 holds. Therefore, by the definition of \(C(v)\) for \(v\in V(C)\) and Claim 6, we have that \[C(v_{1})=\{a,b,\alpha\},\ C(v_{5})=\{b,c,\alpha\}. \tag{3}\] Next, we investigate \(C(v_{2})\), \(C(v_{3})\) and \(C(v_{4})\). Observe that, by the definition of \(C(v)\) for \(v\in V(C)\), \(\phi(v)\in C(v)\). Then we prove the following claim. **Claim 7**.: \(C(v_{2})\subseteq\{a,b,c\}\)_, \(C(v_{3})\subseteq\{a,b,c,\alpha\}\), and \(C(v_{4})\subseteq\{a,b,c\}\)._ Suppose to the contrary that \(C(v_{2})\setminus\{a,b,c\}\neq\emptyset\). Then there is a color \(\gamma\in C(v_{2})\setminus\{a,b,c\}\). So, we recolor \(v_{2}\) by color \(\gamma\) and recolor \(v_{1}\) by color \(a\in C(v_{1})\). Then we greedily color \(v_{6}\). Thus, we have a proper coloring of \(G^{2}\). This is a contradiction. Therefore, \(C(v_{2})\subseteq\{a,b,c\}\). By the same argument, we can show that \(C(v_{3})\subseteq\{a,b,c,\alpha\}\), and \(C(v_{4})\subseteq\{a,b,c\}\). Hence Claim 7 holds. Since \(\phi(v)\in C(v)\) and \(|C(v)|\geq 2\) for \(v\in\{v_{2},v_{3},v_{4}\}\), by Claim 7, it suffices to consider that the possible lists for \(v_{2},v_{3},v_{4}\) are as follows.
\[\begin{array}{ccc}C(v_{2})&C(v_{3})&C(v_{4})\\ \{a,b\}&\{b,a\}&\{c,a\}\\ \{a,c\}&\{b,c\}&\{c,b\}\\ &\{b,\alpha\}\end{array} \tag{4}\] Now, we recolor the vertices in \(V(C)=\{v_{1},v_{2},v_{3},v_{4},v_{5},v_{6}\}\) from \(C(v)\) for \(v\in V(C)\) and obtain a proper coloring of \(G^{2}\), which leads to a contradiction. From now on, we denote the new coloring by \(f\). Hence, \[\begin{array}{ll}(a)\ f(v)=\phi(v),&\text{if}\ v\in V(G)\setminus V(C),\\ (b)\ f(v)\in C(v),&\text{if}\ v\in V(C).\end{array}\] Figure 1: 6-cycle \(C\) and coloring of the square of \(H=G-v_{6}\) Now we will show that we have a proper coloring of \(G^{2}\) from the lists in (3) and the table (4). Since \(|C(v_{6})|\geq 5\), we can first color each vertex in \(V(C)\setminus\{v_{6}\}=\{v_{1},v_{2},v_{3},v_{4},v_{5}\}\) and then we choose a color from \(C(v_{6})\setminus\{f(v):v\in\{v_{1},v_{2},v_{4},v_{5}\}\}\) for vertex \(v_{6}\). Note that \(v_{1}v_{4},\ v_{2}v_{5}\notin E(G^{2})\) since \(g(G)\geq 6\). So, we can assign a same color at \(v_{1}\) and \(v_{4}\) (\(v_{2}\) and \(v_{5}\), respectively) in the new coloring \(f\). **Case 1:**\(C(v_{2})=\{a,b\}\) Subcase 1.1. When \(C(v_{4})=\{c,a\}\) We have the following recoloring on \(\{v_{1},v_{2},v_{3},v_{4},v_{5}\}\) which produces a proper coloring of \(G^{2}\). \[\begin{array}{ccccc}C(v_{2})&C(v_{3})&C(v_{4})&&\\ &\{b,a\}&&\rightarrow&f(v_{1})=\alpha,f(v_{2})=b,f(v_{3})=a,f(v_{4})=c,f(v_{5})=b\\ \{a,b\}&\{b,c\}&\{c,a\}&\rightarrow&f(v_{1})=a,f(v_{2})=b,f(v_{3})=c,f(v_{4})=a,f(v_{5})=\alpha\\ &\{b,\alpha\}&&\rightarrow&f(v_{1})=a,f(v_{2})=b,f(v_{3})=\alpha,f(v_{4})=c,f(v_{5})=b\end{array}\] Subcase 1.2. When \(C(v_{4})=\{c,b\}\) We have the following recoloring on \(\{v_{1},v_{2},v_{3},v_{4},v_{5}\}\) which produces a proper coloring of \(G^{2}\). \[\begin{array}{ccccc}C(v_{2})&C(v_{3})&C(v_{4})&&\\ &\{b,a\}&&\rightarrow&f(v_{1})=\alpha,f(v_{2})=b,f(v_{3})=a,f(v_{4})=c,f(v_{5})=b\\ \{a,b\}&\{b,c\}&\{c,b\}&\rightarrow&f(v_{1})=b,f(v_{2})=a,f(v_{3})=c,f(v_{4})=b,f(v_{5})=\alpha\\ &\{b,\alpha\}&&\rightarrow&f(v_{1})=a,f(v_{2})=b,f(v_{3})=\alpha,f(v_{4})=c,f(v_{5})=b\end{array}\] **Case 2:**\(C(v_{2})=\{a,c\}\) Subcase 2.1. When \(C(v_{4})=\{c,a\}\) We have the following recoloring on \(\{v_{1},v_{2},v_{3},v_{4},v_{5}\}\) which produces a proper coloring of \(G^{2}\). \[\begin{array}{ccccc}C(v_{2})&C(v_{3})&C(v_{4})&&\\ &\{b,a\}&&\rightarrow&f(v_{1})=a,f(v_{2})=c,f(v_{3})=b,f(v_{4})=a,f(v_{5})=\alpha\\ \{a,c\}&\{b,c\}&\{c,a\}&\rightarrow&f(v_{1})=a,f(v_{2})=c,f(v_{3})=b,f(v_{4})=a,f(v_{5})=\alpha\\ &\{b,\alpha\}&&\rightarrow&f(v_{1})=a,f(v_{2})=c,f(v_{3})=b,f(v_{4})=a,f(v_{5})=\alpha\\ \end{array}\] Subcase 2.2. When \(C(v_{4})=\{c,b\}\) We have the following recoloring on \(\{v_{1},v_{2},v_{3},v_{4},v_{5}\}\) which produces a proper coloring of \(G^{2}\). \[\begin{array}{ccccc}C(v_{2})&C(v_{3})&C(v_{4})&&\\ &\{b,a\}&&\rightarrow&f(v_{1})=\alpha,f(v_{2})=c,f(v_{3})=a,f(v_{4})=b,f(v_{5})=c\\ \{a,c\}&\{b,c\}&\{c,b\}&\rightarrow&f(v_{1})=b,f(v_{2})=a,f(v_{3})=c,f(v_{4})=b,f(v_{5})=\alpha\\ &\{b,\alpha\}&&\rightarrow&f(v_{1})=a,f(v_{2})=c,f(v_{3})=\alpha,f(v_{4})=b,f(v_{5})=c\end{array}\] Thus, from Case 1 and Case 2, we have a proper coloring for \(G^{2}\), which is a contradiction. Hence, \(G\) has no 6-cycle which contains a 2-vertex. This completes the proof of Lemma 5. We can see easily that the following lemma holds. **Lemma 8**.: \(G\) _has no 1-vertex._ The following lemma was proved in [5]. We include it here for the reader's convenience. **Lemma 9**.: _(Lemma 13 [5]) Let \(G\) be a minimal graph such that \(\chi_{\ell}(G^{2})>7\).
For each vertex \(v\), let \(M_{1}(v)\) and \(M_{2}(v)\) be the number of 2-vertices at distance 1 and distance 2 from \(v\). If \(v\) is a 3-vertex, then \(2M_{1}(v)+M_{2}(v)\leq 2\). If \(v\) is a 2-vertex, then \(2M_{1}(v)+M_{2}(v)=0\)._ From Lemma 9, we have the following lemma for 2-vertices. **Lemma 10**.: _For every cycle \(C\) in \(G\), the distance between any two 2-vertices on \(C\) is at least 4._ Proof.: Let \(x\) and \(y\) be two 2-vertices. Then \(2M_{1}(x)+M_{2}(x)=0\) by Lemma 9. So, the distance between \(x\) and \(y\) must be at least 3. If the distance between \(x\) and \(y\) is 3, then we have an \(x,y\)-path, \(xw_{1}w_{2}y\), where \(w_{1}\) and \(w_{2}\) are 3-vertices. But, in this case, \(2M_{1}(w_{1})+M_{2}(w_{1})\geq 3>2\), which violates Lemma 9. Thus the distance between \(x\) and \(y\) is at least 4. Before we prove Theorem 3, we prove the following lemma. **Lemma 11**.: _If \(u\) is a 2-vertex in \(G\), then \(u\) is not a cut-vertex in \(G\)._ Proof.: Suppose that \(u\) is a cut-vertex in \(G\). Then \(G-u\) is not a connected graph. Let \(N_{G}(u)=\{x,y\}\) and let \(H=G-u+xy\). That is, \(H\) is the resulting graph obtained from \(G\) by removing \(u\) and making \(x\) and \(y\) adjacent. Note that \(H\) is still subcubic and the girth of \(H\) is at least 6. Then since \(|V(H)|=|V(G)|-1\) and \(G\) is a minimal counterexample to Theorem 3, \(H^{2}\) has a proper coloring \(\phi\). Note that \(\phi(x)\neq\phi(y)\). Since \(u\) has at most 6 neighbors in \(G^{2}\), we have a proper coloring of \(G^{2}\) by greedily coloring \(u\). This is a contradiction. Hence a 2-vertex \(u\) is not a cut-vertex in \(G\). Now we prove the main theorem. **Theorem 3**.: If \(G\) is a subcubic planar graph with girth at least 6, then \(\chi_{\ell}(G^{2})\leq 7\). Proof.: Let \(G\) be a minimal counterexample to the theorem and let \(G\) be a plane graph drawn on the plane without crossing edges. Let \(F(G)\) be the set of faces of \(G\). For a face \(C\in F(G)\), let \(d(C)\) be the length of \(C\). We assign an initial charge \(\omega(x)=2d(x)-6\) to each vertex \(x\in V(G)\) and \(\omega(x)=d(x)-6\) to each face \(x\in F(G)\). According to Euler's formula \(|V(G)|-|E(G)|+|F(G)|=2\), we have \[\sum_{x\in V(G)\cup F(G)}\omega(x)=\sum_{v\in V(G)}(2d(v)-6)+\sum_{f\in F(G)}(d(f)-6)=-12. \tag{5}\] We next design some discharging rules to redistribute charges along the graph with conservation of the total charge. Let \(\omega^{\prime}(x)\) be the charge of \(x\in V(G)\cup F(G)\) after the discharging procedure, so that \(\sum_{x\in V(G)\cup F(G)}\omega(x)=\sum_{x\in V(G)\cup F(G)}\omega^{\prime}(x)\). Next, we will show that \(\omega^{\prime}(x)\geq 0\) for all \(x\in V(G)\cup F(G)\), which leads to the following contradiction. \[0\leq\sum_{x\in V(G)\cup F(G)}\omega^{\prime}(x)=\sum_{x\in V(G)\cup F(G)}\omega(x)=\sum_{v\in V(G)}(2d(v)-6)+\sum_{f\in F(G)}(d(f)-6)=-12.\] Observe that \(G\) has no 1-vertex by Lemma 8. Thus, we have the following discharging rule. **The discharging rule:** (R1) If a 2-vertex \(u\) is on a face \(C\), then \(C\) gives charge 1 to \(u\). Now, we will show that the new charge \(\omega^{\prime}(x)\geq 0\) for every \(x\in V(G)\cup F(G)\). (1) When \(x\in V(G)\) If \(d(x)=2\), then \(\omega(x)=-2\). By (R1), \(x\) receives charge 1 from each of its incident faces. So, \(\omega^{\prime}(x)=0\). Note that every 2-vertex \(x\) is incident to two faces by Lemma 11. If \(d(x)=3\), then \(\omega(x)=\omega^{\prime}(x)=0\).
(2) When \(x\in F(G)\) Here we denote the face \(x\) by \(C\). If \(C\) is a 6-cycle, then \(C\) has no 2-vertex by Lemma 5. So \(\omega(C)=\omega^{\prime}(C)=0\). To complete the case (2), we first prove the following claim. **Claim 12**.: _For every face \(C\) of \(G\), there are at most \(\lfloor\frac{d(C)}{4}\rfloor\) 2-vertices on the boundary of \(C\)._ Proof.: Let \(W\) be the closed walk which is the boundary of \(C\). Let \(D_{1},\ldots,D_{k}\) be the cycles of \(G\) contained in \(W\). Then, by Lemma 10, for each \(i\in\{1,\ldots,k\}\), \(D_{i}\) contains at most \(\lfloor\frac{d(D_{i})}{4}\rfloor\) 2-vertices. Since \(G\) is a subcubic graph, by Lemma 11, the 2-vertices are only possibly contained in the cycles \(D_{1},\ldots,D_{k}\). Hence, the number of 2-vertices contained in \(C\) is at most \(\sum_{i=1}^{k}\lfloor\frac{|E(D_{i})|}{4}\rfloor\leq\lfloor\frac{\sum_{i=1}^{k}|E(D_{i})|}{4}\rfloor\leq\lfloor\frac{|E(W)|}{4}\rfloor=\lfloor\frac{d(C)}{4}\rfloor\). Therefore, there are at most \(\lfloor\frac{d(C)}{4}\rfloor\) 2-vertices on the boundary of \(C\). Thus Claim 12 holds. Now, we consider the case when \(d(C)\geq 7\). If \(C\) is a 7-cycle, then \(C\) has at most one 2-vertex. So, \(\omega^{\prime}(C)\geq d(C)-6-1=0\). If \(C\) is a cycle of length at least 8, then \(\omega^{\prime}(C)\geq 0\) since \(d(C)-6-\lfloor\frac{d(C)}{4}\rfloor\geq 0\). Hence \(\omega^{\prime}(x)\geq 0\) for every \(x\in F(G)\). Hence, by (1) and (2), we have that \(\omega^{\prime}(x)\geq 0\) for every \(x\in V(G)\cup F(G)\). This completes the proof of Theorem 3. ## 3 Future work We proved that \(\chi_{\ell}(G^{2})\leq 7\) if \(G\) is a subcubic planar graph of girth at least 6. However, we do not yet know whether Question 2 is true. Hence answering Question 2 remains interesting. As a weaker version, we can ask the following problem. **Problem 13**.: _Is it true that \(\chi_{\ell}(G^{2})\leq 7\) if \(G\) is a subcubic planar graph of girth at least 4?_ ## Acknowledgments We thank Daniel Cranston for his very helpful comments and suggestions. The first author is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1A2C1005785), and the second author is partially supported by the National Natural Science Foundation of China (No. 12161141006).
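As a small computational companion to Theorem 3 (an illustration of the statement on one example, not a substitute for the proof), one can take a subcubic planar graph of girth 6, form its square, and check by exhaustive backtracking that it is colorable from arbitrary lists of size 7. The graph used below (two hexagons sharing an edge), the palette size, and the number of random trials are illustrative choices.

```python
# Brute-force check of 7-list-colorability of G^2 for one small example:
# G = two hexagons sharing an edge (subcubic, planar, girth 6).
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0),   # first hexagon
         (5, 6), (6, 7), (7, 8), (8, 9), (9, 4)]           # second hexagon
n = 10
adj = {v: set() for v in range(n)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

# Square of G: join vertices at distance at most 2.
adj2 = {v: set(adj[v]) for v in range(n)}
for v in range(n):
    for u in adj[v]:
        adj2[v] |= adj[u]
    adj2[v].discard(v)

def list_colorable(lists):
    # Backtracking search for a proper coloring of G^2 from the given lists.
    order = sorted(range(n), key=lambda v: -len(adj2[v]))
    color = {}
    def backtrack(i):
        if i == len(order):
            return True
        v = order[i]
        for c in lists[v]:
            if all(color.get(u) != c for u in adj2[v]):
                color[v] = c
                if backtrack(i + 1):
                    return True
                del color[v]
        return False
    return backtrack(0)

random.seed(0)
palette, trials = range(12), 200
ok = all(list_colorable({v: random.sample(palette, 7) for v in range(n)})
         for _ in range(trials))
print("every sampled 7-list assignment admitted a proper coloring of G^2:", ok)
```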
2302.01355
Full Counting Statistics of Charge in Chaotic Many-body Quantum Systems
We investigate the full counting statistics of charge transport in $U(1)$-symmetric random unitary circuits. We consider an initial mixed state prepared with a chemical potential imbalance between the left and right halves of the system, and study the fluctuations of the charge transferred across the central bond in typical circuits. Using an effective replica statistical mechanics model and a mapping onto an emergent classical stochastic process valid at large onsite Hilbert space dimension, we show that charge transfer fluctuations approach those of the symmetric exclusion process at long times, with subleading $t^{-1/2}$ quantum corrections. We discuss our results in the context of fluctuating hydrodynamics and macroscopic fluctuation theory of classical non-equilibrium systems, and check our predictions against direct matrix-product state calculations.
Ewan McCulloch, Jacopo De Nardis, Sarang Gopalakrishnan, Romain Vasseur
2023-02-02T19:00:05Z
http://arxiv.org/abs/2302.01355v2
# Full Counting Statistics of Charge in Chaotic Many-body Quantum Systems ###### Abstract We investigate the full counting statistics of charge transport in \(U(1)\)-symmetric random unitary circuits. We consider an initial mixed state prepared with a chemical potential imbalance between the left and right halves of the system, and study the fluctuations of the charge transferred across the central bond in typical circuits. Using an effective replica statistical mechanics model and a mapping onto an emergent classical stochastic process valid at large onsite Hilbert space dimension, we show that charge transfer fluctuations approach those of the symmetric exclusion process at long times, with subleading \(t^{-1/2}\) quantum corrections. We discuss our results in the context of fluctuating hydrodynamics and macroscopic fluctuation theory of classical non-equilibrium systems, and check our predictions against direct matrix-product state calculations. **Introduction** - The long-time dynamics of generic many-body quantum systems is expected to be effectively classical. Starting from a pure initial state, the local properties of chaotic systems quickly thermalize: the expectation value of local operators can be described by an effective Gibbs ensemble with spatially-varying Lagrange multipliers such as temperature. The resulting evolution from local to global equilibrium is then described by the classical equations of _hydrodynamics_. However, the advent of quantum simulator platforms such as cold atoms [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13], trapped ions [14; 15; 16] or superconducting arrays [17; 18; 19; 20] has made it possible to measure not only local expectation values, but also their full quantum statistics. Whether there exists an emergent classical description of such fluctuations in generic, chaotic many-body quantum systems is an open question. Consider a one-dimensional quantum system with a conserved charge that is prepared with a domain-wall chemical potential imbalance across the central bond \(\mu_{L}=-\mu_{R}=\mu\). By measuring the charge in the right half of the system at times \(0\) and \(t\), experiments reveal "quantum snapshots" of the charge transfer \(Q\) across the central bond (from left to right). By repeating the experiment, one has access to the full distribution of measurement outcomes \(P_{t}(Q)\). While the average of that distribution is described by hydrodynamics - which in the case of a single conserved charge simply reduces to a diffusion equation - higher cumulants describe spin current fluctuations and the full counting statistics (FCS) of charge transport [21; 22; 23; 24; 25; 26; 27]. Computing the FCS in many-body quantum systems is a formidable task, and exact or mean field results have only been achieved in a few cases, notably in non-interacting fermion models [28; 29; 30; 31; 32; 33], integrable systems [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45] and in quantum dots/few qubit models [46; 47; 48; 49; 50; 51; 52; 53]. While there is currently no exact result pertaining to chaotic many-body quantum systems, charge current fluctuations are expected to be subject to the large deviation principle [54; 55; 56]: all cumulants of charge transfer should scale in the same way with time, as \(\sqrt{t}\) for a diffusive system. In the context of classical stochastic models, the emergence of the large deviation principle is understood within a general formalism known as macroscopic fluctuation theory (MFT) [57].
MFT is a toolbox for solving the noisy diffusion equation obtained from promoting the hydrodynamic equation to a non-linear _fluctuating hydrodynamic_ theory by adding a noise term to the current, whose strength is determined by the fluctuation-dissipation theorem. MFT has been very successful in describing stochastic classical systems, and has recently been used to compute the FCS of a paradigmatic integrable Markov chain, the symmetric exclusion process (SEP) [58; 59]. Quantum systems have intrinsic quantum fluctuations, and it is natural to wonder whether they can be captured by an emergent classical description such as MFT. In this letter, we investigate the FCS in an ensemble of diffusive chaotic models - random unitary circuits with a conserved \(U(1)\) charge [60; 61]. Quantum systems with a conserved charge are endowed with current fluctuations and counting statistics. In random circuits, these fluctuations are controlled by the local Hilbert space dimension \(q\), which plays the role of a large-\(N\)-like perturbative parameter. While the quantum many-body dynamics of individual circuit realizations is generally inaccessible, by ensemble averaging, we will study the dynamics of typical circuit realizations. In order to capture typical current fluctuations within a single circuit realization, circuit averaging must be performed at the level of cumulants, which are polynomial in the system's density matrix. By doing so, we map the problem of computing cumulants onto that of expectation values in replica statistical mechanics (SM) models. By simulating the SM time evolution using matrix-product states, and separately, by introducing an effective stochastic model of coupled SEP chains, we show that the quantum corrections to the higher order cumulants are sub-leading. This leads to a late-time FCS consistent with a simple fluctuating hydrodynamics for the coarse grained charge density \(\rho(x,\tau)\)[58] with re-scaled space-time coordinates \(x=j/\ell\) and \(\tau=t/\ell^{2}\), \[\partial_{\tau}\rho=-\partial_{x}j,\ j=-D(\rho)\partial_{x}\rho+\sqrt{\frac{2\sigma(\rho)}{\ell}}\xi, \tag{1}\] where \(\xi(x,\tau)\) is a Gaussian white noise with zero mean and unit variance, and \(\ell\) is the size of the hydrodynamic cells over which \(\rho\) is coarse-grained. The only microscopic inputs to this equation are the diffusion constant \(D(\rho)=1\) and the conductivity \(\sigma(\rho)=D(\rho)\chi_{s}(\rho)\) with \(\chi_{s}(\rho)=\rho(1-\rho)\), which characterize both random quantum circuits and SEP. The noise term in eq. (1) is set by the fluctuation-dissipation theorem to preserve equilibrium charge fluctuations, making this equation a natural candidate for a fluctuating hydrodynamic theory of random quantum circuits. We confirm this result by computing the FCS of individual quantum circuits using matrix product state techniques [62] (Fig. 1) as an independent check on our effective stochastic theory. Our results establish the emergence of "classicality" at long times in quantum systems, even at the level of fluctuations. **The model and measurement scheme** - We work with a one-dimensional chain, in which each site is composed of a charged qubit with basis states \(\ket{q=0,1}\), and a neutral qudit of dimension \(d\), yielding a single-site Hilbert space \(\mathcal{H}_{\rm loc}\equiv\mathbb{C}^{2}\otimes\mathbb{C}^{d}\). The system evolves via the application of layers of random nearest-neighbor unitary gates in a brick-wall pattern (see Fig. 1).
The unitary gates conserve the total charge on the two sites, but are otherwise Haar random [60; 61]. Unitary evolution and projective measurement ensure that the system's charge dynamics is endowed with current fluctuations. We will investigate the charge transfer \(Q\) across the central bond in a time window \([0,t]\) by following the two-time projective measurement protocol [64; 65; 66; 67; 68; 69] in Fig. 1, i.e., measuring the operator \(\hat{Q}_{R}\) for the charge in the right half of the system at times \(0\) and \(t\). The FCS for this measurement setup is characterized by the cumulant generating function \(\chi(\lambda)\equiv\log\langle e^{i\lambda Q}\rangle_{t}\), where the average \(\langle f(Q)\rangle_{t}=\sum_{Q}P_{t}(Q)f(Q)\) is over repetitions of the measurement protocol and \(P_{t}(Q)\) is the probability to measure a charge transfer \(Q\). As shown in [46], writing \(P_{t}(Q)\) in terms of Born probabilities enables us to write the average over measurements as a quantum expectation value [62] \[\langle e^{i\lambda Q}\rangle_{t}=\langle\mathcal{T}e^{i\lambda\Delta\hat{Q}_{R}}\rangle^{\prime}\equiv\mathrm{Tr}\left[\mathcal{T}e^{i\lambda\Delta\hat{Q}_{R}}\hat{\rho}^{\prime}\right], \tag{2}\] where \(\Delta\hat{Q}_{R}\equiv\hat{Q}_{R}(t)-\hat{Q}_{R}(0)\) and \(\hat{Q}_{R}(t)\equiv U(t)\hat{Q}_{R}U(t)^{\dagger}\) is the Heisenberg evolved charge operator. The non-commutativity of quantum dynamics requires the use of the time-ordering \(\mathcal{T}\)[70; 22; 71]. The density matrix \(\hat{\rho}^{\prime}\) is related to the initial state \(\hat{\rho}\) by the quantum channel \(\hat{\rho}^{\prime}=\sum_{q}P_{q}\hat{\rho}P_{q}\), where \(P_{q}\) are projectors onto the charge sector \(\hat{Q}_{R}=q\). For initial states with a chemical potential imbalance, \(\hat{\rho}\propto\exp\!\left[\mu\hat{Q}_{L}-\mu\hat{Q}_{R}\right]\), we simply have \(\hat{\rho}^{\prime}=\hat{\rho}\). The circuit averaged charge dynamics maps onto that of a discrete-time symmetric simple exclusion process [72] with a brick-wall geometry, i.e., \(\overline{P_{t}(Q)}=P_{t,\rm SEP}(Q)\) where \(\overline{O}\) refers to averaging \(O\) over circuits - all of the quantum fluctuations are lost in the circuit averaged moments of charge transfer. To capture the behavior of typical quantum circuits, we focus on self-averaging quantities and work with cumulants directly. The cumulants are related to the generating function by \(C_{m}(t)\equiv(-i\partial_{\lambda})^{m}\chi(\lambda)|_{\lambda=0}\). To compute the \(n\)-th cumulant, we introduce an often-used \(n\)-replica statistical mechanics model [73; 74; 75; 76; 77; 78; 60], expressing each cumulant as a statistical expectation value. **Mapping to a statistical mechanics model** - By circuit averaging, we reduce the size of the state space needed to describe the replicated model. The Haar average of a replicated gate, \(\overline{\mathcal{U}}\equiv\overline{U^{\otimes n}\otimes U^{*\otimes n}}\), projects onto a smaller space of states characterized by only the local charge degrees of freedom and a permutation degree of freedom \(\sigma\in S_{n}\) that defines a pairing between the \(n\) replicas at each site (specifically, between the \(n\) conjugated and un-conjugated replicas). The circuit average of the replicated circuit is equivalent to a statistical mechanics model [79; 73; 74; 75; 76; 77; 78; 60] with the permutation degrees of freedom living on the vertices and the charge configurations on the edges.
Figure 1: (a) A two-time measurement protocol for charge transfer across the central bond in a random unitary circuit with a \(U(1)\) conserved charge. The charge in the right half of the system is measured at times \(0\) and \(t\). (b) The cumulant generating function \(\chi(\lambda,t)\) with a step initial state (\(\mu=\infty\)) at times \(t=10\) and \(t=25\) for different circuit realizations (multi-colored) from TEBD simulations, the circuit averaged CGF with 35 samples (red dashed) and the late time analytical prediction for the SEP CGF [63] (black solid). The two FCS snapshots show self-averaging of the FCS; circuit-to-circuit fluctuations in the rescaled CGF \(\chi/\sqrt{t}\) decay as \(\mathcal{O}(1/t)\) [62]. The partition function for this statistical mechanics model is given by a sum over the charge configurations and permutations (compatible with the charges) with statistical weights associated with each edge [82, 83, 60]. In the SM model, \(d\to\infty\) locks together neighboring permutations, and together with the initial and final boundary conditions \(\sigma_{0}=\sigma_{t}=\mathbb{1}\), the \(n\)-replica model decouples into \(n\) independent discrete-time SEP chains. Letting \(d\) be large but finite allows different permutations to appear during the dynamics; domain walls between domains of different permutations \(\sigma\) and \(\tau\) have an energy cost of \(\mathcal{O}\big{(}|\sigma\tau^{-1}|\log(q)\big{)}\) per unit length of domain wall [74] (\(|\sigma|\) is the transposition distance of \(\sigma\) from \(\mathbb{1}\)). This is the basis of a large-\(d\) expansion that is the focus of the next section. We use the time-evolving block decimation (TEBD) algorithm [84; 85; 86] to apply the \(n=2\) SM transfer matrix, and compute exactly the charge transfer variance, \(\overline{C}_{2}\), which is given as an SM expectation value. Denoting the \(n\)-replica expectation value by \(\langle\cdot\rangle_{n\text{-rep}}\), and using superscripts to indicate in which replica an observable acts, the variance is given by \[\overline{C_{2}}(t)=\langle\mathcal{T}\Delta\hat{Q}_{R}^{(1)2}-\Delta\hat{Q}_{R}^{(1)}\Delta\hat{Q}_{R}^{(2)}\rangle_{2\text{-rep}}. \tag{3}\] Using maximum bond dimension \(\chi=1500\), we compute \(\overline{C}_{2}\) for different initial chemical potential imbalances \(\mu\) and for local Hilbert space dimensions \(q\equiv 2d=3,4,6,8\)[87]. The results for \(\mu=0.1\) are shown in the first panel of Fig. 2 and results for \(\mu=2\) and \(\infty\) can be found in the supplementary materials [62]. By subtracting the variance for \(q=\infty\) (i.e., the SEP variance), we isolate the quantum contributions to \(\overline{C}_{2}\), which we call \(\Delta C_{2}\), and find that these decay as \(t^{-1/2}\) for all \(q\) (inset of panel 1, Fig. 2). The \(n\)-replica SM model requires a local state space of dimension \(2^{n}n!\), putting higher cumulants beyond reach with TEBD. In order to access the higher cumulants, and to find a theoretical explanation for the approach to SEP at \(n=2\), we develop an effective stochastic model for the charge dynamics in the replicated SM models. **An effective stochastic model** - At large \(d\), the lowest energy contributions to the SM free energy come from dilute configurations of small domains of single transpositions in an 'all-identity' background. The smallest of these domains - or _bubbles_ - have the lowest possible energy cost of \(4\log(d)\).
All configurations of these bubbles can be counted in the brick-wall circuit picture by inserting a projector \(P_{\mathbb{1}}\) onto the identity permutation sub-space in-between every replicated gate \(\overline{\mathcal{U}}\). Upon doing this, we can replace \(\overline{\mathcal{U}}\) with a gate \(G_{(n)}\) that explores only the \(\sigma=\mathbb{1}\) subspace but has a modified charge dynamics [62] \[P_{\mathbb{1}}\,\overline{U^{\otimes n}\otimes U^{*\otimes n}}\,P_{\mathbb{1}}=P_{\mathbb{1}}\,G_{(n)}\,P_{\mathbb{1}}. \tag{4}\] The result is an effective Markov process described by an \(n\)-chain ladder with hard-core random walkers on each chain and a hopping rate that is conditional on the local occupancy of the other chains. More concretely, the model is that of \(n\) discrete-time SEP chains with pairwise local interactions between chains - when two chains have the same (different) local occupations at a pair of neighboring sites, the interaction biases transitions in favor of states in which both chains have the same local (different) occupations. The derivation of the Markov process is described in detail in the supplementary materials [62]. This effective model inherits an \(n\)-fold \(SU(2)\) invariance (one for each chain) from the SM model, allowing for arbitrary rotations of the charge basis \(\left|q=0,1\right\rangle\) in each chain (see supplementary materials for details).

Figure 2: Circuit averaged charge transfer cumulants \(\overline{C}_{n}\) for \(U(1)\) charge conserving random unitary circuits at different local Hilbert space dimension \(q=3,4,6,8\) and in a discrete-time symmetric simple exclusion process, computed using TEBD applied to the SM transfer matrix: (a) the variance at chemical potential imbalance \(\mu=0.1\) (main) and the difference from SEP \(\Delta C_{2}\) (inset) with data from a replica statistical mechanics model and an effective stochastic process; (b) the third cumulant (rescaled by the inter-chain coupling \(a(d)\)) for a softened stochastic model with Hamiltonian \(H_{3}\) (see eq. (6)) (main) and the approach to SEP of the circuit averaged third cumulant (inset); (c) a proxy for the excess Kurtosis showing a \(t^{-1/2}\) approach to a Gaussian \(\kappa=3\) (main), and the approach to SEP of the circuit averaged fourth cumulant at equilibrium (inset).

Choosing a rotated basis (\(\left|\uparrow\right\rangle\propto\left|0\right\rangle+\left|1\right\rangle\), \(\left|\downarrow\right\rangle\propto\left|0\right\rangle-\left|1\right\rangle\)), the \(n\)-th cumulant can be written in terms of matrix elements of the \(n\)-chain transfer matrix, \(T_{n}\), with the initial and final states having at most \(n\) magnons (overturned spins). This reduces the problem of calculating \(C_{n}\) to the diagonalization of an \(L^{n}\times L^{n}\) matrix. **Results** - By applying the Markov process transfer matrix exactly, we calculate the second and third cumulants at different biases and the fourth cumulant in equilibrium. We find that in all cases, the effective evolution approaches SEP as \(\Delta C_{n}\equiv\overline{C}_{n}-C_{n}^{\rm SEP}\sim a(d)t^{-1/2}\) where \(a(d)=[4d^{4}-1]^{-1}\) (see the insets in Fig. 2 and [62]). The variance data shows excellent agreement between the SM model and the effective model. 
In chaotic models at equilibrium (no bias, \(\mu=0\)), we expect that the distribution \(P_{t}(Q)\) will approach a Gaussian at late times. However, even the long-time deviations from Gaussianity are _universal_ and are captured by an effective classical stochastic model - SEP in the case of random circuits. For example, using standard SEP results [63], we find that at half-filling, the average equilibrium excess Kurtosis decays in a universal way as \[\kappa-3=\frac{(4-3\sqrt{2})\sqrt{\pi}}{2\sqrt{t}}+\ldots \tag{5}\] independently of the value of \(q\). By using a proxy for the Kurtosis that avoids taking a replica limit, \(\bar{\kappa}\equiv\frac{\overline{\mu_{4}}}{\sigma^{4}}\) (\(\mu_{4}\) is the fourth central moment and \(\sigma\) is the standard deviation), we find the same \(t^{-1/2}\) approach to a Gaussian, \(\kappa=3\), for different \(q\) (panel 3 of Fig. 2). We have accentuated the variations between models by using unphysical local Hilbert space dimensions \(q\). **Effective Hamiltonian** - To understand the approach to SEP at long times, we can map the effective \(n\)-chain Markov processes to an effective ferromagnetic Hamiltonian. We do this by softening the transfer matrix, \(T_{n}\to e^{-H_{n}}\). The effective Hamiltonian is given by \[H_{n}\equiv\sum_{j}\sum_{\alpha=1}^{n}P_{j,j+1}^{(\alpha)}-a(d)\sum_{j}\sum_{\alpha<\beta}P_{j,j+1}^{(\alpha)}P_{j,j+1}^{(\beta)}, \tag{6}\] where the superscripts indicate in which chain an operator acts and where the second term contains a sum over distinct pairs of chains. We have dropped sub-leading \(\mathcal{O}\big{(}1/d^{8}\big{)}\) terms. In terms of Heisenberg spin interactions, the projector \(P\) is given by \(P_{j,j+1}=\frac{1}{4}-\mathbf{S}_{j}\cdot\mathbf{S}_{j+1}\). The imaginary time dynamics is then dominated at late times by the low energy physics of (6). We study the low energy spectrum for \(n=2\) using standard spin-wave methods [62] and find that, at late times, the quantum contribution to the charge transfer variance is \[\Delta C_{2}^{H}\approx\frac{a\tanh(\mu/2)^{2}}{16\sqrt{\pi t}}, \tag{7}\] where the superscript \(H\) indicates that this prediction is for the continuous time stochastic model with imaginary time Hamiltonian dynamics [62]. We also consider the third cumulant in the softened stochastic model, finding the familiar \(t^{-1/2}\) decay of quantum fluctuations (Fig. 2 panel 2) from numerics and theoretical predictions in the linear response regime (\(\mu\ll 1\)) [62]. This scaling generalizes to higher cumulants using a simple renormalization group (RG) argument based on power-counting: because of the imaginary time evolution, the long-time dynamics is controlled by the low-energy properties of eq. (6). Using standard spin-coherent state path integral techniques, it is straightforward to show that the perturbation coupling the replicas with strength \(a(d)\) has scaling dimension \(\Delta=4\), and is thus irrelevant in the RG sense. At long times, we thus expect the different replicas (SEP chains) to be effectively decoupled so that \(\langle O\rangle_{n-\rm chain}=\langle O\rangle_{\rm SEP}(1+\mathcal{O}\big{(}t^{-1}\big{)})\) where we have used the \(z=2\) (diffusive) dynamics of the unperturbed Hamiltonian. 
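As a concrete illustration of eq. (6), the sketch below builds \(H_{n}\) literally for \(n=2\) coupled chains by exact construction of the spin operators; the chain length, the value of \(d\), and the use of dense matrices are illustrative choices for a minimal example, not the transfer-matrix machinery used for the results above.

```python
import numpy as np
from functools import reduce

# spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
id2 = np.eye(2)

def op_at(op, site, n_sites):
    """Embed a single-site operator at `site` in a chain of n_sites spins."""
    mats = [id2] * n_sites
    mats[site] = op
    return reduce(np.kron, mats)

def heisenberg_projector(i, j, n_sites):
    """P_{i,j} = 1/4 - S_i . S_j (projector onto the two-site singlet)."""
    dim = 2 ** n_sites
    return 0.25 * np.eye(dim) - sum(
        op_at(s, i, n_sites) @ op_at(s, j, n_sites) for s in (sx, sy, sz)
    )

def effective_hamiltonian(L=4, n=2, d=2):
    """H_n from eq. (6): n ferromagnetic chains of length L, weakly coupled."""
    a = 1.0 / (4 * d ** 4 - 1)        # inter-chain coupling strength a(d)
    n_sites = n * L                    # chain alpha occupies sites alpha*L ... alpha*L+L-1
    dim = 2 ** n_sites
    H = np.zeros((dim, dim), dtype=complex)
    for j in range(L - 1):
        P = [heisenberg_projector(al * L + j, al * L + j + 1, n_sites) for al in range(n)]
        H += sum(P)                                   # intra-chain projector terms
        for al in range(n):
            for be in range(al + 1, n):
                H -= a * (P[al] @ P[be])              # inter-chain coupling term
    return H

H2 = effective_hamiltonian(L=4, n=2, d=2)
evals = np.linalg.eigvalsh(H2)
print("lowest eigenvalues:", np.round(evals[:5], 6))   # ferromagnetic ground states at E=0
```

Because every bond term \(P^{(\alpha)}+P^{(\beta)}-a\,P^{(\alpha)}P^{(\beta)}\) is positive semidefinite for \(a<1\), the fully polarized (ferromagnetic) states sit at zero energy, and the weak inter-chain coupling \(a(d)\) only perturbs the low-energy spectrum, consistent with the RG argument above.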
The asymptotic decoupling between replicas also establishes that circuit-to-circuit fluctuations are suppressed at long times, so that the FCS of individual quantum circuits approaches the SEP predictions as \[\chi(\lambda)/\sqrt{t}=\overline{\chi(\lambda)}/\sqrt{t}+\mathcal{O}(1/t), \tag{8}\] with \(\overline{\chi}/\sqrt{t}\rightarrow\chi_{\rm SEP}/\sqrt{t}\) as \(t\rightarrow\infty\). Our results are thus expected to apply to individual realizations of random quantum circuits, and more broadly to all chaotic many-body quantum systems. To verify this prediction, we have computed the FCS of individual random quantum circuits for a domain wall initial state (\(\mu=\infty\)) using standard counting field techniques [62] (Fig. 1). We find that the rescaled CGF \(\chi(\lambda)/\sqrt{t}\) is indeed self-averaging with \(\mathcal{O}(1/t)\) fluctuations, and does approach the SEP predictions at long times [62]. **Discussion** - Our main result is that charge transfer fluctuations in random charge-conserving quantum circuits are controlled by an effective SEP stochastic model at long times: \(C_{n}=C_{n}^{\rm SEP}+\mathcal{O}\big{(}t^{-1/2}\big{)}\). The full cumulant generating function of individual random circuits \(\chi(\lambda)\approx\overline{\chi(\lambda)}\) must then take the same form as that of SEP at late times, \(\overline{\chi(\lambda)}\equiv\overline{\log\langle e^{i\lambda Q}\rangle}\approx\chi_{\rm SEP}(\lambda)\). The symmetric exclusion process generating function is known analytically [63] from integrability, and is given by \[\chi(\lambda)\approx\sqrt{t}F(\omega),\ F(\omega)=\frac{1}{\sqrt{\pi}}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^{3/2}}\omega^{n}, \tag{9}\] where \(\omega=\rho_{L}(e^{i\lambda}-1)+\rho_{R}(e^{-i\lambda}-1)+\rho_{L}\rho_{R}(e^{i\lambda}-1)(e^{-i\lambda}-1)\) and \(\rho_{L/R}=\frac{e^{\mu_{L/R}}}{1+e^{\mu_{L/R}}}\) is the initial local charge density in the left (\(L\)) and right (\(R\)) halves of the system [88]. The same FCS was recently shown to emerge from MFT [58] by solving eq. (1) directly. Our results thus establish that the current fluctuations of _individual realizations_ of random quantum circuits are described by the simple fluctuating hydrodynamic equation (1). To fully establish the validity of MFT for many-body quantum systems, it would be interesting to extend our results to ensembles of circuits with more general diffusion constants \(D(\rho)\): there as well we expect a mapping onto effective classical stochastic models, with irrelevant inter-replica couplings as in (6). We leave the study of such generalizations to future work. **Acknowledgements** - We thank Immanuel Bloch, Enej Ilievski, Vedika Khemani, Ziga Krajnik, Alan Morningstar, Andrew Potter, Tomaz Prosen, and Andrea De Luca for helpful discussions. This work was supported by the ERC Starting Grant 101042293 (HEPIQ) (J.D.N.), the National Science Foundation under NSF Grants No. DMR-1653271 (S.G.) and DMR-2104141 (E.M.), the US Department of Energy, Office of Science, Basic Energy Sciences, under Early Career Award No. DE-SC0019168 (R.V.), and the Alfred P. Sloan Foundation through a Sloan Research Fellowship (R.V.).
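For reference, the series in eq. (9) is straightforward to evaluate numerically. The following minimal sketch computes the rescaled SEP generating function and extracts the first two cumulants by finite differences; the densities and the step size are arbitrary illustrative choices.

```python
import numpy as np

def F(omega, n_terms=200):
    """F(omega) from eq. (9), truncated series (adequate for |omega| < 1)."""
    n = np.arange(1, n_terms + 1)
    return np.sum((-1.0) ** (n + 1) * omega ** n / n ** 1.5) / np.sqrt(np.pi)

def chi_over_sqrt_t(lam, rho_L, rho_R):
    """Rescaled SEP cumulant generating function chi(lambda)/sqrt(t)."""
    a, b = np.exp(1j * lam) - 1, np.exp(-1j * lam) - 1
    return F(rho_L * a + rho_R * b + rho_L * rho_R * a * b)

# low cumulants per sqrt(t) from central finite differences at lambda = 0
rho_L, rho_R, h = 0.9, 0.1, 1e-2
vals = np.array([chi_over_sqrt_t(k * h, rho_L, rho_R) for k in (-1, 0, 1)])
d1 = (vals[2] - vals[0]) / (2 * h)
d2 = (vals[2] - 2 * vals[1] + vals[0]) / h ** 2
C1 = (-1j * d1).real      # C1 = -i d(chi)/d(lambda) at 0
C2 = (-d2).real           # C2 = -d^2(chi)/d(lambda)^2 at 0
print(f"C1/sqrt(t) = {C1:.4f}  (analytic {(rho_L - rho_R) / np.sqrt(np.pi):.4f})")
print(f"C2/sqrt(t) = {C2:.4f}")
```

The first cumulant reproduces the expected diffusive mean transfer \((\rho_{L}-\rho_{R})\sqrt{t/\pi}\), providing a quick consistency check of eq. (9).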
2310.18415
Overture POI data for the United Kingdom: a comprehensive, queryable open data product
Point of Interest data that is comprehensive, globally-available and open-access, is sparse, despite being important inputs for research in a number of application areas. New data from the Overture Maps Foundation offers significant potential in this arena, but accessing the data relies on computational resources beyond the skillset and capacity of the average researcher. In this article, we provide a processed version of the Overture places (POI) dataset for the UK, in a fully-queryable format, and provide accompanying code through which to explore the data, and generate other national subsets. In the article, we describe the construction and characteristics of the dataset, before considering how reliable it is (locational accuracy, attribute comprehensiveness), through direct comparison with Geolytix supermarket data. This dataset can support new and important research projects in a variety of different thematic areas, and foster a network of researchers to further evaluate its advantages and limitations.
Patrick Ballantyne, Cillian Berragan
2023-10-27T18:15:48Z
http://arxiv.org/abs/2310.18415v1
# Overture POI data for the United Kingdom: a comprehensive, queryable open data product ###### Abstract Point of Interest data that is comprehensive, globally-available and open-access, is sparse, despite being important inputs for research in a number of application areas. New data from the Overture Maps Foundation offers significant potential in this arena, but accessing the data relies on computational resources beyond the skillset and capacity of the average researcher. In this article, we provide a processed version of the Overture places (POI) dataset for the UK, in a fully-queryable format, and provide accompanying code through which to explore the data, and generate other national subsets. In the article, we describe the construction and characteristics of the dataset, before considering how reliable it is (locational accuracy, attribute comprehensiveness), through direct comparison with Geolytix supermarket data. This dataset can support new and important research projects in a variety of different thematic areas, and foster a network of researchers to further evaluate its advantages and limitations. Points of Interest Overture Amazon Web Services ## 1 Introduction Point of Interest (POI) data is an invaluable source of information, acting as a key input to much of the research that has, and continues to be generated in urban analytics and city science. These data provide key locational attributes about a broad variety of social, environmental and economic phenomena, including historical landmarks, parks, hospitals and retailers, and have been vital sources of data for different applications, including health (Green et al. 2018; Hobbs et al. 2019), urban mobility (Graells-Garrido et al. 2021; Jay et al. 2022), retail and location analysis (Ballantyne et al. 2022), transportation (Owen, Arribas-Bel, and Rowe 2023; Credit 2018), and many others. However, a major challenge when working with POI data relates to the coverage and comprehensiveness of these datasets (Ballantyne et al. 2022; Zhang and Pfoser 2019). By this we mean how much the chosen source(s) of POI data restricts the analyses to specific cities or regions (i.e., coverage), and the attributes and characteristics that are provided for each POI (i.e., comprehensiveness). Many POI datasets offer a high level of global coverage and availability, such as OpenStreetMap. However there are problems when considering the coverage and comprehensiveness of OpenStreetMap data at finer spatial resolutions and in areas with less contributors (Haklay 2010), as well as in less developed countries (Mahabir et al. 2017). Similarly, datasets like OpenStreetMap often contain inconsistent attributes for economic activities like retail stores and leisure (Zhang and Pfoser 2019; Ballantyne et al. 2022). Some POI datasets exist which fill this gap, such as the Ordnance Survey 'Points of Interest' data product, which provides a more comprehensive database of economic activities (Haklay 2010 ), but is not openly-available. Other data providers have democratised access to comprehensive POI datasets such as SafeGraph and the Local Data Company, however these datasets exhibit poor global coverage of non-branded POIs (SafeGraph), and a lack of comprehensive coverage in the UK (Delega et al. 2021). As a result, there is a clear gap for data that can address some of these limitations, by providing an openly-available, comprehensive and accurate source of POIs for the UK. 
In this article, we introduce readers to a processed version of the Overture Maps places (POI) dataset (Overture Maps Foundation 2023), which arguably provides a strong solution to many of these problems, and can facilitate groundbreaking urban analytics research in a number of different application areas. ## 2 Data The data was accessed through the Overture Maps Foundation, which was set up as a collaborative venture to develop reliable, easy-to-use, and interoperable open map data (Overture Maps Foundation 2023). The foundation, which is steered by Amazon, Microsoft, Meta and TomTom, has developed a number of open data products including Buildings, Places, Transportation and Administrative Geographies, all of which are available at global scales and contain a detailed number of attributes (Overture Maps Foundation 2023). Users can access the data parquet files from the cloud using Amazon Athena, Microsoft Synapse or DuckDB, or download them locally. However, a specific challenge for urban analytics researchers and city scientists is that the majority will not have the data engineering skills to query these datasets from the cloud, and process the attributes in their nested JSON format. Furthermore, for those who want to download the files locally, they can be difficult to work with, as the full global places file is over 200 GB. Therefore, our aim is to provide a processed subset of the Overture places dataset for the UK, which bypasses these issues, and creates an open data product for use in research. Overture hosts all data through Amazon Web Services (AWS), which enables a number of query end points to be used to download data subsets. The Overture data schema includes a bounding box structure column to enable efficient spatial SQL queries. To query POI data for the UK, a spatial SQL query was constructed using the DuckDB SQL engine and the UK bounding box, based on EPSG:27700. This query downloaded a GeoPackage file containing all POIs within the UK bounding box, totalling 1.34 GB. This file was then clipped to the administrative boundaries of the United Kingdom, to exclude non-UK places that appeared within the bounding box query. As noted, many of the columns that provide metadata relating to POIs are represented in a nested JSON format (columns containing lists of lists), which are difficult to efficiently parse with traditional tabular data frame libraries. We therefore processed the following columns to ensure the data frame remained two-dimensional: Names, Category, Address and Brand. Following this processing, we spatially joined the 2021 census area geographies for England including Output Areas (OA), Lower layer Super Output Areas (LSOA), Middle layer Super Output Areas (MSOA), and the 2022 Local Authority Districts (LAD). For both Scotland and Northern Ireland, we spatially joined the 2011 Data Zone geographies. We also include the H3 (hexagons) addresses associated with each point for all resolutions between 1 and 9. The resulting dataset is a 358 MB GeoParquet file, hosted as part of a DagsHub data repository, and the final processed data file, comprising the Overture POI subset for the UK can be easily downloaded1. A list of attributes for the data product can be found in Table i (supplementary materials), and as a secondary output of this paper, an example workflow for how to extract Overture places for other study areas has also been produced2. 
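To illustrate the kind of cloud query described above, the following is a minimal DuckDB sketch in Python. The Overture S3 release path, the bbox column names, and the bounding-box coordinates are assumptions that change between Overture releases (and this sketch writes a local Parquet file rather than the GeoPackage used in our workflow), so they should be checked against the current Overture documentation before running.

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")
con.execute("SET s3_region='us-west-2';")

# Approximate UK bounding box in WGS84 (illustrative values only).
XMIN, XMAX, YMIN, YMAX = -8.65, 1.77, 49.86, 60.86

# Release path and bbox field names below are assumptions for a 2023 release.
con.execute(f"""
    COPY (
        SELECT *
        FROM read_parquet(
            's3://overturemaps-us-west-2/release/2023-07-26-alpha.0/theme=places/type=place/*',
            hive_partitioning=1
        )
        WHERE bbox.minx > {XMIN} AND bbox.maxx < {XMAX}
          AND bbox.miny > {YMIN} AND bbox.maxy < {YMAX}
    ) TO 'uk_places.parquet' (FORMAT 'parquet');
""")
```

The bounding-box filter exploits the bbox structure column mentioned above, so the query only reads the row groups that can intersect the study area rather than the full global file.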
Footnote 1: [https://figshare.com/s/144265a705159c03c08f?file=42761512](https://figshare.com/s/144265a705159c03c08f?file=42761512) Footnote 2: [https://figshare.com/s/144265a705159c03c08f?file=42809656](https://figshare.com/s/144265a705159c03c08f?file=42809656) ## 3 Reliability Analysis - Retail Brands To assess the reliability of Overture places, we compared them with the Geolytix Supermarket Retail Points dataset (Geolytix 2023), which is known to provide reliable information about supermarkets in the UK, and provides a useful 'ground-truth' dataset to test how well Overture represents economic activities. In particular, we examined how many of the Geolytix supermarkets are captured in Overture, the accuracy of the POI coordinates, and how complete the category/brand information is. Table 1 shows that the Overture data aligns well with the Geolytix data, with small differences across the three retailers (< 5%). Table 1 also shows that there was a relatively low median distance (metres) between Overture points and their closest Geolytix point, evidencing a relatively high level of accuracy in terms of geographical positioning. This is an important attribute, as incorrect positioning of POI data can have dramatic implications for accessibility measurement (Green et al. 2018; Graells-Garrido et al. 2021), and urban boundary delineation (Ballantyne et al. 2022). In terms of the comprehensiveness of the category and brand information, a large number of the Overture POIs contained missing values for categories or brands (Table 2), making filtering of the dataset to a specific retailer (e.g., Waitrose) slightly less simple. Table 2 displays the complexity of these issues, where different degrees of completeness are apparent when considering the source of the POI (Meta or Microsoft). This has strong implications for how Overture data can and should be used, especially for applications involving specific POI categories or brands. Whilst it is not impossible to extract a complete list of POIs for a retailer, through collective filtering of POI name, brand and categories to collect these features (see supplementary materials), users should be aware of the high level of attribute incompleteness for POIs extracted from Microsoft. Further reliability analysis is beyond the scope of this paper, but there is a clear need for further investigation into how well Overture places captures category and brand information for other non-retail POIs (e.g., GP practices, post offices). ## 4 Application - Mapping supermarkets in the UK To demonstrate how this dataset can be used, an example workflow has been presented which reads in the UK processed version of Overture places, filters to a specific brand of supermarket, and then maps the distribution of these nationally (Figure 1). The purpose of these workflows is to illustrate how easy it is to work with this dataset, and the variety of different POI attributes that are stored within the dataset. 
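As a sketch of the collective name/brand filtering referred to above, the snippet below reads a processed GeoParquet subset and extracts one retailer; the file name and the column names ("names", "brand") are assumptions standing in for the attributes listed in Table i of the supplementary materials.

```python
import geopandas as gpd

# Hypothetical local copy of the processed UK Overture places subset.
places = gpd.read_parquet("overture_places_uk.parquet")

# Match the retailer on either the POI name or the brand attribute, since
# brand information is incomplete for a share of records (see Table 2).
mask = (
    places["names"].str.contains("waitrose", case=False, na=False)
    | places["brand"].str.contains("waitrose", case=False, na=False)
)
waitrose = places[mask]
print(f"{len(waitrose)} candidate Waitrose POIs")

# Quick national map of the filtered POIs.
waitrose.plot(markersize=2)
```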
Example workflows have been presented for both the Python3 and R4 programming languages, and utilise preferred packages for data manipulation and mapping (e.g., arrow, geopandas). Footnote 3: [https://figshare.com/s/144265a705159c03c08f?file=42809500](https://figshare.com/s/144265a705159c03c08f?file=42809500) Footnote 4: [https://figshare.com/s/144265a705159c03c08f?file=42809452](https://figshare.com/s/144265a705159c03c08f?file=42809452) ## 5 Conclusion This paper presents a comprehensive, queryable open data product, which represents a processed UK national subset of the Overture places database. This new open data product makes Overture data more accessible for researchers, bypassing the need for advanced data engineering skills and large amounts of memory on which to store the complete database. The potential applications of this data product in a variety of different fields are highly significant (e.g., urban accessibility), given the evidence presented about the coverage, comprehensiveness and locational accuracy of this dataset. At a time when the retail sector is undergoing significant transformations in response to the cost-of-living crisis, such data can provide invaluable insights about the characteristics and performance of the sector (Ballantyne et al. 2022; Dolega et al. 2021), which has historically been a challenge due to the availability of suitable retailer data. However, there are inherent limitations to this dataset, which have been illustrated through direct comparison with Geolytix data. Users need to be cautious about how they are using this data, especially when the POIs they are using are largely sourced from Microsoft. However, it is our hope that by releasing this data into the open domain, a network of researchers will be fostered who can utilise this data for their own research questions, and critically evaluate how the Overture places database represents a variety of different social, economic and environmental activities. ## 6 Data Availability Statement The UK Overture data product (anonymised for peer review) can be downloaded directly from Figshare: [https://figshare.com/s/144265a705159c03c08f?file=42761512](https://figshare.com/s/144265a705159c03c08f?file=42761512). The data product can be directly queried from the DagsHub repository, but for the purposes of anonymous peer review, this has not been included in the paper. \begin{table} \begin{tabular}{l c c c} \hline \hline **Retailer** & **Geolytix count** & **Overture count** & **Average distance between points (m)** \\ \hline Waitrose & 422 & 420 & 8.3 \\ Spar & 2,339 & 2,308 & 6.5 \\ Tesco & 2,840 & 2,753 & 6.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Reliability analysis of Overture compared with Geolytix retail points dataset. \begin{table} \begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{4}{c}{**Attribute incompleteness (\%)**} \\ \cline{2-5} & \multicolumn{2}{c|}{**Category information**} & \multicolumn{2}{c}{**Brand information**} \\ \hline **Retailer** & **Meta** & **Microsoft** & **Meta** & **Microsoft** \\ \hline Waitrose & 100 & N/A & 23.33 & N/A \\ Spar & 0.18 & 100.00 & 11.63 & 100.00 \\ Tesco & 0.00 & N/A & 2.29 & N/A \\ \hline \hline \end{tabular} \end{table} Table 2: Overture attributes compared with Geolytix retail points dataset.
2307.12291
TransHuman: A Transformer-based Human Representation for Generalizable Neural Human Rendering
In this paper, we focus on the task of generalizable neural human rendering which trains conditional Neural Radiance Fields (NeRF) from multi-view videos of different characters. To handle the dynamic human motion, previous methods have primarily used a SparseConvNet (SPC)-based human representation to process the painted SMPL. However, such SPC-based representation i) optimizes under the volatile observation space which leads to the pose-misalignment between training and inference stages, and ii) lacks the global relationships among human parts that is critical for handling the incomplete painted SMPL. Tackling these issues, we present a brand-new framework named TransHuman, which learns the painted SMPL under the canonical space and captures the global relationships between human parts with transformers. Specifically, TransHuman is mainly composed of Transformer-based Human Encoding (TransHE), Deformable Partial Radiance Fields (DPaRF), and Fine-grained Detail Integration (FDI). TransHE first processes the painted SMPL under the canonical space via transformers for capturing the global relationships between human parts. Then, DPaRF binds each output token with a deformable radiance field for encoding the query point under the observation space. Finally, the FDI is employed to further integrate fine-grained information from reference images. Extensive experiments on ZJU-MoCap and H36M show that our TransHuman achieves a significantly new state-of-the-art performance with high efficiency. Project page: https://pansanity666.github.io/TransHuman/
Xiao Pan, Zongxin Yang, Jianxin Ma, Chang Zhou, Yi Yang
2023-07-23T10:59:51Z
http://arxiv.org/abs/2307.12291v1
# TransHuman: A Transformer-based Human Representation for Generalizable Neural Human Rendering ###### Abstract In this paper, we focus on the task of generalizable neural human rendering which trains conditional Neural Radiance Fields (NeRF) from multi-view videos of different characters. To handle the dynamic human motion, previous methods have primarily used a SparseConvNet (SPC)-based human representation to process the painted SMPL. However, such SPC-based representation i) optimizes under the volatile observation space which leads to the pose-misalignment between training and inference stages, and ii) lacks the global relationships among human parts that is critical for handling the incomplete painted SMPL. Tackling these issues, we present a brand-new framework named TransHuman, which learns the painted SMPL under the canonical space and captures the global relationships between human parts with transformers. Specifically, TransHuman is mainly composed of Transformer-based Human Encoding (TransHE), Deformable Partial Radiance Fields (DPaRF), and Fine-grained Detail Integration (FDI). TransHE first processes the painted SMPL under the canonical space via transformers for capturing the global relationships between human parts. Then, DPaRF binds each output token with a deformable radiance field for encoding the query point under the observation space. Finally, the FDI is employed to further integrate fine-grained information from reference images. Extensive experiments on ZJU-MoCap and H36M show that our TransHuman achieves a significantly new state-of-the-art performance with high efficiency. Project page: [https://pansanity666.github.io/TransHuman/](https://pansanity666.github.io/TransHuman/) ## 1 Introduction Rendering free-viewpoint videos of dynamic human performers in high fidelity is vital for many applications such as mixed reality, gaming, and telepresence. Recent works [29, 28, 39, 33] integrate the Neural Radiance Fields (NeRF) [27] technology with parametric human prior models (_e.g_., SMPL [24]) for handling the dynamic human body and achieve fair novel view synthesis results. However, the tedious per-subject optimization and the requirement of dense training views largely hinder the application of such methods. Targeting these issues and inspired by the recent success of generalizable NeRF [42, 4, 37] on static scenes, the task of generalizable neural human rendering is proposed [18], which trains conditional NeRF across multi-view human videos, and can generalize to a new subject in a single feed-forward manner given sparse reference views as input.

Figure 1: **Comparisons between existing SPC-based and our transformer-based human representations.** Given the incomplete painted SMPL, the SPC-based one optimizes under the varying observation space with limited receptive fields from 3D convolution. Instead, our transformer-based one optimizes under the canonical space with global relationships between human parts.

Previous methods for generalizable neural human rendering [5, 18] mainly employ the SparseConvNet (SPC) [21]-based human representation (upper row of Fig. 1), which first project deep features from reference images onto the vertices of fitted SMPL and then diffuse them to nearby regions via SPC. The final representation is achieved via the trilinear sampling in the discrete 3D feature volume. Such SPC-based representation mainly suffers from the following two aspects: (i) _Volatile observation learning_. The SPC-based one optimizes under the observation space that contains varying poses. 
This leads to the pose misalignment during training and inference stages, and therefore limits the generalization ability. (ii) _Limited local receptive fields_. As shown in Fig. 1, due to the heavy self-occlusion of dynamic human bodies, the painted SMPL templates are usually incomplete. While, as a 3D convolution network, the limited local receptive fields of SPC make it sensitive to the incomplete input, especially when the occluded regions are large. To address the aforementioned issues, we propose to first process the painted SMPL with transformers under the _static canonical space_ to remove the pose misalignment between training and inference stages and capture the _global relationships_ between human parts. Then, a deformation from the canonical to the observation space is required to fetch the human representation of a query point (sampling points on rays) under the observation space. Finally, the fine-grained information directly achieved from the observation space should be further included to the coarse human representation to complement the details. Motivated by this, we present the TransHuman, a brand-new framework that shows superior generalization ability with high efficiency. TransHuman is mainly composed of Transformer-based Human Encoding (TransHE), Deformable Partial Radiance Fields (DPaRF), and Fine-grained Detail Integration (FDI). (i) _TransHE_. TransHE is a pipeline that processes the painted SMPL under the canonical space with transformers [9]. The core of this pipeline includes a canonical body grouping strategy for the avoidance of semantic ambiguity, and a canonical learning scheme to ease the learning of global relationships. (ii) _DPaRF_. DPaRF deforms the output tokens of TransHE from the canonical space to the observation space and gets a robust human representation for a query point from marched rays. As shown in Fig. 1, the main idea is to bind each token (representing a certain human part) with a radiance field whose partial coordinate system deforms as the pose changes, and the query point is encoded via the coordinates under the deformed partial coordinate systems. (iii) _FDI_. With TransHE and DPaRF, the human representation contains coarse information with human priors yet limited fine-grained details directly captured from the observation space. Therefore, similar to [18], we propose to further integrate the detailed information from the pixel-aligned features at the guidance of the human representation. Extensive experiments on ZJU-MoCap [29] and H36M [15] demonstrate the superior generalization ability and high efficiency of TransHuman which attains a new state-of-the-art performance and outperforms previous methods by significant margins,, \(+2.20\) PSNR and \(-45\%\) LPIPS on ZJU-MoCap [29] under the pose generalization setting. Our contributions are summarized as follows: * We propose a brand-new framework TransHuman for the challenging generalizable neural human rendering task which attains a significantly new state-of-the-art performance with high efficiency. * We propose to process the painted SMPL under the canonical space to remove the pose misalignment during training and inference stages and deform it back to the observation space via DPaRF for robust query point encoding. * To the best of our knowledge, we make the first attempt to explore the transformers technology around the painted SMPL for capturing the global relationships between human parts. 
## 2 Related Work ### Human Performance Capture Synthesizing novel views for human performer is a long-standing topic in computer vision and graphics. Traditional methods [10, 6, 12, 7] typically require expensive hardware like depth sensors for getting reasonable results. With the recent success of Neural Radiance Fields (NeRF) [27, 2], many works [29, 28, 39, 33] have attempted to learn the 3D human representation from image inputs via differentiable rendering. However, they require tedious per-subject optimization on dense training images, and can not generalize to unseen subjects, which largely confines the real-world applications. To tackle this issue and inspired by the recent advances of generalizable NeRF methods [42, 4, 37], the generalizable neural human rendering task is explored [18, 11, 5, 44]. At the core of this task is to properly exploit the human prior from the pre-fitted parametric human model. One line of works [44, 11] take the parametric human model as the medium of the deformation between observation and canonical spaces using blend skinning technology [14, 19, 22], and optimize conditional NeRF under a canonical pose. Instead, another line of works [18, 5] directly diffuse the painted parametric human model under the observation space via SparseConveNet (SPC) [21] for a human representation with approximate priors, and the final condition feature for a query point is the hybrid of human representation and pixel-aligned features. Obviously, a high-quality human representation is critical in this paradigm, yet the SPC-based one optimizes under the varying observation space, lacks the global perspective, and is restricted by the trilinear sampling in discrete 3D volumes. Targeting these issues, we present TransHuman with an advanced human representation based on transformers [36, 35, 9], and outperforms the previous state-of-the-art methods by significant margins. ### Transformers with Neural Radiance Fields With the significant advances of the transformer architecture [8, 9, 3, 30], several works [20, 17, 32, 37, 16, 41] have attempted to introduce it with NeRF technology. Specifically, [20] combines transformers with CNN [13] as a stronger feature extractor for reference images, [17, 32, 37] use transformers as the aggregator of source view features, and [16, 41] introduce the pre-trained transformers [30, 3] as a semantic prior to relieve the dense requirement of training views. Differently, in this paper, we make the first attempt to apply the transformer technology around the surface of painted SMPL for a stronger human representation that captures the global relationship between human parts. ## 3 Method **Overview.** The task of generalizable neural human rendering targets on learning conditional NeRF across multi-view videos of different subjects, which can generalize to unseen subjects in a single feed-forward pass given sparse reference views. At the core of the task is to get a high-quality condition feature that contains accurate subject information for each query point sampled on rays. To this end, we propose a novel framework named TransHuman which shows superior generalization ability. As shown in Fig. 2, TransHuman is mainly composed of three aspects: Transformer-based Human Encoding (TransHE), Deformable Partial Radiance Fields (DPaRF), and Fine-grained Detail Integration (FDI). SS 3.1 introduces the TransHE which builds a pipeline for capturing the global relationships between human parts via transformers under the canonical space. 
SS 3.2 demonstrates the DPaRF which deforms the processed SMPL back to the observation space and fetches a robust human representation. SS 3.3 presents the FDI module that further gathers the fine-grained information directly from the observation space with the guidance of human representation. After that, we introduce the volume rendering in SS 3.4, and the training and inference pipelines in SS 3.5. ### Transformer-based Human Encoding For simplicity, we start by introducing the process of a single reference image that is applicable for all other views, and the multi-view aggregation will be detailed in SS 3.3. Given a reference image \(I\) for a certain time step and its corresponding pre-fitted SMPL model \(V^{o}\in\mathbb{R}^{6890\times 3}\) under the observation pose +, we first project the \(d_{1}\)-dimensional deep features of \(I\) extracted by CNN to the vertices of \(V^{o}\) based on the camera information, and get the painted SMPL \(F\in\mathbb{R}^{6890\times d_{1}}\). Previous methods [18, 5] have mainly employed the SPC [21] to diffuse the painted SMPL to nearby space (Fig. 1). However, they optimize under the varying observation space which leads to the pose misalignment between training and inference stages, and the limited receptive fields of 3D convolution blocks make it sensitive to the incomplete painted SMPL input caused by the heavy self-occlusions of human bodies.

Figure 2: **Overview of TransHuman.** TransHE first builds a pipeline for capturing the global relationships between human parts via transformers under the canonical space. Then, DPaRF deforms the coordinate system from the canonical back to the observation space and encodes a query point as an aggregation of coordinates and condition features. Finally, FDI further gathers the fine-grained information of the observation space from the pixel-aligned appearance feature under the guidance of human representation.

Tackling these issues, we present a pipeline named Transformer-based Human Encoding (TransHE) that captures the global relationships between human parts under the canonical space. The key of TransHE includes a canonical body grouping strategy for avoiding the semantic ambiguity and a canonical learning scheme to ease the optimization and improve the generalization ability. **Canonical Body Grouping.** Directly taking all the vertex features of \(F\) as input tokens of transformers is neither effective considering the misalignment between fitted SMPL and the ground truth body, nor efficient due to the large vertex number, _i.e_., \(6890\). A possible solution is to directly perform the grid voxelization [25] on \(F\) under the observation pose. However, due to the complex human poses, this will lead to the semantic ambiguity issue. More concretely, the gathered vertices in each voxel are highly different as the pose changes (_i.e_., temporal semantic variance), and a voxel might include vertices from dispersed semantic parts (_i.e_., spatial semantic entanglement), as illustrated in Fig. 3. To tackle this issue, we propose that grouping the vertices under the canonical space and then applying this canonical grouping to all the observation poses is a better choice. 
Compared with the varying observation poses, the canonical pose is both _static_ and more _stretched_, which can largely relieve the semantic ambiguity issue via the consistent split among different poses (_i.e_., temporal semantic consistency) and more disentangled semantics in each voxel (_i.e_., spatial semantic disentanglement), as shown by the right part of Fig. 3. Formally, we first process the canonically posed (T-posed) SMPL \(V^{c}\in\mathbb{R}^{6890\times 3}\) with a clustering algorithm (_e.g_., k-means [1]) based on the 3D coordinates, and get a grouping dictionary \(\mathcal{D}^{c}\) caching the indexes of the SMPL vertices that belong to the same cluster, as illustrated in Fig. 2. Notice that we only need to calculate \(\mathcal{D}^{c}\) once before training. Then, for each iteration, the features from the same cluster are aggregated via average pooling: \[\widehat{F}=\mathcal{G}_{\mathcal{D}^{c}}(F),\quad\widehat{F}\in\mathbb{R}^{N _{t}\times d_{1}}, \tag{1}\] where \(N_{t}\) is the number of clusters (tokens), and \(\mathcal{G}_{\mathcal{D}^{c}}(\cdot)\) indicates indexing based on \(\mathcal{D}^{c}\) and then performing average pooling in each cluster. **Canonical Learning.** After grouping, we now have a decent number of input tokens, and the next question is about the choice of position embedding for each token. Since we need the condition feature of a query point under the observation space, a possible choice is to directly learn under the observation space (same as SPC-based methods [18, 5]) and use the 3D coordinates of each token under the observation pose as the position information, _i.e_., \(\widehat{V}^{o}=\mathcal{G}_{\mathcal{D}^{c}}(V^{o})\in\mathbb{R}^{N_{t}\times 3}\). However, except for the pose misalignment issue mentioned previously, \(\widehat{V}^{o}\) is also varying for different time steps, which leads to the unfixed patterns of position embeddings that make it harder to capture the global relationships between human parts. Hence, to address these issues, we propose to learn the global relationships under the static canonical space for removing the pose-misalignment and easing the learning of global relationships: \[\widehat{F}^{{}^{\prime}}=\mathcal{T}(\widehat{F},\gamma_{1}(\widehat{V}^{c} )), \tag{2}\] where \(\widehat{V}^{c}=\mathcal{G}_{\mathcal{D}^{c}}(V^{c})\) is the token positions under the canonical space, \(\gamma_{1}(\cdot):\mathbb{R}^{3\to d_{1}}\) represents the positional encoding used in the original NeRF [27], \(\mathcal{T}(\cdot):\mathbb{R}^{d_{1}\to d_{1}}\) indicates the transformers, and \(\widehat{F}^{{}^{\prime}}\in\mathbb{R}^{N_{t}\times d_{1}}\) is the output tokens with learned global relationships between each other. ### Deformable Partial Radiance Fields For deforming the processed SMPL back to the observation space and get a robust human representation, we present the Deformable Partial Radiance Fields (DPaRF). The main idea of DPaRF is to bind each output token of TransHE with a conditional partial radiance field for a certain semantic part whose coordinate system deforms as the pose changes under the observation space, and the query points from rays are encoded as the coordinates under the deformed coordinate system, as shown in Fig. 2. 
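Before turning to the coordinate deformation, the following PyTorch sketch illustrates the TransHE steps in Eqs. (1)-(2): a one-off k-means grouping of the canonical vertices, per-cluster average pooling of painted-SMPL features, and a transformer over the resulting tokens with a canonical positional encoding. The tensor dimensions, the use of scikit-learn's k-means, the linear projection of the positional encoding, and the vanilla transformer encoder are illustrative stand-ins for the actual implementation (ViT-Tiny, etc.), not the released code.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

N_VERTS, N_TOKENS, D1 = 6890, 300, 64

# --- computed once before training: canonical body grouping D^c -------------
V_c = torch.randn(N_VERTS, 3)  # T-posed SMPL vertices (placeholder values)
cluster_id = torch.as_tensor(
    KMeans(n_clusters=N_TOKENS, n_init=4, random_state=0).fit(V_c.numpy()).labels_,
    dtype=torch.long)

def group_mean(x, ids, n_groups):
    """Average features (or coordinates) within each cluster: the G_{D^c} operator."""
    out = torch.zeros(n_groups, x.shape[-1])
    cnt = torch.zeros(n_groups, 1)
    out.index_add_(0, ids, x)
    cnt.index_add_(0, ids, torch.ones(len(ids), 1))
    return out / cnt.clamp(min=1)

def posenc(x, n_freqs=8):
    """NeRF-style positional encoding gamma(.)."""
    freqs = 2.0 ** torch.arange(n_freqs) * torch.pi
    enc = [fn(x[..., None] * freqs).flatten(-2) for fn in (torch.sin, torch.cos)]
    return torch.cat(enc, dim=-1)

# --- per iteration: Eq. (1) grouping, then Eq. (2) transformer ---------------
F_painted = torch.randn(N_VERTS, D1)                       # image features on SMPL vertices
tokens = group_mean(F_painted, cluster_id, N_TOKENS)       # F_hat, shape (300, 64)
canon_pos = posenc(group_mean(V_c, cluster_id, N_TOKENS))  # gamma_1(V_hat^c)
pe = nn.Linear(canon_pos.shape[-1], D1)                    # project PE to token width
x = tokens + pe(canon_pos)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D1, nhead=4, batch_first=True), num_layers=2)
F_out = encoder(x.unsqueeze(0)).squeeze(0)                 # F_hat' with global relationships
print(F_out.shape)                                         # torch.Size([300, 64])
```

The grouping dictionary is computed once on the canonical vertices and reused for every frame, which is what keeps the token semantics temporally consistent across observation poses.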
**Coordinate System Deformation.** Given the \(i\)-th token \(\widehat{F}^{{}^{\prime}}_{i}\in\mathbb{R}^{d_{1}}\) from the TransHE output, a coordinate system \(W^{c}_{i}\in\mathbb{R}^{3\times 3}\) is initialized under the canonical space which takes \(\widehat{V}^{c}_{i}\in\mathbb{R}^{3}\) as the origin†. Footnote †: Without loss of generality, we set \(W_{i}\) as the identity matrix for all the tokens for simplicity. Then, as the pose changes under the observation space, we rotate \(W^{c}_{i}\) with the rotation matrix \(\widehat{R}_{i}\in\mathbb{R}^{3\times 3}\) of token \(i\): \[W^{o}_{i}=\widehat{R}_{i}W^{c}_{i}, \tag{3}\] where \(\widehat{R}_{i}\) is the averaged rotation matrix for vertices belonging to the \(i\)-th token, _i.e_., \(\widehat{R}=\mathcal{G}_{\mathcal{D}^{c}}(R)\in\mathbb{R}^{N_{t}\times 3\times 3}\), and \(R\in\mathbb{R}^{6890\times 3\times 3}\) can be calculated via blending the rotation matrices of \(24\) joints with the blend weights provided by SMPL [24].

Figure 3: **2D illustration of the semantic ambiguity issue.** Naive grid voxelization under the observation space leads to spatial semantic entanglement and temporal semantic variance issues, while the semantics with our canonical body grouping strategy is temporally consistent and spatially disentangled.

**Coordinate Encoding.** After that, for a query point \(\mathbf{p}\) sampled from the rays under the observation space, we get its coordinate \(\overline{\mathbf{p}}_{i}\) under the DPaRF of the \(i\)-th token with: \[\overline{\mathbf{p}}_{i}=W_{i}^{o}(\mathbf{p}-\widehat{V}_{i}^{o}). \tag{4}\] And the final fetched human representation from the DPaRF of the \(i\)-th token is: \[\mathbf{h}_{i}=[\widehat{F}_{i}^{{}^{\prime}};\gamma_{2}(\overline{\mathbf{p}}_{i})],\ \ \ \mathbf{h}_{i}\in\mathbb{R}^{d_{2}}, \tag{5}\] where \([;]\) indicates the concatenation, and \(\widehat{F}_{i}^{{}^{\prime}}\) is the condition feature for the \(i\)-th DPaRF. **K-nearest Fields Aggregation.** Finally, for a more robust representation, we assign a query point \(\mathbf{p}\) to its \(N_{k}\) nearest DPaRFs, and aggregate them based on the distances: \[\mathbf{h}=\sum_{i=1}^{N_{k}}softmax(-\frac{\|\mathbf{p}-\widehat{V}_{i}^{o}\|_{2}}{\sum_{i}\|\mathbf{p}-\widehat{V}_{i}^{o}\|_{2}})\mathbf{h}_{i},\ \ \ \mathbf{h}\in\mathbb{R}^{d_{2}}. \tag{6}\] ### Fine-grained Detail Integration With TransHE and DPaRF, for a query point \(\mathbf{p}\), we can actually achieve a set of human representations from \(N_{v}\) reference views \(\mathbf{h}^{1:N_{v}}=\{\mathbf{h}^{j}\}_{j=1}^{N_{v}}\in\mathbb{R}^{N_{v}\times d_{2}}\) following the same procedure. \(\mathbf{h}^{1:N_{v}}\) contains coarse information with human priors (_e.g_., geometry constraints and certain color information) yet lacks the fine-grained information (_e.g_., lighting, textures) for high-fidelity novel view synthesis. Therefore, inspired by [18], we further integrate the fine-grained information from the pixel-aligned appearance feature \(\mathbf{a}^{1:N_{v}}=\{\mathbf{a}^{j}\}_{j=1}^{N_{v}}\in\mathbb{R}^{N_{v}\times d_{2}}\) at the guidance of human representation \(\mathbf{h}^{1:N_{v}}\). **Fine-grained Appearance Features.** For the appearance features, instead of directly using projected deep features from CNN, _i.e_., the one used when painting SMPL, we additionally concatenate the projected RGB-level information from the raw images and then fuse them with a fully connected layer \(FC(\cdot):\mathbb{R}^{3+d_{1}\to d_{2}}\). 
The projected RGB features can complement the misaligned and lost details caused by the down-sample operation in CNN. **Coarse-to-fine Integration.** Then, we employ a cross-attention module which takes the human representation \(\mathbf{h}^{1:N_{v}}\) as the query, and the appearance feature \(\mathbf{a}^{1:N_{v}}\) as the key and value, and get the integrated feature \(\mathbf{f}^{1:N_{v}}\in\mathbb{R}^{N_{v}\times d_{2}}\). The final condition feature \(\mathbf{f}\in\mathbb{R}^{d_{2}}\) of query point \(\mathbf{p}\) is achieved via the average pooling on the view dimension: \(\mathbf{f}=\sum_{j=1}^{N_{v}}\frac{1}{N_{v}}\mathbf{f}^{j}\). ### Volume Rendering **Density & Color Prediction.** The final density \(\sigma(\mathbf{p})\in\mathbb{R}^{1}\) and color \(\mathbf{c}(\mathbf{p})\in\mathbb{R}^{3}\) are predicted as: \[\begin{split}\sigma(\mathbf{p})&=MLP_{\sigma}(\mathbf{f}),\\ \mathbf{c}(\mathbf{p})&=MLP_{\mathbf{c}}(\mathbf{f},\gamma_{3}(\mathbf{d})),\end{split} \tag{7}\] where \(MLP_{\sigma}\) and \(MLP_{\mathbf{c}}\) are NeRF MLPs for density and color predictions, respectively, and \(\mathbf{d}\) is the unit view direction of the ray. **Differentiable Rendering.** Then, for a marched ray \(\mathbf{r}(z)=\mathbf{o}+z\mathbf{d}\), where \(\mathbf{o}\in\mathbb{R}^{3}\) represents the camera center, and \(z\in\mathbb{R}^{1}\) is the depth between a pre-defined bounds \([z_{n},z_{f}]\), its color \(\mathbf{C}(\mathbf{r})\) is calculated via the differentiable volume rendering [27]: \[\mathbf{C}(\mathbf{r})=\int_{z_{n}}^{z_{f}}T(z)\sigma(z)\mathbf{c}(z)dz, \tag{8}\] where \(T(z)=\exp(-\int_{z_{n}}^{z}\sigma(s)ds)\) represents the probability that the ray travels from \(z_{n}\) to \(z\). ### Training & Inference **Training Losses.** We compare the rendered pixel colors with the ground truth ones for supervision. Similar to [39], we employ the MSE loss for pixel-wise and perceptual loss [43] for patch-wise supervision, which is more robust to misalignments. The random patch sampling [39] is employed for supporting perceptual loss training. The overall loss is: \[\mathcal{L}=\mathcal{L}_{MSE}+\lambda\mathcal{L}_{PER}, \tag{9}\] where we set \(\lambda=0.1\) by default. **Inference.** During the inference stage, for each time step, \(N_{v}\) reference views are provided and the rendered target views are compared with the ground truth ones for calculating the metrics. Notably, GP-NeRF [5] has proposed a fast rendering scheme that leverages the coarse geometry prior from the 3D feature volume to filter out useless points. Similarly, our framework also supports such a strategy by simply using the SMPL template as the geometry prior instead (detailed in the appendix). ## 4 Experimental Results ### Experimental Settings **Datasets.** We benchmark on ZJU-MoCap [29] and H36M [15] for verifying the effectiveness of our TransHuman. (i) ZJU-MoCap [29] provides multi-view videos of \(10\) human subjects with \(23\) synchronized cameras, together with the pre-fitted SMPL parameters and human masks. Each video spans between \(1000\) and \(2000\) frames and contains complicated motions like "Taichi" and "Twirl". Following [18, 5], \(10\) subjects are split into \(7\) source subjects (ZJU-7) and \(3\) target subjects (ZJU-3), and each subject is further divided into training and testing parts. We strictly follow the officially released human split from [18] for training and testing. We refer to the appendix for the detailed split information. 
To prove that our method can handle the incomplete painted SMPL well, we additionally report the performance of the one-shot generalization setting, _i.e_., only 1 reference view is provided during inference. (ii) H36M [15] records multi-view videos with 4 cameras and includes multiple subjects with complex motions. We use the preprocessed one by [28] which contains representative subjects S1, S5, S6, S7, S8, S9, S11, and their corresponding SMPL parameters and human masks. We verify the cross-dataset generalization ability with H36M, _i.e_., trained on ZJU-MoCap and then directly evaluated on H36M. The first 3 views are taken as the reference views, and the last one is used as the target view. **Evaluation Metrics.** For novel view synthesis, we report the commonly used Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM) [38], and Learned Perceptual Image Patch Similarity (LPIPS) [43] as the evaluation metrics. For 3D reconstruction, following [18, 5], we only report the qualitative results since ground truth meshes are unavailable. ### Implementation Details In line with [18], we take the ResNet-18 [13] (only the first \(3\) layers are used) as the CNN for extracting the deep features from reference images and set the multi-view number \(N_{v}=3\). The number of clusters (tokens) in human body grouping is set as \(N_{t}=300\), and the light-weight ViT-Tiny [9] is employed as the transformer module. Each query point is assigned with \(N_{k}=7\) DPaRFs. Following [18, 5], we train on ZJU-MoCap with \(512\times 512\) resolutions, and for each ray we sample \(64\) points by default during both the training and inference stages. ### Comparisons with State-of-the-art **Baselines.** Following [18, 5], we compare with both per-subject optimization methods [29, 34, 40, 23] and generalizable methods [31, 42, 26, 18, 5]. For per-subject optimization methods, an individual model is trained on the training part of each subject. 
Notably, previous state \begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline & Dataset & Per-subject & \multicolumn{2}{c|}{Unseen} & \multicolumn{3}{c}{Results} \\ Method & Train & Test & training & Pose & Subject & \(\uparrow\) PSNR & \(\uparrow\) SSIM & \(\downarrow\) LPIPS \\ \hline \hline \multicolumn{10}{c}{_Pose Generalization_} \\ NV [700][23] & ZJU-7 & ZJU-7 & ✓ & ✓ & ✗ & 22.00 & 0.818 & - \\ NT [700][34] & ZJU-7 & ZJU-7 & ✓ & ✓ & ✗ & 22.28 & 0.872 & - \\ NHR [40] & ZJU-7 & ZJU-7 & ✓ & ✓ & ✗ & 22.31 & 0.871 & - \\ NB [40][29] & ZJU-7 & ZJU-7 & ✓ & ✓ & ✗ & 23.79 & 0.887 & - \\ NHP [40][18] & ZJU-7 & ZJU-7 & ✗ & ✓ & ✗ & 24.60 & 0.910 & 0.147 \\ GP-NeRF [40][45] & ZJU-7 & ZJU-7 & ✗ & ✓ & ✗ & 25.05 & 0.909 & 0.159 \\ **Ours** & ZJU-7 & ZJU-7 & ✗ & ✓ & ✗ & **27.25** & **0.936** & **0.087** \\ \hline \hline \multicolumn{10}{c}{_Identity Generalization_} \\ NV [700][23] & ZJU-3 & ZJU-3 & ✓ & ✓ & ✗ & 20.84 & 0.827 & - \\ NT [700][34] & ZJU-3 & ZJU-3 & ✓ & ✓ & ✗ & 21.92 & 0.873 & - \\ NHR [40][40] & ZJU-3 & ZJU-3 & ✓ & ✓ & ✗ & 22.03 & 0.875 & - \\ NB [40][29] & ZJU-3 & ZJU-3 & ✓ & ✓ & ✗ & 22.88 & 0.880 & - \\ PVA [40][31] & ZJU-7 & ZJU-3 & ✗ & ✓ & ✓ & 23.15 & 0.866 & - \\ PixelNeRF [40][42] & ZJU-7 & ZJU-3 & ✗ & ✓ & ✓ & 23.17 & 0.869 & - \\ KeyNeRF [40][26] & ZJU-7 & ZJU-3 & ✗ & ✓ & ✓ & 25.03 & 0.897 & - \\ GP-NeRF [40][5] & ZJU-7 & ZJU-3 & ✗ & ✓ & ✓ & 24.55 & 0.902 & 0.157 \\ NHP [40][18] & ZJU-7 & ZJU-3 & ✗ & ✓ & ✓ & 24.94 & 0.905 & 0.144 \\ **Ours** & ZJU-7 & ZJU-3 & ✗ & ✓ & ✓ & **26.15** & **0.918** & **0.098** \\ \hline \multicolumn{10}{c}{_One-shot Generalization_} \\ NHP [40][18] & ZJU-7 & ZJU-3 & ✗ & ✓ & ✓ & 26.83 & 0.924 & 0.132 \\ **Ours** & ZJU-7 & ZJU-3 & ✗ & ✓ & ✓ & **27.55** & **0.933** & **0.090** \\ \hline \hline \multicolumn{10}{c}{_Cross-dataset Generalization_} \\ NHP [40][18] & ZJU-7 & H36M & ✗ & ✓ & ✓ & 23.20 & 0.877 & 0.182 \\ **Ours** & ZJU-7 & ZJU-3 & ✗ & ✓ & ✓ & **24.11** & **0.891** & **0.142** \\ \hline \hline \multicolumn{10}{c}{_Cross-dataset Generalization_} \\ NHP [40][18] & ZJU-7 & H36M & ✗ & ✓ & ✓ & 18.84 & 0.820 & 0.222 \\ **Ours** & ZJU-7 & H36M & ✗ & ✓ & ✓ & **20.48** & **0.856** & **0.169** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparisons of generalization ability with the state-of-the-art methods.** We achieve a significantly new sate-of-the-art performance compared with both generalizable [31, 42, 5, 18, 26] and per-subject methods [23, 34, 40, 29]. Following [18], the per-subject optimization methods are trained on the training part of each subject since they can not generalize to unseen subjects, which is actually an easier task. “\(\uparrow\)” means using the officially released human split from GP-NeRF [5] and employing the overfitting trick used in GP-NeRF. of-the-art methods for generalizable neural human rendering [18, 5] actually use different human splits in their officially released code and are not in line with the one used in their papers (performance is not reproducible). Hence, for fair comparisons, we **unify them under the released human split of NHP [18]**. Specifically, we report the performance of NHP [18] using the official checkpoint, and re-run the official code of GP-NeRF [5] under the unified human split. Note that, GP-NeRF has employed an overfitting trick which we think is unreasonable, _i.e._, they overfit the test reference views instead of randomly sampling during the training stage. This trick leaks the test information to the training stage, therefore we remove it in our re-running. 
We also provide the comparisons under the released human split of GP-NeRF with the overfitting trick, where our method outperforms it consistently by large margins. **Novel View Synthesis.** We compare the quantitative results with previous state-of-the-art methods in Table 1. Obviously, we outperform them by significant margins under all the settings. Notably, for the identity generalization setting, the per-subject methods are directly trained on the target subjects while our method is only trained on the source subjects, yet we still outperform them by large margins, _i.e._, \(+3.27\) in PSNR. Compared with the recent SPC-based generalizable methods [18, 5], our method also shows healthy margins, _i.e._, \(+2.20\) PSNR and \(-45\%\) LPIPS compared with the second-best under the pose generalization setting. For the more challenging cross-dataset generalization setting, we also outperform the baseline methods remarkably albeit these two datasets [29, 15] have significantly different distributions, which proves the superior generalization ability of our TransHuman. The qualitative comparisons are illustrated in Fig. 4, where our TransHuman gives significantly better details and body geometry. We attribute this to the careful design of our framework, _i.e._, the global human representation brings more complete body geometry, the canonical learning scheme gives better generalization ability, and FDI further includes more fine-grained details like textures and lighting. **3D Reconstruction.** The 3D reconstruction results are illustrated in Fig. 5. Compared with NHP [18] that uses the SPC-based human representation, our method achieves a more complete and fine-grained geometry with details like wrinkles.

Figure 4: **Visualization comparisons with previous state-of-the-art methods on ZJU-MoCap (pose generalization, identity generalization) and H36M (cross-dataset generalization). Our method shows significantly better generalization ability with better body geometry and more accurate details like textures and lighting.**

Figure 5: **3D reconstruction under the identity generalization setting. Our method achieves more complete geometry with details like wrinkles compared with NHP [18] which employs a SPC-based human representation.**

### Ablation Studies Following [18], we perform ablation studies under the identity generalization setting. Due to the limited space, we refer more detailed ablation studies to the appendix. **Ablation of TransHE.** We first study the effectiveness of canonical body grouping and canonical learning scheme in Table 2. When performing the body grouping under the observation space with grid voxelization ("obs. body grouping"), the performance suffers a significant drop from \(26.15\) to \(25.28\) in PSNR. As introduced in SS 3.1, performing grouping under the observation space leads to the semantic ambiguity issue, therefore leading to worse performance. Then, "obs. PE" changes the position embedding of input tokens from the canonical positions \(\hat{V}^{c}\) to observation positions \(\hat{V}^{o}\), and also observes a significant decrease, _e.g_., \(-0.35\) in PSNR. The canonical learning scheme eases the optimization and removes the pose misalignment between training and inference stages, therefore leading to better performance. **Ablation of DPaRF.** We verify the effectiveness of DPaRF in Table 3. "w/o coordinate" represents removing the coordinate part from the human representation. As expected, the performance drops by significant margins (\(-0.35\) in PSNR). 
The coordinate carries the precise position of the query point in each DPaRF and is therefore important. "absolute coordinate" indicates using the absolute coordinate of the query point, _i.e._, \(\mathbf{p}\) instead of \(\overline{\mathbf{p}}\) in Eq. 5, and the performance does not improve significantly over "w/o coordinate". This further proves the importance of expressing the coordinate under the deformed coordinate systems. Finally, "w/o k-nearest fields" shows that the k-nearest fields aggregation can bring improvements on all the metrics. **Ablation of FDI.** We first ablate FDI by individually removing the appearance feature part ("w/o \(\mathbf{a}\)") or the human representation part ("w/o \(\mathbf{h}\)"). As illustrated in Table 4, using either of them alone gives unsatisfactory performance. Then, "w/o RGB" shows that the raw RGB features bring a further measure of improvement. **Comparisons with SPC-based representation.** To further verify the effectiveness of our proposed transformer-based human representation, we directly replace the TransHE and DPaRF modules with SPC and trilinear sampling in our code. We follow [18] to configure the SPC, including the architecture and input resolution. As shown in Table 5, our transformer-based representation outperforms the SPC-based one by significant margins on all metrics under a fair comparison setting. ## 5 Conclusion In this paper, we propose a new framework named TransHuman for the generalizable neural human rendering task. At the core of TransHuman is a canonically optimized human representation in which the global relationships between human parts are captured by transformers, which shows superior generalization ability compared with previous methods. However, there are remaining challenges to be explored, such as the joint optimization of the fitted SMPL and training on unconstrained multi-view capture setups. We hope that our efforts will motivate more researchers in the future.
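As a side note on the evaluation protocol used throughout the tables above, the per-view PSNR and SSIM values could be computed roughly as follows. This is a minimal sketch using scikit-image under our own assumptions about image ranges, not the authors' released evaluation code; LPIPS would additionally require the `lpips` package and torch tensors in \([-1, 1]\).

```python
# Minimal sketch of per-view PSNR/SSIM computation (illustrative; the authors'
# released evaluation code may differ, e.g., in masking or image-range conventions).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_view(pred, gt):
    """pred, gt: HxWx3 float arrays in [0, 1] (rendered view vs. ground truth)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    return psnr, ssim

# Toy example with synthetic images; in practice these are the rendered novel view
# and the captured target image for the same camera.
gt = np.random.rand(64, 64, 3)
pred = np.clip(gt + 0.01 * np.random.randn(64, 64, 3), 0.0, 1.0)
print(evaluate_view(pred, gt))
```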
2303.00362
How to Communicate Robot Motion Intent: A Scoping Review
Robots are becoming increasingly omnipresent in our daily lives, supporting us and carrying out autonomous tasks. In Human-Robot Interaction, human actors benefit from understanding the robot's motion intent to avoid task failures and foster collaboration. Finding effective ways to communicate this intent to users has recently received increased research interest. However, no common language has been established to systematize robot motion intent. This work presents a scoping review aimed at unifying existing knowledge. Based on our analysis, we present an intent communication model that depicts the relationship between robot and human through different intent dimensions (intent type, intent information, intent location). We discuss these different intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model, allowing us to identify key patterns and possible directions for future research.
Max Pascher, Uwe Gruenefeld, Stefan Schneegass, Jens Gerken
2023-03-01T09:43:05Z
http://arxiv.org/abs/2303.00362v2
# How to Communicate Robot Motion Intent: A Scoping Review ###### Abstract Robots are becoming increasingly omnipresent in our daily lives, supporting us and carrying out autonomous tasks. In Human-Robot Interaction, human actors benefit from understanding the robot's motion intent to avoid task failures and foster collaboration. Finding effective ways to communicate this intent to users has recently received increased research interest. However, no common language has been established to systematize _robot motion intent_. This work presents a scoping review aimed at unifying existing knowledge. Based on our analysis, we present an intent communication model that depicts the relationship between robot and human through different intent dimensions (_intent type, intent information, intent location_). We discuss these different intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model, allowing us to identify key patterns and possible directions for future research. intent, motion, robot, cobot, drone, survey ## 1. Introduction The field of Human-Computer Interaction (HCI) has moved beyond traditional user interfaces and interaction technologies. The omnipresence of Artificial Intelligence (AI) research and development requires our field to scrutinize the applicability of established design practices (Human, 2017; Human, 2017). Human interaction with AI is evolving away from being like operating a tool to being more like interacting with a partner, which is particularly interesting concerning Human-Robot Interaction (HRI) (Human, 2017). The area of HRI has been studied for a long time in HCI and, in particular, the CHI community (Human, 2017; Human, 2017; Human, 2017; Human, 2017). For example, Arevalo Arboleda et al. (Arevalo, 2018) and Villanueva et al. (Villanueva et al., 2018) investigated combining robots and Augmented Reality (AR) technology to enable intuitive teleoperation, while others have explored on-site control of robot swarms (Human, 2017) and home robots (Human, 2017) as well as communication of emotions and intentions to the human (Villanueva et al., 2018). Robots are versatile: they can assist us in our workplaces, support us at home, and accompany us in public spaces (Human, 2017; Human, 2017; Human, 2017). The applications of robots are manifold, significantly increasing human capabilities and efficiency (Human, 2017). While robots come in many forms, robotic arms in particular have been shown to be suitable for and adaptable to different use cases, such as production lines (Human, 2017) and domestic care (Human, 2017). Here, they are known as cobots that support their users in Activities of Daily Living (ADLs), such as eating and drinking, grooming, or activities associated with leisure time. As robots have a physical form, they tend to move and operate in the same space as humans. With advances in the degree of autonomy allowing for effective close-contact interaction, there is a need for a shared understanding between humans and robots. While robotic research tackles this from a sensory and path planning perspective (e.g., human-aware navigation (Human, 2017)), the field of HCI (and HRI in particular) has been concerned with how humans may better understand robot behavior (Human, 2017; Human, 2017; Human, 2017).
The subtleties of human communication are usually lost in this context, and robotic behavior needs to be understood from its own frame of reference. Robots are not a monolithic entity; with the many different types come just as many unique ways of conveying information, which could lead to erroneous interpretations by their human counterpart. An added complication is the increasing number of close-contact situations that allow little time to recognize and correct errors. This has led to numerous research efforts in recent years to find ways for robots to effectively communicate their intentions to their users (Human, 2017). This includes the direct communication of planned movements in space (Human, 2017), but also less obvious means, such as drawing a user's attention to the robot (Human, 2017), communicating the robot's movement activity state (e.g., active or inactive due to failure) (Human, 2017), and facilitating human oversight by communicating their external perception of the world (Human, 2017). While all of these examples are concerned with communicating _robot motion intent_, they differ tremendously in their methods and goals. Other researchers, such as Suzuki et al., have subsequently identified _robot motion intent_ as an essential research area (Suzuki et al., 2019). But beyond further solution approaches, the field needs a common understanding of the concept of _robot motion intent_ (i.e., what do we actually mean by intent, what are relevant intent dimensions, and how does the communication of _robot motion intent_ influence the relationship between robot and human). To this end, we conducted a scoping review of current approaches to communicate _robot motion intent_ in the literature. Based on our findings, we introduce an intent communication model of _motion intent_, which depicts the relationship between robot and human through the means of different intent dimensions (_intent type_, _intent information_, and _intent location_; see Figure 1). We further discuss these different intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model to form a design space for communicating _robot motion intent_. Practitioners and researchers alike may further benefit from this work for the design and selection of specific mechanisms to communicate _motion intent_. We identify future research directions and current gaps, which are further highlighted in an interactive website that lists the papers and allows comparisons based on user-selected categories.1 Footnote 1: Interactive Data Visualization of the Paper Corpus. [https://rmi.robot-research.de](https://rmi.robot-research.de), last retrieved March 2, 2023. **Our contribution** is two-fold: 1) a survey contribution that includes our analysis and classification of previous literature as well as future research (cf. contribution from Wobbrock and Kientz (2019)), and 2) a theoretical contribution that introduces an intent communication model and describes the relationship of its entities. ## 2. Background In this section, we will illustrate the need for communicating _robot motion intent_ and discuss the current understanding of the term, which provides the foundation for our scoping review. _Robot_ is an umbrella term that describes a miscellaneous collection of (semi-)automated devices with various capabilities, technologies, and appearances(Wobbrock and Kientz, 2019). 
These cyber-physical systems are often differentiated by their Degrees-of-Freedom (DoF) or ability to move and manipulate their environment. In industrial assembly lines, robotic arms manipulate and weld heavy parts (Kentz, 2019), often in restricted areas (Kentz, 2019). Enabled by lightweight materials and safety sensors, robots have started to adapt to their users - today, they shut down when humans get too close or when resistance to the robot's movement is detected. This has led to the development of _cell-less_ HRI (Han et al., 2019), which has also paved the way for further scenarios, such as supporting people with disabilities in their daily lives (Kentz, 2019). Ajoudani et al. trace in their review paper several approaches of HRI, how it evolved, and how it increased over the last two decades (Ajoudani et al., 2019). They conclude that the success of HRI comes from combining human cognitive skills (i.e., intelligence, flexibility, and ability to act in case of unexpected events) with the robot's high precision and ability to perform repetitive tasks. Matheson et al. proposed different types of such _cell-less_ HRI, defined by their closeness of interaction (Matheson et al., 2019). They include _coexistence_ (separation in space but not in time), _synchronized_ (no separation in space but in time), _cooperation_ (no separation in space or in time, but still not working on the same task), and _collaboration_ (human and robot work on a task together, where the action of one has immediate consequences for the other). These works indicate that communication and interaction between robots and humans are critical to successful HRI. While research in human-aware navigation aims to make the robot smart enough to understand human behavior and react to it (Kentz, 2019), supporting humans in understanding robot behavior is equally important (Matheson et al., 2019). As the work by Matheson et al. highlights, humans and robots increasingly share the same physical space in HRI, which makes communicating _robot motion intent_ a particularly relevant aspect for safe and effective collaboration and a prerequisite for _explainable robotics_(Matheson et al., 2019). However, _robot motion intent_ is a rather vague term and lacks a clear definition. Further, it is not consistently used by researchers in the field. Instead, similar underlying concepts have been investigated under terms such as situational awareness (Suzuki et al., 2019), forthcoming operation (Kentz, 2019), or robot signaling system (Kentz, 2019). Suzuki et al., as part of their extensive literature review covering the relationship between AR and robotics, emphasize the potential of AR-based visualizations for communicating movement trajectories or the internal state of the robot (Kentz, 2019). However, as their literature review extends beyond intent communication, they do not further discuss or define different types of intent, nor do they provide a deeper understanding of intent properties. **Our work** presents a systematic overview of the field and addresses the current issues by conducting a scoping review. Such a review or survey contribution helps to organize the published research of the field and enables reflection on previous findings after the field has reached a level of maturity (Koszcz ### Initial Query We explored a variety of query terms and their combinations because, as discussed, the field currently lacks a coherent and established terminology. 
In addition, we found several terms to be used in ambiguous ways, in particular terms such as _communication_ and _motion_. Therefore, we decided on a broad search in this first step to increase recall and reduce the risk of overlooking relevant literature. We aimed to encompass a variety of different robot technologies while still focusing on the concept of intent, even though the word may be used in a variety of circumstances. We searched the titles, abstracts, and keywords of the databases' full-text collections with the following combined terms2: Footnote 2: ScienceDirect does not support the wildcard "*" but uses stemming and lemmatization techniques. To achieve search results equivalent to those based on the wildcard "*", we modified the combined term to: _(robot_ **OR** _cobot_ **OR** _drone_) **AND** (_intent_ **OR** _intend_ **OR** _intention_). \[(\text{robot}^{*}\ \textbf{OR}\ \text{cobot}^{*}\ \textbf{OR}\ \text{drone}^{*})\ \textbf{AND}\ (\text{intent}^{*}\ \textbf{OR}\ \text{intend}^{*}) \tag{1}\] ### Algorithmic Filtering Due to our initial search being quite broad, further filtering was required to identify relevant papers. The initial set allowed us to apply an algorithmic approach similar to that of previous research done by O'Mara-Eves et al. (2019). Specifically, we applied the Term Frequency-Inverse Document Frequency (TF-IDF) (Kumar et al., 2019) method to identify frequently used terminology within our corpus. TF-IDF has been shown to be suitable for information retrieval in literature reviews (Sutskever et al., 2019; Kumar et al., 2019). First, we preprocessed the entries by a) combining each paper's title, keywords, and abstract into one field, b) fixing encoding issues such as & (and), ~ (degree), and -- (em dash), and c) converting the strings to lowercase as well as removing punctuation, numbers, symbols, and standard English stop-words from the corpus and replacing tokens with their lemmatizations (Sutskever et al., 2019). For the creation of the TF-IDF-weighted document-term matrix, we calculated the Term Frequency (TF) for each term of our corpus, took the static Inverse Document Frequency (IDF) into account, and computed the TF-IDF for each term over all documents. The resulting TF-IDF-weighted document-term matrix is shown in Table 1. From the first 150 entries of the TF-IDF-sorted list of tokens, three researchers independently identified terms related to _communication_ and _motion_, two terms we had decided to leave out of the initial broad query due to word ambiguity. During the following consensus process, we excluded related terms that were too general and ambiguous (e.g., "show" is frequently used in "Our results show[...]," "present" in "In this work we present[...]," "demonstrate" in "We demonstrate in our results[...]," or "perform" in "We performed a study[...]").
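To make this step more concrete, the following is a minimal sketch of the TF-IDF term ranking described above, written with scikit-learn. It is an illustration under our own assumptions (the dictionary field names, the library choice, and the summation of TF-IDF weights over all documents as the ranking criterion), not the authors' original implementation, and it omits the lemmatization step for brevity.

```python
# Minimal sketch of the TF-IDF term ranking described above (illustrative, not the
# authors' code). Each paper is assumed to be a dict with "title", "keywords",
# and "abstract" fields; lemmatization (e.g., via NLTK or spaCy) is omitted here.
import re
import string
from sklearn.feature_extraction.text import TfidfVectorizer

def preprocess(paper):
    """Combine title, keywords, and abstract into one lowercase string and strip
    punctuation, numbers, and symbols."""
    text = " ".join([paper["title"], paper["keywords"], paper["abstract"]]).lower()
    return re.sub(rf"[{re.escape(string.punctuation)}]|\d+", " ", text)

def rank_terms(papers, top_n=150):
    """Build the TF-IDF-weighted document-term matrix and return the top_n terms,
    ranked by their summed TF-IDF weight over all documents (cf. Table 1)."""
    docs = [preprocess(p) for p in papers]
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(docs)        # shape: (documents, terms)
    weights = matrix.sum(axis=0).A1                # summed TF-IDF per term
    terms = vectorizer.get_feature_names_out()
    return sorted(zip(terms, weights), key=lambda tw: tw[1], reverse=True)[:top_n]
```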
All identified terms were then used in the filtering step by applying the following logic to the title, keywords, or abstract of each paper in our corpus: \[(\text{communicat}^{*}\ \textbf{OR}\ \text{feedback}^{*}\ \textbf{OR}\ \text{visual}^{*})\ \textbf{AND}\ (\text{motion}^{*}\ \textbf{OR}\ \text{movement}^{*}\ \textbf{OR}\ \text{interact}^{*})\] \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline **Rank** & **Term** & **TF** & **IDF** & **TF-IDF** & **Rank** & **Term** & **TF** & **IDF** & **TF-IDF** \\ \hline 1 & human & 6,547 & 0.92 & 6,052.89 & 7 & **interaction** & 3,383 & 1.33 & 4,515.61 \\ 2 & control & 6,769 & 0.87 & 5,902.24 & 15 & **movement** & 1,920 & 1.88 & 3,606.34 \\ 3 & system & 7,612 & 0.69 & 5,218.61 & 61 & **communicat** & 1,059 & 2.32 & 2,455.03 \\ 4 & **motion** & 3,640 & 1.42 & 5,154.59 & 140 & **feedback** & 665 & 2.74 & 1,820.08 \\ 5 & model & 3,978 & 1.24 & 4,938.74 & 143 & **visual** & 674 & 2.67 & 1,802.90 \\ \hline \hline \end{tabular} \end{table} Table 1. Sorted list of terms from the TF-IDF-weighted document-term matrix. The selected terms are highlighted in bold. For a paper to be accepted, a term from the "communication" cluster and another from the "motion" cluster (OR operation within each cluster) had to appear in the title, keywords, or abstract (AND operation between the clusters). As a result, 822 papers remained in our corpus. ### Manual Screening The final phase of our paper selection process required manual screening, following an approach similar to that of Doherty and Doherty (Doherty and Doherty, 2013). The process involved _abstract screening, full-text screening_, and _reference screening_.
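Returning to the algorithmic filter above, a minimal sketch of the cluster-based acceptance test might look as follows. The term clusters are illustrative and based only on the terms highlighted in Table 1 (their exact assignment to the two clusters is our assumption), and the substring test approximates the wildcard matching of the database search interfaces.

```python
# Minimal sketch of the cluster-based filter described above (illustrative, not the
# authors' code). The cluster assignment of the highlighted Table 1 terms is assumed.
COMMUNICATION_TERMS = ("communicat", "feedback", "visual")
MOTION_TERMS = ("motion", "movement", "interact")

def accept(paper):
    """Keep a paper only if its title, keywords, or abstract contains at least one
    term from the 'communication' cluster AND one from the 'motion' cluster."""
    text = " ".join([paper["title"], paper["keywords"], paper["abstract"]]).lower()
    has_communication = any(term in text for term in COMMUNICATION_TERMS)
    has_motion = any(term in text for term in MOTION_TERMS)
    return has_communication and has_motion

example = {
    "title": "Projecting arrows to communicate robot motion intent",
    "keywords": "robot; intent; projection",
    "abstract": "We visualize the planned movement of a mobile robot ...",
}
print(accept(example))  # True: matches "communicat"/"visual" and "motion"/"movement"
```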
During the screening of all abstracts, we identified 706 out of 822 papers as not fitting into the scope of this review. The full-text analysis of the remaining 116 papers reduced the set to 48 papers. In addition, we screened the references cited by the set of 116 papers that were assessed for full-text screening. We identified 29 further relevant references, which we then included. This led to a final set of 77 papers, which we examine in the following. During the abstract and full-text screening, we **pre-excluded** 36 papers with unfitting paper formats that were still in the corpus, such as proceedings front matter, workshop calls, survey papers, or semi-duplicates, i.e., two papers that essentially presented the same contribution because one was a work in progress and the other a full paper. We also excluded 305 papers that aimed to convey the **human's intent** (to the robot) but not the robot's intent (e.g., Kurylo and Wilson (Kurylo and Wilson, 2017)). Similarly, we removed another 210 papers where the research did not focus on the intention of robot motion (**no robot intent**); examples include 1:1 teleoperated devices (e.g., van Waveren et al. (van Waveren et al., 2018)) and work focusing on AVs and eHMIs. We excluded another 220 **system design** papers that focused on aspects such as aesthetics, mathematical models of motion planning, or definitions (e.g., Girard et al. (Girard et al., 2018)). Eventually, we removed four papers where no approach or prototype was developed and reported (e.g., Thellman and Ziemke (Thellman and Ziemke, 2018)). ## 4. Intent Communication Model Through our literature review, we aim to improve understanding of the communication of _robot motion intent_ by analyzing previous research. To that end, each author analyzed our literature corpus (n=77) in a multi-step process. We found that several papers presented, combined, or empirically compared multiple intents (on average, more than two per paper). Therefore, we first systematically extracted all individual intents, resulting in a total of 172 intents. By screening these intents, we identified the primary entities (_robot, intent_, and _human_) as well as a communication flow between these entities that parallels that of the HCI model from Schomaker (Schomaker, 1999). However, in contrast to the HCI model, we focus solely on the communication of _intent_ from _robot_ to _human_, as previous research has already covered the inverse (Zhou et al., 2018). Furthermore, we identified a top-level entity, _goal_, which describes the motivation to communicate intent, as well as a low-level entity, _context_, which describes the situation in which the intent is communicated. Figure 3. Overview of the intent communication model from robot to human. The three entities (i.e., robot, intent, human) and their dimensions are derived from our literature corpus. The flow of communication parallels the human-computer interaction model from Schomaker (Schomaker, 1999). The main dimensions (i.e., kind, type, role) are discussed in Section 4, while a focused analysis of intent information and location is presented in Section 5. Reflecting on all entities, we analyzed the intents by asking 1) _why_ they were communicated (_goal_), 2) _who_ communicated them (_robot_), 3) _what_ they communicated (_intent_), 4) _to whom_ they were communicated (_human_), and 5) _under which circumstances_ they were communicated (_context_).
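To make this coding concrete, the following is a minimal sketch of how a single coded intent could be stored as a record along these five questions and the dimensions derived from them (detailed in the remainder of this section). All field names and example values are illustrative and not taken from the authors' coding sheet.

```python
# Minimal sketch of a record for one coded intent, following the five questions above
# (why / who / what / to whom / in which circumstances). Field names and example
# values are illustrative.
from dataclasses import dataclass

@dataclass
class CodedIntent:
    goal: str          # why it is communicated, e.g. "supporting coexistence"
    robot_kind: str    # who communicates it, e.g. "mobile robot"
    intent_type: str   # what is communicated, e.g. "motion"
    information: str   # how it is encoded, e.g. "registered in space, directional"
    location: str      # where it is presented, e.g. "on-robot (robot-attached)"
    human_role: str    # to whom it is communicated, e.g. "bystander"
    context: str       # in which circumstances, e.g. "workplace"

example = CodedIntent(
    goal="supporting coexistence",
    robot_kind="mobile robot",
    intent_type="motion",
    information="registered in space, directional",
    location="on-robot (robot-attached projector)",
    human_role="bystander",
    context="workplace",
)
```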
Dimensions, categories, and properties emerged from the data through an open coding process of the extracted answers; specifically, we identified _kind of robot_, _location, type of intent_, _information_ of _intent_, and _role of human_ as our dimensions. The resulting _intent communication model_ is shown in Figure 3. In the following, we present our findings for the three primary entities (_robot_, _intent_, and _human_), which we define and support by giving examples. We also discuss the _context_ of communicating _robot motion intent_. ### Human In HRI, we can distinguish between different scenarios based on how involved a human is in the task performed by the robot. For the entity _human_, we utilize these levels of closeness between robot and human to define the different _roles of human_. Moreover, all four _roles of human_ are illustrated in Figure 4. #### 4.1.1. Definition The human has a crucial role during HRI, which strongly impacts which intents need to be communicated. From the analyzed intents of our corpus, we derived four different _roles of human_ (_collaborator_, _observer_, _coworker_, and _bystander_). The roles are ordered by the degree of human collaboration and involvement with the robot, starting with the most involved (see Figure 4). These roles are also closely connected to the overarching goal of the HRI. Here, we found _supporting collaboration_, _oversight_, and _coexistence_ to be of primary importance. In the following, we define the different roles, discuss their relationships to overarching goals, and support them with examples. _Collaborator._ When in the role of a _collaborator_, a human works with a robot on a shared task in the same space and at the same time (S _Observer._ A human functions as an _observer_ when their main job is to supervise the task that is being carried out by the robot. Although they mostly just watch, an _observer_ must be ready to intervene and take control of the robot. In this context, communication of _robot motion intent_ is for the goal of _supporting oversight._ Here, the robot has to provide information to the human to allow effective intervention when needed. Fundamentally, supporting oversight refers to the ability of a human to judge and evaluate if a robot is operating within its intended parameters. For example, in work by Hetherington et al., the robot communicates its movement paths to an _observer_, which enables the _observer_ to foresee and prevent potential collisions of the robot with obstacles (Hetherington et al., 2019). Others communicate the inner state of the robot, allowing an _observer_ to anticipate potential task failures that may occur due to problems with the robot itself, e.g., faulty sensor information (Hetherington et al., 2019; D'Amico et al., 2020). An _observer_ is described in 47 papers and is the recipient of 94 intents. _Coworker._ In the _coworker_ role, the human works next to the robot but handles their own task. While these tasks may be part of a shared overarching effort or entirely disconnected, they take place in the same shared workspace (e.g., a robotic arm that picks up one out of two objects and leaves the other one for the human (D'Amico et al., 2020)). In the _coworker_ context, communication of _robot motion intent_ is for the purpose of _supporting coexistence_. Here, the human needs to understand the robot's motion to avoid safety-critical situations (e.g., colliding with the robot). 
In Aubert et al., a robot and human pick up objects from a shared bin for their individual tasks (Hetherington et al., 2019). Here, communication of _robot motion intent_ can help the human to coordinate their actions and avoid collisions with the robot. Chadalavada et al. showed that communication of _motion intent_ through Spatial Augmented Reality (SAR) can improve perceived safety with mobile robots (Chadalavada et al., 2019). In their study, it meant that participants could choose safer walking paths and get closer to the robot without subsequent safety shutdowns. In our literature corpus, a _coworker_ is described in six papers and is the recipient of 18 intents. _Bystander._ The human is a _bystander_ when they do not share the same task or the same task goal with the robot but still occupy an area overlapping the robot's physical workspace. Like the _coworker_ role, the _bystander_ role involves communication of _robot motion intent_ to support the goal of _supporting coexistence_. A _bystander_ needs motion information to avoid collision and feel safe. For example, imagine a human and a robot encountering each other in a corridor. To allow the human to choose a walking path that avoids collision, the robot can move to one side and communicate its intended movement path in advance (Hetherington et al., 2019; D'Amico et al., 2020). A _bystander_ is described in 17 papers and is the recipient of 23 intents. ### Intent We identified four different types of _intent_ that the _robot_ can communicate to the _human_ to express its intentions, contributing to increased transparency. We consider these types to be the main dimension for classifying _intent_ in the following text. In addition, we identified the dimensions _location_ and _information_, as shown in Figure 3, which help to further classify and describe _intent_. Given their great importance, they are discussed separately in Section 5. #### 4.2.1. Definition. As our literature review focused on communicating _robot motion intent_, a majority of the corpus (69% of all papers; 54% of all unique intents) deals with _motion intent_. Nevertheless, we identified additional intent types that are related to _motion intent_ and of equal importance (i.e., _attention_, _state_, and _instruction_). All _types of intent_ are described below and the relationship of each to motion is explained. Furthermore, we found that for each _type of intent_, we can further distinguish between an _intent_ that is _related to the robot_ and one that is _related to the world_ (more details can be found in the individual paragraphs below). An overview of all _types of intent_ and associated papers can be found in Table 2. _Motion._ These intents are the main _type of intent. Motion intent_ is concerned with explicitly communicating future motions (i.e., actions that the robot will perform). As our survey is focused on _robot motion intent_, it encompasses more than 50% of the identified unique intents in our corpus. Most of the described intents deal with _robot self-actions_, aiming to indicate future robot movement. Thereby, users may be able to improve the coordination of their actions in concert with the robot's behavior to avoid collisions and improve safety. For example, Chadalavada et al. employed SAR to communicate future movement direction as well as the specific path the robot will take, which helped _bystanders_ feel safe around a robotic forklift [20]. _World actions_ are activities that manipulate the world around the robot. 
Again, this may help the _bystander_ to coordinate their activities, but it also helps the _observer_ to understand when to take over control from the robot. Psarakis et al. applied this concept of _world actions_ in a VR simulation to visually augment the nearby objects that the robot planned to grasp [98]. _Attention._ Intents that communicate the need for attention are a supportive element. They precede a _motion intent_ to shift human attention toward the robot or process, especially when the humans' attention is not guaranteed (e.g., because they focus on their own tasks). For example, Bolano et al. used acoustic feedback to alert the human and shift their attention toward the robot whenever it detected a possible collision [14]. An example of _robot-focused attention_ was presented by Furuhashi et al., who designed an assistive robot based on the commercial Roomba device as a hearing dog that can notify deaf users of important events [45]. Here, the system uses physical touch to gain the human's attention by gently bumping into their body. As an example of _world-focused attention_, Mutlu et al. had a humanoid robot quickly look at an object of interest. They studied whether collaborators were able to understand the robot's gaze cues and correctly identify the object (among several others) that the robot had chosen as its object of interest [88]). _State._ A robot communicating its state allows a _human_ to deduce potential future motions and identify conflicts before they occur. For example, a _robot_ could collide with nearby objects due to errors in its sensor system. However, robot communication of the detected objects enables a _human_ to take over control and mitigate the issue. For _state \begin{table} \begin{tabular}{l l l l} \hline \hline **Category** & **Subcategory** & \multicolumn{2}{c}{**Number of**} & **References** \\ & & Papers (\%) & Intents (\%) & \\ \hline \multirow{4}{*}{Motion} & Robot Self-Actions & 38 (49.35\%) & 75 (43.60\%) & [3, 12, 13, 14, 16, 17, 20, 21, 23, 27, 30, 31, 35, 37, 42, 44, 49, 54, 55, 58, 60, 63, 72, 79, 83, 99, 101, 115, 120, 124, 127, 128, 130, 132] & \\ & World Actions & 15 (19.48\%) & 18 (10.47\%) & [3, 6, 21, 25, 40, 41, 57, 61, 64, 66, 71, 84, 89, 95, 98] & \\ \hline \multirow{4}{*}{Attention} & Robot-Focused Attention & 6 (7.79\%) & 8 (4.65\%) & [6, 14, 19, 24, 45, 67] & \\ & World-Focused Attention & 4 (5.19\%) & 5 (2.91\%) & [74, 88, 109, 111] & \\ \hline \multirow{4}{*}{State} & Robot Self-Perception & 23 (29.87\%) & 27 (15.70\%) & [3, 7, 8, 18, 29, 31, 38, 43, 55, 63, 74, 79, 80, 91, 105, 110, 114, 116, 117, 124, 128, 131, 132] & \\ & Robot World Perception & 8 (10.39\%) & 12 (6.98\%) & [3, 21, 30, 31, 57, 101, 128, 132] & \\ \hline \multirow{2}{*}{Instruction} & Robot-Centered Instructions & 10 (12.99\%) & 16 (9.30\%) & [8, 19, 39, 45, 51, 67, 74, 86, 108, 117] & \\ & World-Centered Instructions & 9 (11.69\%) & 11 (6.40\%) & [3, 8, 13, 16, 21, 22, 84, 98, 128] & \\ \hline \hline \end{tabular} \end{table} Table 2. Overview of different intent types, sorted by their categories and subcategories, with their counts (and percentages) of identified relevant papers (max. 77) and unique intents (max. 172). Note: Papers may include multiple unique intents and can therefore appear in multiple categories and subcategories. 
intents, we distinguish between _robot self-perception_, meaning the state the _robot_ communicates about itself (e.g., simple text feedback presented on a display that indicates states such as "stop" or "moving" [(80)]), and _robot world perception_, meaning the communication of the perceived state of the world (e.g., visually highlighting objects in the environment that the sensor system has successfully detected, allowing the user to predict and understand subsequent robot movement [(57)]). _Instruction._ In several papers, we identified _instruction_ intents that accompany robot motion. For example, if a _robot_ is blocked by an obstacle, it can instruct a _human_ to remove the obstacle so it can continue its motion. _Instructions_ can be _robot-centered instructions_ when they stand in relation to the robot itself (e.g., Moon et al. applied head gaze cues to communicate instructions to the user to complete the handover of an object from the robot's gripper [(84)]). Or, in contrast, _instructions_ can be _world-centered instructions_ when they stand in relation to the world (e.g., a robot instructing a human to push a button on a wall to open an elevator so that it can continue its movement [(128)]). #### 4.2.2. Relationship to Human. Communicating a robot's intended motion to the human helps to improve the perception and understanding of the robot's behavior. However, humans that are, for example, not involved in the robots' task - perhaps because they are focusing on their own tasks (_coworker_) or are just uninvolved in general (_bystander_) - often need an additional cue to be able to read _robot motion intent_, which makes the intent type _attention_ necessary (e.g., by an acoustic prompt [(6)]). _State_ intents enable a human to see not only the next _motion_ but also the internal state and planning, enabling them to understand actions ahead of time. Such intents also support _observers_ in their task of supervising the robot. Finally, collaboration means a constant shifting of who is in charge when humans and robots work together on a shared task. Therefore, _motion, state, attention_, and _instructions_ are all necessary intents for providing a baseline for collaboration (_collaborator_). ### Robot In our corpus, we identified three different _kinds of robot_, which together form the _robot_ entity. #### 4.3.1. Definition. We identified three main _kinds of robots: robotic arm, humanoid_, and _mobile robot_. These, in order, represent a spectrum of increasing mobility and flexibility based on the area of deployment, starting with stationary robots (still with many DoF) and ending with robots that are inherently mobile (which also includes mobile arms with many DoF on a platform). Based on different robots, researchers have investigated different intents with varying frequencies. In the following, we illustrate each _kind of robot_ with examples from our literature corpus. _Robotic Arm. Robotic arms_ can be described as a chain of axis links. They are typically fixed to one place and can have a manipulator [(47)]. Nowadays, they are the industry standard in production lines of factories [(15)] and work alongside humans in HRI environments [(35)]. _Robotic arms_ are described in 13 papers and send 22 intents. _Humanoid. Humanoids_ have two robotic arms with manipulators, a torso, a head, eyes, and, often, basic facial expressions. Due to the two robotic arms, _humanoids_ have more DoF than single robotic arms. Still, _humanoids_ are often fixed to one place and lack mobility. 
Nonetheless, they are an important part of HRI when working with humans in a shared workspace [(72; 99)]. In rare cases, they can move in space, imitating human movement. Here, anthropomorphic features of the robots - such as gaze or certain gestures - can decrease the time required to predict the robot's intent [(49)]. _Humanoids_ are described in 11 papers and send 21 intents. _Mobile Robot._ With the addition of mobility comes increased flexibility. _Mobile robots_ can be deployed in the air, on the ground, or in water. For this kind of robot, we have deliberately chosen a broader definition in order to also include robots that appear only once in the corpus. For _mobile robots_ (also referred to as drones), we distinguish between _ground drones without a manipulator_ that move between locations, _ground drones with a manipulator_ that can also manipulate the world, _flying drones_ that maneuver through the air, and _water drones_ that operate on water or underwater. Communicating _motion intent_ helps _ground drones without a manipulator_ to, for example, lead or follow a human to a specific place (Deng et al., 2018). It can help _ground drones with a manipulator_ to, for example, communicate which object they intend to pick up (Deng et al., 2018). _Flying drones_ or _water drones_, on the other hand, can communicate their _motion intent_ by flying or driving in a pattern (Deng et al., 2018; Deng et al., 2018). All kinds of drones can appear alone (Deng et al., 2018) or as a swarm of drones (Deng et al., 2018). _Mobile robots_ are described in 53 papers and send 129 intents. #### 4.3.2. Relationship to Intent As _mobile robots_ move around more freely, they frequently encounter human _bystanders_ who cross their paths. Consequently, _mobile robots_ often have to first shift the _human's_ attention toward the robot's display, preparing them for the communication of the robot's intended _motion_. For example, a projection in front of the robot can catch the attention of a bystander while simultaneously informing them about the driving direction (Deng et al., 2018). At the same time, _mobile robots_ need to communicate their _state_ and planning of actions ahead of time, either the inner state (e.g., what is the current mission status (Deng et al., 2018)) or the perceived world state (e.g., which objects are detected (Deng et al., 2018)). _Humanoids_ and _robotic arms_, on the other hand, are often deployed in collaborative scenarios, teaming up with humans. Here, robots need to communicate their intended _motion_ to coordinate their actions with a human collaborator (e.g., which items the robot intends to pick next from a shared bin (Deng et al., 2018) or when objects are to be handed over to the collaborator (Deng et al., 2018)). ### Context The _context_ describes the setting of the HRI scenario. While the location is an essential part of the context, it encompasses more, for example, the social environment (Deng et al., 2018). Nonetheless, we consider the location helpful to define HRI scenarios. In our analysis, we found various types of locations, including _workplace_, _domestic_, and _outdoor_. In _workplace_ settings, the robot is frequently part of an assembly line or, more generically, a manufacturing process (e.g., collaborating with a human worker (Deng et al., 2018)). However, _workplace_ locations also include industrial settings, offices, or generic work rooms. In total, 42 papers took place at a _workplace_ location.
In _domestic_ environments, robots support a task at home (e.g., by picking cups up off a kitchen table (Deng et al., 2018)). Here, we found five relevant papers. Finally, in two papers the robot could move freely outside (e.g., fulfilling a mission and communicating its status (Deng et al., 2018)). Apart from these, 28 papers had no particular location specified. Instead, the authors of these papers investigate more generic scenarios of _robot motion intent_ (e.g., by stating that a robot moves between two locations but without fine details of these locations (Deng et al., 2018)). For these scenarios, it is unclear which locations are most relevant. ## 5. Analysis of Intent Information and Location In addition to the different _types of intent_ discussed in the previous section, two other dimensions of intent emerged from the data: _Intent information_ (which refers to the data communicated by the _robot_) and _intent location_ (which describes from where the intent is communicated to the _human_). In this section, we define these dimensions, illustrate their application with examples, and present a summary of empirical findings concerning their usage. ### Intent Information Based on our analysis of _how_ the intent is communicated as well as _what_ is communicated, we derived two main properties for categorizing _intent information_: _spatial_ and _temporal_. #### 5.1.1. Spatial Property The primary approach to convey spatial information is to embed it directly into the environment, i.e., have it **registered in space**. We identified 105 matching intents. We can further classify such intents as conveying _local_ information (74 intents) or _directional_ information (31 intents). _Local_ information aims to precisely relate the information to the surrounding space by showing an exact position that naturally may contain orientation information as well. Han et al., as an example, convey _local_ information by using SAR polygon visualizations to frame and highlight detected objects on a table, allowing a human observer to supervise the robot's intended movement and manipulation actions (Wang et al., 2017). In contrast, _directional_ information aims to communicate the explicit direction of movement (e.g., an arrow pointing in the direction of movement (Han et al., 2017) or toward an object or person of interest (Wang et al., 2017)). Information that is **unregistered in space**, however, employs an abstract encoding of the spatial property. In total, we identified 67 matching intents. This category includes the following types: _description_, _symbol_, and _signal_. _Description_ (11 intents) applies to scenarios in which textual or verbal information is used (e.g., the robot informs the human verbally before initiating a movement to perform a touch (Wang et al., 2017)). _Symbol_ (25 intents) applies to cases in which a symbolic representation is used to form the intent communication (e.g., a mobile robot that nods its head to request that a human follow it before moving toward its destination (Steiner et al., 2017)). _Signal_ (31 intents) applies when components are turned on/off to indicate a change (e.g., an acoustic prompt is turned on to gain attention for the upcoming communication of _motion intent_ (Borda et al., 2017)). Mini maps provide an abstract but geographical encoding that includes the relationships among different objects in the environment (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017).
**Empirical Implications.** While information _registered in space_ provides a direct link between real-world objects and the displayed information, information _unregistered in space_ lacks this connection and requires an additional mental step to create this link. Consequently, information _unregistered in space_ may be less intuitive, and thus researchers have explored different combinations of information to mitigate that. Andersen et al. as well as Wengefeld et al. showed that combining multiple types of intent information that are _unregistered in space_ (e.g., text _description_ and _symbol_ icons) helps to effectively communicate _motion intent_ to the user (Borda et al., 2017; Wang et al., 2017). On the other hand, Staudte and Crocker found that combining both categories (_registered & unregistered_), which in their case involved a robot gazing at a specific object while a verbal description of the object played, leads to successful perception and understanding by the user (Steiner et al., 2017). Similarly, Bolano et al. later showed that a verbal description of the target can be combined with visual feedback of the motion endpoint to achieve the same improvement (Borda et al., 2017). #### 5.1.2. Temporal Property The temporal property of _intent information_ is about the distinction between having a _discrete_ or _continuous_ information flow. **Discrete** information has a fixed, distinct appearance in time and is beneficial for communicating _robot motion intent_ because it enables the human to detect a change (i.e., the information appears) and it signals at which point the information loses its relevance (i.e., it disappears). For example, Aubert et al. equip their humanoid robot with a display that shows the number of the next bin it will approach, thereby allowing a human to avoid conflict with the robot (Borda et al., 2017). Overall, we identified 89 intents that communicate _discrete_ information. **Continuous** information, as has been provided in 83 intents, is available throughout the whole task or over several task phases (i.e., it is visible independent of its relevance to the current task). It enables the human to observe the robot, compare it with the world, and evaluate the correct task execution. Tsamis et al., for example, implemented AR visualizations for a Head-Mounted Display (HMD) to continuously communicate the intended movement space of a robotic arm by placing a semitransparent red sphere around the robotic arm (Hamb et al., 2017). **Empirical Implications.**Faria et al. showed that both _discrete_ and _continuous_ information are effective for communicating a _follow me_ intent with spherical robots (Koay et al., 2017). Koay et al. also evaluated both temporal properties using a robot dog that guides people living with hearing loss. However, they found that a motion-based approach (_continuous_), in which the robot's head movements request users to follow, is more successful than using a flashing Light-Emitting Diode (LED) stripe (_discrete_). They attribute this to the fact that head movements are more straightforward to interpret (Kolmogorov, 1957). The findings of Aubert et al. suggest that combining _discrete_ and _continuous_ information is the most effective method. They showed that the combination of a motion-based approach (_continuous_) and a display approach (_discrete_) to communicate the robot motion end-point outperformed both uni-modal intent communication conditions (Corbett et al., 2019). #### 5.1.3. 
Cross Relations Inherently, the information of every intent has _spatial_ and _temporal_ properties. In the following, we describe the relationships between these properties of intent information. For _unregistered in space_, the temporal property is almost evenly distributed between _discrete_ and _continuous_ information. Here, _signal_ is an exception, as _discrete_ (23 intents; e.g., having flashing lights attached to a mobile robot to indicate a discrete change of movement direction, similar to a car (Corbett et al., 2019)) is used more often than _continuous_ (eight intents; e.g., an LED stripe attached to the robot to continuously communicate the remaining distance to the target position through a color-coded progress bar (Corbett et al., 2019)). _Signals_ are primarily used to communicate sudden changes. Accordingly, such _discrete_ events are naturally communicated as _discrete_ intent information. For _registered in space_, we see an uneven distribution for both subcategories. Intent information classified as _local_ is mostly communicated as _continuous_ information (50 intents; e.g., using SAR to continuously highlight an area in a workplace where the robot will be active during its movements and action (Corbett et al., 2019)) instead of _discrete_ (24 intents; e.g., using SAR to highlight a button on a wall that must be pushed by a human for the robot to continue its movement (Hamb et al., 2017)). We think that robot _motion_ likely relates to a continuous event because it is meant to happen over time and takes place continuously. Intent information classified as _directional_ is mostly communicated as _discrete_ information (23 intents; e.g., a display is attached to the top of a mobile robot, communicating the intended movement direction with an arrow (Corbett et al., 2019)) and only seldom as _continuous_ (8 intents; e.g., a drone is visualized as an eye in AR, constantly looking in the direction of movement (Hamb et al., 2017)). The reason is that _directions_ are primarily used to communicate an updated movement direction to the human; therefore, it makes sense that they are most often given as _discrete_ information. ### Intent Location Various technologies can enable the communication of _robot motion intent_. We found that, in particular, the placement of these technologies (_on-robot_, _on-world_, and _on-human_) can help to classify the different approaches in the literature, as there is often a relationship between the placement and specific types of technology. _On-Robot_ can be further divided into _robot-only_ technology or additional _robot-attached_ devices. We identified 114 intents communicated through _on-robot_ technology. As an example for the subcategory _robot-only_, Moon et al. utilize the head orientation of the robot, mimicking a gaze cue, to communicate mid-air locations for its intended movement as an instruction to the user (Moon et al., 2017). Nearly half of all categorized intents that utilize _on-robot_ technology fall into that subcategory, which is of particular interest because it limits the need for additional technology and often involves imitation of human-to-human behavior. The _robot-attached_ subcategory requires some additional hardware to be mounted to the robot (e.g., SAR, LED, or displays). For example, Wengefeld et al. attach a laser projection system to the robot and thereby communicate various types of intents, including _state_, _motion_, and _instruction_(Wengefeld et al., 2017). 
_On-World_ has received relatively little attention in the literature. It includes, for example, small displays attached to the workspace at object bins (Gruenefeld et al., 2017), or a desktop display (to visualize _motion intent_) with speakers (to gain _attention_) next to the robot's workspace (Gruenefeld et al., 2018). While the inability to change the environment may be less desirable from a generalizability perspective, for some technology, it adds significant benefits. In particular, SAR would be easier to realize with a fixed projector position _on-world_ and it would allow for larger projection areas. We identified eight different intents _on-world_. _On-Human_ includes _head-attached_ technologies, which primarily refers to HMD devices, which allow more complex visualizations. Gruenefeld et al., for example, experimented with different spatial visualizations, such as visualizing the intended movement path, previewing future locations of the robot arm, or visualizing the activity area as a whole (Gruenefeld et al., 2018). In addition, some approaches rely on _hand-held_ technologies. Correa et al., for example, used a tablet device displaying various types of information (map, live view, next steps) to support oversight and communicate _motion intent_(Cordes et al., 2019). We identified 50 intents _on-human_. **Empirical Implications.** For the _intent location_, it is generally better to output information closer to the target. For example, LeMasurier et al. compared several motion-based and light-based approaches for _humanoids_ to communicate an intended start of movement at an assembly workplace. They saw that an LED bracelet located closest to the workspace was the most noticeable and least confusing (Cordes et al., 2019). Furthermore, researchers found evidence that humans may prioritize _on-human_ technology over _on-robot_ technology. For example, Che et al. were able to show that the use of a vibrotactile bracelet worn by the user led to a better expression of the robot's _motion intent_, reduced users' effort, and increased users' trust in the robot during a collision-avoidance movement when compared to a solely robot-based approach using _legible motion_(McMahan et al., 2019). Finally, combining multiple output technologies can further increase performance. For example, Mullen et al. investigated a multi-modal approach for communicating robot interference in a sorting scenario that combined an AR-HMD visualization and active feedback via a vibrotactile bracelet. They found that combining both feedback types outperformed the single modality baselines. It allowed the human to more efficiently teach the robot and decreased the required interaction time. (Mullen et al., 2019). ### Relation between Location and Information In the following, we provide insights into the relationship between _intent location_ and _intent information_ (cf. Table 3). #### 5.3.1. Registered in Space To communicate location information registered in space, most researchers rely on _head-attached_ technologies, such as AR-HMDs (_on-human_). For example, Tsamis et al. implemented AR visualizations to communicate an intended movement trajectory of a robotic arm (Tsamis et al., 2019). They placed small spheres along a defined path in 3D space from the robot's end-manipulator to a specific destination. They found that using their system improved task completion and robot idle times, with fewer interruptions to the overall workflow. 
In addition, users reported increased feelings of safety and trust toward the robot. In contrast, Correa et al. proposed a tablet visualization that showed a live camera feed of the mobile robot highlighting recognized objects in its environment via a wireframe in the visualization (Cordes et al., 2019). In addition to intents displayed _on-human_, robots are often used to convey information directly through specific movements or pointing (_on-robot_). For example, Holladay et al. used a robotic arm and its end-effector to communicate a directional cue by pointing toward an object placed on a table (Holladay et al., 2019). The resulting pointing configurations were reported to make it easier for novice users to infer the target object. Another example for displaying information _on-robot_ is provided by Hetherington et al. They used SAR to project an arrow in the intended movement direction of the mobile robot on the floor (Kumar et al., 2017). Their results show that projected arrows were more socially acceptable and more understandable than flashing lights. Finally, information _registered in space_ can be outputted _on-world_. For example, Cleaver et al. used their web-based environment (Cleaver et al., 2019) to compare four different conditions of visualizing the intended movement trajectory of a mobile robot on a _world_-located display (Levas et al., 2019). In contrast, Aubert et al. placed small displays on three bins and used bin numbers and progress bars to indicate from which bin the robot coworker would next withdraw an item. However, the display-based approach could not significantly reduce the number of physical conflicts (Kumar et al., 2017). #### 5.3.2. Unregistered in Space Interestingly, a relatively large number of _symbol_ information is communicated through the robot itself (_on-robot_). Here, we found many approaches where the robot performs specific movement patterns that the human has to decode appropriately. A symbolic approach is shown by LeMasurier et al. (2019). They slightly move the robot's manipulator to the left and right to communicate an intended movement start. This approach received relatively high ratings on several measures; however, the authors recommend that the addition of light signals near the workspace and the origin of motion (like an LED bracelet) may provide a benefit to HRI in shared spaces. Song and Yamada provide an example of the type _symbol_ by using different static and dynamic light patterns on a _robot-attached_ colored LED stripe to illustrate different _states_ of the robot (Wang et al., 2018). Communication of _signal_ information is mainly achieved through robot-attached technology, such as LED or audio speakers. Wearable technologies can also show spatially _unregistered_ information (_on-human_). Che et al. propose a vibrotactile bracelet worn by the user to communicate an initiated collision-avoidance movement of a _mobile robot_(Dal #### 5.3.3. Discrete. Discrete information is usually presented directly _on-robot_. As an example of _robot-attached_ technology, Domonkos et al. attached a colored LED stripe to the base of a robotic arm to communicate the intended direction of movement to a human _coworker_(Domonkos et al., 2019). In contrast, Glas et al. proposed a _mobile robot_ that performs head gestures to initiate either a follow-me or lead-me request to the human (Gli et al., 2019), relying on the robot itself as in _robot-only_. Gu et al. 
evaluated a visual feedback displayed through an AR-HMD (_on-human_), indicating the planned movement direction of the robot via an arrow visualization (Domonkos et al., 2019). They found that the visualization improved perceived safety and task efficiency. Instead of relying on the visual modality, Mullen et al. proposed discrete feedback through a vibrotactile bracelet that is activated to communicate robot interference, triggering the human to move in order to allow the robot to continue its movement (Mullen et al., 2019). Their findings show that vibrational feedback can reduce the time required to notice and respond to an intent. Aubert et al. equipped bins (from which items could be chosen) in the environment with speakers to emit _discrete_ auditory information _on world_(Domons et al., 2019). They recommend not solely relying on auditory information, but using it in a multi-modal approach, which is further supported by Bolano et al. (2019). #### 5.3.4. Continuous Like _discrete_ information, _continuous_ information is primarily displayed _on-robot_. Matsumaru et al. _attached_ an omnidirectional display _on-robot_, projecting an eyeball-like visualization that effectively communicates the direction of movement to a human (Mullen et al., 2019). In contrast, Dragan et al. propose performing legible motions with a robotic arm itself to communicate the next object it will grasp (Domonkos et al., 2019), which they found enabled fluent collaboration. As an example of communicating intents _on-human_, Walker et al. display a symbolic representation of a focusing eye lens in an AR-HMD, encoding the relative distance to the next target (Mullen et al., 2019). Their results show a significant improvement in users' understanding of _robot motion intent_. Watanabe et al. proposed presenting _continuous_ visual feedback via a tablet to inform a wheelchair passenger of a robot's intended motion path (Watanabe et al., 2019). Lastly, _continuous_ information can be displayed _on-world_. Chandan et al. proposed a map visualization for a stationary tablet display that continuously shows the locations of three mobile robots and other objects of interest (Damand et al., 2019). They found this approach significantly improved the participants' ability to observe and assist the robot. Similarly, albeit only studied in a web-based experiment, Cleaver et al. proposed a 3D visualization displayed on a 2D screen to continuously communicate the intended path of a mobile robot (Watanabe et al., 2019). ## 6. Discussion and Future Research In the following, we discuss key findings of our literature survey and formulate future research directions as takeaway messages for the HCI community. The organization of the section follows the three entities _human, intent_, and _robot_ from our intent communication model and concludes with a discussion of the overall model. _Human._ From the analyzed intents of our corpus, we derived four different _roles of human_ (_collaborator_, _observer_, _coworker_, and _bystander_). In our analysis, we found that the human role is strongly related to the overarching goals of communicating _motion intent_ - a specific goal can be directly derived given a specific human role. For example, if the HRI scenario involves the human taking the role of an _observer_, the _motion intent_ needs to help with fostering oversight. 
As a result, this indicates that practitioners and researchers should explicitly define the role and, thereby, the involved human stakeholders before settling on the robot or specific intents they may want to communicate. The human roles we found in a bottom-up process through our analysis align well with the previous work of Onnasch and Roesler (2019). In contrast to Onnasch and Roesler, the role of the _operator_ did not show up in our analysis. We suggest this is because robots are not manually operated by humans in our corpus, as this would not require the robot to communicate any intent (Onnasch and Roesler, 2019). **Future Research**: Our analysis showed that nearly all papers a) investigate individual human roles, e.g., they (often implicitly) pick one and focus on that, and b) design and study only for a 1:1 relationship between human and robot. The only exceptions to this are Faria et al., Kirchner et al., and Palinko et al., who investigate the legibility of robot movement for a group of humans [41] or explore the use of gaze cues to allow the robot to choose their human collaboration partner from a group of humans [66; 95]. This limited involvement of multi-user groups is, of course, to be expected in an emerging field that first needs to establish certain ground truths. Involving multiple persons or even multiple robots and persons complicates HRI tremendously, yet we think this is the next step research must take. In particular, it would be interesting to reflect on the suitability of specific technologies (e.g., SAR will likely be better suited to multi-user scenarios than HMD technology). _Intent Types_. Through our scoping review of _robot motion intent_, we observed that communication of motion often requires additional intents that serve as pre- or post-cursors to the communicated _motion intent_. Furthermore, we found that robot motion can also be indirectly communicated: for example, by communicating only the robot's state (e.g., [8]) or by instructing a human to open a door so the robot can continue on its path (e.g., [127]). These various _types of intent_ demonstrate the different facets of _robot motion intent_, which represent both actual intended movement trajectories and related communication. We see this as a key finding, distinguishing our work from previous research that focuses primarily on the communication of _motion intent_ [99; 113; 124]. With our survey, we are confident that other researchers will start to adopt a more holistic and precise use of the term _robot motion intent_ and, for example, start highlighting the need for related intents, as we found in our analysis. **Future Research**: Researchers should investigate how the different _types of intent_ may best be combined to achieve specific intent communication goals. Currently, there is little empirical knowledge about, for example, when and to what extent a robot may need to first communicate _attention_ before effectively being able to communicate _motion intent_. Further research should also challenge our classification of _types of intent_ and potentially extend it. _Intent Information and Location_. We derived two main properties that categorize our identified _intent information_ related to space: _registered in space_ (61.05%) and _unregistered in space_ (38.95%).
This almost-even distribution reveals that a lot of relevant research not only focuses on information that aims to convey _local_ or _directional_ information (e.g., a resulting trajectory [27]), but also on more abstract representations, namely _description, symbol_, and _signal_. These are often much less complex and indicate that _robot motion intent_ can be communicated without visual 3D representations of future movement. This shows that there are viable alternatives to wearing special _on-body_ technology, resulting in fewer system costs and a decreased setup time. An alternative can be the _intent location on-robot_. In previous work, researchers have refined robots with anthropomorphic elements - such as eye-like features or certain movement gestures - to communicate motion intent. Our literature review identified 15 such instances, specifically applying eye- or head-gaze (e.g., looking at an object to indicate a handover between human and robot [84]). While anthropomorphic elements may not be as precise as digital representations through technology means (e.g., visualizations in AR), they share the same baselines as in Human-Human Collaboration (HHC). The general assumption is that, in turn, they can be easily understood by users and can mostly be integrated into the actual HRI. A possible combination with a verbal description provides a multi-modal output to the user, resulting in faster recognition of the specific object [111]. **Future Research**: While previous research has explored combinations of spatially registered and unregistered information [111], we are unaware of research that has contrasted their effectiveness. Therefore, current design decisions may be based more on the availability of particular technology and less on the intended outcome. Future research should explore this further so that practitioners can more accurately judge the potential trade-offs between simple or complex information and related technology use. Regarding the use of anthropomorphic features, the integration of such communication cues has been explored regarding their legibility and effectiveness in communicating _robot motion intent_. However, their implicit consequences (e.g., causing the human to ascribe human-like behavior to the robot) may still need to be fully explored. The means and cues of communication have significant consequences for the trust relationship between humans and robots [56]. _Robot._ When looking at the three _kinds of robots_ and their usage in research, we can see that the physical properties of a robot have a large impact on communication means: In particular, the _on-robot_ location for intent communication. Some robots come with pre-installed displays, while others have anthropomorphic features built in. _Flying drones_, on the contrary, require some kind of remote communication tool (often in the form of HMDs) to communicate over a larger distance. Robots are also an area of much technical experimentation, i.e., many researchers are building or customizing their own robots. For example, one may add anthropomorphic features to a robotic arm. As a result, researchers tend to use these built-in or customized features to communicate intent. They may often have only a particular kind of robot available; thus, they are limited to a certain way of communicating _robot motion intent_. Of course, this limits the generalizability of current findings, as each robot conveys unique features that can impact HRI. 
**Future Research**: These findings show that many research endeavors explore only certain _kinds of robots_. A more systematic approach is called for to investigate the various kinds of robots and their impacts on communicating _robot motion intent_. We also found that more and more research applies simulation environments in Virtual Reality (VR) to explore HRI. Nevertheless, we need more studies to validate such findings and provide a broader foundation for their generalizability. _Context._ Compared with previous research in AVs [28; 32] and eHMIs [33], we can identify several similarities, despite the substantial differences in the context of use and robot technology. Colley et al. found that visualizing internal information processed by an automated vehicle (AV) could calibrate trust by enabling the perception of the vehicle's detection capabilities (and its failures) while only inducing a low cognitive load [28]. Currano et al. explored the interaction between the complexity of head-up displays, driving style, and situation awareness [32]. In the area of eHMIs, researchers have been able to distinguish between different _natures of message_ (e.g., danger and safety zones) [33]. These correspond to our identified _types of intent_, highlighting different meanings for the user of the provided intent. In the context of AVs, the information used to formulate the actual intent is primarily unregistered in space. It uses text, symbols, and audio prompts. The intent primarily describes the vehicle's state (e.g., automated/manual, cruising, yielding) or advice/instructions to the pedestrian (e.g., to allow safe road crossing). The large differences between the fields of research result primarily from the standardizations in automotive research, such as roads, road signs, markings, and restrictions. Nevertheless, there are potential overlaps. **Future Research**: The two fields have, from our perspective, not yet shared many cross-activities among researchers, which could lead, for example, to transferring those _motion intent_ techniques that have been shown to be effective in one field to the other. We could imagine that future research could benefit both sides if a more holistic perspective is applied. In particular, the research on eHMIs in AVs could benefit from more exploratory technological approaches in HRI, such as making use of AR-HMDs and applying more advanced visualizations to communicate _motion intent_. While this may not be relevant for the near future, as such devices are not yet consumer-ready, this may change over the coming years. _The Model._ The overall model is an abstract characterization of the current literature on _robot motion intent_. It may be seen as a summary of the current understanding of the design space for robot intent communication, where it illustrates all components and highlights their interconnections. Thereby, future researchers and practitioners should benefit from the model by using it as guidance and a checklist throughout the design phase of such Human-Robot scenarios; i.e., being guided to carefully think and decide upon different types of intents or whether intent information should be encoded spatially or temporally. In addition, the model can help to unify the language of _robot motion intent_ and thereby support researchers and practitioners in finding related work as well as help to identify research gaps. **Future Research:** We invite researchers to actively challenge the model and thereby help to develop the field even further.
They should scrutinize whether the design space is sufficiently classified or how it can and needs to be extended to cover future work. As our model was derived from the analysis of our literature corpus, it is fitted to the gathered research. Nonetheless, one can utilize novel research contributions that will be published in the future to revisit and evaluate the model (i.e., to investigate whether novel contributions can still be described by our model). Moreover, we imagine that a more thorough discussion in the context of eHMIs may benefit the model, as well as incorporating other lines of research that are concerned with communicating intent, such as Sodhi et al. (2017) or Muller et al. (2017).

## 7. Conclusion

This paper provides two main contributions: 1) a survey contribution that includes an analysis and classification of previous literature as well as future research directions, and 2) a theoretical contribution that introduces an intent communication model and describes the relationships of its entities, dimensions, and underlying properties. In particular, our work highlights that _robot motion intent_ requires a broader perspective on robot intent and that it includes intent types that may seem, at first glance, unrelated to motion. However, in our analysis, we found that _attention, state_, and _instruction_ are important and often necessary pre- or post-cursors to communicate explicit _motion intent_. We also found that only a few papers explicitly discuss or present the type of intent they aim to communicate, and that they also lack clear descriptions of intent information or location. Our work aims to help researchers in the future to better align their work with the suggested dimensions, making it easier to assess and compare different studies. Therefore, we aim to provide a foundation for a unified language regarding _robot intent_, even beyond motion. From a practical perspective, the classification of the existing research literature along our _intent communication model_ helps researchers and practitioners alike to understand the design space for communicating _robot motion intent_. As it is an emerging field, much work has focused on finding novel approaches and solutions to communicate _robot motion intent_ in one way or another. We have identified multiple areas in need of future research. However, we would like to emphasize once more that, above all, the field needs more systematic analysis and comparison of different approaches to improve understanding of the influences of different intent dimensions and properties. We believe that the presented intent communication model provides an empirically derived foundation to inspire and guide such work.
2301.01605
Friction Laws and Numerical Modeling of the Seismic Cycle
Earthquakes rank among the most destructive manifestations of the Earth's dynamics. Can they be predicted? This is often the first question students ask. To answer that right away: no, at present it is not possible to anticipate the date, site and magnitude of future seismic events. However, there does exist a general framework to describe observations related to earthquakes and understand the processes that lead to their occurrence: the seismic cycle. This chapter introduces the reader to the friction laws from a historical to state of the art perspective. It then deals with mechanical modelling of the seismic cycle through simple analog models and finally presents some open questions and directions for future research.
M. Y. Thomas, H. S. Bhat
2023-01-03T11:10:30Z
http://arxiv.org/abs/2301.01605v1
# Friction Laws and numerical modeling of the seismic cycle

###### Abstract

Earthquakes rank among the most destructive manifestations of the Earth's dynamics. Can they be predicted? This is often the first question students ask. To answer that right away: no, at present it is not possible to anticipate the date, site and magnitude of future seismic events. However, there does exist a general framework to describe observations related to earthquakes and understand the processes that lead to their occurrence: the seismic cycle. This chapter introduces the reader to friction laws from a historical to a state-of-the-art perspective. It then deals with mechanical modeling of the seismic cycle through simple analog models and finally presents some open questions and directions for future research.
The frictional force \(\mathbf{F}_{fric}\) is related to the normal force \(\mathbf{F}_{n}=\overline{\sigma_{eff}}A\) through the constant \(\mu\) (\(\overline{\sigma_{eff}}\) corresponds to the effective normal stress and \(A\) to the contact area). We thus have:

\[\mu=\frac{\mathbf{F}_{fric}}{\mathbf{F}_{n}}=\frac{\tau}{\overline{\sigma_{eff}}} \tag{1}\]

Let us now consider an object of mass \(M\) placed on a table. The force \(\mathbf{F}_{n}=Mg\) is, therefore, normal to the surface. We apply a tangential force \(\mathbf{F}_{t}\) parallel to the surface of the table. If the object is initially at rest, a motion may be produced if a force \(\mathbf{F}_{t}\), greater than \(\mathbf{F}_{fric}\), is applied. In this case, the coefficient \(\mu_{s}\) is called the coefficient of static friction.

\[\mathbf{F}_{fric}=\mathbf{F}_{s}=\mu_{s}\mathbf{F}_{n} \tag{2}\]

Now, if the object is displaced at a finite velocity over the surface, it has been experimentally found that the frictional force is also proportional to the normal force, through the coefficient \(\mu_{d}\), called the coefficient of dynamic friction:

\[\mathbf{F}_{fric}=\mathbf{F}_{d}=\mu_{d}\mathbf{F}_{n} \tag{3}\]

Early experiments showed that the coefficient of static friction is different from the coefficient of dynamic friction Rabinowicz (1958). Static friction has the property of increasing logarithmically with time, and dynamic friction depends on the velocity \(V\).
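As a concrete illustration of equations 1-3, the following is a minimal numerical sketch of a block pulled across a table, deciding whether it sticks or slips; the mass, friction coefficients, and applied forces are illustrative values, not taken from the text.

```python
# Minimal sketch of Amonton-Coulomb friction (equations 1-3).
# All parameter values are illustrative.

def coulomb_friction(F_t, F_n, mu_s=0.6, mu_d=0.5):
    """Return (slips, F_fric) for a block pulled tangentially by F_t.

    The block stays at rest while F_t <= mu_s * F_n (static friction
    balances the pull); once it moves, friction drops to mu_d * F_n.
    """
    F_s = mu_s * F_n   # maximum static friction force, equation (2)
    F_d = mu_d * F_n   # dynamic friction force, equation (3)
    if F_t <= F_s:
        return False, F_t   # at rest: friction exactly balances the pull
    return True, F_d        # sliding: resistance set by dynamic friction

M, g = 10.0, 9.81           # mass (kg) and gravity (m/s^2)
F_n = M * g                 # normal force for a block resting on a table
for F_t in (40.0, 70.0):
    slips, F_fric = coulomb_friction(F_t, F_n)
    print(f"F_t = {F_t:5.1f} N -> slips: {slips}, F_fric = {F_fric:.1f} N")
```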
From the classic work carried out by Kostrov Kostrov (1964, 1966) and Eshelby Eshelby (1969), it soon became clear that friction also played a fundamental role in the initiation, rupture development and 'healing' of faults. The classic Amonton-Coulomb model, however, led to an impasse. Among other physical problems, it postulated the hypothesis of an instantaneous modification of the coefficient of friction, from its static value to its dynamic value. This brings in singularities (infinite stresses) at the rupture front (red model in figure 1). This model lacks a length scale that makes it possible to define a finite quantity of energy released at the rupture front. There are two possible options. One consists of defining the characteristic quantity of slip (between the two surfaces) required to move from static friction to dynamic friction. The other consists of introducing a characteristic time in which friction decreases from \(\mu_{s}\) to \(\mu_{d}\). In this second case, a characteristic length scale emerges when the characteristic time is related to the slip velocity. For example: to explain his experiments on friction, Rabinowicz Rabinowicz (1958) introduced the concept of a "critical distance" \(D_{c}\) during which the gap between the static friction and the dynamic friction is closed. He related this critical distance to the velocity, \(V=D_{c}/t_{w}\). Here \(t_{w}\) is called the _weakening time_. In general, the laws called _weakening friction laws_ were thus developed to reproduce seismic behavior. We speak of _weakening_ because the friction decreases with the slip (or slip rate), and these laws can thereby produce instabilities Bocquet (2013); Zhuravlev (2013); Romanet (2017). This ingredient is required to attain seismic slip velocities (m/s) in the models. We will now present the most used models in the following sections.

### Slip weakening friction law

In fracture mechanics, the model where friction weakens with distance, also known as the _cohesive zone model_, postulates that:

* the rupture process, which causes the shift from static friction to dynamic friction, is confined to the fracture plane,
* inelastic deformation begins when the stresses on the rupture front reach a certain critical level,
* we reach the value of the coefficient of dynamic friction when the displacement on the fracture plane exceeds a critical value \(\delta_{c}\) Leonov & Panasyuk (1959); Barenblatt (1959); Dugdale (1960).

This law was introduced in the context of a study of tension fractures, in order to solve the problem of singularities coming up (infinite stresses) on the rupture front (blue model in figure 1).

Figure 1: _Comparison between the rupture model hypothesizing linear elasticity (red curve) and the cohesive zone model (dotted blue curve). a) Coefficient of friction in terms of the quantity of slip. b) Quantity of slip in terms of the position along the fracture. The point \(x_{1}\) is in the position \(A\) on the friction curve and the point \(x_{2}\) is at position \(B\). c) Stress field close to the rupture front._

The slip weakening friction law was introduced by Ida Ida (1972) and Andrews Andrews (1976) to model dynamic ruptures for 2D models, and by Day Day (1982) for 3D models. This is analogous to the cohesive zone model, but for mode II fractures, that is, for shear fractures. In this law, the slip is zero until the shear stress \(\tau\) reaches a maximum value (elasticity limit) that will be denoted by \(\tau_{f}^{s}\). Once this stress is attained, the slip starts and the resistance to sliding \(\tau_{f}\) decreases linearly until the value \(\tau_{f}^{d}\), i.e., until the plane has slipped by a critical value \(\delta_{c}\):

\[\tau_{f}(\delta)=\begin{cases}(\tau_{f}^{s}-\tau_{f}^{d})\left(1-\frac{\delta}{\delta_{c}}\right)+\tau_{f}^{d}&;\delta<\delta_{c}\\ \tau_{f}^{d}&;\delta>\delta_{c}\end{cases} \tag{4}\]

If this law is combined with the Amonton-Coulomb law (equation 1), we have:

\[\tau_{f}(\delta)=\begin{cases}\left[(\mu_{s}-\mu_{d})\left(1-\frac{\delta}{\delta_{c}}\right)+\mu_{d}\right]\overline{\sigma_{eff}}&;\delta<\delta_{c}\\ \mu_{d}\overline{\sigma_{eff}}&;\delta>\delta_{c}\end{cases} \tag{5}\]

where \(\mu_{d}<\mu_{s}\). In their article, Palmer and Rice Palmer and Rice (1973) presented a law that is very close to this, for which they could derive a complete analytical solution for the rupture front. They showed that this law made it possible to regularize the numerical model by distributing the stresses and the slip over a distance controlled by the length scale in the friction law. A few nuanced but important points with respect to the slip weakening law:

1. This friction law describes the start and growth of a seismic rupture. The more the fault slips, the weaker its resistance. If the shear stress on the fault, \(\tau\), is uniform, then this law implies that the fault will continue to slip indefinitely until \(\tau<\tau_{f}\). This does not match the observations. There are therefore two possibilities: either \(\tau\) is heterogeneous along the fault, due to its geometric complexity (branches, non-planar geometry, fault jumps, etc.) or related to past earthquakes. The second possibility, since faults have finite length, is that the rupture stopped because the earthquake ruptured the entire slip plane. Consequently, when it arrived at the geometric limit of the fault, the frictional resistance \(\tau_{f}\) is infinite by definition. For most small earthquakes it seems likely that the first case is the applicable one. For larger earthquakes it may be assumed that the second case is applicable.

2. This law does not explain how the next earthquake will occur. Following an earthquake, the entire fault plane that ruptured should, logically, have a shear stress equal to the dynamic friction multiplied by the effective normal stress, i.e., \(\tau=\tau_{f}^{d}=\mu_{d}\overline{\sigma_{eff}}\). Further, for the nucleation and propagation of the next earthquake, \(\tau\) must again increase and reach the value \(\tau_{f}^{s}\). We talk about a fault plane 'healing', but the slip-weakening law does not allow this. It is thus well-suited to model a single rupture, but not to simulate the seismic cycle, where inter-seismic periods and earthquakes succeed one another over a long period of time.

3. If we go back to equation 4, but with \(\mu_{s}<\mu_{d}\), we will then have an increase in friction with slip, which does not produce instabilities. We then talk of _slip-hardening_ behavior, which leads to 'creep' type behavior.

Figure 2: _Schematic illustration of (a) the slip weakening friction law, (b) the velocity weakening friction law_

### Rate weakening friction law

In order to respond to the problem of the fault plane 'healing', i.e., to allow the shear stress \(\tau\) to return to the value \(\tau_{f}^{s}\), Burridge and Knopoff R. Burridge (1967) propose a new model.
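To make the slip-weakening behavior above concrete before detailing this rate-dependent model, here is a minimal numerical sketch of equation 5; the friction coefficients, critical slip distance, and effective normal stress are illustrative values, not taken from the text.

```python
# Minimal sketch of the linear slip-weakening law of equation 5.
# Parameter values are illustrative only.

def tau_slip_weakening(delta, sigma_eff, mu_s=0.6, mu_d=0.5, delta_c=0.4):
    """Fault strength tau_f as a function of slip delta (equation 5)."""
    if delta < delta_c:
        mu = (mu_s - mu_d) * (1.0 - delta / delta_c) + mu_d
    else:
        mu = mu_d
    return mu * sigma_eff

sigma_eff = 100e6                    # effective normal stress, Pa (illustrative)
for delta in (0.0, 0.2, 0.4, 1.0):   # slip in metres
    tau_f = tau_slip_weakening(delta, sigma_eff)
    print(f"delta = {delta:.1f} m -> tau_f = {tau_f/1e6:.1f} MPa")
```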
They base it on a key observation made in the laboratory: once the plane has slipped by the critical value \(\delta_{c}\), the friction becomes a function of the slip rate \(V\):

\[\tau_{f}(V)=(\tau_{f}^{s}-\tau_{f}^{d})\frac{V_{0}}{V_{0}+V}+\tau_{f}^{d} \tag{6}\]

where \(V_{0}\) corresponds to the characteristic slip velocity. When the slip velocity is much smaller than \(V_{0}\), the fault's resistance to slip corresponds to the static friction (\(\mu_{s}\)) multiplied by the effective normal stress (\(\overline{\sigma_{eff}}\)), i.e., \(\tau_{f}^{s}\). Conversely, when the slip velocity is much greater than \(V_{0}\), the fault's resistance to slip corresponds to \(\tau_{f}^{d}=\mu_{d}\overline{\sigma_{eff}}\). Therefore, during an earthquake, the resistance decreases because the slip velocity is large (of the order of 1 m/s). On the other hand, it rises again quickly as the slip on the fault slows down and returns to loading velocities of the order of a mm/year to cm/year. Thus, this law can not only model an earthquake individually, but also model the entire seismic cycle. Burridge and Knopoff R. Burridge (1967) applied this friction law over a series of connected block-spring systems used as a proxy for an elastic medium hosting a fault (cf. section 2.1.1).

### Rate-and-state type friction law

Continuing with the work started by Brace and Byerlee Brace & Byerlee (1966), new experimental protocols have emerged. In particular, researchers wished to explore the effect of the sudden changes in velocity observed in nature, when there is a shift from aseismic velocities (\(\sim\)cm/yr) to seismic velocities (\(\sim\)m/s). Experiments with velocity jumps in the loading of the system were carried out (figure 3). In his seminal 1998 paper, Chris Marone Marone (1998) offered an exhaustive review of these works. There are four key observations from this (figure 4).

* A sudden change in slip rate first leads to a sudden increase in the coefficient of friction. This is called the direct effect.
* A transient adjustment is then seen towards a new, stationary value of the coefficient of friction.
* The coefficient of dynamic friction depends on the slip velocity.
* The coefficient of static friction increases with time when there is no motion between the two surfaces in contact.

James H. Dieterich was the first person to propose an empirical law that could reproduce these observations both qualitatively and quantitatively Dieterich (1979a,b). He based this, notably, on his own friction experiments, with velocity jumps, that involved two ground blocks of granodiorite. He also based it on earlier experiments demonstrating that the coefficient of static friction increased with time Rabinowicz (1958). He thus interpreted the decrease of the coefficient of friction with velocity as an effect of the reduction of the mean contact time. And so, in his friction law, the coefficient of friction goes from \(\mu_{s}\) to \(\mu_{d}\) over a distance \(D_{c}\), which relates the contact time \(t\) to the slip velocity \(V\) in the following manner: \(V=D_{c}/t\). With this, he adopted an approach that was similar to that proposed by E. Rabinowicz (cf. section 1.2). The law that he proposed made it possible to bring together the different coefficients of static and dynamic friction into a single coefficient, which depends on the slip rate. It was later refined by Ruina (1983), through the introduction of a state variable \(\theta\), which follows an evolution law.
A common way to interpret \(\theta\) is to relate it to the lifespan of the asperities present on the surfaces in contact. The law was thus called the _rate-and-state_ law, due to the existence of this _state_ variable, and the dependence of the coefficient of friction on the velocity or _rate_. A modern form of the rate-and-state law was given by Marone (1998):

\[\tau_{f}(V,\theta)=\left[\mu_{0}+a\log\left(\frac{V}{V_{0}}\right)+b\log\left(\frac{\theta V_{0}}{D_{c}}\right)\right]\overline{\sigma_{eff}} \tag{7}\]

This is combined either with an evolution law called the _aging law_:

\[\dot{\theta}=1-\frac{\theta V}{D_{c}} \tag{8}\]

or with a state law called the _slip evolution_ law:

\[\dot{\theta}=-\frac{V\theta}{D_{c}}\log\left(\frac{V\theta}{D_{c}}\right) \tag{9}\]

Here, \(a>0\) and \(b\) are state parameters, of an order of magnitude of \(\sim 10^{-2}\), associated, respectively, with the direct effect and the transient change in the coefficient of friction (Figure 5). \(f_{0}\) (written \(\mu_{0}\) in equation 7) corresponds to the reference coefficient of friction at the reference velocity \(V_{0}\). At constant slip velocity \(V\), the coefficient of friction and the state variable evolve toward stationary values, \(f_{ss}\) and \(\theta_{ss}\). It is thus possible to rewrite the rate-and-state law as follows:

\[\theta_{ss}=D_{c}/V\ \ \ \ \ \&\ \ \ \ f_{ss}=f_{0}+(a-b)\log\frac{V}{V_{0}} \tag{10}\]

Figure 3: Friction experiments with velocity jumps, for different types of materials, published by Dieterich in 1994 Dieterich & Kilgore (1994)

Figure 4: Experiments on friction. Figures modified as per C. Marone Marone (1998)

Figure 5: Schematic illustration of the rate-and-state law

Thus, when \((a-b)<0\), the coefficient of friction decreases with the increase in slip velocity. We then speak of a _rate-weakening_ material. If \((a-b)>0\), then a _rate-strengthening_ behavior is obtained. Today, none of the state laws (equations 8 and 9) reproduce the full set of experimental data. The slip evolution law does not reproduce the logarithmic time dependence of the coefficient of static friction (figure 4): if \(\dot{\delta}=0\), \(\theta\) does not evolve over time. This is probably why the models tend to favor the _aging law_ Ampuero & Rubin (2008). However, this law gives a non-symmetric response depending on whether a positive (increase) or negative (decrease) velocity jump is introduced Blanpied et al. (1998); Ampuero & Rubin (2008). Several modifications were proposed to improve the state law: for example, by introducing a dependency on the normal stress Linker & Dieterich (1992), by proposing a completely different evolution of the parameter \(\theta\) Perrin et al. (1995); Kato & Tullis (2001), or by adding a dependency on the shear rate Bhattacharya et al. (2015). However, none of these laws led to a consensus. On the other hand, other promising modifications made it possible to come close to observations made in nature (cf. section 3.2). Some of those include additional friction mechanisms that increase friction through dilatancy Segall & Rice (1995); Segall & Bradley (2012), or lead to a decrease in effective friction through the pressurization of pore fluids Rice (2006); Schmitt et al. (2011).

## 2 Modeling fault behavior: the 'spring-block slider' model

In the brittle part of the crust, deformation is essentially accommodated along faults in response to tectonic plate motion.
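As a brief numerical aside before developing the spring-block analogy, the steady-state relation of equation 10 can be sketched as follows; the values of \(f_{0}\), \(a\), \(b\), and \(V_{0}\) are illustrative, and the logarithm is taken as natural, an assumption not spelled out above.

```python
import math

# Minimal sketch of the steady-state rate-and-state friction of equation 10:
# f_ss = f_0 + (a - b) * log(V / V_0).  The logarithm is taken as natural
# and all parameter values are illustrative.

def f_steady_state(V, f0=0.6, a=0.010, b=0.015, V0=1e-6):
    """Steady-state friction coefficient at slip velocity V (m/s)."""
    return f0 + (a - b) * math.log(V / V0)

# With (a - b) < 0 the material is rate-weakening: friction drops as the
# fault speeds up.  Swapping a and b would give rate-strengthening behavior.
for V in (1e-9, 1e-6, 1e-3, 1.0):
    print(f"V = {V:8.0e} m/s -> f_ss = {f_steady_state(V):.3f}")
```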
Along these faults two main behaviors are observed: either the fault creeps continuously at a velocity comparable to the plate velocity (mm/yr to cm/yr), or it remains locked for years, or even centuries, and slips suddenly in a very short time, of the order of several seconds, thus resulting in an earthquake. An earthquake of magnitude \(M_{w}\) 4-5 corresponds to an average slip of a few centimeters, a \(M_{w}\) 7 corresponds to a slip of a few meters, and a \(M_{w}\) 9 to 10 to a slip of 20 meters. It is slip at velocities of the order of m/s that causes the destructive seismic waves that propagate in the surrounding medium. A simple analogy to represent the behavior of faults on the Earth's surface is the 'spring-block slider' model (Figure 6), which is described in the following section.

### Modeling the slip on a fault: creep or earthquake

#### 2.1.1 Block-spring model

In the spring-block slider model, the force that pulls at a constant rate on the spring attached to the block represents the plate motion. The stiffness constant \(k\) of the spring represents the rock's elastic properties, while the weight of the block, the compression, and the basal friction of the block represent the friction of the fault plane (Figure 6). There is therefore competition between the shear force pulling the block, \(\mathbf{F}_{spr}\), and the force of the friction that resists the shear force, \(\mathbf{F}_{fric}\), defined as follows:

\[\mathbf{F}_{spr}=\tau\times A=k\times x \tag{11}\]

\[\mathbf{F}_{fric}=\mu\times\overline{\sigma_{eff}}\times A=\mu\times\mathbf{F}_{n} \tag{12}\]

To recall: \(\tau\) is the shear stress, \(A\) is the contact area, \(k\) is the spring's stiffness coefficient, \(\overline{\sigma_{eff}}\) is the effective normal stress, and \(\mu\) is the coefficient of friction. Depending on the law applicable to \(\mu\), for example _slip-hardening_ or _slip-weakening_, 'creep' or 'earthquakes' can be reproduced as observed in nature (cf. section 1.3). In the case of faults that produce earthquakes, we speak of _stick-slip_ behavior: alternating between long periods where the fault does not move but stress accumulates (stick) and periods where the accumulated stress exceeds the fault's resistance to slip, which results in a slip displacement.

#### 2.1.2 Earthquake and instability condition

By applying a slip-weakening law to the block-spring model, it is therefore possible to reproduce stick-slip behavior and deduce the instability condition that will lead to a rapid, 'earthquake' type slip. Initially, the spring is pulled over a distance \(x\) but the block does not move (phase 1 in figures 6 and 7). We thus have:

\[\mathbf{F}_{spr}+\mathbf{F}_{fric}=0 \tag{13}\]

Next, when the shear stress \(\tau\) becomes equal to the fault's resistance to slip, \(\tau_{f}^{s}=\mu_{s}\overline{\sigma_{eff}}\), the block begins to move. Since the block slips in the direction parallel to \(\mathbf{F}_{spr}\), this force decreases, as does \(\mathbf{F}_{fric}\), because we applied a slip-weakening type friction law to the model (cf. eq. 4).

Figure 6: _Spring-block slider model_

When \(\mathbf{F}_{spr}\) exceeds \(\mathbf{F}_{fric}\), the block accelerates (phase 2a in figure 7). We therefore add an inertial force to equation 13:

\[\mathbf{F}_{spr}+\mathbf{F}_{fric}=m\ddot{x} \tag{14}\]

When the coefficient of friction \(\mu\) reaches its dynamic value \(\mu_{d}\), \(\mathbf{F}_{fric}\) remains constant, while \(\mathbf{F}_{spr}\) continues to decrease (phase 2b in figure 7). The block finally decelerates.
After it completely stops, phase 1 (the stretching of the spring) resumes. There is therefore an 'instability', i.e., an acceleration in slip, when \(\mathbf{F}_{fric}\) decreases faster than \(\mathbf{F}_{spr}\) during the slip. The instability condition is thus defined through the following relation, where \(k\), the stiffness of the spring, must be smaller than a critical value \(k_{c}\):

\[k<k_{c}=\left|\frac{\overline{\sigma_{eff}}(\mu_{s}-\mu_{d})}{\delta_{c}}\right| \tag{15}\]

Conversely, creep is produced if \(k>k_{c}\), i.e., if the system is 'rigid' (a high \(k\)) or if the normal stress is low.

#### 2.1.3 Representation of a subduction zone

A simple way of representing a subduction zone, therefore, consists of combining several blocks, connected to each other through springs, as proposed by Burridge and Knopoff in 1967 R. Burridge (1967). The aseismic zone at depth is represented by a block whose basal friction responds to a slip-hardening law, and the seismogenic zone is represented by a block whose basal friction follows a slip-weakening law (figure 8a and b). Researchers then observed that for the seismogenic zone, the slip accumulates in 'steps' (figure 8c). This is expressed by jagged variations in the shear stress, which is accumulated over long periods of time and then released in a few seconds (figure 8d). We then speak of a _stress drop_. For the aseismic zone, after going through a plateau, which corresponds to the time required for the shear stress to reach the block's resistance to slip, i.e., \(\tau_{f}^{s}\) (figure 8d), the slip accumulates continuously and therefore there is indeed creep (figure 8c).

Figure 7: _Balance equation of forces for the block-spring model with a slip-weakening friction law_

Figure 8: _Modeling of a subduction using the block-spring method. a) Schematic representation of a subduction. b) Conceptual model. c) Accumulation of slip over time. d) State of shear stress over time._

### Modeling the seismic cycle

#### 2.2.1 Shifting to the rate-and-state law

As discussed in section 1.3, while the earlier model makes it possible to reproduce the essential steps that lead to the seismic slip, it does not allow multiple events to be chained, since \(\mu\) does not return to its static value \(\mu_{s}\) (figure 7). On the other hand, the rate-and-state law, with the state variable \(\theta\), takes into account the healing of the fault plane (figure 9). If we go back to the spring-block slider model and replace the slip-weakening friction law with a rate-and-state friction law, it is possible to derive a new instability condition. In this second case, during the acceleration phase (2a in figure 9), the slope of \(F_{fric}\) is approximately equal to \(\overline{\sigma_{eff}}(b-a)/D_{c}\). Consequently, for an instability, and potentially an earthquake, to be generated, we must have the following relation:

\[k<k_{c}\approx\left|\frac{\overline{\sigma_{eff}}(b-a)}{D_{c}}\right| \tag{16}\]

#### 2.2.2 Implications for the nucleation size of earthquakes

To move from the spring-block slider model to a slightly more realistic Earth model with elastic behavior, we use elasticity to determine the \(k\) value of an elliptical crack:

\[k=\frac{G}{(1-\nu)L} \tag{17}\]

where \(G\) is the shear modulus, \(\nu\) is the Poisson's ratio and \(L\) is the length of the zone that slips over the fault plane (figure 10).
In this case, the instability occurs when the decrease in the frictional force is greater than the decrease in elastic force, and equation 16 is rewritten as:

\[\frac{G}{(1-\nu)L}<k_{c}\approx\left|\frac{\overline{\sigma_{eff}}(b-a)}{D_{c}}\right| \tag{18}\]

Consequently, the zone that slips must be greater than a critical size \(L_{c}\) in order to become unstable and generate earthquake nucleation:

\[L>L_{c}\approx\left|\frac{D_{c}G}{(1-\nu)\overline{\sigma_{eff}}(b-a)}\right| \tag{19}\]

Figure 9: _Assessment of forces for the block-spring model with a rate-weakening friction law (rate-and-state law)_

Figure 10: _Nucleation model_

#### 2.2.3 Continuum model

In his seminal 1993 article Rice (1993), J. R. Rice highlights the importance of moving from "spring-block slider" models to continuous medium models. He demonstrated, notably, that "While the equations of Newtonian dynamics are solved exactly in these Burridge-Knopoff models, it has not been generally acknowledged that the dynamical solution for rupture along a chain of lumped masses, or a string of concentrated mass in the continuous limit, bears a presently uncertain relation to dynamical solutions for rupture along a fault embedded in a surrounding elastic continuum. For example, the response of B-K models to an instantaneous change in stress \(\tau\) along the rupture is an instantaneous change in the acceleration \(\partial^{2}\delta/\partial t^{2}\), but there is no instantaneous change in \(\partial\delta/\partial t\)." This is true, on the other hand, in continuum models. The other major drawback is: "Also, since there is no analogue to energy radiation as seismic waves in the normal implementation of the B-K models (an exception is the recent work of Knopoff et al. [1992]), all potential energy lost to the system during a rupture is fully accountable as frictional work; the same is not true for rupture in a continuum." It is therefore important to highlight, in this text, that while the block-spring model makes it possible to qualitatively reproduce the phenomena observed in nature, it is essential to shift to a continuum model if we wish to develop robust numerical models. Interested readers can consult _The mechanics of faulting: from laboratory to real earthquakes_ Bizzarri & Bhat (2012).

## 3 A more complex physical reality

### Spatial and temporal variability in the slip mode on faults

Until recently, the deformation in fault zones, in the brittle part of the crust, was attributed either to earthquakes or to the slow, continuous slip during the inter-seismic period (creep) or post-seismic period. This latter phenomenon is called _afterslip_ and corresponds to a logarithmic acceleration in the aseismic slip on the fault, which can be observed after large earthquakes. However, this paradigm of two 'extreme' behaviors is being questioned today. Advances in technology and methodology in the fields of geodesy and seismology have significantly improved our capacity to measure deformation rates and given us higher resolutions. These observations have enabled us to document a large variability in the slip dynamics in the seismogenic zone (figure 11). Faults may have chiefly seismic behavior, a slow, stable slip Thomas et al. (2014a), or a transient slip Rousset et al. (2016). In addition to this, one of the most significant discoveries of the last decade has been the existence of 'slow earthquakes' (cf. Chapter 7).
These encompass several phenomena. _Slow slip events_ rupture the fault very slowly over several hours or even days, at velocities that are higher than the inter-seismic creep (cm/yr), but slower than earthquakes, such that no detectable seismic waves are radiated Dragert et al. (2001). They are generally (though not always) accompanied by weak seismic signals of a long duration (a few minutes to a few weeks) called _non volcanic tremors_ Obara (2002). _Low frequency earthquakes_, with a duration close to a second, and _Very low frequency earthquakes_, which can last a hundred seconds, are commonly observed within _non volcanic tremors_ Ide et al. (2007); Ito et al. (2007). As a result, it is known today that slip velocities on faults cover a continuum going from a millimeter per year to a meter per second Peng & Gomberg (2010). This is therefore an essential parameter to take into consideration when modeling active faults. However, the physics behind the processes that govern this behavior is still unknown and is the subject of much active debate in the community. In addition to the large range of deformation velocities, there is a spatial and temporal variability in the slip mode. Contrary to what the schematic representation of figure 11 might suggest, the phenomena described here are not restricted to a specific depth. On some faults, creep may be recorded over the entire seismogenic zone, i.e., from the surface up to the maximum depth where earthquakes are observed Titus et al. (2006); Thomas et al. (2014a). Further, while slow earthquakes were first located beyond the seismogenic zone Obara (2002); Ide et al. (2007), non volcanic tremors and slow slip events have recently been observed at depths of less than 10 km, as well as in the sub-surface Ito & Obara (2006); Outerbridge et al. (2010). Moreover, geodetic data has shown that the seismic or aseismic behavior is not necessarily stable over time, and that the same zone may creep and slide seismically Johnson et al. (2012); Thomas et al. (2017a). These observations lead to two hypotheses. (1) These different phenomena can occur under varied pressure/temperature conditions and/or result from various deformation mechanisms. (2) They correspond to particular mechanical and rheological properties, but which vary over time. Consequently, they also vary over space, depending on what seismic cycle phase the observed site is undergoing. ### Additional mechanisms that can come into play during earthquakes The standard formulation of the rate-and-state law, (section 1.5), allows a numerical reproduction of a large number of the phenomena discussed above. However, this formulation was based on slip velocity experiments ranging from \(10^{-9}\) to \(10^{-3}\) m/s. While comparable to aseismic velocities (\(10^{-10}\) to \(10^{-9}\) m/s), they are still slow when compared to seismic velocities (\(\sim 1\) m/s). There is increasing experimental and theoretical proof that larger slip velocities and quantities of slip also come into play Lapusta & Barbot (2012). This has the effect of drastically reducing the dynamic friction. Wibberley and co-authors Wibberley et al. (2008) have compiled laboratory values for different kinds of rocks and at different loading velocities (figure 12). The lack of experimental data on the properties of friction that are applicable to earthquakes is due to the difficulty of carrying out experiments in conditions similar to earthquakes.
A laboratory experiment that would reproduce the conditions that exist during seismic slip would simultaneously involve high slip rates (1-10 m/s), large displacements (0.1-20 m), an effective normal stress of 50-200 MPa, high pore pressure (0.4 to 1 times the normal stress) and high temperature (ambient temperatures of 100 to 300\({}^{\circ}\)C, but potentially as high as 1500\({}^{\circ}\)C in the slip zone). Although considerable progress has been made over the last decade, there is as yet no device that is capable of simultaneously responding to all these requirements. It is therefore necessary to compromise on one or more factors. Tullis and Schubert highlighted this difficulty and proposed a complete review of the processes that could lead to substantial reductions in the friction coefficient with respect to its typical experimental value of 0.6 Tullis & Schubert (2015). The proposed mechanisms include:

* dynamic reduction in the normal stress or loss of contact due to the vibrations perpendicular to the interface,
* dynamic reduction in the normal stress due to the contrast in elastic properties, or permeability, on either side of the fault,
* acoustic fluidization,
* elasto-hydrodynamic lubrication,
* thermal pressurization of pore fluids,
* pressurization of pore fluids induced by the degradation of minerals,
* local heating/melting of the point of contact between the asperities,
* lubrication of the fault through fusion, in response to frictional processes,
* lubrication of the fault through the creation of a thixotropic silica gel,
* superplastic deformation of fine grains.

These highlight the difficulty of proving which mechanism is responsible for the observed experimental behavior and of designing experiments that can clearly prove or refute a mechanism proposed in theory. Nonetheless, since it is likely that one or more of these processes is activated at high slip rates, the rate-and-state law described in section 1.5 does not adequately reproduce this strong fall in the coefficient of dynamic friction. Indeed, for seismic velocities (\(\sim 1\) m/s) and a typical value of \((a-b)\) equal to \(-0.005\), we obtain a \(\mu_{d}\) of \(\sim 0.54\). Further, based on laboratory experiments, the effective \(\mu_{d}\), i.e., \(\tau/\overline{\sigma_{eff}}\), can reach very low values (0 to 0.2) during co-seismic slip. This observation has many implications for our understanding of the mechanism of earthquakes: on the amplitude of the stress drop, on the propensity of earthquakes to propagate in pulse form, on the amplitude of ground movements, and on the orientation of stresses in the crust. Figure 12: _Dependence of the coefficient of dynamic friction, in a continuous regime, on the slip velocity. Figure modified as per Wibberley et al. (2008)._ N. Lapusta and S. Barbot propose two ways of modifying the _rate-and-state_ law to take into account these additional weakening mechanisms Lapusta & Barbot (2012). Interested readers may refer to their publication for more details. ### Going beyond the elastic Earth model Many field studies, geophysical observations, and laboratory experiments have highlighted the strong coupling that exists between the main rupture plane and the surrounding medium. Fault zones are not made up only of a major plane where the majority of slip occurs, but form a complex system, surrounded by a zone where the host rock is intensely fractured (figure 13).
Seismic ruptures result in damage around the faults with an exponential decrease in the density of microfractures perpendicular to the main slip plane Anders & Wiltschko (1994); Mitchell & Faulkner (2009). The damage modifies the microstructure and changes the elastic properties of the rocks at the level of the fault breccia and in the adjacent medium Walsh (1965a,b); Faulkner et al. (2006). These changes, in turn, modify the extent and dynamics of the rupture as well as the radiation of seismic waves Thomas et al. (2017b). They also influence seismic processes during the post-seismic period, such as aftershocks, with the minimum size of the nucleation zone depending chiefly on the elastic modulus Rubin & Ampuero (2005). In their experimental study, Gratier et al. (2014) have also demonstrated that the co-seismic damage would promote aseismic slip through pressure-dissolution, thus explaining the afterslip recorded after large earthquakes. Co-seismic damage also increases permeability (figure 13e), which results in a variation in the fluid pressure Sibson (1994) that modifies the fault's resistance to slip. Geophysical observations suggest that this effect is transient (figure 13d), because a gradual and partial recovery of the elastic properties after the earthquake has been recorded Hiramatsu et al. (2005); Froment et al. (2014). This evolution is probably related to the healing of microfractures and faults through the precipitation of dissolved substances, products of alteration and/or the development of clay minerals Mitchell & Faulkner (2008). In their model, den Hartog & Spiers (2014) propose that the compaction through pressure-dissolution leads in turn to the recovery of seismogenic behavior. Moreover, several studies have demonstrated the influence of the properties of the surrounding rock on the behavior of faults. Audet and co-authors have shown a direct relationship between the physical properties of the plate interface in subduction zones and the recurrence of slow earthquakes Audet & Burgmann (2014). In a microstructure study of Taiwan's Longitudinal Valley fault, Thomas and co-authors were able to demonstrate that the aseismic behavior of the fault was controlled by the inherited microstructure Thomas et al. (2014b). Perrin et al. (2016) looked at the influence of the 'maturity' of the faults on the accumulation of slip. A study of 27 earthquakes concluded that the more damage the fault presents (mature fault), the greater the quantity of slip during an earthquake. ## 4 Transition towards a new generation of models The usual way of looking at the fault restricts the deformation in the brittle part of the crust to slip along the interface (fault plane), loaded by creep at depth, whose behavior is controlled by its frictional properties Scholz (1998). According to these properties, when the resistance threshold is exceeded, the stress accumulated when the fault is locked is released through seismic slip or creep, or during slow earthquakes. Further, as the previously cited studies have highlighted, while the behavior of the fault zones is intrinsically related to the properties of the main slip plane, it also depends on the properties of the surrounding rock. In parallel, the displacement on the faults induces a modification of the physical properties of the surrounding medium. These observations suggest the existence of a second 'cycle' where the properties of the fault zone evolve with respect to the slip dynamics, which in turn influences the deformation mode.
However, the majority of models used today do not take this complex feedback into account. By attributing constant properties (pressure, temperature, petrology, microstructure) that do not evolve with deformation, we neglect to take into account how seismic/aseismic fault behavior is impacted by temporal variations of the physical properties of the volume and the interface. It is thus useful to develop a new generation of models that take into account the spatio-temporal evolution of physical properties in fault zones. New models are being developed and have already shown the importance of these interactions from a seismic point of view Thomas et al. (2017b); Thomas & Bhat (2018); Okubo et al. (2019).
2305.14099
Energy storage properties of ferroelectric nanocomposites
An atomistic effective Hamiltonian technique is used to investigate the finite-temperature energy storage properties of a ferroelectric nanocomposite consisting of an array of BaTiO$_{3}$ nanowires embedded in a SrTiO$_{3}$ matrix, for electric field applied along the long axis of the nanowires. We find that the energy density \textit{versus} temperature curve adopts a nonlinear, mostly temperature-independent response when the system exhibits phases possessing an out-of-plane polarization and vortices while the energy density more linearly increases with temperature when the nanocomposite either only possesses vortices (and thus no spontaneous polarization) or is in a paraelectric and paratoroidic phase for its equilibrium state. Ultrahigh energy density up to $\simeq$140 J/cm$^{3}$ and an ideal 100% efficiency are also predicted in this nanocomposite. A phenomenological model, involving a coupling between polarization and toroidal moment, is further proposed to interpret these energy density results.
Zhijun Jiang, Zhenlong Zhang, Sergei Prokhorenko, Yousra Nahas, Sergey Prosandeev, Laurent Bellaiche
2023-05-23T14:22:01Z
http://arxiv.org/abs/2305.14099v1
# Energy storage properties of ferroelectric nanocomposites ###### Abstract An atomistic effective Hamiltonian technique is used to investigate the finite-temperature energy storage properties of a ferroelectric nanocomposite consisting of an array of BaTiO\({}_{3}\) nanowires embedded in a SrTiO\({}_{3}\) matrix, for electric field applied along the long axis of the nanowires. We find that the energy density _versus_ temperature curve adopts a nonlinear, mostly temperature-independent response when the system exhibits phases possessing an out-of-plane polarization and vortices while the energy density more linearly increases with temperature when the nanocomposite either only possesses vortices (and thus no spontaneous polarization) or is in a paraelectric and paratoroidic phase for its equilibrium state. Ultrahigh energy density up to \(\simeq\)140 J/cm\({}^{3}\) and an ideal 100% efficiency are also predicted in this nanocomposite. A phenomenological model, involving a coupling between polarization and toroidal moment, is further proposed to interpret these energy density results. ## I Introduction Dielectric capacitors with high energy densities and efficiencies are particularly promising for advanced electronics and electric power systems due to their ultrafast charging/discharging rates [1; 2; 3; 4; 5]. However, traditional commercial dielectric capacitors, such as biaxially oriented polypropylene (BOPP), possess a relatively low energy density of about 1.2 J/cm\({}^{3}\) [6], and intensive work has been devoted to improving their energy densities and efficiencies. One key parameter for energy storage is the recoverable energy density, which is defined as \(U=\int_{P_{\mathrm{r}}}^{P_{\mathrm{max}}}\mathcal{E}dP\) [4], where \(P_{\mathrm{max}}\) is the maximum polarization at the maximal applied field, \(\mathcal{E}_{\mathrm{max}}\), and \(P_{\mathrm{r}}\) is the remnant polarization under zero electric field. Another key parameter is the efficiency \(\eta\), defined as \(\eta=[U/(U+U_{\mathrm{loss}})]\times 100\%\)[4], where \(U_{\mathrm{loss}}\) is the dissipated energy because of hysteresis loss and is associated with the area inside the polarization-_versus_-electric field (\(P\)-\(\mathcal{E}\)) hysteresis loop. In the last decade, ferroelectric thin films, dielectrics, antiferroelectrics, relaxor ferroelectrics, superlattices, and lead-free paraelectrics have been intensively studied in the search for large energy densities and efficiencies [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. For instance, an ultrahigh energy density of 112 J/cm\({}^{3}\) with a high energy efficiency of 80% has been observed in lead-free ferroelectric BiFeO\({}_{3}\)-BaTiO\({}_{3}\)-SrTiO\({}_{3}\) films [7]. For antiferroelectrics, a giant energy density of 154 J/cm\({}^{3}\) and 97% efficiency has been achieved in epitaxial lead-free thin films [8]. Moreover, relaxor ferroelectrics can also possess ultrahigh energy densities up to 156 J/cm\({}^{3}\) and efficiencies above 90% [11; 12; 13; 14]. Epitaxial and initially nonpolar AlN/ScN superlattices have also been predicted to have ultrahigh energy density up to 200 J/cm\({}^{3}\) with an ideal efficiency of 100% [16]. Furthermore, ferroelectric nanocomposites combining ceramic filler and polymer matrix have shown great potential for high energy storage capacitors because of their high breakdown strength and high dielectric permittivity [18; 19; 20].
Experimentally, nanocomposites made of Ba\({}_{0.2}\)Sr\({}_{0.8}\)TiO\({}_{3}\) nanowires were shown to reach a high energy density of 14.86 J/cm\({}^{3}\) at \(4.5\times 10^{8}\) V/m. Based on phase field calculations, Liu _et al_. also numerically found an energy density of 5 J/cm\({}^{3}\) and over 95% high energy efficiency at a relatively low electric field of 140 MV/m, in nanocomposites consisting of ferroelectric BaTiO\({}_{3}\) filler embedded in a polymer matrix [21]. Interestingly, using atomistic effective Hamiltonian simulations, different phases were predicted in ferroelectric nanocomposites consisting of periodic arrays of BaTiO\({}_{3}\) nanowires embedded in a SrTiO\({}_{3}\) matrix, for different temperature regions [22]. Some of these phases have a coupled macroscopic polarization and an electrical toroidal moment associated with vortices at low and intermediate temperatures [23] while heating the system leads to the progressive disappearance of the polarization and then vortices (note that frustration and ordering of topological defects were also found there [24]). One may therefore wonder how these phases, as well as the coupling between polarization and electrical toroidal moment, affect energy storage properties in ferroelectric nanocomposites. Also, can these properties be large? Is it also possible to develop a simple model to analyze and explain their energy density, which may help in designing future ferroelectric nanocomposite systems with large energy storage performance? The aim of this work is to address all the aforementioned important issues by conducting atomistic first-principles-based effective Hamiltonian simulations and interpreting the energy storage results via a phenomenological model. Such simulations and the phenomenological model allow us to obtain deep insight into energy storage properties of nanostructures. In particular, ultrahigh energy densities (up to 141.2 J/cm\({}^{3}\)) with an ideal efficiency of 100% are presently found. We also demonstrate that the energy density of ferroelectric nanocomposites can be decomposed into three energy contributions, each associated with a different behavior as a function of temperature for different equilibrium phases. This article is organized as follows. Section II describes details about the effective Hamiltonian scheme used here. Results are presented in Sec. III. Finally, a summary is provided in Sec. IV. ## II Methods Here, we use the first-principles-based effective Hamiltonian (\(H_{\text{eff}}\)) approach developed in Ref. [25], with the total internal energy \(E_{\text{int}}\) being written as a sum of two main terms: \[E_{\text{int}}=E_{\text{ave}}(\{\mathbf{u}_{i}\},\{\eta_{I}\},\{\eta_{H}\})+E_{\text{loc}}(\{\mathbf{u}_{i}\},\{\eta_{I}\},\{\sigma_{j}\},\{\eta_{\text{loc}}\}), \tag{1}\] where the first energy term \(E_{\text{ave}}\) is associated with the local soft mode \(\{\mathbf{u}_{i}\}\) in unit cell \(i\) (that is directly proportional to the electric dipole moment centered on Ti site \(i\)), and on the \(\{\eta_{I}\}\) and \(\{\eta_{H}\}\) inhomogeneous and homogeneous strain tensors, respectively. \(E_{\text{ave}}\) consists of five energetic parts: (i) a local mode self-energy; (ii) the long-range dipole-dipole interaction; (iii) short-range interactions between local soft modes; (iv) an elastic energy; and (v) interactions between local modes and strains [26]. The second energy term, \(E_{\text{loc}}\), involves the \(\{\sigma_{j}\}\) and \(\{\eta_{\text{loc}}\}\) parameters.
\(\{\sigma_{j}\}\) characterizes the atomic configuration of the \(A\) sublattice, that is, \(\sigma_{j}=+1\) or \(-1\) corresponds to the distribution of Ba or Sr ions located at the \(j\) sites of the \(A\) sublattice in (Ba\({}_{x}\)Sr\({}_{1-x}\))TiO\({}_{3}\) systems, respectively. \(\{\eta_{\text{loc}}\}\) represents the local strain stemming from the difference in ionic radii between Ba and Sr atoms (which is relatively large, \(\simeq\)2%). We presently employ this effective Hamiltonian scheme within Monte Carlo (MC) simulations and large supercells to obtain energy storage properties in a ferroelectric BaTiO\({}_{3}\)-SrTiO\({}_{3}\) nanocomposite. Note that \(E_{\text{int}}\) of Eq. (1) is used in Monte Carlo simulations with the Metropolis algorithm [27], which allows one to compute finite-temperature properties of ferroelectric nanocomposites. Note also that \(E_{\text{loc}}\) of Eq. (1) automatically implies that intrinsic effects of the interface on physical properties (such as local electric dipoles and local strains) are accounted for. On the other hand, the role of structural defects such as dislocations is not included. Practically, we consider a ferroelectric nanocomposite system made of a periodic square array of BaTiO\({}_{3}\) (BTO) nanowires embedded in a SrTiO\({}_{3}\) (STO) medium [22]. Figure 1 shows the considered nanocomposite structure used in this study. Note that each wire of this nanocomposite has a 4.8 \(\times\) 4.8 nm\({}^{2}\) (144 sites of BTO) rectangular (\(x\), \(y\)) cross section and a long axis running along the \(z\) axis (\(x\), \(y\), and \(z\) axes are parallel to the pseudocubic [100], [010], [001] directions, respectively). Adjacent wires are separated by 6 sites (\(\simeq\)2.4 nm) of SrTiO\({}_{3}\) medium. This nanocomposite is mimicked by a 36 \(\times\) 36 \(\times\) 6 supercell (that contains 38,880 atoms), with a periodicity of 6 sites (\(\simeq\)2.4 nm) along the \(z\) axis. To mimic the energy storage properties under an applied dc electric field, an additional term \(-\sum_{i}\mathbf{p}_{i}\cdot\mathcal{E}\) is added to the total internal energy \(E_{\text{int}}\), where \(\mathbf{p}_{i}\) is the local electric dipole (which is equal to the product between the local soft mode \(\mathbf{u}_{i}\) and its Born effective charge \(Z^{*}\)), and \(\mathcal{E}\) is the electric field that is applied along the \(z\) axis. To obtain converged results, 20,000 MC sweeps are run for equilibration and an additional 20,000 MC sweeps are used to get the statistical thermal averages at each considered temperature and applied electric field. Note that we numerically found that the theoretical electric field is larger than the measured one by a factor of 1.3 in (Ba\({}_{x}\)Sr\({}_{1-x}\))TiO\({}_{3}\) compounds, by comparing the \(H_{\text{eff}}\)-obtained \(P\)-\(\mathcal{E}\) loop with the experimental one for disordered (Ba\({}_{0.5}\)Sr\({}_{0.5}\))TiO\({}_{3}\) solid solutions at 300 K [28]. To correct for such a discrepancy, the electric fields considered in the present study are divided by a factor of 1.3. Figure 2 shows the resulting renormalized \(P\)-\(\mathcal{E}\) loop of the disordered (Ba\({}_{0.5}\)Sr\({}_{0.5}\))TiO\({}_{3}\) system at room temperature, which matches the experimental one rather well. Note that such rescaling is an approach that has been previously successful in several compounds [9; 14; 16; 29].
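The full effective Hamiltonian contains many coupled terms; the short sketch below (in reduced units) only illustrates the two sampling ingredients described above that matter for the energy storage calculation: the Metropolis acceptance rule and the field-coupling term \(-\sum_{i}\mathbf{p}_{i}\cdot\mathcal{E}\), together with the 1.3 field rescaling. The single-site double-well energy used here is an assumed stand-in, not the actual \(E_{\text{int}}\) of Eq. (1), and all numerical values are illustrative.

```python
# Toy Metropolis sampler (reduced units) illustrating the acceptance rule and the
# applied-field term; the on-site double well is an assumed stand-in for E_int of Eq. (1).
import numpy as np

rng = np.random.default_rng(0)
N = 64                  # number of cells (assumed, much smaller than the real supercell)
A, B = -4.0, 4.0        # double-well coefficients in units of kB*T (assumed)
E_exp = 1.0             # "experimental" field in reduced units (assumed)
E_sim = 1.3 * E_exp     # the theoretical field exceeds the measured one by a factor of 1.3,
                        # so simulated fields are divided by 1.3 when compared to experiment

p = rng.normal(0.0, 0.1, N)   # z-components of the local dipoles (reduced units)

def site_energy(pz, field):
    """Assumed on-site double well plus the coupling -p.E to the applied field."""
    return A * pz**2 + B * pz**4 - field * pz

for sweep in range(2000):                 # MC sweeps for equilibration (illustrative count)
    for i in rng.permutation(N):
        trial = p[i] + rng.normal(0.0, 0.1)
        dE = site_energy(trial, E_sim) - site_energy(p[i], E_sim)
        if dE < 0.0 or rng.random() < np.exp(-dE):   # energies already in kB*T units
            p[i] = trial                             # Metropolis acceptance

print(f"<p_z> under field = {p.mean():.3f} (reduced units)")
```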
## III Results and discussion ### Different phases in the chosen BaTiO\({}_{3}\)-SrTiO\({}_{3}\) nanocomposite Figures 3(a)-3(h) show the temperature dependence of the overall and individual polarizations, the electrical toroidal moment, and the dipolar configurations in a given (\(x\), \(y\)) plane for different temperatures in the chosen BaTiO\({}_{3}\)-SrTiO\({}_{3}\) nanocomposite. The polarization contributions of BTO wires and STO medium to the total \(z\)-component polarization in Fig. 3(a) are given by \(P_{z}(\text{BTO})=a_{\text{lat}}Z^{*}\sum\mathbf{u}_{\text{BTO}}/NV\) and \(P_{z}(\text{STO})=a_{\text{lat}}Z^{*}\sum\mathbf{u}_{\text{STO}}/NV\), where \(a_{\text{lat}}\) is the five-atom lattice constant, \(Z^{*}\) represents the Born effective charge associated with the local mode, \(N\) is the number of sites in the supercell, \(V\) is the unit cell volume, and \(\sum\mathbf{u}_{\text{BTO}}\) and \(\sum\mathbf{u}_{\text{STO}}\) are the sum of the local modes centered on BTO wires and STO medium, respectively. Figure 1: Schematic representation of the 36 \(\times\) 36 \(\times\) 6 supercell mimicking the studied nanocomposite. (a) The structure is comprised of four BaTiO\({}_{3}\) nanowires (red color) with each one having a cross section of 12 \(\times\) 12 (144 Ti sites) along the \(x\)- and \(y\)-directions separated by six sites of SrTiO\({}_{3}\) medium (green tubes), with a periodicity of six Ti sites along the \(z\)-axis ([001] pseudocubic direction). (b) The top view of the ferroelectric nanocomposite supercell. Note that the electrical toroidal moment is defined as \(\mathbf{G}_{j}=\frac{1}{2N_{j}}\sum_{i,j}\mathbf{r}_{i,j}\times\mathbf{p}_{i,j}\), where \(N_{j}\) is the number of sites in nanowire \(j\); \(\mathbf{p}_{i,j}\) is the local electrical dipole of site \(i\) in wire \(j\), which is located at \(\mathbf{r}_{i,j}\). A nonzero value of \(\mathbf{G}_{j}\) typically characterizes a dipole vortex in the nanowire \(j\) [30], and the data of Fig. 3(b) represent the average of these \(\mathbf{G}_{j}\) over the four BaTiO\({}_{3}\) wires. Based on the evolutions of polarizations and toroidal moment _versus_ temperature, six different phases are identified for this nanocomposite system. For instance, Phase I exhibits a significant polarization and toroidal moment in BTO wires both along the pseudocubic [001] direction while the STO medium possesses vortices and antivortices in addition to a polarization along [001] [see Fig. 3(c)]; Phase II still has both vortices and polarization along the [001] direction in the BTO nanowire, and antivortices and polarization still occur in the STO medium. However, the \(z\)-component of the dipoles in the STO medium is significantly reduced in Phase II [see Figs. 3(a) and 3(d)]; In Phase III, the polarization and vortices still appear in the BTO nanowires, but the vortices and antivortices in the STO medium have basically disappeared [see Fig. 3(e)]; Phase IV distinguishes itself from Phase III by the annihilation of the \(z\)-component of the electrical dipoles in the STO medium [see Figs. 3(a) and 3(f)]. In Phase V, the overall polarization vanishes, which indicates that the polarization disappears in both BTO wires and STO medium [see Fig. 3(a)]. However, the vortices still exist in the BTO nanowires in Phase V [see Fig. 3(g)] as consistent with the nonzero \(z\)-component of the electrical toroidal moment [see Fig.
3(b)]; The paraelectric and paratoroidic Phase VI occurs above 330 K where both the overall polarization and electrical toroidal moment have vanished [see Figs. 3(a), 3(b), and 3(h)]. Note that our predicted phases and their temperature range shown in Fig. 3 are in rather good agreement with previous theoretical findings [22] except for adding the (previously overlooked) new Phase IV where the polarization in the STO medium has disappeared. Note also that the temperatures at which successive changes in phases happen are 75, 125, 190, 240, and 330 K from Phase I to Phase VI, respectively, as emphasized in Figs. 3(a) and 3(b). In order to determine the boundaries between Phases I-IV, we identified the temperatures at which the in-plane and out-of-plane dielectric responses peak below 240 K--similarly to Ref. [22]. We also looked at the temperature dependence of the electrical toroidal moment in the BaTiO\({}_{3}\) nanowires above 240 K, to determine the boundary between Phases V and VI. ### Energy storage properties in the BaTiO\({}_{3}\)-SrTiO\({}_{3}\) nanocomposite In order to investigate the energy storage properties in the BaTiO\({}_{3}\)-SrTiO\({}_{3}\) nanocomposite, a dc electric field \(\mathcal{E}\) is applied along the pseudocubic [001] direction (\(z\) axis). Figure 4(a) displays the response of the \(z\)-component of the overall polarization when the electric field increases from zero to \(\mathcal{E}_{\text{max}}=4.5\times 10^{8}\) V/m for different temperatures. We also numerically find that the charging and discharging processes are completely reversible (note that the charging and discharging correspond to the processes of increasing the electric field from zero to the maximum applied field and then decreasing the field back to zero, respectively) for any considered temperature. The resulting efficiency is therefore 100%, which has also been reported in epitaxial AlN/ScN superlattices [16] and lead-free Ba(Zr, Ti)O\({}_{3}\) relaxor ferroelectrics [14] because these two latter compounds possess a field-induced _second-order_ transition from an overall paraelectric to ferroelectric state. Figure 4(b) shows the electric field as a function of polarization for the same temperatures as those indicated in Fig. 4(a), which allows us to extract the energy density \(U=\int_{P_{\text{r}}}^{P_{\text{max}}}\mathcal{E}dP\) by integrating the area below the \(\mathcal{E}\)-versus-\(P\) curve. Such a procedure can be done for any temperature and for any \(\mathcal{E}_{\text{max}}\). Figure 4(c) displays the resulting energy density as a function of temperature, when choosing three maximal applied electric fields \(\mathcal{E}_{\text{max}}\): \(4.5\times 10^{8}\), \(6.4\times 10^{8}\), and \(10\times 10^{8}\) V/m. Note that a field of \(4.5\times 10^{8}\) V/m has been experimentally realized in Ba\({}_{0.2}\)Sr\({}_{0.8}\)TiO\({}_{3}\) nanocomposites [19]; \(6.4\times 10^{8}\) V/m has been reached in commercial polypropylene capacitors [6]; and a field of \(10\times 10^{8}\) V/m was reported in La\({}_{0.1}\)Bi\({}_{0.9}\)MnO\({}_{3}\) and BaTiO\({}_{3}\) films [31; 32]. As shown in Fig. 4(c), the energy density is only slightly temperature-dependent and is nonlinear below 240 K (these temperatures correspond to the polar Phases I, II, III and IV).
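As a concrete illustration of how the energy density is extracted from an \(\mathcal{E}\)-versus-\(P\) curve, the sketch below numerically integrates an assumed smooth, anhysteretic field-polarization relation between \(P_{\text{r}}\) and \(P_{\text{max}}\); the curve and its coefficients are placeholders for illustration, not the simulated nanocomposite response.

```python
# Sketch: recoverable energy density U as the area below an anhysteretic E(P) curve,
# integrated between P_r and P_max with the trapezoidal rule. The E(P) relation below
# is an assumed placeholder, not the MC-simulated response of the nanocomposite.
import numpy as np

P = np.linspace(0.0, 0.45, 200)      # polarization grid [C/m^2]; P_r = 0 assumed here
E = 2.0e8 * P + 3.0e9 * P**3         # assumed nonlinear field-polarization relation [V/m]

U = np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(P))   # integral of E dP, in J/m^3
print(f"P_max = {P[-1]:.2f} C/m^2 -> U = {U / 1e6:.1f} J/cm^3")
```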
The energy density is equal to 38.3, 58.7, and 105.1 J/cm\({}^{3}\) at 240 K when the maximal applied fields are \(4.5\times 10^{8}\), \(6.4\times 10^{8}\), and \(10\times 10^{8}\) V/m, respectively. Figure 2: \(P\)-\(\mathcal{E}\) hysteresis loops obtained from MC data and from measurements in the (Ba\({}_{0.5}\)Sr\({}_{0.5}\))TiO\({}_{3}\) system at 300 K (note that the theoretical electric field has been divided by a factor of 1.3). On the other hand, when the temperature is between 240 and 600 K (which is the range associated with Phases V and VI), the energy density significantly and more linearly increases with temperature, which provides values up to 59.5, 86.1 and 141.2 J/cm\({}^{3}\) for \(\mathcal{E}_{\text{max}}=4.5\times 10^{8},6.4\times 10^{8}\), and \(10\times 10^{8}\) V/m, respectively. Strikingly, the predicted energy densities in the studied BaTiO\({}_{3}\)-SrTiO\({}_{3}\) nanocomposites therefore exceed the experimental value of 14.86 J/cm\({}^{3}\) reported for a maximum electric field of \(4.5\times 10^{8}\) V/m in Ba\({}_{0.2}\)Sr\({}_{0.8}\)TiO\({}_{3}\) nanowires [19], and are also much larger than the energy density of 1.2 J/cm\({}^{3}\) achieved in a commercial capacitor with a maximal field of \(6.4\times 10^{8}\) V/m [6]. To understand the energy density behaviors depicted in Fig. 4(c), one can use the following simple Landau-type free energy potential: \[F=\frac{1}{2}a_{0}P^{2}+\frac{1}{4}bP^{4}+\frac{1}{6}cP^{6}+\frac{1}{2}dP^{2}G^{2}-\mathcal{E}P, \tag{2}\] where \(a_{0}\), \(b\), and \(c\) are the quadratic, quartic, and sextic coefficients, respectively, while \(d\) quantifies the sign and strength of the biquadratic coupling between polarization and electrical toroidal moment. Under the equilibrium condition, the polarization \(P\) satisfies \(\frac{\partial F}{\partial P}=0\), which yields: \[\begin{split}\mathcal{E}&=(a_{0}+dG^{2})P+bP^{3}+cP^{5}\\ &=aP+bP^{3}+cP^{5},\end{split} \tag{3}\] where \(a=a_{0}+dG^{2}\). As shown in Fig. 4(b), the electric field _versus_ polarization (\(\mathcal{E}\)-\(P\)) data obtained from MC simulations for all considered temperatures can be relatively well fitted by the rather simple Eq. (3) (see solid blue lines), taking the total polarization from Fig. 3(a) and allowing \(a\) to be a free parameter while \(b\) and \(c\) are constant for any temperature (as consistent with traditional Landau theories). Such good fitting confirms the validity of our present Landau model, and also results in the determination of the \(b\) and \(c\) coefficients as well as the temperature behavior of \(a\). This latter is shown in Fig. 4(d) for the maximal field \(\mathcal{E}_{\text{max}}=4.5\times 10^{8}\) V/m applied along the [001] pseudocubic direction. Figure 3: Temperature dependence of some properties in the studied BaTiO\({}_{3}\)-SrTiO\({}_{3}\) nanocomposite: (a) the macroscopic polarization, as well as the contribution of the \(z\)-component of the polarization in the BaTiO\({}_{3}\) wires and SrTiO\({}_{3}\) medium to the overall polarization; (b) the average electrical toroidal moment in the BaTiO\({}_{3}\) nanowires; and (c)-(h) snapshots of dipolar configurations in a given (\(x\), \(y\)) plane at 10 K (Phase I), 100 K (Phase II), 150 K (Phase III), 200 K (Phase IV), 300 K (Phase V), and 400 K (Phase VI) under zero electric field, respectively. The color bars indicate the magnitude of the out-of-plane component of the local modes.
Moreover, this fitting also provides the values of the \(b\) and \(c\) parameters, namely 30.9\(\times 10^{8}\) V m\({}^{5}\)/C\({}^{3}\) and 114.2\(\times 10^{8}\) V m\({}^{9}\)/C\({}^{5}\), respectively. Figure 4: (a) \(P\)-\(\mathcal{E}\) curves at different selected temperatures obtained from MC simulations for electric field applied along the pseudocubic [001] direction. (b) Electric field versus polarization at these different temperatures in the BaTiO\({}_{3}\)-SrTiO\({}_{3}\) nanocomposite. The solid blue lines represent the fit of the MC \(\mathcal{E}\)-\(P\) data by Eq. (3). (c) Energy density obtained from MC simulation data _versus_ temperature for electric fields applied along the pseudocubic [001] direction, with maximal applied electric fields of \(\mathcal{E}_{\text{max}}=4.5\times 10^{8}\), 6.4 \(\times\) 10\({}^{8}\) and 10 \(\times\) 10\({}^{8}\) V/m, respectively. (d) Fitting parameter \(a\) of Eq. (3) as a function of temperature when the maximal applied electric field is equal to 4.5 \(\times\) 10\({}^{8}\) V/m. We will comment on the temperature behavior of the \(a\) coefficient soon, but let us first recall that the recoverable energy density can be written as \(U=\int_{P_{\rm r}}^{P_{\rm max}}\mathcal{E}dP\), which, when inserting Eq. (3), gives: \[U =\int_{P_{\rm r}}^{P_{\rm max}}(aP+bP^{3}+cP^{5})dP\] \[=\frac{1}{2}a(P_{\rm max}^{2}-P_{\rm r}^{2})+\frac{1}{4}b(P_{\rm max}^{4}-P_{\rm r}^{4})+\frac{1}{6}c(P_{\rm max}^{6}-P_{\rm r}^{6}), \tag{4}\] where \(P_{\rm max}\) is the maximum polarization at \(\mathcal{E}_{\rm max}\) and \(P_{\rm r}\) is the remnant polarization. Equation (4) therefore tells us that \(U\) is a rather straightforward function of \(a\), \(b\), \(c\), \(P_{\rm r}\) and \(P_{\rm max}\). Note that \(P_{\rm r}\) is directly obtainable from the MC data, and is nonzero for temperatures between 5 and 240 K (which covers the ranges of Phases I, II, III, and IV) while it vanishes for temperatures above 240 K (corresponding to Phases V and VI) as shown in Figs. 3(a) and 4(a). Note also that \(P_{\rm max}\) can either be directly obtained from the MC simulations by taking the value of the polarization at \(\mathcal{E}_{\rm max}\) or computed via Eq. (3) at this considered \(\mathcal{E}_{\rm max}\) and using the MC-fitted parameters of \(a\), \(b\) and \(c\). As we are going to see, both procedures give similar results. Let us now comment on the \(a\) coefficient. Figure 4(d) shows that the fitting parameter \(a\) has a nonlinear behavior for temperatures below 240 K and then basically linearly increases with temperature between 240 and 600 K, at \(\mathcal{E}_{\rm max}=4.5\times 10^{8}\) V/m. For instance, the \(a\) parameter decreases its magnitude from \(-2.44\times 10^{8}\) to \(-0.73\times 10^{8}\) V m/C in a nonlinear fashion between 5 and 240 K, then linearly increases with temperature between 240 and 600 K up to 4.16\(\times 10^{8}\) V m/C--with \(a\) being equal to zero at 300 K. Note that, as shown by the open-circle symbols of Fig. 4(d), the fitting parameter \(a\) can be well fitted by \(a_{1}(T_{c}-T)+dG^{2}\) in the polar Phases I, II, III and IV below 240 K with \(T_{c}\) being equal to 300 K and the toroidal moment being the one shown in Fig. 3(b). The resulting \(a_{1}\) is \(-4.5\times 10^{5}\) V m/C K while \(d=183.4\) V A\({}^{3}\)/\(e^{3}\).
The positive sign of \(d\) therefore indicates a competition between polarization and toroidal moment, which also explains why the fitting provides a \(T_{c}\) of 300 K, while the true Curie temperature of the studied nanocomposite is lower and equal to 240 K. Moreover and as also shown by the open-circle symbols in Fig. 4(d), \(a\) can further be well fitted by \(a=a_{2}(T-T_{c})\) in Region V (for which the polarization has vanished but the toroidal moment still exists) and Region VI (for which the total polarization and toroidal moment have both been annihilated), with \(T_{c}\) being equal to 300 K too. Note that the fitted \(a_{2}\) is \(1.4\times 10^{6}\) V m/C K and that these different behaviors and analytical formulas of \(a\) for temperatures below _versus_ above 240 K are consistent with the general expression indicated below Eq. (3), namely that \(a=a_{0}+dG^{2}\)--with \(a_{0}\) being directly proportional to \((T-T_{c})\), as consistent with typical Landau theory, and with \(d\) being finite when both spontaneous polarization and toroidal moment exist and zero otherwise. Furthermore, Fig. 5(a) displays the value of the maximum polarization \(P_{\rm max}\) as a function of temperature both from MC simulations and from the Landau model using the MC-fitted parameters of \(a\), \(b\) and \(c\) in Eq. (3). One can clearly see that for all considered temperatures at \(\mathcal{E}_{\rm max}=\) 4.5 \(\times\)\(10^{8}\) V/m, the MC simulations and the Landau-model-obtained \(P_{\rm max}\) provide very similar results, which is quite remarkable once one realizes the simplicity of Eq. (3), on one hand, and the complexity of the investigated system on the other hand. As shown in Fig. 5(a), \(P_{\rm max}\) almost linearly and very slightly decreases with temperature in Regions I and II for temperatures between 5 and 125 K (values varying from 0.467 to 0.460 C/m\({}^{2}\)). In Regions III, IV, V and VI for temperatures ranging between 125 and 600 K, \(P_{\rm max}\) basically linearly decreases in a more significant fashion with temperature (the value of \(P_{\rm max}\) varies from 0.460 to 0.394 C/m\({}^{2}\)). To understand the energy density results in Fig. 4(c), we take advantage of Eq. (4). Figure 5(b) shows the energy density directly obtained from Eq. (4) at the maximal applied field of \(\mathcal{E}_{\rm max}=\) 4.5 \(\times\)\(10^{8}\) V/m, along with the energy density data computed from the MC simulations. Figure 5: (a) The maximum polarization \(P_{\rm max}\) obtained from MC simulations and Landau model [see Eq. (3)] as a function of temperature for \(\mathcal{E}_{\rm max}=\) 4.5 \(\times\)\(10^{8}\) V/m and fields applied along the [001] direction in the studied BaTiO\({}_{3}\)-SrTiO\({}_{3}\) nanocomposite. (b) Energy density obtained from MC simulations and Eq. (4) as a function of temperature for a maximal applied electric field \(\mathcal{E}_{\rm max}=\) 4.5 \(\times\)\(10^{8}\) V/m. (c) The total and decomposed energy densities \(\frac{1}{2}a(P_{\rm max}^{2}-P_{\rm r}^{2})\), \(\frac{1}{4}b(P_{\rm max}^{4}-P_{\rm r}^{4})\), and \(\frac{1}{6}c(P_{\rm max}^{6}-P_{\rm r}^{6})\) obtained from Eq. (4) as a function of temperature for \(\mathcal{E}_{\rm max}=\) 4.5 \(\times\)\(10^{8}\) V/m. (d)-(f) \(P_{\rm max}^{2}-P_{\rm r}^{2}\), \(P_{\rm max}^{4}-P_{\rm r}^{4}\), and \(P_{\rm max}^{6}-P_{\rm r}^{6}\) versus temperature for \(\mathcal{E}_{\rm max}=\) 4.5 \(\times\)\(10^{8}\) V/m, respectively.
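To make the decomposition of Eq. (4) explicit, the short sketch below evaluates the three terms using the fitted \(b\) and \(c\) quoted above together with the \(a\) value quoted for 240 K; the values of \(P_{\rm max}\) and \(P_{\rm r}\) used here are only approximate readings of the figures (assumptions), so the result should be taken as indicative.

```python
# Evaluating the three contributions to U in Eq. (4) with the fitted b and c quoted above;
# a is the value quoted at 240 K, while P_max and P_r are approximate figure readings (assumed).
a = -0.73e8               # V m/C, value quoted for 240 K
b = 30.9e8                # V m^5/C^3 (fitted)
c = 114.2e8               # V m^9/C^5 (fitted)
P_max, P_r = 0.447, 0.0   # C/m^2; approximate reading of Fig. 5(a), with P_r ~ 0 at 240 K

U1 = 0.5 * a * (P_max**2 - P_r**2)
U2 = 0.25 * b * (P_max**4 - P_r**4)
U3 = (1.0 / 6.0) * c * (P_max**6 - P_r**6)

for name, val in [("a-term", U1), ("b-term", U2), ("c-term", U3), ("total", U1 + U2 + U3)]:
    print(f"{name:>7}: {val / 1e6:6.1f} J/cm^3")   # total close to the ~38 J/cm^3 MC value at 240 K
```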
The Landau-model-obtained energy density agrees reasonably well with the MC-obtained energy density. Moreover and according to Eq. (4), the energy density is the sum of three terms, which are \(\frac{1}{2}a(P_{\text{max}}^{2}-P_{\text{r}}^{2}),\frac{1}{4}b(P_{\text{max}}^{4 }-P_{\text{r}}^{4})\), and \(\frac{1}{6}c(P_{\text{max}}^{6}-P_{\text{r}}^{6})\). The three contributions of energy density are shown in Fig. 5(c), while Figs. 5(d), (e) and (f) display the temperature dependency of \((P_{\text{max}}^{2}-P_{\text{r}}^{2})\), \((P_{\text{max}}^{4}-P_{\text{r}}^{4})\) and \((P_{\text{max}}^{6}-P_{\text{r}}^{6})\), respectively. The first contribution of the energy density thus relies on the product of the \(a\) parameter and \(P_{\text{max}}^{2}-P_{\text{r}}^{2}\), and only slightly depends on temperature with a nonlinear behavior in Regions I and II for temperature between 5 and 125 K (the energy density value associated with this first term ranges from \(-18.6\) to \(-17.7\) J/cm\({}^{3}\) within Phases I and II). This first energy density term of \(\frac{1}{2}a(P_{\text{max}}^{2}-P_{\text{r}}^{2})\) is almost constant in this temperature range because there is a compensation between the facts that (the negative) \(a\) decreases in magnitude and that \(P_{\text{max}}^{2}-P_{\text{r}}^{2}\) increases with temperature. On the other hand, such compensation does not occur anymore for temperatures from 125 to 240 K (from Phase III to Phase IV) due to the strong nonlinear increase of \(P_{\text{max}}^{2}-P_{\text{r}}^{2}\) as well as the more pronounced (nonlinear) decrease of the magnitude of \(a\). Consequently, the first energy density term, \(\frac{1}{2}a(P_{\text{max}}^{2}-P_{\text{r}}^{2})\), almost linearly increases with temperature between 125 and 240 K. In Phases V and VI (for temperatures between 240 and 600 K), \(a\) linearly increases with temperature (from \(-0.73\times 10^{8}\) to 4.16\(\times 10^{8}\) V m/C) faster than \(P_{\text{max}}^{2}-P_{\text{r}}^{2}\) decreases with temperature (from 0.201 to 0.153 C\({}^{2}\)/m\({}^{4}\)), hence resulting in \(\frac{1}{2}a(P_{\text{max}}^{2}-P_{\text{r}}^{2})\) increasing in a linear fashion with temperature. The second contribution of energy density, \(\frac{1}{4}b(P_{\text{max}}^{4}-P_{\text{r}}^{4})\), is only slightly dependent on temperature between 5 and 125 K (the values varying between 33.4 and 34.3 J/cm\({}^{3}\)) because \(P_{\text{max}}^{4}-P_{\text{r}}^{4}\) is basically constant (around 0.043 C\({}^{4}\)/m\({}^{8}\)) there and \(b\) is always a constant in our fitting. Above 125 K (from Region III to Region VI), \(\frac{1}{4}b(P_{\text{max}}^{4}-P_{\text{r}}^{4})\) linearly decreases with temperature up to 600 K (from 34.3 at 125 K to 18.2 J/cm\({}^{3}\) at 600 K) because \(P_{\text{max}}^{4}-P_{\text{r}}^{4}\) adopts such behavior (it decreases from 0.044 to 0.024 C\({}^{4}\)/m\({}^{8}\)). The third energy density, \(\frac{1}{6}c(P_{\text{max}}^{6}-P_{\text{r}}^{6})\), only very slightly decreases with temperature in Phases I and II, _i.e._ for temperatures ranging between 5 and 125 K, which arises from the weak decrease of \(P_{\text{max}}^{6}-P_{\text{r}}^{6}\) (from 0.0101 C\({}^{6}\)/m\({}^{12}\) at 5 K to 0.0097 C\({}^{6}\)/m\({}^{12}\) at 125 K) in these regions--since the \(c\) parameter is constant too. 
Furthermore, for temperatures ranging between 125 and 600 K (in Regions III, IV, V and VI ), \(\frac{1}{6}c(P_{\text{max}}^{6}-P_{\text{r}}^{6})\) rather strongly and continuously decreases with temperature up to 600 K (from 18.4 to 6.9 J/cm\({}^{3}\)) as a result of the significant decrease of \(P_{\text{max}}^{6}-P_{\text{r}}^{6}\) with temperature from 0.0097 to 0.0036 C\({}^{6}\)/m\({}^{12}\). Figure 5(c) further shows that the energy densities of \(\frac{1}{2}a(P_{\text{max}}^{2}-P_{\text{r}}^{2})\) and \(\frac{1}{6}c(P_{\text{max}}^{6}-P_{\text{r}}^{6})\) nearly cancel each other in Phases I and II for temperatures ranging between 5 and 125 K, implying that \(\frac{1}{4}b(P_{\text{max}}^{4}-P_{\text{r}}^{4})\) is the dominant contribution there--which also explains why the total energy density only very slightly depends on temperatures in these regions. In Phases III and IV, the first energy density \(\frac{1}{2}a(P_{\text{max}}^{2}-P_{\text{r}}^{2})\) increases with temperature while the second and third energy densities both decrease, which, once again, results in a total energy density that only weakly depends on temperature. On the other hand, in Phases V and VI, the total energy density is significantly enhanced with temperature, and in a nearly linear fashion, following the strong linear increase of \(\frac{1}{2}a(P_{\text{max}}^{2}-P_{\text{r}}^{2})\) which is counteracted by the smaller linear decrease of the second and third energy densities. Interestingly, the contributions in percentage of the total energy density can be temperature-dependent between Regions V and VI. As a matter of fact, the contributions of \(\frac{1}{2}a(P_{\text{max}}^{2}-P_{\text{r}}^{2})\), \(\frac{1}{4}b(P_{\text{max}}^{4}-P_{\text{r}}^{4})\) and \(\frac{1}{6}c(P_{\text{max}}^{6}-P_{\text{r}}^{6})\) to the total energy density at 300 K are 0%, 67%, and 33%, respectively (the zero value of the first energy term arises from the annihilation of \(a\) at 300 K). This is to be compared with the corresponding numbers of 56%, 32%, and 12%, respectively, at 600 K. ## IV Summary In conclusion, based on atomistic effective Hamiltonian scheme combined with Monte Carlo simulations, we investigated the energy storage properties in a BaTiO\({}_{3}\)-SrTiO\({}_{3}\) nanocomposite consisting of BaTiO\({}_{3}\) nanowires embedded in a SrTiO\({}_{3}\) matrix. We found that this nanocomposite system can exhibit large energy densities and an ideal 100% efficiency for three considered maximal applied electric fields. It is also found that the energy density-_versus_-temperature curve is nonlinear and only weakly dependent on temperature, for temperatures below 240 K (for which the equilibrium phases are polar). On the other hand, it becomes more linear and strongly temperature-dependent as the temperature increases from 240 to 600 K, when the system progressively loses its spontaneous polarization and then its spontaneous toroidal moment. Such unusual energy storage features are then interpreted via the development of a simple Landau model that reproduces the Monte Carlo simulation data and that also involves a coupling between polarization and toroidal moment. In particular, the energy density consists of three energy terms, namely \(\frac{1}{2}a(P_{\text{max}}^{2}-P_{\text{r}}^{2})\), \(\frac{1}{4}b(P_{\text{max}}^{4}-P_{\text{r}}^{4})\), and \(\frac{1}{6}c(P_{\text{max}}^{6}-P_{\text{r}}^{6})\), that adopt different behaviors in different structural phases. 
The proposed phenomenological model may be further put in use to search for, or analyze results of, other ferroelectric nanostructures with large energy density. This work is supported by the National Natural Science Foundation of China (Grant No. 11804138), Natural Science Basic Research Program of Shaanxi (Program No. 2023-JC-YB-017), Shaanxi Fundamental Science Research Project for Mathematics and Physics (Grant No. 22JSQ013), "Young Talent Support Plan" of Xi'an Jiaotong University (Grant No. WL6J004), and the Fundamental Research Funds for the Central Universities. L.B. acknowledges ARO Grant No. W911NF-21-1-0113 and the Vannevar Bush Faculty Fellowship Grant No. N00014-20-1-2834 from the Department of Defense. The HPC Platform of Xi'an Jiaotong University is also acknowledged.
2310.10049
FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models
Large Language Models (LLMs), such as ChatGPT, LLaMA, GLM, and PaLM, have exhibited remarkable performances across various tasks in recent years. However, LLMs face two main challenges in real-world applications. One challenge is that training LLMs consumes vast computing resources, preventing LLMs from being adopted by small and medium-sized enterprises with limited computing resources. Another is that training LLM requires a large amount of high-quality data, which are often scattered among enterprises. To address these challenges, we propose FATE-LLM, an industrial-grade federated learning framework for large language models. FATE-LLM (1) facilitates federated learning for large language models (coined FedLLM); (2) promotes efficient training of FedLLM using parameter-efficient fine-tuning methods; (3) protects the intellectual property of LLMs; (4) preserves data privacy during training and inference through privacy-preserving mechanisms. We release the code of FATE-LLM at https://github.com/FederatedAI/FATE-LLM to facilitate the research of FedLLM and enable a broad range of industrial applications.
Tao Fan, Yan Kang, Guoqiang Ma, Weijing Chen, Wenbin Wei, Lixin Fan, Qiang Yang
2023-10-16T04:17:13Z
http://arxiv.org/abs/2310.10049v1
# FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models ###### Abstract Large Language Models (LLMs), such as ChatGPT, LLaMA, GLM, and PaLM, have exhibited remarkable performances across various tasks in recent years. However, LLMs face two main challenges in real-world applications. One challenge is that training LLMs consumes vast computing resources, preventing LLMs from being adopted by small and medium-sized enterprises with limited computing resources. Another is that training LLMs requires a large amount of high-quality data, which are often scattered among enterprises. To address these challenges, we propose FATE-LLM, an industrial-grade federated learning framework for large language models. FATE-LLM (1) facilitates federated learning for large language models (coined FedLLM); (2) promotes efficient training of FedLLM using parameter-efficient fine-tuning methods; (3) protects the intellectual property of LLMs; (4) preserves data privacy during training and inference through privacy-preserving mechanisms. We release the code of FATE-LLM at [https://github.com/FederatedAI/FATE-LLM](https://github.com/FederatedAI/FATE-LLM) to facilitate the research of FedLLM and enable a broad range of industrial applications. ## 1 Introduction In the past few years, the advent of large language models (LLMs) [15, 16] has been reshaping the field of artificial intelligence. In particular, the most advanced LLMs, such as ChatGPT [1], GPT-4 [1], and PaLM [11], which boast billions of parameters, have gained considerable attention due to their remarkable performance in a variety of natural language generation tasks. Many open-sourced LLMs with high performance have been released, and the public's enthusiasm for research and application of LLMs has been stimulated. However, grounding LLMs in real-world applications faces many challenges. The two main challenges are (i) training LLMs consumes vast computing resources, which prevents LLMs from being adopted by small and medium-sized companies with limited computing resources; (ii) training LLMs requires a large amount of public data, which may run out soon [20]. Federated learning (FL) [13][15], a privacy-preserving collaborative machine learning paradigm, is a promising approach to deal with these two challenges. For one thing, FL enables many companies with different computing resources to collaboratively train powerful machine learning models such that the computational burden of training large models can be alleviated. For another, massive high-quality data are scattered among companies that are typically isolated from each other, and FL can exploit these data silos in a privacy-preserving way. In this work, we propose FATE-LLM, built upon FATE (Federated AI Technology Enabler) [14], to facilitate federated learning for large language models. More specifically, FATE-LLM (1) enables federated learning for both homogeneous and heterogeneous large language models (FedLLM); (2) promotes efficient training of FedLLM through parameter-efficient fine-tuning methods, such as LoRA [12] and P-Tuning-v2 [14]; (3) protects the intellectual property of LLMs using a federated intellectual property protection approach [10]; (4) protects data privacy during training and inference through privacy-preserving mechanisms. We release the code of FATE-LLM at [https://github.com/FederatedAI/FATE-LLM](https://github.com/FederatedAI/FATE-LLM) to promote the research of FedLLM and enable a broad range of industrial applications.
Figure 1: **Large Language Models are federated on FATE.** ## 2 Related Work In this section, we briefly review related work regarding large language models and federated learning. ### Large Language Models The advancements in large language models (LLMs) have led to significant advances in a variety of NLP tasks. A prominent example of an LLM application is ChatGPT [14]. ChatGPT is fine-tuned from the generative pretrained transformer GPT-3.5, which was trained on a blend of text and code. ChatGPT applies reinforcement learning from human feedback (RLHF), which has become a promising way to align LLMs with human intent. LLMs are generally divided into two categories: encoder-decoder or encoder-only large language models and decoder-only large language models [21]. BERT [15] is representative of encoder-only large language models, while GPTs [16] are representative of decoder-only large language models. At the early stage of LLM development, decoder-only LLMs were not as popular as encoder-only and encoder-decoder LLMs. However, after 2021, with the introduction of GPT-3 [17], decoder-only LLMs experienced a significant boom. At the same time, after the initial explosion brought about by BERT [15], encoder-only LLMs gradually began to fade away. Recently, many decoder-only LLMs have been released, such as LLaMA [20], OPT [22], PaLM [18], and BLOOM [23]. These LLMs demonstrated reasonable few-/zero-shot performance via prompting and in-context learning. ### Federated Learning Federated learning (FL) [19][21, 20] is a distributed machine learning paradigm that enables clients (devices or organizations) to train a machine learning model collaboratively without exposing clients' data. Unlike traditional centralized machine learning techniques, data are kept locally rather than being gathered in a central server, which mitigates many of the systemic privacy risks and costs [10]. Hence, FL is a promising approach to deal with this data isolation challenge. To enhance data privacy, federated learning uses a variety of secure computing protocols. The most popular protocols are Homomorphic Encryption (HE) [12], Multi-Party Computation (MPC) [2][14], and Differential Privacy (DP) [15]. In recent years, the literature has presented various algorithms in the FL setting. [1] proposed vertical logistic regression (VLR) using homomorphic encryption (HE) to protect data privacy. [2] further enhanced the privacy-preserving capability of VLR by employing a hybrid strategy combining HE and secret sharing (SS). [2] proposed SecureBoost, a VFL version of XGBoost that leverages HE to protect the parameters exchanged among parties. [19] applied a semi-supervised learning method to estimate missing features and labels for further training. [19] proposed Secure Aggregation to enhance data protection. ## 3 FATE-LLM System Design We introduce the FATE-LLM system, including its components, architecture, and roadmap. ### Overview of FATE-LLM system FATE-LLM1 was open-sourced as a submodule of FATE, and it contains three components: Communication-Efficient Hub, FedLLM Model Hub, and FedLLM Privacy Hub. Figure 2 overviews the FATE-LLM system. Footnote 1: FATE-LLM was open-sourced in April 2023 in the FATE Community and is running on the infrastructure of FATE.
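Secure aggregation, mentioned above and reused by several of the hubs described next, can be illustrated with pairwise additive masking: every pair of clients agrees on a random mask that one adds and the other subtracts, so the server only ever sees masked updates whose sum equals the sum of the true updates. The sketch below is a schematic illustration of this idea only, not FATE's actual SecureAgg protocol or API; all names and sizes are assumptions.

```python
# Schematic pairwise-masking secure aggregation: the server recovers the sum of the
# clients' (e.g., adapter) updates without seeing any individual update in the clear.
import numpy as np

rng = np.random.default_rng(42)
n_clients, dim = 3, 8
updates = [rng.normal(size=dim) for _ in range(n_clients)]   # toy local PEFT updates

# Every pair (i, j) with i < j shares a random mask m_ij; client i adds it, client j subtracts it.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    m = updates[i].copy()
    for (a, b), mask in masks.items():
        if a == i:
            m += mask
        elif b == i:
            m -= mask
    masked.append(m)            # this masked vector is all the server receives from client i

server_sum = sum(masked)         # the pairwise masks cancel in the sum
assert np.allclose(server_sum, sum(updates))
fed_avg = server_sum / n_clients
print("aggregated update:", np.round(fed_avg, 3))
```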
**The Communication-Efficient Hub** integrates a variety of communication-efficient methods into FedLLM to reduce the communication cost for training LLMs, including parameter-efficient fine-tuning (PEFT) [22] methods (e.g., Adapter Tuning [13] and Prompt Tuning [20]), Knowledge Distillation (KD) [21], and Model Quantization [22]. More specifically, [22] proposed PETuning methods that can reduce the communication overhead by \(1\sim 2\) orders of magnitude under the FL setting compared with full fine-tuning. They also found that PETuning methods can bring down local model adaptation costs for clients in FL systems. These results imply that FL clients (e.g., devices) with limited storage capacity can benefit from PETuning methods since these methods enable sharing an LLM across different tasks and maintaining a few parameters for each task, reducing the storage requirement. **The FedLLM Model Hub** integrates a variety of mainstream LLMs, including BERT [15], GPTs [16], ChatGLM-6B [15], LLaMA [20], BLOOM [23], and Baichuan [21]. These LLMs have different architectures and sizes and can be applied in different scenarios. **The FedLLM Trainer Hub** offers a variety of training methods for different federated LLM learning scenarios, including FedHomoLLM, FedHeteroLLM, FedCoLLM, and FedOST. In FL, clients may have sufficient computing resources to train LLMs of the same size. However, in many heterogeneous scenarios, clients are likely to have quite different computing or data resources so that they can afford to train LLMs of quite different sizes. FATE-LLM offers Federated Homogeneous LLMs (FedHomoLLM) and Federated Heterogeneous LLMs (FedHeteroLLM) to support both scenarios. Figure 2: **Components of the FATE-LLM system.** FedHomoLLM leverages PEFT techniques to train clients' LLMs with the same architecture and size (illustrated in Figure 3(a)). FedHeteroLLM leverages knowledge distillation (KD) [2] and PEFT techniques to deal with the FL scenario where FL clients own LLMs of different sizes (illustrated in Figure 3(b)). Specifically, each client in FedHeteroLLM leverages KD to learn a mentee model from its local pre-trained LLM. Then, all clients send adaptor or prompt parameters to the server for secure aggregation. Next, the server dispatches the aggregated model to all clients for the next round of training. Initializing clients with an LLM distilled from a larger one hosted by the server enables federated LLMs to obtain a better global model more efficiently than starting clients' models from random initialization [23]. On the other hand, the domain knowledge captured by clients' local LLMs allows the server's larger LLM to continue to evolve. FATE offers the FedCoLLM (Federated Co-tuning LLM) framework to co-evolve the LLMs of the server and clients. Figure 3(c) illustrates the FedCoLLM. Specifically, in FedCoLLM, each client having a LLaMA-7B model conducts federated learning applying PEFT techniques. On the server side, the server distills the knowledge between its LLaMA-65B model and the aggregated LLaMA-7B model to co-evolve models on both sides. [17] proposed Offsite-Tuning, a privacy-preserving and efficient transfer learning framework that can adapt an LLM to downstream tasks without access to the LLM's full weights. More specifically, in Offsite-Tuning, the server sends two adaptors and an emulator of its LLM to a client, which in turn fine-tunes the adaptors with the help of the frozen emulator using its domain-specific data.
Next, the client sends the adaptors back to the server, which then plugs them into its LLM to form an adapted LLM for the client. Offsite-Tuning has the potential to protect both the client's data privacy and the server's model property. FATE-LLM offers FedOST (Federated OffSite-Tuning), which extends the Offsite-Tuning framework to the federated learning setting (see Figure 3(d)). In FedOST, multiple clients collaboratively train two global adaptors that adapt the LLM to all clients. FedOST brings two additional benefits over Offsite-Tuning: (1) FedOST enhances data privacy by adopting secure aggregation, and (2) it adapts an LLM to clients that did not even participate in the FL, because of the generalization of the FL global model. Figure 3: FATE-LLM Trainers. FATE-LLM offers four trainers for four different federated LLM learning scenarios. **The FedLLM Privacy Hub** integrates various privacy and security protection technologies, including federated intellectual property protection (FedIPR) [11], secure aggregation (SecureAgg) [16], Differential Privacy (DP), and Multi-Party Computation (MPC), to protect data privacy and model security. Specifically, FedIPR [11] proposed a federated deep neural network ownership verification scheme that enables private watermarks to be embedded into private DNN models during FL training (see Figure 4) such that each client can independently verify the existence of embedded watermarks and claim its ownership of the federated model without disclosing private training data and watermark information. FedIPR can be applied to FedLLM to verify the IP ownership of the federated LLMs. SecureAgg, DP, and MPC can be applied to FedLLM during training and fine-tuning to protect clients' data privacy. ### Architecture of FATE-LLM FATE-LLM runs on the infrastructure of FATE, which consists of FATE-Flow, Eggroll, and OSX as the main components. FATE-Flow is the task scheduling engine for the multi-party federated learning end-to-end pipeline, Eggroll is the distributed computing engine, and OSX (open site exchange) is the multi-party federated communication engine. The FATE-LLM Algorithm Hub and the LLM Optim Lib Hub are tailored to perform FedLLM. The FATE-LLM Algorithm Hub includes the Communication-Efficient Hub, FedLLM Model Hub, and FedLLM Privacy Hub (see Figure 2). The LLM Optim Lib Hub includes DeepSpeed and Megatron-LM. As of June 2023, FATE has integrated DeepSpeed into Eggroll, which can manage the GPU cluster and dispatch DeepSpeed LLM tasks. Figure 5 shows the architecture of FATE-LLM. ### Roadmap of FATE-LLM We present the roadmap of FATE-LLM in Figure 6. As of June 2023, three versions of FATE-LLM have been released: FATE-LLM 1.0, FATE-LLM 1.1, and FATE-LLM 1.2. These versions integrate BERT, GPT-2, ChatGLM-6B, and LLaMA, successively, and adopt FedIPR and privacy-preserving techniques to protect data privacy and model ownership. ## 4 Experiments We conduct experiments on the scenario in which each client owns a ChatGLM-6B [15] model and all clients want to fine-tune their models collaboratively through federated learning. Since fine-tuning all parameters of ChatGLM-6B involves huge computational and communication costs, all clients leverage a PETuning method to fine-tune only a small portion of the ChatGLM-6B parameters through federated learning. We leverage our FedLLM modules to conduct these experiments using both _LoRA_ [11] and _P-Tuning-v2_ [11]. Figure 7 illustrates the scenario on which we conduct our experiments. 
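As a minimal illustration of the FedHomoLLM-style setup used in these experiments, the sketch below fine-tunes only small LoRA-like adapter matrices on each client and lets the server average just those adapter parameters each round. It is a schematic with made-up layer sizes and a plain FedAvg step; it is not the FATE-LLM API, and a real run would wrap ChatGLM-6B with a PEFT library and DeepSpeed as described in the text.

```python
import numpy as np

D_IN, D_OUT, RANK = 64, 64, 4                     # toy layer; a real LLM layer is much larger
W_frozen = np.random.randn(D_OUT, D_IN) * 0.02    # pretrained weight, never communicated

def init_adapter(seed):
    rng = np.random.default_rng(seed)
    return {"A": rng.normal(scale=0.01, size=(RANK, D_IN)),  # LoRA down-projection
            "B": np.zeros((D_OUT, RANK))}                    # LoRA up-projection (starts at 0)

def local_update(adapter, x, y, lr=0.1, steps=20):
    """One client's local fine-tuning of its adapter on private (x, y) data."""
    A, B = adapter["A"], adapter["B"]
    for _ in range(steps):
        pred = (W_frozen + B @ A) @ x             # adapted forward pass
        err = pred - y
        grad_B = err @ (A @ x).T                  # squared-error gradients w.r.t. B and A
        grad_A = B.T @ err @ x.T
        B, A = B - lr * grad_B, A - lr * grad_A
    return {"A": A, "B": B}

def fed_round(adapters, data):
    locals_ = [local_update(ad, *d) for ad, d in zip(adapters, data)]
    avg = {k: np.mean([ad[k] for ad in locals_], axis=0) for k in ("A", "B")}  # server FedAvg
    return [dict(avg) for _ in adapters]          # broadcast aggregated adapter to all clients

clients = 2
adapters = [init_adapter(i) for i in range(clients)]
data = [(np.random.randn(D_IN, 8), np.random.randn(D_OUT, 8)) for _ in range(clients)]
for _ in range(5):
    adapters = fed_round(adapters, data)
print("adapter parameters exchanged per round:", sum(v.size for v in adapters[0].values()))
```

Only the `A` and `B` matrices ever leave a client, which is the source of the large communication savings discussed in the experiments.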
### Experimental Setup We detail the experimental setup, including the dataset, FL setting, and baselines. **Dataset and setting**. We conduct experiments on AdvertiseGen [11], a dataset for advertising text generation. We simulate the FL setting with 2 clients and randomly split the AdvertiseGen dataset such that each client has 57K samples. Each client is assigned 8 NVIDIA V100 GPUs and trained with DeepSpeed. We set the number of FL training epochs to 5 and run the experiments in a LAN network environment. **Baselines**. We adopt two types of baselines. One is _centralized_, in which the data of all clients are centralized to conduct fine-tuning (either LoRA or P-Tuning-v2) on a ChatGLM-6B model. The other is that each client uses only its local data to fine-tune its local ChatGLM-6B model. **Evaluation metrics**. We adopt Rouge-1, Rouge-2, Rouge-\(l\) [11] and BLEU-4 [12] to evaluate the performance of fine-tuned LLMs. ### Experiment Results **Model Performance** The experimental results for FedLLM using LoRA and P-Tuning-v2 are reported in Table 1 and Table 2, respectively, which show that LoRA Federated and P-Tuning-v2 Federated generally outperform their individual-client counterparts across all performance metrics, demonstrating that federated learning helps enhance the fine-tuning performance for each client. From Table 1 and Table 2, we also observe that the performance of LoRA and P-Tuning-v2 federated fine-tuning is generally worse than that of their centralized counterparts across all performance metrics, indicating that there is room to improve federated fine-tuning methods. ### Communication Cost We investigate the communication cost of FedLLM using LoRA and P-Tuning-v2 in terms of the size of the parameters to be fine-tuned. Table 3 reports the results: FedLLM using LoRA incurs 0.058% of the communication cost of FedLLM fine-tuning all parameters, while FedLLM using P-Tuning-v2 accounts for 0.475% of the communication cost of FedLLM fine-tuning all parameters. ## 5 Conclusions and Future Work We proposed FATE-LLM, an industrial-grade federated learning framework for large language models (FedLLM). As open-source software, FATE-LLM encourages collaboration among the research and industry communities and expects to receive increasing feedback on its use. In the future, we plan to consider the following research directions: (1) reconciling LLMs of different model architectures during FL fine-tuning; (2) fine-tuning the private LLM of one party using the private data of another party without compromising data privacy and model ownership; (3) protecting the privacy of user prompts efficiently in the inference stage; (4) applying FedLLM to vertical federated learning [11].
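To see where a ratio like the 0.058% in the communication-cost analysis comes from, the following back-of-the-envelope sketch compares the number of parameters exchanged per round under LoRA-style adapters versus full fine-tuning. The 6B total size matches the scale of ChatGLM-6B; the layer count, hidden size, number of adapted projections, and LoRA rank are illustrative assumptions of ours, so the result is only meant to land in the same ballpark as Table 3, not to reproduce it.

```python
# Rough communication-cost comparison: LoRA adapters vs. full fine-tuning (assumed sizes).
TOTAL_PARAMS = 6.2e9        # ChatGLM-6B scale (approximate)
NUM_LAYERS   = 28           # assumed transformer depth
HIDDEN       = 4096         # assumed hidden size
LORA_RANK    = 8            # assumed LoRA rank
ADAPTED_MATRICES_PER_LAYER = 2   # e.g., query and value projections

# Each adapted weight gets two low-rank factors of shape (hidden x rank).
lora_params = NUM_LAYERS * ADAPTED_MATRICES_PER_LAYER * 2 * HIDDEN * LORA_RANK

print(f"LoRA parameters per round : {lora_params:,.0f}")
print(f"Full fine-tuning per round: {TOTAL_PARAMS:,.0f}")
print(f"Communication ratio       : {100 * lora_params / TOTAL_PARAMS:.3f}%")
# With these assumptions the ratio is ~0.06%, the same order of magnitude as the
# 0.058% reported for FedLLM with LoRA in Table 3.
```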
2307.02082
Infrared spectra of solid indene pure and in water ice. Implications for observed IR absorptions in TMC-1
Experimental and theoretical infrared spectra, between 4000-500 cm$^{-1}$ (2.5-20 microns), and infrared band strengths of two solid phases of indene, amorphous and crystalline, are given for the first time. The samples were generated via vapor deposition under high vacuum conditions on a cold surface. Density functional theory was employed for the calculations of the IR spectra. In the absence of previous information, a monoclinic symmetry is suggested for the theoretical crystalline phase of indene, based on the comparison of the calculated and experimental IR spectra. Assignments, based on the calculations, are given for the main indene IR absorptions. The infrared spectra of highly diluted mixtures of indene in amorphous solid water at 10 K are also provided, evidencing that the indene spectrum is not much altered by the water ice environment. These data are expected to be useful for the search for this species in the solid phase in astrophysical environments with the JWST. With the band strengths obtained in this work, and applying a simple literature model, we find that indene could represent at most 2-5 percent of the intensity of a weak absorption feature at 3.3 microns recently reported for Elias 16. A column density of (1.5 - 0.6) x 10$^{16}$ cm$^{-2}$ is estimated for indene in the ice mantles of TMC-1. It would correspond to approx. (2 - 0.8) x 10$^{-2}$ of cosmic carbon, which is probably too high for a single small hydrocarbon.
Belén Maté, Isabel Tanarro, Vicente Timón, José Cernicharo, Victor J. Herrero
2023-07-05T07:44:57Z
http://arxiv.org/abs/2307.02082v1
Infrared spectra of solid indene pure and in water ice. Implications for observed IR absorptions in TMC-1. ###### Abstract Experimental and theoretical infrared spectra; between 4000-500 cm-1 (2.5-20 \(\upmu\)m), and infrared band strengths of two solid phases of indene, amorphous and crystalline, are given for the first time. The samples were generated via vapor deposition under high vacuum conditions on a cold surface. Density functional theory was employed for the calculations of the IR spectra. Lacking of previous information, a monoclinic symmetry is suggested for the theoretical crystalline phase of indene, based on the comparison of the calculated and experimental IR spectra. Assignments, based on the calculations, are given for the main indene IR absorptions. The infrared spectra of highly diluted mixtures of indene in amorphous solid water at 10 K are also provided, evidencing that the indene spectrum is not much altered by the water ice environment. These data are expected to be useful for the search of this species in the solid phase in astrophysical environments with the JWST. With the band strengths obtained in this work, and applying a simple literature model, we find that indene could represent at most 2-5% of the intensity of a weak absorption feature at 3.3 \(\upmu\)m recently reported for Elias 16. A column density of (1.5 -0.6) \(\times\) 10\({}^{16}\) cm-2 is estimated for indene in the ice mantles of TMC-1. It would correspond to \(\approx\) (2 - 0.8) \(\times\) 10\({}^{2}\) of cosmic carbon, which is probably too high for a single small hydrocarbon. ## 1 Introduction In the mid-eighties, Leger et Puget (1984) and Allamandola et al. (1985) proposed that based on the IR fluorescence properties of laboratory polycyclic aromatic hydrocarbons (PAHs), some related species were the likely carriers of the unidentified infrared emission (UIE) features (Willner et al. 1979), found in interstellar regions illuminated by UV photons, leading to the so called PAH hypothesis. The PAH hypothesis (Tielens 2008) has been widely, but not universally, accepted and a long-standing criticism (Zhang and Kwok 2015) was the non-detection of individual PAH molecules in the interstellar medium (ISM). The detection of individual pure PAHs is far from trivial. The identification of individual molecules in the ISM is mostly achieved through radiofrequency measurements of specific rotational transitions, and these measurements are very difficult for PAHs because they usually have negligible or very small dipole moments coupled to low fractional individual abundances. In 2021, the first PAH molecule, indene, was finally detected in the cold pre-stellar core TMC-1 (Cernicharo et al. 2021, Bukhardt et al. et al. 2021). Indene is a bicyclic molecule composed of a six- and a five-membered ring and has a small but appreciable (\(\approx\) 0.6-0.7 D) dipole moment (Caminati 1993, Burkhardt et al. 2021). Relatively high abundances, for such a complex molecule, of gas-phase indene (1-1.6 \(\times\) 10\({}^{9}\) with respect to H\({}_{2}\)) were derived from the observations. It has been traditionally assumed that PAHs should be largely formed in the envelopes of carbon rich asymptotic giant branch (AGB) stars, starting from simple molecular species like C\({}_{2}\)H\({}_{2}\) (Frenklach & Feigelson, 1989; Cherchneff, 2012), or at later stages of stellar evolution (Kwok 2004), like protoplanetary nebulae (PPN), where benzene has been detected (Cernicharo et al. 2001, Malek et al. 2011). 
In these environments, high temperatures, densities, or UV fields favor a chemistry conducting to the formation of aromatic structures (Woods et al. 2003, Cernicharo 2004, Martinez et al. 2020, Santoro et al. 2020). However, the first detection of a specific gas-phase pure PAH molecule has taken place in a cold cloud. Besides indene (C\({}_{9}\)H\({}_{8}\)), the cyano derivatives of benzene (C\({}_{6}\)H\({}_{5}\)CN), indene (C\({}_{9}\)H\({}_{7}\)CN), and naphthalene (C\({}_{10}\)H\({}_{7}\)CN) were also found in TMC-1 (McGuire 2018, McGuire 2021) with estimated abundances in the 10\({}^{\text{-}}\)10\({}^{\text{-}}\)10\({}^{\text{-}}\)11 range with respect to H\({}_{2}\). The nitrile groups in these molecules largely increase their dipole moments with respect to their pure hydrocarbon counterparts and makes them thus much easier to detect. It has been suggested that they could be taken as proxies for the non-functionalized molecules. Estimates based on the observed C\({}_{9}\)H\({}_{5}\)/C\({}_{9}\)H\({}_{7}\)CN quotient and on a chemical model, indicate that the pure hydrocarbons should be \(\approx\) 20-40 times more abundant than their cyano derivatives (Sita et al. 2022). In this context it is worth noting that benzyne (c-C\({}_{6}\)H\({}_{4}\)) and cyclopentadiene (c-C\({}_{5}\)H\({}_{6}\)) have been also found in the same cold dark cloud (Cernicharo et al., 2021, 2021). Volatile molecules like indene should freeze to a large extent on the ice mantles of dust grains in TMC-1. In fact, the presence of PAHs in the ice mantles of cold clouds has been conjectured since the start of the PAH hypothesis and possible signatures of PAHs in the IR ice absorption spectra have been sought. In the nineties, a weak band at about 3.25 \(\upmu\)m, detected in some clouds, was tentatively attributed to the CH stretch vibration of polyaromatic molecules (Sellgren 1995, Brooke 1999), and in a recent work, Chiar et al. (2021) have suggested that CH stretching vibrations of CH\({}_{2}\) groups in hydrogenated PAHs could account in part for the widely observed feature at 3.47 \(\upmu\)m which is mostly attributed to ammonia hydrates (Dartois & D\({}^{\prime}\) Hendecourt 2001, Dartois et al. 2002, Boogert el al. 2015). It was also surmised that other characteristic bands of PAHs like the CC stretching vibrations at 6.2 \(\upmu\)m (Keane et al. 2001) and the CH out of plane bending vibrations at 11, 2 \(\upmu\)m (Bregman 2000) could also contribute somewhat to other ice absorption features. However, the suggested contribution of PAHs to specific ice absorption bands remains highly speculative. The mentioned IR features appear as very weak substructures of broader absorptions and are much affected by uncertainties in baseline subtraction, and by telluric absorptions in ground-based observations. In addition, some of these substructures can have a contribution from ice profile modifications or from other ice species (Boogert et al. 2015). Using laboratory data and theoretical calculations for the IR band strengths of representative polyaromatic species, the maximum amount of cosmic carbon locked up in PAHs compatible with an attribution the absorption features in the ices of dense clouds was estimated at \(\simeq\) 5-15% (Bowman et al. 2011, Hardegree-Ullman et al. 2014, Chiar et al. 2021), which is consistent with the abundances derived for PAHs in photon dominated regions of the ISM (Tielens 2008) where the aromatic infrared emission bands are seen. 
In general, it was assumed that the hypothetic polyaromatic hydrocarbons in the ice mantles of dense clouds should be mixtures of large PAHs (\(\simeq\) 50 C atoms or more), able to withstand the intense UV fields in the transit from post AGB regions to dark clouds. The abundant presence of small gas-phase PAHs like indene and, presumably, naphthalene inside the cold dense cloud TMC-1 (Cernicharo et al. 2021, Burkhardt et al. 2021) is thus puzzling, since they are too small to survive the UV field outside the clouds, which strongly suggests that these molecules have been formed in situ. However, current models, including gas-phase and grain surface reactions, underpredict the observed indene abundances in TMC-1 by three orders of magnitude (Doddipatla et al. 2021, Burkhardt et al. 2021). The high relative abundance of gas-phase indene in TMC-1 raises the question about the cycling of this molecule between the gas and the solid. In the interior of dense clouds, in the absence of significant thermal of photo desorption, cosmic ray (CR) sputtering could provide a way to balance the freezing and accretion of indene on solid grains. This possibility has been recently addressed by Dartois et al. (2022) who used laboratory data on ion sputtering of PAHs to estimate the gas-solid partition of naphthalene in a dense cloud. They concluded that the CR sputtering mechanism would need fractional abundances of 10\({}^{3}\)-10\({}^{4}\) of naphthalene molecules in the ice mantles to justify the gas-phase abundances estimated for TMC-1. Progress in the understanding of the high abundance of indene in TMC-1 requires further modeling and laboratory data. Experimental IR spectra of indene are available in the literature for the liquid and for the vapor (Klots 1995), and theoretical calculations, showing good agreement with the measurements, have been published for the isolated molecule (Klots 1995, El-Azhary 1999), but as far as we know no IR spectra of indene ices have been reported. The present work is focused on the IR spectroscopy of solid phases of indene at low temperatures. IR spectra of vapor deposited amorphous and crystalline indene and of indene mixtures with water ice have been recorded. Solid structures and vibrational spectra have been calculated using density functional theory (DFT) and the results of the calculations have been used for the assignment of the measured IR spectra. Experimental and theoretical band strengths have also been determined and the astrophysical implications of the results for the IR spectra of ices in TMC-1 are discussed. ## 2 Experimental section The experimental set-up has been described previously (Mate et al. 2021). It consists in a high-vacuum chamber, with a background pressure in the 10\({}^{8}\) mbar range, provided with a closed-cycle He cryostat and coupled to a FTIR spectrometer (Bruker Vertex70) through KBr windows. Indene and water vapors are introduced in the chamber through independent lines with needle valves, and condensed on a cold Si substrate placed in thermal contact with the cold head of the He cryostat. Indene is a commercial liquid (\(\geq\)99%, Sigma-Aldrich) with about 1 mbar vapor pressure at 20\({}^{\circ}\)C. In our experimental setup, due to the small fluxes provided by the needle valves, it is convenient to increase the indene vapor pressure, by immersing a Pyrex tube containing the liquid in a silicone oil bath at 60\({}^{\circ}\)C. In this way the indene pressure s increased up to about 14 mbar. 
To guarantee the stability of water vapor pressure during the deposit, a water flask is placed in a thermal bath at 30\({}^{\circ}\)C. Both lines are heated to avoid condensation. Pure indene layers were generated at 10 K and at 160 K, with a deposition vapor pressure in the 10\({}^{\circ}\) mbar range. Ice mixtures were generated at 10 K with vapor pressure in the 10\({}^{\circ}\) mbar range. Ice layers of thickness of several tens of nm were grown. A quadrupole mass spectrometer (Hiden200) (QMS) directly connected to the HV chamber allows monitoring gas phase in the chamber during the experiments. A Faraday cup was used as detector. Absolute gas-phase densities in the deposition chamber were estimated by calibrating the quadrupole mass spectrometer as described in Appendix A1. Normal incidence transmission spectra with a 4 cm-1 resolution and 100 scans accumulation were recorded with the Vertex 70 FTIR spectrometer and a liquid nitrogen cooled mercury cadmium telluride (MCT) detector. ## 3 Theoretical calculations The structure of solid crystalline indene is not known from bibliographic data. In this work, in a first attempt, it was guessed the indene crystal symmetry to be orthorhombic, in analogy to that of the crystal of i3-(phenylthio)indene, with molecular formula C15H12S (Curnow et al. (2012)). To build the initial unit cell the structure of C15H12S molecular crystal was taken, transforming the molecular unit C15H12S to C8H9 molecule by removing the phenyl group and the S atom. However, the calculated IR spectra of the relaxed solid obtained in this way do not show a clear resemblance with the experimental spectra of crystalline indene. In a second attempt, a monoclinic symmetry was postulated. This idea was inspired from the behavior of the molecular crystal of 1-Bromo-2,3,5,6-tetramethylbenzene, also known as bromodurene, molecular formula C10H13Br, which, depending on temperature, crystallize in an orthorhombic or in a monoclinic system (Hamdouni et al 2019). The IR theoretical spectra obtained in this case is in good agreement with the experimental one, and this was taken as proof of the goodness of the hypothesis. Geometry optimization was carried out with the CASTEP code (Clark et al 2005) using the Broyden-Fletcher-Goldfarb-Shanno optimization scheme (Payne et al. 1992) under density functional theory (DFT)-based methodology. The exchange-correlation energy term was treated using the generalized gradient approximation (GGA) with revised Perdew-Berke-Ernzerhof (rPBE) settings (Zhang et al. 1998). Since indene molecules are packed together inside the unit cell, probably via van der Waals forces and/or weak and moderate hydrogen bonds, to better consider those interactions the Tkatchenko-Scheffler (TS) dispersion correction was included in the calculations (Tkachenko et al. 2009). OTFG pseudopotentials (Pickard et al. 2000) with a cut-off 720 eV were employed. The convergence criteria were set at 1 \(\times\) 10\({}^{5}\) eV/atom for the energy, 0.05 eV/A for the interatomic forces, maximum stress 0.1 GPa and 0.002 A for the displacements. Atomic forces and charges were evaluated at the minimum in the potential energy surface to predict the harmonic vibrational infrared spectrum by means of density functional perturbation theory (Baroni et al. 2006). On the other hand, no information about the structure of amorphous indene is available, in particular, on its density. 
In order to simulate an amorphous indene solid, a unit cell was constructed containing four molecules with an initial density of 1.3 g/cm\({}^{3}\), chosen to match that of the liquid phase of indene. Amorphous indene structures were generated through fast force-field molecular dynamics (MD) simulations using the Amorphous Cell modulus of the Materials Studio (MS) package. The classical MD simulations implemented were then optimized by means of periodic quantum mechanical calculations, allowing the system to modifying not only the molecular arrangement but also the unit cell size, and therefore its density. Its IR spectra were calculated with the same methodology described for the crystal. ## 4 Results and discussion. ### Amorphous and crystalline indene and phase transition. Figure 1 shows the mid IR spectra of amorphous and crystalline indene layers grown by vapor deposition at 10 K and at 160 K, respectively. Important differences in the IR spectra of the two phases are evidenced in the CH stretching region around 3000 cm-1 (top panel), where frequency shifts are appreciable and the relative intensities of the peaks change considerably. At lower wavenumbers there are also substantial intensity variations upon crystallization, but no significant band shape changes. A list of the main IR peaks of indene is given in Table 1, both for amorphous and crystalline phases. The peak positions of some minor features discernible in the spectra are not given in the table. An assignment of the strongest bands to vibrational modes of the indene molecule has been made, guided by the calculations performed in this work and presented in section 4. The mode assignments will be discussed in that section. Although there are works that present the IR spectra of the indene molecule and the liquid, providing IR band assignment (Klots1995), as far as we know, the IR spectra of the solid phases of indene are presented here for the first time. Figure 1: Infrared spectra of amorphous (blue) and crystalline (red) indene layers grown by vapor deposition at 10 K and 160 K, respectively. The estimated thickness, assuming a density of 1.3 g/cm3, is 20 nm per each side of the Si substrate. \begin{table} \begin{tabular}{|c|c|} \hline Peak position (cm\({}^{-1}\)) & Assignment \\ \hline \end{tabular} \end{table} Table 1: Peak positions of the main absorptions of amorphous and crystalline indene solids at 10 K and 160 K, respectively. The mode assignment is based on the theoretical calculations performed in this work, presented in section 4.3. When the wavenumber corresponds to a shoulder of a stronger band it is indicated as “sh”, and when the peak corresponds to a broad band it has been indicated with “br”. v\({}_{a}\) indicate asymmetric stretching modes, \(\delta\) bending modes. 
\begin{tabular}{|l|l|l|l|} \hline \multicolumn{1}{|c|}{amorphous} & \multicolumn{1}{|c|}{crystalline} & \multicolumn{1}{|c|}{level} & \multicolumn{1}{|c|}{mode} \\ \hline 3139,3111,3084 & 3139, 3106,3086 & L1 & \multicolumn{1}{|c|}{v\({}_{\mathrm{a}}\) CH} \\ \hline 3069 & 3069 & & \multicolumn{1}{|c|}{CH hexa- ring + CH penta- ring} \\ \hline 3044sh,3026,3016 & 3044sh,3028,3016 & & \\ \hline 2909, 2891,2887 & 2909, 2886 & L2 & \multicolumn{1}{|c|}{v\({}_{\mathrm{a}}\) CH\({}_{2}\)} \\ \hline 2770 & 2771 & & \\ \hline 1688 & 1708, 1690 & & \\ \hline 1633 br & 1637, 1630 & L3 & \multicolumn{1}{|c|}{v C=C} \\ \hline 1610 & 1609 & & \\ \hline 1587 & 1587 & & \\ \hline 1552 & 1549 & & \\ \hline 1470 sh & 1471 sh & & \\ \hline 1459 & 1458 & L4 & \multicolumn{1}{|c|}{\(\delta\) in-phase in-plane 3 CH hexa-ring} \\ \hline 1390 & 1399 sh & L5 & \multicolumn{1}{|c|}{\(\delta\) CH2} \\ \hline & 1387 & & \\ \hline 1362 & 1361 & & \\ \hline 1336 br & 1336 & & \\ \hline 1314 & 1315 & L6 & \multicolumn{1}{|c|}{\(\delta\) in-phase in-plane 2 CH penta-ring} \\ \hline 1289 & 1290 & & \\ \hline 1227 & 1227 & & \\ \hline 1206 & 1206 & & \\ \hline 1168 & 1168 & & \\ \hline 1153 & 1151 & & \\ \hline 1125 & 1122 & & \\ \hline 1108 sh & 1106 sh & & \\ \hline 1068 & 1067 & & \\ \hline 1019 & 1018 & & \\ \hline 977 w,br & 982 w, br & & \\ \hline 945 & 944 & L7 & \multicolumn{1}{|c|}{\(\delta\) out-of-phase in-plane 4 CH hexa-ring} \\ \hline 916 & 915 & L8 & \multicolumn{1}{|c|}{\(\delta\) in-plane CH-CH\({}_{2}\)(block)} \\ \hline 862 & 870, 863 & & \\ \hline 832 & 833 & & \\ \hline 768 & 766 & L9 & \multicolumn{1}{|c|}{\(\delta\) in-phase out-of-plane 8CH} \\ \hline 730 sh & 730 sh & & \\ \hline 720 & 720 & L10 & \multicolumn{1}{|c|}{\(\delta\) in-phase out-of-plane 4 CH hexa-ring} \\ \hline 698 & 698 & L11 & \multicolumn{1}{|c|}{\(\delta\) in-phase out-of-plane 2 CH penta-ring} \\ \hline 593 & 593 & & \\ \hline 553 & 553 & L12 & \multicolumn{1}{|c|}{Out-of-plane deformation hexa and penta rings} \\ \hline 532 & & & \\ \hline \end{tabular} When a pure indene ice grown at 10 K was heated with a continuous ramp of 1 K/min, the phase transition was observed to start at 120 K and to end at 130 K. Indene sublimation started at 180 K, well above the water sublimation temperature. Figure 2 shows the spectra at different temperatures during the annealing process of indene. ### Infrared band strengths. The infrared band strength of a given band (band i) of solid indene is experimentally obtained via the following expression: \[A_{i}^{\prime}=\frac{\int_{i}\tau d\tilde{\nu}}{N}=\frac{2.303\int_{i}Abs(\tilde {\nu})d\tilde{\nu}}{N}=2.303\frac{Int_{i}}{N} \tag{1}\] where \(\tau\) is the optical depth, \(\tilde{\nu}\) the wavenumber frequency, Abs the absorbance spectrum, \(Int\), the integrated area of band \(i\) in the absorbance spectrum, and \(N_{indene}\) the column density of indene. The column density of indene molecules has been estimated via the kinetic theory of gases, from the impinging rate of indene molecules on the cold surface and assuming a sticking probability of one at 10 K: Figure 2: Enlargement of the CH stretching region of the IR spectra of indene, showing the evolution with temperature of an indene layer grown at 10 K and warmed at 1 K/min. Spectra are shifted in the absorbance axis for a better visualization. 
\[N_{Indene}=n_{Indene}\ \left(\frac{kT}{2\pi m}\right)^{1/2}\Delta t \tag{2}\] where \(n_{\rm Indene}\) is the molecular density of gas-phase indene in the chamber, which was kept constant during deposition, \(k\) is the Boltzmann constant, \(T\) the gas temperature, \(m\) the mass of the indene molecule, and \(\Delta t\) the deposition time. Absolute \(n_{\rm Indene}\) during deposition was estimated by calibrating the quadrupole mass spectrometer with a procedure described in Appendix A1, and already employed in a previous work (Mate et al. 2017). The pressure homogeneity in the chamber assumed within this calibration procedure has been proved in a previous publication by our group (Mate et al 2003), where an agreement better than 10% was found in the ice layer thickness measured via kinetic theory of gases and laser interferometry. In the present work, ice layers of different thickness, varying from 10 ML to 600 ML (1ML= 10\({}^{15}\) molec/cm\({}^{2}\)), were employed for the estimation of the band strengths. Errors for these magnitudes were estimated to be about 30%, mainly due to the uncertainty in the number density determination, derived by errors in determining the calibration factor of the QMS. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Band & Peak position & Integration & A\({}_{\rm amorphous}\) & Peak position & Integration & A\({}_{\rm cryst}\) & A\({}_{\rm cryst}\)/ \\ label & amorph. & Limits & (cm\({}^{-1}\)) & cm/molec) & cryst. & Limits & (x10\({}^{-18}\) & A\({}_{\rm amorf}\) \\ & (cm\({}^{-1}\)) & & & (cm\({}^{-1}\)) & & & \\ \hline L1 & 3100 & 3153-2983 & 13,6 & 3100 & 3157-2985 & 10,5 & 0,77 \\ \hline L2 & 2900 & 2920-2873 & 4,2 & 2900 & 2926-2877 & 3,1 & 0,74 \\ \hline L3 & 1620 & 1660-1600 & 1,5 & 1620 & 1660-1600 & 1,4 & 0,96 \\ \hline L4 & 1458 & 1487-1443 & 6,4 & 1458 & 1439-1483 & 4,8 & 0,75 \\ \hline L5 & 1390 & 1419-1374 & 5,1 & 1387 & 1414-1371 & 6,8 & 1,34 \\ \hline L6 & 1314 & 1321-1302 & 1,4 & 1315 & 1327-1304 & 2,2 & 1,54 \\ \hline L7 & 945 & 966-928 & 3,8 & 943 & 966-930 & 4,6 & 1,22 \\ \hline L8 & 916 & 930-904 & 2,6 & 916 & 926-903 & 3,9 & 1,51 \\ \hline L9 & 768 & 793-755 & 20,0 & 766 & 796-752 & 23,0 & 1,15 \\ \hline L10 & 720 & 740-711 & 6,0 & 720 & 741-712 & 7,1 & 1,19 \\ \hline L11 & 698 & 711-678 & 6,1 & 698 & 712-681 & 7,7 & 1,26 \\ \hline L12 & 553 & 565-546 & 1,8 & 553 & 567-538 & 2,5 & 1,42 \\ \hline \end{tabular} \end{table} Table 2: Experimental peak positions and band strengths of the main IR absorption bands of amorphous and crystalline indene. The limits taken to calculate the band area in the absorbance spectra are given in columns three and six. Uncertainties in the band strengths are about 30%, due to the uncertainty in number density determinations. The last column shows the intensity variations of the different modes upon crystallization. The infrared band strengths of the strongest bands in the IR spectra of indene are listed in Table 2. They have been labeled L1-L12 as in Table 1, where the assignment of the different bands is given. The last column in Table 2 shows the intensity variations of the different modes upon crystallization. It can be noticed that the asymmetric stretching CH modes in the 3000 cm-1 region decrease their intensity while the intensity of the low wavenumber modes associated mainly to CH bending vibrations follows the opposite behavior. 
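The sketch below implements equations (1) and (2) numerically: the indene column density from the kinetic-theory impingement rate (unit sticking probability assumed, as in the text) and the band strength from an integrated absorbance. The gas density, deposition time, and integrated band area used in the example are placeholder values chosen only to exercise the formulas; they are not measurements from this work.

```python
import numpy as np

K_B = 1.380649e-23                  # Boltzmann constant, J/K
M_INDENE = 116.16 * 1.66054e-27     # indene molecular mass (116.16 amu) in kg

def column_density(n_gas_cm3, T_gas_K, t_dep_s):
    """Eq. (2): N = n * sqrt(kT / (2*pi*m)) * dt, assuming a sticking probability of 1."""
    n_m3 = n_gas_cm3 * 1e6                                        # cm^-3 -> m^-3
    flux = n_m3 * np.sqrt(K_B * T_gas_K / (2 * np.pi * M_INDENE)) # impingement rate, m^-2 s^-1
    return flux * t_dep_s * 1e-4                                  # back to cm^-2

def band_strength(int_abs_cm1, N_cm2):
    """Eq. (1): A' = 2.303 * (integrated absorbance area) / N, in cm molecule^-1."""
    return 2.303 * int_abs_cm1 / N_cm2

# Placeholder example values (illustration only):
N = column_density(n_gas_cm3=5e8, T_gas_K=295.0, t_dep_s=600.0)
A_prime = band_strength(int_abs_cm1=0.015, N_cm2=N)
print(f"column density ~ {N:.2e} molecules cm^-2")
print(f"band strength  ~ {A_prime:.2e} cm/molecule")
```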
It is illustrative to compare the band strength provided in this work for indene with those recently published for ices of other aromatic species, benzene and pyridine by Hudson et al. 2022. Here, the authors measure the thickness and density of benzene or pyridine ices and use those magnitudes to determine the column density needed to extract infrared band strengths form the IR spectra. Band strengths of 1.62 10\({}^{\text{-17}}\) cm/molec and 1.55 10\({}^{\text{-17}}\) cm/molec were estimated for the strongest benzene IR absorption band, for the amorphous and crystalline forms, respectively. The band appears at 676 cm-1 in the amorphous form (10 K) ant it splits in a doublet at 679 and 681 cm-1 in the crystal (100 K). The IR spectrum of solid pyridine presents a strong absorption at 705 cm-1 (amorphous form) or at 712 cm-1 (crystalline form), with an intensity of 0.88 10\({}^{\text{-17}}\) cm/molec (amorphous form) or 1.15 10\({}^{\text{-17}}\) cm/molec (crystalline form). Although Hudson et al. (2022) do not provide a spectral assignment for these bands, it seems reasonable to assign them to the same mode than the strongest band of indene, that appears at 678 cm-1. This mode corresponds to the in-phase out-of-plane CH bending and it has a band strength of 2 10\({}^{\text{-17}}\) cm-1/molec for amorphous indene (10 K) and 2.3 10\({}^{\text{-17}}\) cm-1/ molec for the crystalline form (see Table 2). Therefore, since the number of CH groups per molecule are different for pyridine, benzene or indene, being 5, 6 and 8 respectively, a correlation is observed between the number of CH groups involved in the vibration and the band strength. The larger the number of CH groups involved, the larger the band strength of the vibration. This tendency somehow can be taken as a test of the quality of the band strengths given. IR absorption band strengths were also provided by Sandford et al. 2004 for amorphous naphthalene at 15 K. These authors used water ice bands for the calibration of absolute band intensities in naphthalene mixtures with water. Their band strengths are comparable to those commented on in this paragraph. Specifically, they derived a value of 1.2 \(\times\) 10\({}^{\text{-17}}\) cm/molec for the out of plane bending vibration at 784.4 cm-1. This value is smaller than that from the present work, but the two values lie within their mutual experimental uncertainty. ### Theoretical model. Comparison with experiments. As described in the theoretical section, a monoclinic symmetry was assumed for the indene crystal, containing two indene molecules in the unit cell. The relaxed structure obtained after running the calculations has a density of 1.40 g/cm3. On the other hand, for amorphous indene, a unit cell containing four indene molecules has been constructed, and the relaxed structure reached a density of 1.27 g/cm3. The cell parameters of the optimized structures calculated in this work for crystalline and amorphous indene solids are presented in Table 4, and the structures are shown in Figure 3. More details are given in the Supplementary Material. Table 4. Cell parameters for calculated amorphous and crystalline indene solids. The calculated IR spectra of amorphous and crystalline indene are compared with the experimental spectra in Figure 4. A general good agreement is found, the calculated spectra reproducing the main peaks positions and intensities. The simulations reproduce fairly well the relative intensities between the different absorptions. 
Although some frequency deviations are observed, for the main vibrations it is possible to make a clear correlation between the theoretical and experimental bands. For example, for the crystalline solid, the strongest absorption, L9, is predicted only 11 cm-1 shifted to lower wavenumber, and L5 and L4, only 25 and 50 cm-1 shifted to higher wavenumbers, respectively. The larger frequency shifts (approximately 5%) are in the CH stretching modes, that are predicted at about 120 cm-1 higher wavenumbers. For the amorphous solid, frequency deviations show a similar tendency. The theoretical calculations provide information on the atomic displacements associated to each particular vibration mode, that can be visualized using CASTEP. Indene solids are molecular solids where the indene molecules are bonded via hydrogen bonds, and it is possible, to some extent, to associate the vibrations of the solid (phonons) to specific vibrations of the indene molecule. However, the mode assignment is not always clear. In some cases, the vibrations can by clearly attributed to a particular mode, but in others there is mode mixing and the assignment is vaguer. With that limitation, an assignment of the strongest bands of indene is presented in Table 1, which is in good agreement with previous assignments of the vibrational spectra of indene in the gas and liquid phase (Klots 1995). Figure 3: Optimized structures of amorphous unit cell (left) and crystalline unit cell (right) of indene. The theoretical calculations provide also absolute infrared band strengths, that have been listed in Table 5 for the amorphous and crystalline phases. In the last two columns, the deviations between theoretical and experimental band strengths are listed. For the two forms the agreement between experimental and theoretical values is good, given the approximations implicit in the derivation of both experimental and theoretical magnitudes. It can be seen that for some absorptions the agreement is within 10%, whereas for others it is in the 50-70 % range. For example, the intensities of the strongest absorptions, L1 (\(\nu_{\mathrm{a}}\) CH) and L9 (\(\delta\) CH in-phase out-of-plane) are very well reproduced both for amorphous and crystalline solids. On the other hand, the intensity of the L2 (\(\nu_{\mathrm{a}}\) CH2) mode is poorly reproduced in both models. On average, for the eleven bands considered, the disagreement is below 30%, being better for the crystalline form. Regarding the intensity changes Figure 4: Comparison of theoretical (red) and experimental (black) spectra of solid indene. Top panel: crystalline indene. Bottom panel: amorphous indene. In the theoretical spectra the calculated mode intensities are represented with 10 cm-1 FWHM gaussians and scaled in intensity for better comparison with the experimental spectra. The theoretical spectra have been offset in the vertical axis for better visualization. occurring upon crystallization that are observed experimentally (see last column in Table 2), the calculations are not able to reproduce them, and therefore do not allow to provide a chemical explanation of the effect. Nonetheless, they do predict the intensity increase from 2.0 to 2.17 x10\({}^{7}\) cm/molec of the strong L9 (\(\delta\) CH in-phase out-of-plane) mode upon crystallization, a tendency that is in agreement with the experimental observations. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Band & Peak & Interval & \(\Delta\)v\({}_{\text{exp-}}\) & A\({}^{\prime}\) & Mode & Peak & \(\Delta\)v\({}_{\text{exp-}}\) & A\({}^{\prime}\) & & \\ Band & amorph. & amorph. & theo & amorph. & Interval & position & theo & cryst. & A\({}_{\text{exp}}\)/A\({}_{\text{theo}}\) & A\({}_{\text{exp}}\)/A\({}_{\text{theo}}\) \\ label & (cm\({}^{-1}\)) & (cm\({}^{-1}\)) & amorph. & (10\({}^{-17}\) & cryst. & cryst. & cryst. & (10\({}^{-17}\) & amorph. & cryst. \\ & & (cm\({}^{-3}\)) & cm/molec) & (cm\({}^{-1}\)) & (cm\({}^{-1}\)) & (cm\({}^{-1}\)) & cm/molec) & & \\ \hline L1 & 3161 & 3268- & & & 3202- & & & & \\ & 3133 & -92 & 1,14 & 3159 & 3177 & -120 & 1,07 & 1,19 & 0,98 \\ \hline L2 & 3031 & 3050- & & & 3038- & & & & \\ & 3009 & -133 & 0,88 & 3019 & 3018 & -116 & 0,47 & 0,48 & 0,66 \\ \hline L3 & 1632 & 1647- & & & 1644- & & & & \\ & 1618 & -10 & 0,23 & 1621 & 1631 & -11 & 0,23 & 0,65 & 0,61 \\ \hline L4 & 1482 & 1496- & & & 1487- & & & & \\ & 1460 & -23 & 0,70 & 1482 & 1483 & -25 & 0,67 & 0,91 & 0,72 \\ \hline L5 & 1434 & 1443- & & & 1438- & & & & \\ & 1416 & -44 & 0,77 & 1424 & 1438 & -50 & 1,12 & 0,66 & 0,61 \\ \hline L6 & 1352 & 1356- & & & 1355- & & & & \\ & 1349 & -38 & 0,33 & 1353 & 1353 & -38 & 0,39 & 0,42 & 0,56 \\ \hline L7 & 979 & 995- & & & 1049- & & & & \\ & 952 & -34 & 0,37 & 1044 & 920 & 24 & 0,32 & 1,03 & 1,44 \\ \hline L8 & 912 & 950- & & & 986- & & & & \\ & 899 & 4 & 0,49 & 985 & 873 & 42 & 0,41 & 0,53 & 0,95 \\ \hline L9 & 751 & 758- & & & 756- & & & & \\ & 751 & 743 & 17 & 2,01 & 745 & 755 & 11 & 2,14 & 1,00 & 1,07 \\ \hline L10 & 705 & 710- & & & 709- & & & & \\ & 703 & 15 & 0,45 & 711 & 710 & 10 & 0,40 & 1,33 & 1,78 \\ \hline L11 & 678 & 672 & 18 & 0,64 & 692 & 696 & 2 & 0,59 & 0,95 & 1,31 \\ \hline L12 & 547 & 550- & & & 548- & & & & \\ & 540 & 6 & 0,19 & 533 & 547 & 6 & 0,13 & 0,95 & 1,92 \\ \hline \end{tabular} \end{table} Table 5: Calculated peak positions and band strengths of the strongest IR bands of amorphous and crystalline indene. The intensity of the modes within the interval indicated in columns two and five has been added to estimate the bands strengths given. \(\Delta\)v\({}_{\text{exp-theo}}\) is the deviation between theoretical and experimental peak positions. The ratio between experimental and theoretical band strengths in given in the last two columns. ### Indene in water ice. Ice mixtures with high dilutions, 7% to 2% number molecules of indene in water, were generated by simultaneous deposition of both gases at 10 K, with the goal of studying the possible spectral changes in the indene spectra caused by the water ice matrix. Figure 5 shows the IR spectra of the mixtures together with a pure indene spectrum. The stoichiometry has been estimated from the intensity of a water band and an indene band in the IR spectrum of the mixture. The OH stretching for water and the in-phase out-of-plane CH bending (L9) for indene, with band strengths of 1.9 10\({}^{\circ}\)\({}^{16}\) cm/molec (Mastrapa et al. 2009) and 2.0 10\({}^{\text{-}17}\) cm/molec (Table 2 this work), respectively, were chosen. Possible variations in the band strengths due to the mixture have been neglected. In the present work the band strengths of the mixtures are not provided. We refer to the recent works by Hudson and coworkers (Hudson et al. 2022, Gerakines et al. 
2022) where they measured band strengths variations in water ice environments of several molecules, both polar (HCN) and non-polar (C\({}_{6}\)H\({}_{6}\)) and found them to be below 10%. In general, the IR spectrum of indene is not much affected by the water ice matrix, consistent with previous larger PAH studies (Bernstein et al. 2005). In contrast with other astrophysical complex organic molecules (COMs) like urea (Timon et al. 2021) or glycine (Mate et al. 2011), previously investigated in our group, the IR spectrum of indene ice presents narrow lines, with FWHM of the order of 10 cm\({}^{\text{-}1}\). This characteristic makes the sharp indene bands easily distinguishable on top of the broad ASW absorption profiles, at least in the 1500-500 cm\({}^{\text{-}1}\) region, away from the strong OH stretching mode of water (see Figure 5). In the figure inserts it is shown how the band profiles of the L9, L10, L11 bands broaden in the spectrum of the mixture, making these indene features less visible than L4 or L5. The latter two band are not very much affected by the presence of the ice matrix, and do not broaden appreciably. It seems that the water ice hydrogen bond network has an effect on the low wavenumber (low energy) indene vibrations assigned to in-phase out-of-plane CH bending (L9, L10, L11); and this effect is not so strong for higher wavenumber vibrations, in particular to those assigned also to an out-of-plane but out-of-phase (L4) CH bending, and to the CH2 bending (L5). Looking at the high wavenumber region, the CH stretching bands L1 and L2 do not suffer an appreciable broadening and the only effect that the water ice matrix has is modifying the relative intensities of the features within the L1 or L2 profiles. However, these intensity variations could be affected by baseline subtraction, which is difficult to do in this region because it coincides with the tail of the strong OH absorption. Regarding wavenumber shifts, the effect of the water ice matrix is very small, nonetheless, the peak positions of the low wavenumber peaks of indene in the 2% mixture are listed in Table 6, together with those of the pure species. A similar behavior had been previously observed for naphthalene in water ice (Sandford et al. 2004). As for ASW, the only modification appreciated in the spectra of the mixtures appears in the 3700 cm-1 dangling bond (DB) region, the DB feature being modified by the presence of impurities (Michoulier et al 2020). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline label & \multicolumn{4}{|c|}{Peak position} \\ \hline & \multicolumn{2}{|c|}{pure} & \multicolumn{2}{|c|}{2\% in ASW} \\ & cm-1 & \(\mu\)m & cm-1 & \(\mu\)m \\ \hline L4 & 1459 & 6,85 & 1461 & 6,84 \\ \hline L5 & 1390 & 7,19 & 1395 & 7,17 \\ \hline L6 & 1314 & 7,61 & 1314 & 7,61 \\ \hline L7 & 945 & 10,58 & 945 & 10,58 \\ \hline L8 & 916 & 10,92 & 920 & 10,87 \\ \hline L9 & 768 & 13,02 & 774 & 12,92 \\ \hline L10 & 720 & 13,89 & 722 & 13,85 \\ \hline L11 & 698 & 14,33 & 702 & 14,25 \\ \hline \end{tabular} \end{table} Table 6: Peak positions of some indene bands in a 2% mixture in amorphous solid water (ASW) at 10 K, compared to those of the pure species amorphous spectrum. Figure 5: IR spectra of 7% (blue) and 2 % (black) indene in ASW at 10 K compared with a pure indene ice at the same temperature (red). Baseline corrections and vertical offsets have been applied to the spectra for a better visualization. 
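As a quick illustration of how the mixture stoichiometry quoted above can be obtained, the snippet below converts the integrated areas of the water OH-stretch band and the indene L9 band into column densities using the band strengths cited in the text (1.9 x 10^-16 and 2.0 x 10^-17 cm/molec) and takes their ratio. The two integrated areas are invented numbers used only to show the arithmetic; as in the text, any change of the band strengths in the mixture is neglected.

```python
# Band strengths adopted in the text (cm molecule^-1)
A_H2O_OH = 1.9e-16    # water OH stretch (Mastrapa et al. 2009)
A_IND_L9 = 2.0e-17    # indene L9, in-phase out-of-plane CH bend (Table 2)

def column_density(integrated_area_cm1, band_strength):
    """N = 2.303 * area / A' for an integrated band area measured on an absorbance spectrum."""
    return 2.303 * integrated_area_cm1 / band_strength

# Hypothetical integrated band areas measured on a mixture spectrum (cm^-1):
area_OH, area_L9 = 8.0, 0.02

N_H2O = column_density(area_OH, A_H2O_OH)
N_IND = column_density(area_L9, A_IND_L9)
print(f"N(H2O)    ~ {N_H2O:.2e} cm^-2")
print(f"N(indene) ~ {N_IND:.2e} cm^-2")
print(f"indene fraction ~ {100 * N_IND / (N_IND + N_H2O):.1f}% of molecules")
```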
## 5 Astrophysical implications At the very low temperature (10 K) of TMC-1, volatiles should be largely frozen in the ice mantles of dust grains. Following Dartois et al. (2022) we express the partition of indene between the gas and the solid as: \[f_{H,gas}(indene)=C\;\chi_{ice}(indene) \tag{3}\] where f\({}_{\rm H,\,gas}\) is the abundance of gas-phase indene molecules with respect to the number of H atoms (which for a cold cloud will be mostly in the form of H\({}_{2}\)), \(\chi_{ice}\)(indene) represents the fraction of indene molecules in the ice, and C is a factor summarizing the effects of the processes leading to the increase and depletion of indene in each phase. For the conditions of a cold cloud like TMC-1, Dartois et al. (2022) consider that sputtering of ice containing small PAHs by cosmic rays is the source of the gas-phase PAHs, and that the molecule is depleted from the gas phase either by VUV photolysis or by condensation on the solid grains. Using experimental sputtering yields, Dartois et al. (2022) estimated a value C = 1.9 - 3.2 x 10\({}^{-7}\) for naphthalene highly diluted in an ice matrix, and similar values are expected in general for small PAHs. By analogy, we assume C = 2 - 3 x 10\({}^{-7}\) for indene. Using equation (3), we can express the amount of indene in the ice as a function of the amount of water: \[N_{ice}(indene)\approx\frac{f_{H,gas}(indene)}{C}\frac{N_{ice}(H_{2}O)}{\chi_{ice}(H_{2}O)} \tag{4}\] where N\({}_{ice}\)(indene) and N\({}_{ice}\)(H\({}_{2}\)O) are the ice column densities of indene and water, respectively, and \(\chi_{ice}\)(H\({}_{2}\)O) is the fraction of water molecules in the ice. The ice mantles in the dust grains of TMC-1 are observed in the IR extinction spectra toward the background star Elias 16 (Whittet et al. 1988, Smith et al. 1993, Chiar et al. 1996). From these observations, the fraction of water in the ice is estimated at \(\chi_{ice}\)(H\({}_{2}\)O) \(\simeq\) 0.65, and the water-ice column density at 2.5 x 10\({}^{18}\) cm\({}^{-2}\) (Knez et al. 2005). Substituting these values in equation (4) and taking the gas-phase indene abundances reported by Cernicharo et al. (2021) and Burkhardt et al. (2021), the column density of indene in the ice mantles derived from equation (4) is N\({}_{ice}\)(indene) = (1.5 - 0.6) x 10\({}^{16}\) cm\({}^{-2}\) and the corresponding fraction of indene molecules in the ice is \(\chi_{ice}\)(indene) = (6 - 2.4) x 10\({}^{-3}\). The relevant column densities are summarized in Table 7. According to these estimates, the fraction of gas-phase vs solid indene is \(\simeq\) 10\({}^{-3}\). \begin{table} \begin{tabular}{c c c c c} \hline \multicolumn{5}{c}{Column densities TMC-1, Elias 16} \\ \hline & H\({}^{\rm a}\) & H\({}_{2}\)O\({}^{\rm b}\) & Indene (gas)\({}^{\rm a}\) & Indene (ice)\({}^{\rm c}\) \\ \hline N (cm\({}^{-2}\)) & 2 x 10\({}^{22}\) & 2.5 x 10\({}^{18}\) & (1.6 - 1) x 10\({}^{13}\) & (1.5 - 0.6) x 10\({}^{16}\) \\ \hline a) & Cernicharo et al. (2021), Burkhardt et al. (2021) & \\ b) & Knez et al. 2005 & \\ c) & This work, equation (4) with C = 2-3 x 10\({}^{-7}\). See the text \\ \hline \end{tabular} \end{table} Table 7: Column densities in TMC-1 As mentioned in the introduction, tentative signatures of PAHs have been suggested in the IR extinction spectra of ices, and some of them have even been used for the estimation of the amount of cosmic carbon locked in PAHs in dense clouds (Bowman et al. 2011, Hardegree-Ullman et al. 2014, Chiar et al. 2021). 
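The column densities in Table 7 follow from a one-line application of equation (4); the snippet below reproduces that arithmetic with the values quoted in the text (gas-phase indene abundance, the C factor taken by analogy with naphthalene, and the Elias 16 water-ice properties). One assumption is ours: the reported gas-phase abundance of 1-1.6 x 10^-9 per H2 is halved to express it per H atom, as required by the definition of f_H,gas.

```python
# Inputs quoted in the text
f_gas_indene = (0.5e-9, 0.8e-9)   # gas-phase indene abundance per H atom (1-1.6e-9 per H2, halved)
C_factor     = (3e-7, 2e-7)       # gas/ice partition factor, by analogy with naphthalene
N_ice_H2O    = 2.5e18             # water-ice column density toward Elias 16, cm^-2
chi_H2O      = 0.65               # fraction of water molecules in the ice

def n_ice_indene(f_gas, C):
    """Equation (4): N_ice(indene) ~ (f_gas / C) * N_ice(H2O) / chi_ice(H2O)."""
    return (f_gas / C) * N_ice_H2O / chi_H2O

low  = n_ice_indene(f_gas_indene[0], C_factor[0])
high = n_ice_indene(f_gas_indene[1], C_factor[1])
print(f"N_ice(indene) ~ {low:.1e} - {high:.1e} cm^-2")   # ~0.6e16 - 1.5e16, as in Table 7
```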
The preferred band for this type of estimates is the CH stretch vibration appearing at about 3.3 \(\upmu\)m because its position is very stable for different PAHs and is thus adequate for the mixtures of PAHs presumed to be present in the dust grains. The 3.3 \(\upmu\)m feature is not the most intense IR band for individual PAHs, which usually corresponds to the CH out-of-plane bending vibrations appearing at wavelengths larger than 11 \(\upmu\)m, but the CH out-of-plane bending bands are more variable in position than the CH stretch and lead to broadened and weaker bands in the spectra of PAH mixtures (Allamandola et al. 1999) that can't be easily discerned from other ice components. Two IR absorption features at 3.3 and 3.47 \(\upmu\)m, have been reported in the line of sight toward Elias 16 which is close to the position of TMC-1 (Chiar et al. 1996, Chiar et al. 2021). They can be seen in the upper curve of Figure 6. Note that these bands have been derived from ground-based observations of a very weak substructure in the long wavelength wing of the OH band of water ice and may be affected by a large uncertainty due to baseline subtraction and telluric absorptions. The full absorption spectrum of the OH band of ice in Elias 16, including data from the ISO satellite, can be found in Whittet et al. (1998) and Gibb et al. (2004). The 3.3 \(\upmu\)m feature, is generally attributed to aromatic CH stretching vibrations (Sellgren et al. 1994, Sellgren 1995, Bowman et al. 2011, Hardegree et al. 2014, Chiar et al. 2021). The 3.47 \(\upmu\)m absorption is mostly assigned to ammonia hydrates (Dartois & D' Hendecourt 2001, Dartois et al. 2002, Boogert et al. 2015) but other carriers have been also suggested, including H atoms bound to ternary sp\({}^{3}\) carbon (Allamandola et al. 1992), and hydrogenated PAHs (Chiar et al. 2021). Note also that in some of the mentioned observations a contribution from crystalline ice could lead to distortions in the baseline subtraction. Even if ammonia hydrates were the predominant carriers, contributions from other species cannot be excluded. With the value of N\({}_{\rm ice}\)(indene) from Table 7 and the experimental band strengths from Table 2 we can now estimate the expected contribution of indene to these absorption features. Indene has also two absorption bands (L1 and L2) in this spectral region. An optical depth spectrum of indene ice at 10 K is also shown in Figure 6 for comparison. It has been scaled for a column density N\({}_{ice}\) (indene) = 1.5 \(\times\) 10\({}^{16}\) cm\({}^{-2}\), which corresponds to the higher amount of indene estimated above (Table 7) for the ice mantles of TMC-1 (using the lower estimate would reduce the intensity by a 2.5 factor). For a better appreciation, the indene optical depth spectrum is multiplied by 10 in the figure. The integrated area of the L1 band (aromatic CH stretch at 3.25 \(\upmu\)m) of indene in the optical depth spectrum of Figure 6 is roughly 0.2 cm-1 and corresponds to about 5 % of the area of the first broad band in the observational spectrum. 
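As a quick consistency check of the ~0.2 cm-1 figure quoted above, note that the integrated optical depth expected for a band is simply the column density times the band strength. The snippet multiplies the adopted ice column density of indene by the measured L1 and L2 band strengths from Table 2; it only reproduces numbers already given in the text.

```python
N_ice = 1.5e16        # adopted indene ice column density, cm^-2 (Table 7, upper value)
A_L1  = 13.6e-18      # aromatic CH stretch (L1) band strength, cm/molec (Table 2, amorphous)
A_L2  = 4.2e-18       # CH2 stretch (L2) band strength, cm/molec (Table 2, amorphous)

tau_int_L1 = N_ice * A_L1   # integrated optical depth of the L1 band, cm^-1
tau_int_L2 = N_ice * A_L2
print(f"integrated tau(L1) ~ {tau_int_L1:.2f} cm^-1")   # ~0.20 cm^-1, i.e. ~5% of the 3.3 um feature
print(f"integrated tau(L2) ~ {tau_int_L2:.3f} cm^-1")   # ~0.06 cm^-1, close to the ~0.07 quoted for 3.45 um
```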
Comparable contributions to this absorption band can be expected from benzene and naphthalene, whose cyano derivatives have also been observed in TMC-1 (Sita et al., 2022). This strongly suggests that an appreciable fraction of the 3.3 \(\upmu\)m absorption feature toward Elias 16 could be accounted for by indene, benzene, naphthalene and other small aromatic hydrocarbons formed in situ in TMC-1, and not just by a mixture of large PAHs from post-AGB stars, with an average of 50 carbons, as has been traditionally assumed. The L2 band of indene (CH stretch of the CH\({}_{2}\) group at 3.45 \(\upmu\)m) in the spectrum of Fig. 6 has an area of approximately 0.07 cm-1, which represents \(\approx\) 2 % of the second observational band and supports the assignment of this feature, at least in a small part, to hydrogenated (not necessarily large) PAHs (Chiar et al., 2021). For the same column density of indene, other characteristic absorption lines like L3 (6.18 \(\upmu\)m), L4 (6.86 \(\upmu\)m) and L9 (13.02 \(\upmu\)m), which are outside the range of Figure 6, have peaks of \(\approx\) 7 \(\times\) 10\({}^{-4}\), \(\approx\) 4 \(\times\) 10\({}^{-3}\) and \(\approx\) 1 \(\times\) 10\({}^{-2}\), respectively, in the optical depth spectrum, and would be buried under broad ice absorptions (Chiar et al. 2021, Knez et al. 2005) due to multiple components (Boogert et al. 2015). Figure 6: Upper curve: Observed absorption spectrum towards Elias 16 (from Chiar et al., 2021). For the whole spectrum of the OH band see: Whittet et al. (1998), Gibb et al. (2005). Lower curve: Absorption spectrum of indene deposited at 10 K (Fig. 1) scaled for a column density N\({}_{ice}\)(indene) = 1.5 \(\times\) 10\({}^{16}\) cm-2 with the band strengths of Table 2. The selected column density corresponds to the higher value estimated above for indene in the ice mantles (Table 7). The intensity of this spectrum has been multiplied by 10 in the figure. Finally, assuming a cosmic carbon abundance of 3.24 \(\times\) 10\({}^{-4}\) with respect to H (Hensley and Draine, 2021) and taking into account that indene molecules have 9 C atoms, the fraction of cosmic carbon locked up in indene in TMC-1, mostly in ice mantles, would be \(\approx\) (2 - 0.8) \(\times\) 10\({}^{-2}\). This value is consistent with the carbon inventory, but is possibly too high for a single small hydrocarbon. Further work is needed to assess the presence and stability of indene and other small aromatic hydrocarbons in the ice mantles of dust grains in cold clouds. ## 6 Summary and conclusions. Two solid phases of indene, amorphous and polycrystalline, have been generated via vapor deposition on a cold surface. An amorphous form is obtained in the 10 K deposit, and a crystalline form for deposition at 160 K. The phase transition from amorphous to crystalline is observed to take place between 120 K and 130 K when warming the ice at 1 K/min. The solid sublimates at 180 K under high vacuum conditions. Infrared spectra and infrared band strengths are provided for both amorphous and crystalline phases. The infrared spectra of highly diluted mixtures (2% and 7%) of indene in amorphous solid water generated at 10 K are also provided. The indene spectrum is not much altered by the water ice environment. Small frequency shifts, not larger than 6 cm\({}^{-1}\), are observed, and only the bands with wavenumbers below 1000 cm\({}^{-1}\) are significantly broadened. 
Crystalline and amorphous solids of indene have been constructed using density functional theory, and their infrared spectra have been calculated. Comparison of the calculated spectra with the experimental one, strongly suggests that the most probable crystalline structure of indene has monoclinic symmetry. No previous theoretical nor experimental information about the crystalline structure of indene was available in the literature. The density found for the crystal is 1.40 g/cm3, larger than the experimental value known for the liquid (1.3 g/cm\({}^{3}\)). For the amorphous form, the most stable structure was found to have a density of 1.27 g/cm3, close to that of the liquid. Assignments of the main absorptions of solid indene, based on the calculations, are given. Theoretical IR band strengths are also given and compare well with the experimental ones, within experimental error. Our results are expected to help the search of this species in the solid phase in astrophysical environments with the JWST. They could be applied also in laboratory astrochemistry. The band strengths will be an important magnitude to estimate the number of indene molecules in the ice deposit, for example, when conducting energetic processing experiments. With the signal to noise ratio of our experimental setup, 0.015 (mean square deviation of the baseline noise in the transmittance spectrum), indene could be detected in 1% mixtures using the 1461 cm\({}^{-1}\) (7,14 \(\mu\)m) (L3) band, and in 2% mixtures when looking at the L8 band (768 cm\({}^{-1}\), 13,00 \(\mu\)m). Larger fractions, about 7%, are needed if looking at L1, L2 in the 3000 cm\({}^{-1}\) (3,3 \(\mu\)m) region. Using the observed indene gas-phase column density from mm observations of rotational transitions, and assuming that indene molecules arise from the cosmic ray ejection from the ice mantles, we estimate with a simple literature model a column density of N\({}_{\rm ice}\) (indene) = (1.5 -0.6) \(\times\) 10\({}^{16}\) cm\({}^{-2}\) for the ice mantles of TMC-1. With our measured band strengths, this amount of solid indene could account for 2-5 % of the intensity of the weak absorption feature at 3.3 \(\upmu\)m in the IR spectrum towards Elias 16 reported by Chiar et al. (2021). This suggests that small polyaromatic hydrocarbons formed in situ through cloud chemistry, and not just large PAHs from post AGB stars, could contribute appreciably to hypothetical PAH signatures in the IR ice spectra. Assuming a cosmic carbon abundance of 3.24 \(\times\) 10\({}^{-4}\) with respect to H (Hensley and Draine, 2021), the fraction of cosmic carbon locked up in indene in TMC-1, mostly in ice mantles, would be = (2 - 0.8) \(\times\) 10\({}^{-2}\). This value is consistent with the carbon inventory, but is possibly too high for a single small hydrocarbon. The occurrence, stability and chemical relevance of small aromatic hydrocarbons in the ice mantles should be further investigated and considered in astrochemical models. ## Acknowledgements This work was funded by Ministerio de Ciencia e Innovacion (MCI) of Spain under grant PID2020-113084GB-I00, and by the European Union under grant ERC-2013-Syg-210656- NANOCOSMOS. The computation time provided by the Centro Tecnico de Informatica, cluster Trueno from CSIC and Centro de Supercomputacion de Galicia CESGA is deeply acknowledged. ## Data Availability Statements Data available on request. The data underlying this article will be shared on reasonable request to the corresponding author.
2306.04902
A Cover Time Study of a non-Markovian Algorithm
Given a traversal algorithm, cover time is the expected number of steps needed to visit all nodes in a given graph. A smaller cover time means a higher exploration efficiency of traversal algorithm. Although random walk algorithms have been studied extensively in the existing literature, there has been no cover time result for any non-Markovian method. In this work, we stand on a theoretical perspective and show that the negative feedback strategy (a count-based exploration method) is better than the naive random walk search. In particular, the former strategy can locally improve the search efficiency for an arbitrary graph. It also achieves smaller cover times for special but important graphs, including clique graphs, tree graphs, etc. Moreover, we make connections between our results and reinforcement learning literature to give new insights on why classical UCB and MCTS algorithms are so useful. Various numerical results corroborate our theoretical findings.
Guanhua Fang, Gennady Samorodnitsky, Zhiqiang Xu
2023-06-08T03:09:49Z
http://arxiv.org/abs/2306.04902v2
# A Cover Time Study of a non-Markovian Algorithm ###### Abstract Given a traversal algorithm, cover time is the expected number of steps needed to visit all nodes in a given graph. A smaller cover time means a higher exploration efficiency of traversal algorithm. Although random walk algorithms have been studied extensively in the existing literature, there has been no cover time result for any non-Markovian method. In this work, we stand on a theoretical perspective and show that the negative feedback strategy (a count-based exploration method) is better than the naive random walk search. In particular, the former strategy can locally improve the search efficiency for an arbitrary graph. It also achieves smaller cover times for special but important graphs, including clique graphs, tree graphs, etc. Moreover, we make connections between our results and reinforcement learning literature to give new insights on why classical UCB and MCTS algorithms are so useful. Various numerical results corroborate our theoretical findings. Introduction The cover time of a walk/an algorithm on a graph is the expectation of the number of steps required to visit every nodes/vertices. Formally, given a finite graph, we say there is a (directed) edge between node \(i\) and node \(j\) if we can take some action such that the agent could transit from state \(i\) to state \(j\). For time \(n=0,1,\ldots\), we use \(X_{n}\) to denote a sequence of nodes covered by the traversal algorithm. We define \[T_{C} := \text{the smallest $n$ such that }X_{0},X_{1},\ldots X_{n}\text{ \ visit all nodes of graph,} \tag{1}\] whose expectation \(\mathbb{E}[T_{C}]\) is called the _cover time_(Broder and Karlin, 1989; Kahn et al., 1989). It is of particular interest to study the cover time since it quantifies how fast/effectively a walk/an algorithm can traverse the whole graph. One of the most common but important walk is known as (simple) random walk Pearson (1905); Abdullah (2012); Spitzer (2013), which is a sequence of movements from one node to another where at each step an edge is chosen uniformly at random from the set of edges incident on the current node, and then transitioned to the next node. Cover time on random walk has been studied extensively in past several decades Aldous (1991); Lovasz and Winkler (1993); Grassberger (2017); Dembo et al. (2021). Other extended types of random walk, including lazy random walk Avin et al. (2008) and weighted random walk Abdullah (2012), have also been considered in the literature. Unfortunately, all such theoretical results on cover time pertain to memory-less random walk. There has been no result on the cover time of any non-Markovian traversal algorithm. In other words, it is relatively hard to analyze the covering property of history-dependent random walk algorithms when the Markovian property fails to hold. In this paper, we try to bridge the aforementioned gap. Specifically, we consider a simple but important non-Markovian traversal algorithm, which we call the _negative feedback_ strategy. To be more mathematically clear, the negative feedback algorithm is a count-based method. If \(X_{n}=i\) at time \(n\), the next state is uniformly randomly selected from a subset \(\mathit{Smin}_{i}^{(n)}\) of \(i\)'s neighbours, where \(\mathit{Smin}_{i}^{(n)}\) contains those nodes which are _least_ visited from state \(i\) up to time \(n\). Such procedure is called the **"favor least"** mechanism. 
Heuristically, it tends to move to unvisited or less-visited nodes and hence can improve the cover time. Undoubtedly, it is history-dependent, since it requires counting transitions between each pair of neighbouring nodes. Why should we consider the negative feedback algorithm? The reasons are three-fold. First, it is one of the simplest non-Markovian random walk algorithms. There is little hope of making any theoretical claims on the cover time of a very complex traversal algorithm. Second, it only requires storing a count table where each entry represents the number of movements from one node to its neighbor. This table can be updated very efficiently. Third, it has strong connections with algorithms in the reinforcement learning (RL) field. To be more concrete, given a discrete-state environment, we can treat each state as a node. The agent can take a certain policy to explore the whole environment. The negative feedback strategy is often treated as an exploration tool McFarlane (2018); Hazan et al. (2019) in an unknown Markov Decision Process (MDP) setting. Our main results in this work are summarized here. 1. We first show a local improvement result for a general graph. To be specific, we consider a local version of the negative feedback algorithm where the "favor least" mechanism is only applied to the starting state \(X_{0}\). For an **arbitrary** graph, we have shown that \(\mathbb{E}_{\pi_{loc}}[N_{j}|X_{0}]\leq\mathbb{E}_{\pi_{rw}}[N_{j}|X_{0}]\) for any other node \(j\neq X_{0}\), where \(N_{j}\) is defined to be the number of excursions outside starting node \(X_{0}\) before state \(j\) is visited for the first time, and \(\pi_{loc}\) and \(\pi_{rw}\) stand for the local negative feedback algorithm and the random walk policy, respectively. This local improvement result implies that the negative feedback mechanism improves the exploration efficiency, at least locally, in the sense that the agent has a stronger tendency to visit other nodes instead of returning to the starting node. 2. We then take a step forward and show cover time improvement under several special graph structures. We are able to show that \(\mathbb{E}_{\pi_{neg}}[T_{C}]<\mathbb{E}_{\pi_{rw}}[T_{C}]\) (\(\pi_{neg}\) represents the negative feedback algorithm) under Star, Path, Clique and Tree graphs. In particular, in the case of a **balanced \(b\)-ary tree**, we establish that \(\mathbb{E}_{\pi_{neg}}[T_{C}]\leq 4H\frac{b+1}{b-1}b^{H}\), where \(b\) is the number of children of each non-leaf node and \(H\) is the depth of the tree. By contrast, it was shown (Aldous, 1991) that \(\mathbb{E}_{\pi_{rw}}[T_{C}]\approx 2H^{2}b^{H+1}\frac{\log b}{b-1}\). Therefore, the negative feedback algorithm improves the cover time by an order of \(H\log b\). In other words, in tree-like RL games, the naive random walk search becomes less efficient compared with the negative feedback strategy as action and state spaces become more complex. The rest of the paper is organized as follows. In Section 2, we introduce the cover time formulation and provide an illustrative example to show why the negative feedback algorithm can improve over the random walk policy. In Sections 3 and 4, we establish the local and cover time improvement results, respectively. In Section 5, we make connections to the maximum-entropy exploration, UCB, and Monte Carlo tree search methods and discuss non-discrete cases. A concluding remark is given in Section 6. Numerical experiments, additional discussions and technical proofs are provided in the appendices.
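To make the "favor least" mechanism concrete, the following is a minimal simulation sketch (ours, not code from the paper); the graph, the function names and the small cycle example are illustrative choices only.

```python
# Minimal sketch of the two traversal strategies compared in this work,
# on a graph given as an adjacency-list dict.
import random
from collections import defaultdict

def random_walk_step(graph, current, counts=None):
    """Markovian step: pick a neighbour uniformly at random."""
    return random.choice(graph[current])

def negative_feedback_step(graph, current, counts):
    """'Favor least' step: pick uniformly among the neighbours j that have been
    moved to from `current` the fewest times so far (the set Smin in the text)."""
    least = min(counts[(current, j)] for j in graph[current])
    candidates = [j for j in graph[current] if counts[(current, j)] == least]
    return random.choice(candidates)

def cover_time(graph, start, step_fn):
    """Number of steps until every node has been visited at least once."""
    counts = defaultdict(int)          # transition counts N_ij, all zero at n = 0
    visited, current, steps = {start}, start, 0
    while len(visited) < len(graph):
        nxt = step_fn(graph, current, counts)
        counts[(current, nxt)] += 1
        current = nxt
        visited.add(current)
        steps += 1
    return steps

# Example: a 4-node cycle 0-1-2-3-0.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(cover_time(cycle, 0, random_walk_step),
      cover_time(cycle, 0, negative_feedback_step))
```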
## 2 Cover Time Formulation

Let us imagine that an agent walks on a finite and connected graph. If the agent can take some action to transition from node \(i\) to node \(j\), then we say there is a (directed) edge from node \(i\) to \(j\). To help the reader understand the terminology more clearly, we provide the following examples. In a two-dimensional Grid World environment, a node can be viewed as the position of the agent. Since the agent can choose to move up, down, right or left, two nodes have an edge between them if and only if these two positions are adjacent to each other. In a Go game, two players take turns to put stones on the board. A node here represents a 19 \(\times\) 19 board with white and black stones on it. There is a directed edge from node \(i\) to another node \(j\) only if node \(j\) can be reached from node \(i\) after a player takes a single action. Given a starting time \(n=0\) and an initial node \(X_{0}\), we let \(X_{n}\), \(n=0,1,\ldots\), describe the sequence of states/nodes governed by some exploration strategy and denote \[T_{C} = \mbox{the first time $n$ such that $X_{0},X_{1},\ldots,X_{n}$ visit all nodes of the graph.} \tag{2}\] The quantity \(\mathbb{E}[T_{C}]\) (the expectation of \(T_{C}\)) is called the cover time (Broder and Karlin, 1989; Kahn et al., 1989). In this paper, we mainly focus on two exploration strategies, the random walk algorithm (Aldous, 1991; Dembo et al., 2021) and the negative feedback algorithm, whose formal mathematical formulations are described below. **Random walk algorithm** This is a Markovian mechanism; if \(X_{n}=i\) at some time \(n=0,1,2,\ldots\) for some node \(i\), the next state is chosen uniformly at random among the neighbours of \(i\) in the graph: \[\mathbb{P}(X_{n+1}=j|X_{0}=i_{0},\ldots,X_{n-1}=i_{n-1},X_{n}=i) = \left\{\begin{array}{ll}1/d_{i}&\mbox{if $(i,j)$ is an edge}\\ 0&\mbox{if $(i,j)$ is not an edge,}\end{array}\right. \tag{3}\] where \(d_{i}\) is the degree of the node \(i\). **Negative feedback algorithm** For every node \(i\) of the graph and every neighbour \(j\) of \(i\), let \(N_{ij}^{(n)}\) be the number of times the agent moved from node \(i\) to node \(j\) prior to time \(n\) (so that \(N_{ij}^{(0)}=0\) for all nodes \(i,j\)) and denote \[Nmin_{i}^{(n)}=\min_{j:\,(i,j)\mbox{ an edge}}N_{ij}^{(n)},\qquad Smin_{i}^{(n)}=\big{\{}j:\,(i,j)\mbox{ is an edge and }N_{ij}^{(n)}=Nmin_{i}^{(n)}\big{\}},\qquad Kmin_{i}^{(n)}=\mbox{cardinality}(Smin_{i}^{(n)}). \tag{4}\] Then, if \(X_{n}=i\) at some time \(n=0,1,2,\ldots\) for some vertex \(i\), the next state is chosen uniformly at random among the neighbours of \(i\) with the smallest prior selection count. That is, \[\mathbb{P}(X_{n+1}=j|X_{0}=i_{0},\ldots,X_{n-1}=i_{n-1},X_{n}=i) = \left\{\begin{array}{ll}1/Kmin_{i}^{(n)}&\mbox{if $j\in Smin_{i}^{(n)}$}\\ 0&\mbox{otherwise.}\end{array}\right. \tag{5}\] In other words, a neighbour node \(j\) can be chosen only if it is among the least-visited neighbours of the current node \(i\). It is not hard to see that the negative feedback algorithm requires storing the counts of transitions between each pair of nodes joined by an edge. Therefore the algorithm is non-Markovian and does not enjoy the nice properties (e.g. the regeneration property) that the random walk algorithm does. This makes the theoretical analysis of general graphs extremely hard. **Remark 2.1**.: _There are quite a few existing works on the cover time of the random walk algorithm (see Kahn et al. (1989); Feige (1995); Abdullah (2012) and references therein) and its variants (e.g.
lazy random walk (Avin et al., 2008), random walk with heterogeneous step lengths (Guinard and Korman, 2020)). However, to our knowledge, there is no literature considering the cover time problem of any count-based algorithm._ Let us first consider the following specific toy example, which shows the advantage of the negative feedback algorithm over the random walk algorithm. **A toy grid world**. It is a three by three two-dimensional maze as shown in Figure 1. Black grids are obstacles which are not accessible. At starting time \(n=0\), the agent is placed at the "Start" grid. The "End" grid is the target node. The positive reward will not be given until the agent arrives at the "End" grid. One may wonder: under which policy, random walk or negative feedback, does the agent take fewer steps in this simple task? We define \(T_{task}^{\pi}\) to be the first time of arriving at "End" under a policy \(\pi\). Let \(\pi_{rw}\) be the random walk policy and \(\pi_{neg}\) be the negative feedback algorithm. Is \(\mathbb{E}[T_{task}^{\pi_{neg}}]<\mathbb{E}[T_{task}^{\pi_{rw}}]\)? The answer is affirmative, as indicated by the following proposition. **Proposition 2.1**.: _In the toy grid world described above, we have \(\mathbb{E}[T_{task}^{\pi_{neg}}]<\mathbb{E}[T_{task}^{\pi_{rw}}]\equiv 23.\)_ Moreover, we consider the _temporally-persistent/extended_ policy (Dabney et al., 2020), where one can randomly choose an action and perform it for consecutive times. To be specific, the agent first chooses the direction (up, down, right or left) uniformly at random and then chooses the repetition time \(z\sim p(z)\) (\(z=1,2,\ldots\)). We denote such a policy as \(\pi_{per}(p).\) When \(p(z=1)=1\), the policy reduces to the random walk strategy. In this toy grid world, it can be shown that the temporally-persistent strategy is even worse than the random walk method. That is, \(\mathbb{E}[T_{task}^{\pi_{per}(p)}]\geq\mathbb{E}[T_{task}^{\pi_{rw}}]\). **Proposition 2.2**.: _In the same toy grid world as in Proposition 2.1, we have \(\mathbb{E}[T_{task}^{\pi_{rw}}]\leq\mathbb{E}[T_{task}^{\pi_{per}(p)}]\) for any distribution \(p.\)_

Figure 1: A 3 by 3 grid world. The black grid is inaccessible. The arrows represent the actions that can be taken in each grid.

## 3 Local Improvement

Our goal is to provide a theoretical explanation of why one can expect that, at least in some cases, the negative feedback algorithm has a smaller cover time than the random walk. Since a direct analytical analysis of the cover time on an arbitrary graph is extremely difficult, we will instead look at a related quantity in this section. For a node \(j\) of the graph, we denote the first hitting time of \(j\) by \[T_{j}=\inf\{n\geq 1:\,X_{n}=j\}.\] For an arbitrary node index \(i\), \(\mathbb{E}[T_{j}|X_{0}=i]\) represents the expected time to reach vertex \(j\) starting in vertex \(i\). It is intuitively clear that the quantities \(\mathbb{E}[T_{j}|X_{0}=i]\), where \(i,j\) range over pairs of nodes, are strongly related to the cover time. More precisely, we define \[\mu^{+}=\max_{i,j}\mathbb{E}[T_{j}|X_{0}=i],\ \ \mu^{-}=\min_{i,j}\mathbb{E}[T_{j}|X_{0}=i].\] Then for any starting node \(i\) of the graph, it holds \[\mu^{-}H_{m-1}\leq\mathbb{E}[T_{C}]\leq\mu^{+}H_{m-1}, \tag{6}\] where \(m\) is the total number of nodes in the graph and \[H_{k}:=1+1/2+\cdots+1/k\] is the \(k\)th harmonic number; see Matthews (1988) for more detailed explanations.
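As a small numerical illustration of bound (6) (ours, not part of the paper; the 5-node circle and all variable names are arbitrary choices), the expected hitting times of the random walk can be computed exactly by first-step analysis and compared with a Monte Carlo estimate of the cover time:

```python
import random
import numpy as np

graph = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}   # 5-node circle
m = len(graph)

def hitting_times_to(j):
    """Exact E[T_j | X_0 = i] for the simple random walk, for all i != j, from
    the first-step equations h_i = 1 + (1/d_i) * sum_{k ~ i} h_k with h_j = 0."""
    nodes = [i for i in graph if i != j]
    idx = {i: r for r, i in enumerate(nodes)}
    A = np.eye(len(nodes))
    b = np.ones(len(nodes))
    for i in nodes:
        for k in graph[i]:
            if k != j:
                A[idx[i], idx[k]] -= 1.0 / len(graph[i])
    h = np.linalg.solve(A, b)
    return {i: h[idx[i]] for i in nodes}

hits = [h for j in graph for h in hitting_times_to(j).values()]
mu_plus, mu_minus = max(hits), min(hits)
H_m1 = sum(1.0 / k for k in range(1, m))          # harmonic number H_{m-1}

def simulated_cover_time(start=0):
    visited, cur, steps = {start}, start, 0
    while len(visited) < m:
        cur = random.choice(graph[cur]); visited.add(cur); steps += 1
    return steps

estimate = sum(simulated_cover_time() for _ in range(20000)) / 20000
print(mu_minus * H_m1, "<=", estimate, "<=", mu_plus * H_m1)   # bound (6) in action
```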
Therefore, as a substitute for comparing directly the cover time under the negative feedback algorithm and the random walk, it is desirable to compare the expected first hitting times under these two algorithms. For any two arbitrary nodes \(i\) and \(j\) in the graph, a direct comparison of \(\mathbb{E}[T_{j}|X_{0}=i]\) under the two algorithms also seems to be prohibitively difficult. We instead compare other related quantities. Let \(V_{0}=0\) and, for any integer \(k\geq 1\), also let \[V_{k}=\inf\bigl{\{}n>V_{k-1}:\,X_{n}=i\bigr{\}},\] the time of the \(k\)th visit to state \(i\). If \(V_{k-1}=\infty\), then we set \(V_{k}=\infty\) as well. We can think of the time interval \(\{V_{k-1}+1,\ldots,V_{k}\}\) as the \(k\)th excursion outside of the vertex \(i\). Let \[N_{j}=\inf\bigl{\{}k\geq 1:\,T_{j}\leq V_{k}\bigr{\}}\] be the number of the excursion outside of \(i\) during which vertex \(j\) is visited for the first time. It is clear that \(\mathbb{E}[T_{j}|X_{0}=i]\) is closely related to \(\mathbb{E}[N_{j}|X_{0}=i]\). The intuition is that a smaller \(N_{j}\) indicates that the agent spends less time discovering node \(j\). In the rest of the section, we aim to compare the latter quantity between the two algorithms. To make the comparison possible and circumvent the non-Markovian issue, we consider the **local version** of the negative feedback algorithm. _(Local negative feedback algorithm) The mechanism (5) is used_ **only when the agent is in the starting state \(i\)**_. In every other state, the random walk dynamics is used._ With this modification, we are able to show that the value of \(\mathbb{E}[N_{j}|X_{0}=i]\) under the local negative feedback algorithm is no larger than that under the random walk exploration strategy. **Remark 3.1**.: _The technical reason for considering the local negative feedback algorithm is explained here. A nice property of the naive random walk strategy is the regeneration property: everything is reset once the agent returns to the original state. It makes the computation of recursive formulas possible. Unfortunately, the negative feedback algorithm relies on past information and does not have such a regeneration property. The local version of the negative feedback algorithm mitigates this issue. Despite being non-Markovian, regeneration can still happen at a time \(n\) when \(Kmin_{i}^{(n)}=d_{i}\)._ **Theorem 3.1**.: _For any given starting node \(i\), it holds_ \[\mathbb{E}_{\pi_{loc}}[N_{j}|X_{0}=i]\leq\mathbb{E}_{\pi_{rw}}[N_{j}|X_{0}=i] \tag{7}\] _for any node \(j\neq i\) in the graph._ Theorem 3.1 says that, in expectation, the local negative feedback algorithm takes a smaller number of excursions (outside the starting state) to visit any other state. Therefore, a local modification at state \(i\) indeed improves the exploration efficiency, i.e., the agent has a stronger tendency to visit other states rather than return to the initial state.

## 4 Cover Time Improvement

As described in the previous sections, a direct analytical analysis of the cover time is not easy for the negative feedback strategy, whose non-Markovian mechanism makes the computation prohibitively hard. Fortunately, however, we are able to show that the negative feedback strategy is strictly better than the random walk strategy, \(\mathbb{E}_{\pi_{neg}}[T_{C}]<\mathbb{E}_{\pi_{rw}}[T_{C}]\), when the graph admits certain special structures. To start with, we first provide a general property of the negative feedback algorithm, namely that
the worst case of \(T_{C}\) is always bounded. **Theorem 4.1**.: _For any connected graph, there exists a positive integer \(G\) such that \(T_{C}\leq G\). Here \(G\) may depend on the number of nodes in the graph, the maximum degree of a single node and the length of the longest path._ By Theorem 4.1, we know that the negative feedback policy can traverse all the nodes in finite time. By contrast, the random walk policy does not have such a property. Mathematically, for any graph with \(d_{max}\geq 2\) (where \(d_{max}\) is the largest node degree in the graph), it holds \[\mathbb{P}_{\pi_{rw}}(T_{C}>N)>0\quad\text{for any integer }N\in\mathbb{N}. \tag{8}\] Another immediate conclusion from the above theorem is that \(\mathbb{E}_{\pi_{neg}}[T_{C}]\leq G\). But for most graphs, the constant \(G\) obtained in Theorem 4.1 is too loose. In the proof, we can see that \(G\) grows exponentially as the length of the longest path grows. We can get rid of such exponential relationships by taking specific graph structures into account. In the rest of this section, we establish tighter bounds for special graphs including "Star", "Path", "Circle", "Clique" and "Tree", which are the most common graphs studied in the literature on cover time analysis for the random walk policy. **Star Graph**. There is a central node (state) \(0\) and it connects to \(n\) leaf nodes. The starting position is node \(0\). See Figure 2 for a graphical illustration. It is easy to see that the degree of node \(0\) is \(n\) and the degree of each leaf node is \(1\). **Theorem 4.2**.: _In the star graph with \(n\geq 2\), it holds that \(\mathbb{E}_{\pi_{neg}}[T_{C}]=2n-1\) and \(\mathbb{E}_{\pi_{rw}}[T_{C}]=2n(\sum_{i=1}^{n}\frac{1}{i})-1\). Hence, the cover time of the negative feedback policy is strictly smaller than that of the random walk policy._ **Path**. All \((n+1)\) nodes are aligned in a line. Node \(i\) (\(\neq 0\) or \(n\)) is connected to \(i-1\) and \(i+1\). The initial state is state \(0\). In this graph, nodes \(0\) and \(n\) have degree \(1\). All other nodes have degree \(2\). **Theorem 4.3**.: _In the path graph with \(n\geq 2\), it holds that \(\mathbb{E}_{\pi_{neg}}[T_{C}]<n^{2}\equiv\mathbb{E}_{\pi_{rw}}[T_{C}]\). Hence the cover time of the negative feedback policy is strictly smaller than that of the random walk policy._ **Circle**. All \((n+1)\) nodes are aligned in a circle. Node \(i\) is connected to node \((i-1)\%(n+1)\) and node \((i+1)\%(n+1)\), where \(\%\) stands for modulo. The initial state is state \(0\). In this graph, all nodes have degree \(2\). **Theorem 4.4**.: _In the circle graph with \(n\geq 2\), it holds that \(\mathbb{E}_{\pi_{neg}}[T_{C}]<\frac{1}{2}(n+1)n\equiv\mathbb{E}_{\pi_{rw}}[T_{C}]\). Hence the cover time of the negative feedback policy is strictly smaller than that of the random walk policy._ **Clique Graph**. It is a graph of \(n\) nodes such that any pair of nodes has an edge between them. It is not hard to see that every node has degree \(n-1\). **Theorem 4.5**.: _In the clique graph with \(n\geq 3\), the strict inequality \(\mathbb{E}_{\pi_{neg}}[T_{C}]<\mathbb{E}_{\pi_{rw}}[T_{C}]\) holds._ **Tree Graph**. It is a graph of nodes with no cycle. The tree has depth \(H\) and each non-leaf node has at most \(b\) child nodes. **Remark 4.1**.: _In fact, every RL environment can be reformulated as a tree graph if we treat every state-action trajectory as a single node. This idea is usually adopted in counterfactual regret minimization (CFR, Zinkevich et al. (2007))._ **Balanced \(b\)-ary Tree Graph**.
It is a tree graph with depth \(H\) and each non-leaf node has exactly \(b\) children. It can be seen that the balanced \(b\)-ary tree is a special case of a general tree. In the literature, the following result on the cover time under the random walk policy was established in the 1990s (Aldous, 1991). **Proposition 4.6** (Aldous (1991)).: _For a balanced \(b\)-ary tree of depth \(H\), the cover time of the random walk algorithm is asymptotically_ \[2H^{2}b^{H+1}(\log b)/(b-1)\] _as \(H\to\infty\)._ We first show that, under the negative feedback algorithm, there exists an upper bound on the number of visits to each node. Therefore, the algorithm does not waste too much time on already-visited nodes before all nodes have been visited at least once. **Theorem 4.7**.: _In the tree graph, under the negative feedback algorithm, each node is visited at most \(2(b+1)H\) times before all nodes have been visited at least once._ By counting the total number of nodes, a direct application of Theorem 4.7 leads to the conclusion that, in the balanced \(b\)-ary tree graph with depth \(H\), it holds that \(\mathbb{E}_{\pi_{neg}}[T_{C}]\leq 2(b+1)H\frac{b^{H+1}-1}{b-1}\). A more refined analysis gives the following result. **Theorem 4.8**.: _In the balanced \(b\)-ary tree graph with depth \(H\), under the negative feedback algorithm, it holds that \(\mathbb{E}_{\pi_{neg}}[T_{C}]\leq 4H\frac{b+1}{b-1}b^{H}\)._

Figure 2: Upper left: Star graph with initial state at 0. Upper right: Path graph with initial state at 0. Center: Clique graph with an arbitrary initial state. Bottom left: Circle graph with an initial state at 0. Bottom right: a tree graph with the root node as initial state.

Compared with Proposition 4.6, the negative feedback algorithm is asymptotically \(H\) times faster than the random walk algorithm for any fixed \(b\). It is also asymptotically \(\log b\) times faster for any fixed \(H\). Therefore, the negative feedback algorithm improves the search efficiency in terms of both tree width and tree depth. Moreover, there are \(\frac{b^{H+1}-1}{b-1}\) nodes in a \(b\)-ary tree, which indicates that the cover time is no smaller than the order of \(b^{H}\). In other words, the negative feedback algorithm visits each node on average of order \(H\) times, while the random walk algorithm visits each node on average of order \(H^{2}\log b\) times. This is a substantial improvement. In practice, for many board games which can have large action spaces and form very deep trees, the random walk exploration is a less efficient strategy according to our theoretical explanation from the cover time perspective. Additionally, \(4Hb^{H}\frac{b+1}{b-1}\) is actually also a worst-case bound on \(T_{C}\) (in addition to the bound on the expectation \(\mathbb{E}[T_{C}]\)) under the proposed algorithm. By contrast, \(2H^{2}b^{H+1}(\log b)/(b-1)\) is only an upper bound on the expectation \(\mathbb{E}[T_{C}]\). In other words, with non-vanishing probability (in some extreme cases), \(T_{C}\) can be exponentially large under the random walk policy as the number of nodes grows.

## 5 Connections and Discussions

In the previous sections, we established properties of the negative feedback algorithm on finite-node graphs and showed that it indeed improves the cover times of several important graphs. Below, we make connections to the reinforcement learning field and provide practical implications of why the negative feedback strategy is so interesting and important to many popular RL algorithms. Before going into detailed discussions, we would like to clarify that we are not trying to propose any new RL algorithm in this paper.
Negative feedback algorithm considered here is just a counterpart/non-Markovian extension of random walk. It does not rely on any Markov decision process setting or any Bellman equation-related assumptions Puterman (1990). The following discussions are mainly heuristic without mathematical justifications and help readers to realize the importance of the negative feedback strategy. ### Connection with \(\epsilon\)-Greedy Methods The negative-feedback exploration strategy can be easily incorporated into any existing reinforcement learning algorithm. For example, in \(\epsilon\)-greedy type of methods (Sutton, 1995; Wunder et al., 2010), we can replace random action selection by using negative-feedback strategy. With \(1-\epsilon\) probability, agent adopts a learned policy for exploiting the current possible maximum reward. In the literature, such learner can be either model-based method (UCRL2 (Thomas et al., 2010), UCRL2B (Fruit et al., 2020), etc.) or model-free method (\(Q\)-learning (Watkins and Dayan, 1992), SARSA (Sutton and Barto, 2018), etc.). With \(\epsilon\) probability, exploration policy is used for exploring the entire state space and can help escaping the local optimum. As discussed in previous sections, the negative feedback algorithm is indeed a better strategy than random work from theoretical perspective. As a result, negative feedback strategy can theoretically improve the efficiency of any \(\epsilon\)-greedy-type RL algorithms in tree-like environment. ### Connection with RL Exploration Methods A fundamental problem in reinforcement learning is how to explore the state space faster, especially when the environment only provides sparse rewards or even no reward before reaching the final state. This question has received a lot of attentions, with approaches such as intrinsic-reward learning (Chentanez et al., 2004; Bellemare et al., 2016), curiosity-driven algorithm (Pathak et al., 2017; Burda et al., 2018), etc. Among those, maximum-entropy exploration policy (Hazan et al., 2019) arouses special interests in recent years. The policy needs to iteratively learn the unknown MDP. For un-explored / less-explored node \(s\) (see definition of \(m\)-known state in Hazan et al. (2019)), they select action, \[\arg\min_{a}N(s,a),\] where \(N(s,a)\) is the cumulative count number of choosing \(a\) at state \(s\) up to the current round. In other words, the negative feedback algorithm considered in our paper serves as an important role in learning unknown transition probabilities. Our theories answer that why popular RL exploration methods prefer to using "favor least" mechanism rather than using simple random walk strategy. ### Connection with UCB method The upper confidence bound (UCB) algorithm (Auer, 2002; Auer et al., 2002) is probably one of the most famous methods in balancing exploration and exploitation. At time \(n\) and state \(s\), the agent chooses the best action according to the following criteria, \[\arg\max_{a}\big{\{}\hat{r}(s,a)+c\sqrt{\frac{\log n}{N(s,a)}}\big{\}}, \tag{9}\] where \(\hat{r}(s,a)\) is the sample average of (accumulated) returns by choosing action \(a\) at state \(s\), \(N(s,a)\) is again the cumulative count number of choosing \(a\) at state \(s\) up to time \(n\). In many scenarios like Grid Word or chess board games, the reward is very sparse. Therefore, the reward estimates \(\hat{r}(s,a)\equiv 0\) before the agent can reach a non-zero reward state. 
(9) can be reduced to \(\arg\max_{a}\sqrt{\frac{\log n}{N(s,a)}}\) which is equivalent to \(\arg\min_{a}\ N(s,a)\). The latter criterion is exactly the negative feedback algorithm. Hence, by previous theorems, we know UCB algorithm is indeed theoretically better than naive random action selection in terms of exploration efficiency in very sparse reward environments. ### Connection with Monte Carlo Tree Search In computer science, Monte Carlo tree search (MCTS, Browne et al. (2012); Silver et al. (2016)) is a heuristic search algorithm for some kinds of decision processes, most notably those adopted in software that plays board games. In that context, MCTS is usually used to solve the game tree. MCTS consists of four main steps, selection, expansion, simulation and back propagation. Recall the fact that, in expansion step, MCTS will always randomly choose the un-visited node rather than choose the visited nodes. This exactly shares the same spirit as negative feedback algorithm does. Moreover, in selection step, the agent usually chooses Upper Confidence Trees (UCT, Kocsis and Szepesvari (2006); Browne et al. (2012); Couetoux et al. (2011)) criterion, \[\arg\max_{a}\frac{w_{a}}{n_{a}}+c\sqrt{\frac{\log N}{n_{a}}},\] (\(n_{a}\) is the number of simulations after choosing action \(a\); \(w_{a}\) is the number of wins after choosing action \(a\), \(N\) is the number of simulations after the current node) to select the successive child nodes. If the tuning constant \(c\rightarrow+\infty\), then UCT also reduces to \(\arg\min_{a}n_{a}\) which is exactly how negative feedback algorithm chooses the next action. Therefore, our new theories (partially) explains why combination of selection and expansion steps in MCTS is more efficient and effective than naive random tree search. ### Non-tabular Cases Up to now, we have only focused on graphs with finite nodes (i.e. discrete-state RL environments). One may wonder can negative feedback algorithm be extended to non-tabular cases (i.e. the state space is continuous instead of being discrete)? In below, we provide an approximate version of negative feedback algorithm without theoretical justification. For arbitrary state \(s\) and action \(a\), we define the cumulative approximate visiting number as \[N^{(n)}_{approx}(s,a)=\sum_{t^{\prime}=1}^{n-1}\kappa(s_{t^{\prime}},s) \mathbf{1}\{a_{t^{\prime}}=a\} \tag{10}\] at time \(n\). Here kernel \(\kappa(s_{1},s_{2})\) quantifies the similarity between two states. If states \(s_{1}\) and \(s_{2}\) are close, the value of \(\kappa(s_{1},s_{2})\) will be close to 1. Otherwise, \(\kappa(s_{1},s_{2})\) is close to 0. For example, in a Euclidean \(\mathbb{R}^{2}\) space, a state \(s\) can be represented by a two-dimensional coordinate, \((s_{x},s_{y})\). The kernel function can be simply chosen as the indicator function, \(\kappa(s_{1},s_{2}):=\mathbf{1}\{|s_{1x}-s_{2x}|\leq\delta\text{ and }|s_{1y}-s_{2y}|\leq\delta\}\), where \(\delta\) is a tuning parameter which adjusts the affinity level. Then the agent will choose action \(a\) in favor of the least approximate visiting number. That is, \[\pi^{(n)}_{approx}(s)=a;\text{ if }a=\arg\min_{a^{\prime}}N^{(n)}_{approx}(s,a^{ \prime}), \tag{11}\] where ties break randomly. ## 6 Conclusion In this work, we study the cover time problem of a non-Markovian algorithm, negative feedback strategy, which is based on "favor least" principle. To our knowledge, our work is the first theoretical work of this kind rather than empirical/synthetic study. 
We have made attempts to show why the negative feedback algorithm is better than the naive random walk policy. Specifically, we establish that the local version of the negative feedback algorithm leads to a smaller expected number of excursions to visit any node in an arbitrary graph. We also establish that the negative feedback algorithm has a smaller cover time on many special graphs, including star, clique, and tree graphs. Connections are made with several important RL algorithms, including maximum-entropy exploration, UCB and MCTS methods. Various experimental results support our new theories and findings. The results presented in this work may help practitioners to better understand different exploration strategies from a mathematical angle. Theoretical analyses of cover time comparisons in more complex graph structures or continuous-state environments can be considered as possible directions for future work.
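(As a small numerical corroboration of the star-graph comparison in Theorem 4.2, the following Monte Carlo sketch can be run; it is our own illustration rather than the paper's experimental code, and the choice n = 10 and all names are arbitrary.)

```python
# Monte Carlo sanity check of Theorem 4.2 on a star graph with n leaves:
# the negative feedback walk covers it in exactly 2n - 1 steps, while the
# simple random walk needs 2n*H_n - 1 steps on average.
import random
from collections import defaultdict

n = 10
star = {0: list(range(1, n + 1)), **{i: [0] for i in range(1, n + 1)}}

def cover(step):
    counts, visited, cur, t = defaultdict(int), {0}, 0, 0
    while len(visited) < n + 1:
        nxt = step(cur, counts)
        counts[(cur, nxt)] += 1; cur = nxt; visited.add(cur); t += 1
    return t

def rw(cur, counts):
    return random.choice(star[cur])

def neg(cur, counts):
    least = min(counts[(cur, j)] for j in star[cur])
    return random.choice([j for j in star[cur] if counts[(cur, j)] == least])

trials = 20000
print(sum(cover(neg) for _ in range(trials)) / trials)   # ~ 2n - 1 = 19
print(sum(cover(rw)  for _ in range(trials)) / trials)   # ~ 2n*H_n - 1 (about 57.6)
H_n = sum(1 / k for k in range(1, n + 1))
print(2 * n - 1, 2 * n * H_n - 1)                        # the closed-form values
```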
2302.08417
GEMMFIP: Unifying GEMM in BLIS
Matrix libraries often focus on achieving high performance for problems considered to be either "small" or "large", as these two scenarios tend to respond best to different optimization strategies. We propose a unified technique for implementing matrix operations like general matrix multiplication (GEMM) that can achieve high performance for both small and large problem sizes. The key is to fuse packing -- an operation that copies data to a contiguous layout in memory and which is critical for large matrix performance -- with the first computational "pass" over that data. This boosts performance across the problem size spectrum. As a result, tuning general-purpose libraries becomes simpler since it obviates the need to carefully express and parameterize logic that chooses between a "small matrix" strategy and a "large matrix" strategy. A prototype implementation of the technique built with the BLAS-like Library Instantiation Software (BLIS) framework is described and performance on a range of architectures is reported.
RuQing G. Xu, Field G. Van Zee, Robert A. van de Geijn
2023-02-16T16:52:49Z
http://arxiv.org/abs/2302.08417v2
# GEMMFIP: Unifying GEMM in BLIS ###### Abstract Matrix libraries often focus on achieving high performance for problems considered to be either "small" or "large", as these two scenarios tend to respond best to different optimization strategies. We propose a unified technique for implementing matrix operations like general matrix multiplication (gemm) that can achieve high performance for both small and large problem sizes. The key is to fuse packing - an operation that copies data to a contiguous layout in memory and which is critical for large matrix performance - with the first computational "pass" over that data. This boosts performance across the problem size spectrum. As a result, tuning general-purpose libraries becomes simpler since it obviates the need to carefully express and parameterize logic that chooses between a "small matrix" strategy and a "large matrix" strategy. A prototype implementation of the technique built with the BLAS-like Library Instantiation Software (BLIS) framework is described and performance on a range of architectures is reported. ## 1 Introduction The Basic Linear Algebra Subprograms (BLAS) [3, 4, 11] interface has had a profound impact on scientific software development. It is now also of great importance to fields like machine learning and data analytics. By coding applications in terms of the BLAS, portable high performance can be achieved. For this reason, whenever a new high-performance computer architecture arrives, the instantiation of this interface is a high priority. Historically, it was expected that vendors leverage their expertise with the architecture to create proprietary matrix libraries, with key components coded in assembly language. IBM's algorithms and architectures approach demonstrated that by co-designing architectures, compilers, and libraries it was possible to achieve high performance with implementations coded in a high-level language (Fortran) [1]. This inspired a number of open-source efforts to provide portable implementations of the BLAS, including the Automatic Tuned Linear Algebra Software (ATLAS) [16], the GotoBLAS [6, 7], the OpenBLAS [23] (a fork of GotoBLAS), and the BLAS-like Library Instantiation Software (BLIS) [20, 22] upon which this paper implements its approach. An added benefit of BLIS is that it supports an analytical model for determining blocking parameters so that autotuning can be avoided [12]. Across all publically available efforts towards matrix libraries, a fundamental problem that complicates the implementation of matrix-matrix operations, a. k. a. level-3 BLAS, is that packing to improve data locality, which is necessary for high performance when targeting large matrix sizes, actually _impedes_ high performance for smaller matrix sizes. It has been thought that this is an inherent problem that can only be solved by implementing separate code paths for small and large matrix sizes and then selecting one of them based on the problem size characteristics. In this paper, we provide preliminary evidence that this conventional wisdom _may be wrong_: the two code paths can be unified in a way that mostly preserves the benefits of both. This is achieved by integrating packing - which is optional in the small code path - more tightly with the computation. Importantly, the problem sizes where optional packing should be turned on or off can be more easily justified and encoded. 
## 2 Goto's Algorithm and its Instantiation in BLIS Goto's algorithm [6, 7, 8] was first developed for CPUs with two levels of cache and continues to be the algorithm that underlies most if not all vendor and open-source implementations of the level-3 BLAS. This section gives a high-level description of this algorithm and its instantiation in BLIS. ### Goto's algorithm for large matrices Goto's algorithm structures a prototypical gemm, \(C:=AB+C\), where \(A,B\), and \(C\) are matrices of size \(m\times k,k\times n\) and \(m\times n\) respectively, as five loops around the update of a small submatrix of \(C\) called the microtile, as illustrated in Figure 1. We only give the highlights here since the algorithm and this picture have been explained in many previously-published papers. In this discussion and the figure, \(m_{R}\) and \(n_{R}\) denote register blocking sizes, while \(m_{C}\), \(n_{C}\), and \(k_{C}\) denote cache blocking parameters. At the core is the _microkernel_, which updates an \(m_{R}\times n_{R}\)_microtile_ of \(C\) by multiplying a \(m_{R}\times k_{C}\)_micropanel_ of \(A\) by a \(k_{C}\times n_{R}\) micropanel of \(B\). On a typical architecture, the microtile of \(C\) is kept in registers while the micropanels of \(B\) and \(A\) are streamed from the L1 and L2 caches, respectively. Blocks of \(A\) and row panels of \(B\) are rearranged (packed) at strategic points in the algorithm to allow memory access with unit stride1 as well as to align micropanels from \(A\) and \(B\) so that they can fit into their designated levels of cache as indicated in Figure 2. We refer to this rearranged storage as "packed memory" in contrast to "unpacked memory" used to store the original \(A\) and \(B\) matrices. Footnote 1: This refers to loading consecutive memory addresses, which allows most CPUs to execute in the fewest number of cycles. ### BLIS's refactoring of Goto's algorithm The BLIS implementation of Goto's algorithm recognizes that as long as the microkernel is expressed with assembly code2, high performance can be achieved even if all remaining parts of the algorithm above the microkernel (including packing) are written in C. This reduces how much code must be customized for the gemm operation. It also allows other matrix-matrix operations supported by the BLAS (level-3 BLAS), such as Hermitian matrix-matrix multiplication (hemm), Hermitian rank-\(k\) update (herk), triangular matrix-matrix multiplication (trmm), and triangular solve with multiple right-hand sides (trsm), to employ the same microkernel [20, 22]. This contrasts with the original GotoBLAS implementation, inherited by OpenBLAS, where the two loops around the microkernel form what we call a _macrokernel_ that must be customized in assembly code for different gemm-like operations [7].3 Footnote 2: This can take the form of so-called extended inline assembly code in addition to pure assembly code. Vector intrinsics _may_ also work, depending on the compiler and instruction set being emitted. Footnote 3: When the assembly region extends to encompass the macrokernel, an OpenBLAS-like implementation may choose to maintain separate macrokernels for each operation or insert conditional logic into a single macrokernel that allows the code to handle multiple similar operations (e.g. syrk and herk). The former case yields more regions of assembly code with only minor differences between them while the latter results in less assembly code that is nonetheless more difficult to decipher due to its embedded branching. 
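To make the loop structure above concrete, here is a minimal NumPy sketch (ours, not BLIS code) of the five loops around the microkernel; the blocking values are arbitrary toy choices, and the `.copy()` calls merely stand in for the real packed, contiguous buffer layouts.

```python
import numpy as np

MC, NC, KC, MR, NR = 8, 8, 4, 2, 4      # toy cache / register blocking parameters

def microkernel(Cmicro, Apan, Bpan):
    """Rank-kc update of one microtile of C; this is the only piece that is
    hand-optimized assembly in a real implementation."""
    Cmicro += Apan @ Bpan

def gemm_goto(A, B, C):
    m, k = A.shape
    _, n = B.shape
    for jc in range(0, n, NC):                          # 5th loop (columns of B/C)
        nc = min(NC, n - jc)
        for pc in range(0, k, KC):                      # 4th loop (the k dimension)
            kc = min(KC, k - pc)
            Bt = B[pc:pc+kc, jc:jc+nc].copy()           # "pack" a row panel of B
            for ic in range(0, m, MC):                  # 3rd loop (rows of A/C)
                mc = min(MC, m - ic)
                At = A[ic:ic+mc, pc:pc+kc].copy()       # "pack" a block of A
                for jr in range(0, nc, NR):             # 2nd loop (micropanels of Bt)
                    nr = min(NR, nc - jr)
                    for ir in range(0, mc, MR):         # 1st loop (micropanels of At)
                        mr = min(MR, mc - ir)
                        microkernel(C[ic+ir:ic+ir+mr, jc+jr:jc+jr+nr],
                                    At[ir:ir+mr, :], Bt[:, jr:jr+nr])

A, B = np.random.rand(13, 11), np.random.rand(11, 10)
C = np.zeros((13, 10))
gemm_goto(A, B, C)
assert np.allclose(C, A @ B)
```

In the real implementation the packed buffers use an interleaved micropanel layout and the microkernel is written in assembly, but the control flow is the same.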
An important detail is how to handle situations where the matrix dimensions are not whole multiples of \(m_{R}\), \(n_{R}\), and/or \(k_{C}\) - the so-called "fringe" or edge cases. During packing, BLIS pads micropanels with zeroes when a fringe case is encountered. And since the microkernel author only needs to target microtiles of one size - \(m_{R}\times n_{R}\) - the job of writing and optimizing microkernels becomes much simpler. The result is an easier-to-develop and easier-to-maintain code base at the cost of a minor decrease in performance for certain problem sizes [18].

Figure 1: Goto's algorithm for gemm as five loops around the microkernel. This diagram, which is often used when explaining the fundamental techniques that underly the BLIS implementation of gemm, was modified from a similar image first published by Zee and Smith [21] and is used with permission.

### SUP: Supporting small-ish matrices

More recently, projects and implementations like LIBXSMM [9] and BLASFEO [5] have sought to improve matrix-matrix performance for small-sized problems. Typically, these solutions either skip packing or require data to be pre-packed to exploit the fact that for small problems, matrices \(A\) and \(B\) have a chance to fit into the L2 (or even L1) cache in their entirety, thus avoiding the \(\mathcal{O}(mk+nk)\) cost of packing. BLIS's current approach to cases where at least one matrix dimension is small is referred to as Skinny/UnPacked (SUP). It combines the following techniques:

* It skips packing.
* Rather than isolating architecture specifics only in the microkernel, it employs a _millikernel_ that absorbs the first loop around the microkernel into the kernel primitive in an effort to reduce the frame stack cost (that is, memory overhead due to subroutine calls). This millikernel is written in a manner similar to that of a corresponding microkernel (i.e., in assembly code).
* When a millikernel encounters a fringe case during its last iteration, it dispatches a helper microkernel that specializes in that size. Alternatively, a set of fringe cases may be called in sequence to emulate the net effect of a single, larger microkernel (a toy sketch of this decomposition appears below). For example, let us assume the size of the microtile is \(6\times 8\) and the millikernel encounters a fringe that is \(5\times 8\). This could result in a call to a single helper microkernel that updates a \(5\times 8\) microtile, or it could result in a call to a \(3\times 8\) helper microkernel followed by one that computes the remaining \(2\times 8\) part.
* When the \(n\) dimension presents a fringe case, a helper millikernel is dispatched - one that dispatches to a different subset of helper microkernels. Building on the previous example, if the millikernel is called on an \(m\times 5\) submatrix, a helper millikernel that operates on at most 5 columns of \(B\) is dispatched, which will then loop over a microkernel-like region of code targeting \(6\times 5\) and eventually, if needed at the fringe, dispatch helper microkernels whose \(m_{R}\) values are less than 6.

Figure 2: A schematic illustrating how the storage format of a micropanel in memory affects how cache lines map to the associativity sets of a cache. Top: Unpacked columns with arbitrary leading dimensions tend to cause more cache lines to be mapped to fewer sets of the cache, leading to inefficient use and unnecessary evictions. Bottom: The same submatrix stored contiguously causes cache lines to be mapped across all sets, leading to fewer evictions and, by proxy, fewer subsequent cache misses. A more rigorous analysis can be found in Low et al. [12].
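To illustrate what calling "a set of fringe cases in sequence" amounts to, here is a toy sketch (ours); the kernel sizes merely mirror the base-2 spanning set discussed next, and the function name is made up.

```python
# Toy sketch of covering a leftover (fringe) dimension with a "spanning" set
# of helper kernel heights, e.g. m_R = 6 with base-2 helpers {4, 2, 1}.
def fringe_plan(m_left, sizes=(6, 4, 2, 1)):
    """Greedily split a leftover dimension into helper-kernel heights."""
    plan = []
    for s in sizes:
        while m_left >= s:
            plan.append(s)
            m_left -= s
    return plan

print(fringe_plan(5))    # [4, 1]: a 4-row helper followed by a 1-row helper
print(fringe_plan(17))   # [6, 6, 4, 1]
```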
While these changes to the conventional BLIS code path are conceptually simple, the examples above suggest that a nontrivial amount of code is required to support them. Specifically, if a base-2 "spanning" set of kernels is employed, based on a \(6\times 8\) microtile, the following millikernels and microkernels would be needed: \[\begin{array}{llll}6\times 8(\textsc{mm})&6\times 4(\textsc{hm})&6\times 2(\textsc{hm})&6\times 1(\textsc{hm})\\ 4\times 8(\textsc{m}\mu)&4\times 4(\textsc{h}\mu)&4\times 2(\textsc{h}\mu)&4\times 1(\textsc{h}\mu)\\ 2\times 8(\textsc{m}\mu)&2\times 4(\textsc{h}\mu)&2\times 2(\textsc{h}\mu)&2\times 1(\textsc{h}\mu)\\ 1\times 8(\textsc{m}\mu)&1\times 4(\textsc{h}\mu)&1\times 2(\textsc{h}\mu)&1\times 1(\textsc{h}\mu)\end{array}\] Here, mm denotes the main millikernel that is called from the 2nd loop, and m\(\mu\) denotes its helper microkernels. Similarly, the hm labels denote helper millikernels, each of which calls its own helper microkernels labeled by h\(\mu\).

### Combining the two code paths

Figure 3 reports the performance of the conventional BLIS gemm and of the gemm implemented via the SUP approach (gemmsup) on two different architectures. Clearly, SUP outperforms the conventional algorithm for small problem sizes. However, as the problem size becomes large, data unavoidably spills out of the cache, degrading SUP's performance and creating periodic performance dips on the Xeon E5-2690 processor. Importantly, to roll out solutions to a matrix library, the cross-over point must be determined and encoded. The heuristic for this is complicated by the fact that it is not only a function of the problem size but also of the row and column strides for matrices \(A\) and \(B\). Additionally, there may be an architecture-dependent range of sizes where both algorithms suffer performance degradation (e.g., around \(m=n=k=200\) on the Xeon E5-2690). In such a region, unaligned SUP may spill data out of the cache, while the \(\mathcal{O}(mk+nk)\) packing cost for the conventional algorithm is non-negligible. We will soon show that fusing the packing with computation can address these issues.

## 3 A Unified Approach

It would appear that the packing process inherently imposes unreasonable overhead for small- to medium-sized matrix cases. We now discuss how that is not always so.

Figure 3: Performance of BLIS's conventional gemm and of gemm implemented with the SUP approach (gemmsup) on the Intel Xeon E5-2690 (left) and AWS Graviton 3 with Arm SVE (right) architectures. Note the different cross-over points for each system.
```
for \(\cdots\) (the 5th and 4th loops around the microkernel proceed as before but without packing) \(\cdots\) do
    for the first iteration of the 3rd loop around the microkernel do
        for the first iteration of the 2nd loop around the microkernel do
            Pack the first micropanel of \(B_{p,j}\rightarrow\widetilde{B}_{p,j}\);
            for each iteration of the 1st loop around the microkernel do
                Pack the current micropanel of \(A_{i,p}\rightarrow\widetilde{A}_{i,p}\);
                Call the microkernel with packed micropanels \(\widetilde{A}_{i,p}\) and \(\widetilde{B}_{p,j}\);
            end for
            Upon completion, \(A_{i,p}\) is left packed in \(\widetilde{A}_{i,p}\);
        end for
        for the remaining iterations of the 2nd loop around the microkernel do
            Pack the current micropanel of \(B_{p,j}\rightarrow\widetilde{B}_{p,j}\);
            for each iteration of the 1st loop around the microkernel do
                Call the microkernel with packed micropanels \(\widetilde{A}_{i,p}\) and \(\widetilde{B}_{p,j}\);
            end for
        end for
        Upon completion, \(B_{p,j}\) is left packed in \(\widetilde{B}_{p,j}\);
    end for
    for the remaining iterations of the 3rd loop around the microkernel do
        for the first iteration of the 2nd loop around the microkernel do
            for each iteration of the 1st loop around the microkernel do
                Pack the current micropanel of \(A_{i,p}\rightarrow\widetilde{A}_{i,p}\);
                Call the microkernel with packed micropanels \(\widetilde{A}_{i,p}\) and \(\widetilde{B}_{p,j}\);
            end for
            Upon completion, \(A_{i,p}\) is left packed in \(\widetilde{A}_{i,p}\);
        end for
        for the remaining iterations of the 2nd loop around the microkernel do
            for each iteration of the 1st loop around the microkernel do
                Call the microkernel with packed micropanels \(\widetilde{A}_{i,p}\) and \(\widetilde{B}_{p,j}\);
            end for
        end for
    end for
end for
```
**Algorithm 1** Algorithm with interleaved packing and computation.

### Interleaving of packing and computing

The basic idea behind unifying is simple: modify Goto's algorithm so that elements of the micropanels of \(A\) and \(B\) are packed _just before_ their first use by a call to the microkernel. More precisely, consider that Goto's algorithm has proceeded to the first iteration of the 4th loop, in which a column-panel of \(A\) is multiplied by a row-panel of \(B\), except let us assume that neither is packed. This operation can be implemented by modifying Goto's algorithm as described in Algorithm 1. It allows the microkernel from the conventional BLIS implementation to be used without modification. The benefit of this interleaving is that after packing, micropanels of \(\widetilde{B}_{p,j}\) and \(\widetilde{A}_{i,p}\) are still in the L1 cache when used for the first time by the microkernel, while their eventual migration back to the L3 and L2 caches, respectively, is (likely) masked by computation. This can be expected to improve performance in general, but in particular for small matrices.

### FIP: Fused packing in the microkernel

While the simple solution mentioned in the previous subsection should reduce the net execution time of the microkernel for the first time a packed micropanel is involved in computation, packed data still moves from registers to the L1 cache and back. This "reflowing" of data adds cycles to the total execution time, and can sometimes trigger a performance penalty for _read-after-write_ (RAW) data hazards, especially on some x86_64 architectures. Our solution is to fuse individual packing instructions into the microkernel itself. We call this technique "fused-in packing" (FIP).
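To make the interleaving of Algorithm 1 concrete before drilling down to FIP's finer granularity, here is a minimal NumPy sketch (ours, not BLIS code) of one iteration of the 4th loop with pack-on-first-use; the buffer names, blocking values and the plain copy used for "packing" are illustrative simplifications.

```python
import numpy as np

def gemm_panel_interleaved(A_pan, B_pan, C_pan, MC=8, MR=2, NR=4):
    """One iteration of the 4th loop: C_pan += A_pan @ B_pan, where A_pan (m x kc)
    and B_pan (kc x n) start out unpacked and are copied into packed buffers
    only just before their first use by the microkernel."""
    m, kc = A_pan.shape
    _, n = B_pan.shape
    Bt = np.empty_like(B_pan)                 # packed row panel of B (filled lazily)
    B_packed = False
    for ic in range(0, m, MC):                # 3rd loop
        mc = min(MC, m - ic)
        At = np.empty((mc, kc))               # packed block of A (filled lazily)
        A_packed = False
        for jr in range(0, n, NR):            # 2nd loop
            nr = min(NR, n - jr)
            if not B_packed:                  # pack this micropanel of B on first use
                Bt[:, jr:jr+nr] = B_pan[:, jr:jr+nr]
            for ir in range(0, mc, MR):       # 1st loop
                mr = min(MR, mc - ir)
                if not A_packed:              # pack this micropanel of A on first use
                    At[ir:ir+mr, :] = A_pan[ic+ir:ic+ir+mr, :]
                C_pan[ic+ir:ic+ir+mr, jr:jr+nr] += At[ir:ir+mr, :] @ Bt[:, jr:jr+nr]
            A_packed = True                   # the A block is fully packed after the first pass
        B_packed = True                       # the B panel is fully packed after the first ic block

A = np.random.rand(10, 4); B = np.random.rand(4, 9); C = np.zeros((10, 9))
gemm_panel_interleaved(A, B, C)
assert np.allclose(C, A @ B)
```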
Conceptually, this is similar to the previous approach, except that the unit of data around which the packing and computation are interleaved is reduced from whole micropanels of \(A\) and \(B\) to a mere handful of elements. This has at least two benefits: (1) Data that is loaded into registers in the normal course of packing is reused immediately for useful computation; (2) The cost of loading data from the cache into registers is partially hidden by the computation on previously-loaded values. Implementing FIP requires instantiating four cases of the microkernel: * The conventional microkernel, where microtiles from both sides are already packed. * A microkernel where \(\widetilde{A}_{i,p}\) is already packed but \(B_{p,j}\) is unpacked (and thus needs to be packed). * A microkernel where \(\widetilde{B}_{p,j}\) is already packed but \(A_{i,p}\) is unpacked (and thus needs to be packed). * A microkernel where both \(A_{i,p}\) and \(B_{p,j}\) are unpacked. Each of these might be encountered while computing \(C:=AB+C\) depending on the sizes of the matrices. ### Options in fused packings There are some situations in which there is little benefit from packing \(B_{p,j}\) and/or \(A_{i,p}\). Consider a gemm call where \(C\) is \(m\times n\), \(A\) is \(m\times k\), and \(B\) is \(k\times n\). If \(n\leq n_{R}\), then the 2nd loop around the microkernel is only executed once, and hence packing \(A_{i,p}\) would yield no benefit. And if \(m\leq m_{R}\) then \(B_{p,j}\) is similarly not reused. The resulting spectrum of options can be summarized as \begin{tabular}{|c||c|c|} \hline & \(n\leq n_{R}\) & \(n>n_{R}\) \\ \hline \hline \(m\leq m_{R}\) & no packing & pack \(A_{i,p}\) \\ \hline \(m>m_{R}\) & pack \(B_{p,j}\) & pack \(A_{i,p}\) and \(B_{p,j}\) \\ \hline \end{tabular} This provides a decent heuristic from which further tuning may be explored. Another observation concerns the packing of \(A_{i,p}\). Let us assume that \(A\) is stored in column-major order. If \[\mathrm{csb}(A)\times k_{C}\leq(\text{L2 cache size in bytes}),\] where \(\mathrm{csb}(A)\) equals the stride in bytes between elements in a row of \(A\), then the unpacked storage of \(A_{p}\) will never cause an L2 cache spill - that is, elements of \(A_{p}\) will not be evicted from the L2 cache by other elements. This constitutes another case where we can skip the packing of \(A\). If \(A\) is stored in row-major order, the condition becomes: \[m_{C}\times\mathrm{rsb}(A)\leq(\text{L2 cache size in bytes}),\] where \(\mathrm{rsb}(A)\) equals the stride in bytes between elements in a column of \(A\). The details behind these inequalities go beyond the scope of this paper and require an understanding of the results by Low et al. [12]. Figure 4 illustrates how interleaving packing and computing as Algorithm 1 benefits the performance of gemm and how deploying the FIP technique further improves it. Additional experiments are available in Section 4. ### Coding effort We implemented FIP kernels based on the microkernel portion of the SUP code path in BLIS introduced in Section 2.3. Though Sections 3.2 and 3.3 imply that the approach requires handling all four cases within one set of microkernels, most of the redundancies between the four cases can be tamed with some careful refactoring. BLIS kernels are written with inlined assembly code in C. Meanwhile, fused packing merely requires data from registers to be moved to memory with hard-codable strides. 
These two facts allowed us to leverage the C preprocessor to expand a single code template into the four specializations in a fashion shown in Algorithm 2.

```
#define cond_inst_false(_1)
#define cond_inst_true(_1) _1

#define kernel_def(pa, pb)                                                      \
void fused_kernel_##pa##pb(/* func params */)                                   \
{                                                                                \
    __asm__ volatile                                                             \
    (                                                                            \
        /* ... */                                                                \
                                                                                 \
        /* Multiply and accumulate: ymm0 holds A, ymm2 holds B and ymm4 holds C fractions. */ \
        "vfmadd231pd %ymm0, %ymm2, %ymm4\n\t"                                    \
                                                                                 \
        /* Optionally store ymm2 to the packing space indicated by rdx. */       \
        cond_inst_##pb("vmovapd %ymm2, 16(%rdx)\n\t")                            \
        /* ... */                                                                \
    );                                                                           \
}

kernel_def(true, true)
kernel_def(true, false)
kernel_def(false, true)
kernel_def(false, false)
```
**Algorithm 2** A code sample for multi-instantiating SUP-based kernels with packing instructions conditionally (depending on the values of macro arguments pa and pb) fused in to handle the four cases required by the FIP approach.

### Multithreading

We now take a brief look at the multithreading potential of the FIP technique, which is left for future research on this approach. BLIS's refactoring of Goto's algorithm conveniently exposes five loops coded in C99 in which multithreading can be introduced [14]. In practice, parallelism is usually gained in the 2nd loop around the microkernel to let multiple threads operate on the same \(m_{C}\times k_{C}\) tile from \(A\), or in the 3rd loop around the microkernel to let multiple threads operate on the same \(k_{C}\times n_{C}\) panel from \(B\). On a multi-core chip with a non-uniform memory access (NUMA) architecture, cores usually share L2 caches within each NUMA node, making it reasonable to parallelize over the 2nd loop within and the 3rd loop across NUMA nodes.

Figure 4: Performance improvements observed on the Intel® Xeon E5-2690 (top) and the AWS Graviton 3 (bottom) processors from interleaving packing and computing and optionally fusing their kernels (FIP/gemmfip).

With this insight, if we want to apply it to the consumption of tiles from \(A\), it seems appropriate to let each thread work on the unpacked memory first and collaboratively write the microtiles they have loaded to the packing space for later consumption. Since threads begin referring to the packed data only from their second iteration of the 2nd loop around the microkernel, for \(A_{p}\) it is only required that all threads working on the same \(m_{C}\times k_{C}\) tile from \(A\) synchronize _once_ after finishing their first iteration there. This synchronization cost is identical to collaboratively packing everything from \(A\) beforehand. Furthermore, if one still wants to ensure each tile from unpacked memory is accessed only once, we can even change the order in which the iterations of the 1st loop around the microkernel are executed. Letting each thread enter the microkernel at a different microtile of \(C\) allows the requested \(m_{C}\times k_{C}\) tile to become readily packed after each thread has finished its first \(\frac{m_{C}}{m_{R}}/n_{\text{thr}}\) iterations, as is illustrated in Figure 5. Here \(n_{\text{thr}}\) denotes the number of threads working on the same \(m_{C}\times k_{C}\) tile from \(A\), and \(m_{C}\) is assumed to be a multiple of \(m_{R}n_{\text{thr}}\). On the \(B\) side, in each iteration within the 2nd loop around the microkernel, the microtile from \(B\) is reused from the thread-private L1 cache, and no distinct gain in performance can be expected from reusing micropanels of \(B\) from the L3 cache. In addition, thread synchronization across NUMA sockets is likely to cause nontrivial overhead.
This suggests that it is better to let each thread pack its relevant panels into a separate space despite the modest storage redundancy. The threads then flow data from their private L1 cache to the registers without interfering with each other. ## 4 Performance results We now illustrate the benefits of the discussed technique on a broad range of architectures, for a single core. ### Experimental setup We implemented gemm with the FIP approach (gemmflip) based on BLIS (Release 0.9.0). Our implementation can be integrated back into BLIS as an alternative backend for gemm through its _sandbox_ interface. The macrokernel structure of our implementation is essentially the same as BLIS's refactoring of Goto's algorithm in Figure 1 with a few additional lines to handle kernel selection for the four cases mentioned in Section 3.2. A non-trivial modification here is for x86_64 kernels. As we have already mentioned in Section 2.3, to minimize costs associated with pushing to the frame stack, BLIS's SUP kernels include the first loop around the microkernel (i. e. the millikernel) in the assembly region of the code. In our implementations, this was also incorporated for x86_64-based architectures since they have a limited number of general-purpose registers compared to Arm(r), Power ISA, and RISC-V(r), making it harder for compilers to transition between microkernel calls without interfering with memory consistency or causing frame stack RAW data hazards. We developed microkernels to support double-precision gemmfp on 3 architectures: Intel AVX2, Arm NEON and Arm SVE. Performance experiments were performed on five different processors: Intel(r) Xeon E5-2690 (3500 MHz, compiler: GCC 10.2), AMD Epy(tm) 7R32 with x86_64 AVX2 architecture (3300 MHz, compiler: GCC 11.3), AWS Graviton 2 with Arm NEON (2500 MHz, compiler: Clang 14.0), AWS Graviton 3 with Arm SVE (2600 MHz, compiler: Clang 14.0), and Apple M2 with Arm NEON (3490 MHz, compiler: Apple Clang 14.0). All our experiments used a single core. The uppermost depicted \(y\)-axis value represents the single-core theoretical peak performance for each processor tested. Figure 5: An illustration of how multithreading can be added to our FIP approach. Each thread starts from a different unpacked micropanel of \(A_{p}\) and packs the micropanel to its designated space in \(\widetilde{A}_{i,p}\) while performing the corresponding update of a microtile of \(C:C_{i,j}\). Once all threads have finished packing their \(\frac{m_{\text{c}}}{m_{R}}\Big{/}n_{\text{thr}}\) micropanels, _one_ synchronization occurs so they can use all micropanels in \(\widetilde{A}_{p}\). This collaborative packing and computing only happen when each thread is working on its first \(B_{p,j}\) micropanel within the \(2^{\text{nd}}\) loop around the microkernel. Figure 6: Performance on the Intel® Xeon E5-2690 (left) and AMD EpycTM 7R32 with x86_64 AVX2 (right) architectures. Figure 7: Performance on various Arm architectures. Left: AWS Graviton 2 with Arm NEON. Middle: AWS Graviton 3 with Arm SVE architecture. Right: Apple M2 with Arm NEON. ### Evaluation In Figures 6 and 7 (top), we report the performance of the conventional BLIS code path, its SUP code path, and gemmfp. In the top graphs, the leading dimension (LDim) - that is, the stride between logically adjacent elements within in a single row - equals the (row) dimension of the matrices. 
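As a side note on notation, the following NumPy sketch (ours, purely illustrative and unrelated to the benchmarked kernels) shows what a leading dimension larger than the logical row dimension means for a column-major matrix, which is exactly the storage situation the LDim-based experiments vary.

```python
import numpy as np

# Illustrative only: emulate a column-major buffer whose leading dimension
# (ldim) exceeds the logical row dimension m of the matrix stored in it.
m, n, ldim = 4, 3, 10
buf = np.zeros((ldim, n), order="F")   # column-major backing storage
A = buf[:m, :]                         # logical m x n matrix with leading dimension ldim

A[:, :] = np.arange(m * n).reshape(m, n)
# Elements within a column stay contiguous, but logically adjacent elements
# within a row are ldim entries apart in memory (10 here instead of 4),
# so computing directly on such unpacked data touches memory less densely.
print(A.strides[1] // A.itemsize)      # -> 10
```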
In Figures 6 and 7 (center-top), the leading dimension equals 2000, which can negatively affect performance, particularly when computing on unpacked data. We see that the curve for gemmfp uniformly matches or outperforms the conventional BLIS and SUP paths while smoothly bridging the "medium-sized" region where both underperform. This result supports the claim that the new method can provide a unified approach that yields high performance across the range of problem sizes without having to deploy heuristics to determine a crossover point between the paths. ### Comparison with other BLAS implementations We now compare how our implementation of gemmsup performs against other matrix libraries, including BLIS itself, OpenBLAS, and numerous vendor-specific BLAS implementations. On the x86_64 side, Intel's Math Kernel Library (MKL) is akin to the "gold standard." Since MKL is a closed-source implementation, it is difficult to discern what allows that code to perform so well. In Figure 6 (center-bottom and bottom), our implementation handily outperforms OpenBLAS and matches or surpasses BLIS, especially for medium and even large problem sizes. On AMD's ZenTM-microarchitecture-based processors, BLIS is integrated into the AMD Optimizing CPU Libraries (AOCL) with vendor-side tunings. AMD also has an open-source fork of BLIS known as AMD BLIS, which we have built and tested apart from AOCL. On that processor, gemmfp essentially matches AOCL's performance (with the exception of a narrow range from about 150 to 250 when LDim \(=m\)) while handily outperforming BLIS, AMD BLIS, and OpenBLAS. Meanwhile, MKL yields mediocre (and inconsistent) performance on the 7R32. We imagine that Intel made a conscious decision to throttle the performance of MKL when running on Epyc hardware in hopes of discouraging their users from switching to their competitor's products. For Arm processors, Arm Performance Libraries (ArmPL) provides a state-of-the-art matrix library solution. Comparisons of gemmfp against ArmPL, OpenBLAS, and BLIS are plotted in Figure 7 (center-bottom and bottom). On the AWS Graviton 2, gemmfp outperforms ArmPL and slightly underperforms OpenBLAS. This defect is presumably because our microkernels for unpacked memories still leave room for improvement on this specific microarchitecture, as can be deduced from the fact that gemmsup performance using a similar microkernel lags far behind. For Apple's M2 processor, its Accelerate framework was, upon closer inspection, found to be using a hidden co-processor [2] shared by the whole chip instead of the Arm NEON pipelines, and the peak performance turned out to be around 380 GFLOPS/sec for \(m=n=k=(\text{LDim of }A,B,\text{and }C)\approx 800\), regardless of threading options given. This difference on the hardware side makes it impossible to measure an isolated single-core throughput. Therefore, the Accelerate framework is producing curves in Figure 7 that exceed the \(y\)-axis limits of the graphs. Finally, on the Arm SVE architecture of the AWS Graviton 3 processor, gemmfp yields the highest throughput and the best consistency among all tested libraries, demonstrating its value in this relatively nascent architecture. We have demonstrated the method's benefits by creating implementations for multiple architectures. Only on the Intel Xeon E5-2690 processor does another implementation - Intel's MKL - outperform FIP. Given MKL's reputation for achieving extremely high performance, we believe this clearly illustrates the importance of the work. 
We expect these techniques to be adopted by BLIS and other libraries in the near future, thus further positively impacting the user community, regardless of which library they use. The results in this paper suggest future work that will have an extended impact. The most obvious is that the techniques can be applied to other precisions and other matrix-matrix multiplications (level-3 BLAS). In addition, a body of papers shows how BLIS's refactoring of Goto's algorithm can be used to attain high performance and/or reduce the development effort for various gemm-like operations: * In the work by Zee [19], BLIS's 1m method leverages the real-domain microkernel to implement complex-domain matrix-matrix multiplication operations by cleverly encoding the definition of complex scalar multiplication within the packing stage of Goto's algorithm. * The work by Huang et al. [10] shows how Strassen's algorithm can already attain high performance for rank-\(k\) updates and relatively small matrices. * Yu et al. [17] give a high-performing implementation for solving the \(k\)-nearest neighbor problem by fusing computation into Goto's algorithm. * In the work by Smith and Geijn [15], it is reasoned and demonstrated that Goto's algorithm is but one algorithm in a larger family of algorithms, the Multilevel Optimized Matrix-matrix Multiplication Sandbox (MOMMS) family. The idea is that as the speed ratio between CPU arithmetics and memory access becomes worse in the future, blocking for caches must be modified. * Matthews [13] uses BLIS's refactoring of Goto's algorithm to instead implement tensor contractions, yielding TBLIS, by recognizing that rearrangement of data to cast tensor contraction as a matrix multiplication can be incorporated into packing. Insights demonstrated in this paper can potentially accord base-performance benefits for all these algorithms. Finally, if the strategy in Section 3.5 could bring our insight's unifying effect to the multithreading regime, perhaps it could extend to and yield speedup on GPUs whose intermediate-level memory operates in a cache-like way or as a register-controlled scratchpad memory (SPM)4 as well. Footnote 4: For GPUs whose L1-level storage is an SPM with controllable direct memory access (DMA), it is expected that packing via DMA will provide better performance. ## 6 Code Availability Our code is developed under a fork of BLIS available at [https://github.com/xrq-phys/blis](https://github.com/xrq-phys/blis). It may be enabled as an "sandbox" that is optionally integrated into the library by configuring BLIS as: ./configure -s gemmflip -t none x86_64 for x86_64 microarchitectures or: ./configure -s gemmflip -t none arm64 for Arm hardware. ## Acknowledgements We thank Prof. S. Todo for supervising RuQing Xu's research and providing access to various architectures. We also thank members of the BLIS community for their input. RuQing Xu is funded by The University of Tokyo's GSGC scholarship. The researchers at The University of Texas at Austin are funded in part by the National Science Foundation (Award CSSI-2003921) and gifts from AMD, Arm, and Oracle. _Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation._
2310.07535
Fairness under Covariate Shift: Improving Fairness-Accuracy tradeoff with few Unlabeled Test Samples
Covariate shift in the test data is a common practical phenomena that can significantly downgrade both the accuracy and the fairness performance of the model. Ensuring fairness across different sensitive groups under covariate shift is of paramount importance due to societal implications like criminal justice. We operate in the unsupervised regime where only a small set of unlabeled test samples along with a labeled training set is available. Towards improving fairness under this highly challenging yet realistic scenario, we make three contributions. First is a novel composite weighted entropy based objective for prediction accuracy which is optimized along with a representation matching loss for fairness. We experimentally verify that optimizing with our loss formulation outperforms a number of state-of-the-art baselines in the pareto sense with respect to the fairness-accuracy tradeoff on several standard datasets. Our second contribution is a new setting we term Asymmetric Covariate Shift that, to the best of our knowledge, has not been studied before. Asymmetric covariate shift occurs when distribution of covariates of one group shifts significantly compared to the other groups and this happens when a dominant group is over-represented. While this setting is extremely challenging for current baselines, We show that our proposed method significantly outperforms them. Our third contribution is theoretical, where we show that our weighted entropy term along with prediction loss on the training set approximates test loss under covariate shift. Empirically and through formal sample complexity bounds, we show that this approximation to the unseen test loss does not depend on importance sampling variance which affects many other baselines.
Shreyas Havaldar, Jatin Chauhan, Karthikeyan Shanmugam, Jay Nandy, Aravindan Raghuveer
2023-10-11T14:39:51Z
http://arxiv.org/abs/2310.07535v3
# Improving Fairness-Accuracy tradeoff with few Test Samples under Covariate Shift ###### Abstract Covariate shift in the test data can significantly downgrade both the accuracy and the fairness performance of the model. Ensuring fairness across different sensitive groups in such settings is of paramount importance due to societal implications like criminal justice. We operate under the unsupervised regime where only a small set of unlabeled test samples along with a labeled training set is available. Towards this problem, we make three contributions. First is a novel composite weighted entropy based objective for prediction accuracy which is optimized along with a representation matching loss for fairness. We experimentally verify that optimizing with our loss formulation outperforms a number of state-of-the-art baselines in the pareto sense with respect to the fairness-accuracy tradeoff on several standard datasets. Our second contribution is a new setting we term Asymmetric Covariate Shift that, to the best of our knowledge, has not been studied before. Asymmetric covariate shift occurs when distribution of covariates of one group shifts significantly compared to the other groups and this happens when a dominant group is over-represented. While this setting is extremely challenging for current baselines, We show that our proposed method significantly outperforms them. Our third contribution is theoretical, where we show that our weighted entropy term along with prediction loss on the training set approximates test loss under covariate shift. Empirically and through formal sample complexity bounds, we show that this approximation to the unseen test loss does not depend on importance sampling variance which affects many other baselines. ## 1 Introduction Predictions of machine learnt models are used to make important decisions that have societal impact, like in criminal justice, loan approvals, to name a few. Therefore, there is a lot of interest in understanding, analyzing and improving model performance along other dimensions like robustness [50], model generalization [62] and fairness [42]. In this work, we focus on the algorithmic fairness aspect. Datasets used for training could be biased in the sense that some groups may be under-represented, thus biasing classifier decisions towards the over-represented group or the bias could be in terms of undesirable causal pathways between sensitive attribute and the label in the real world data generating mechanism [42]. It has often been observed [7], [9] that algorithms that optimize predictive accuracy that are fed pre-existing biases further learn and then propagate the same biases. Improving fairness of learning models has received significant attention from the research community [39]. Another common challenge that models deployed in real world situations face is that of _Covariate Shift_. In covariate shift, the distribution of covariates (feature vectors) across training and testing changes, however the optimal label predictor conditioned on input remains the same. Therefore the model may make wrong predictions when deployed or more seriously can slowly degrade over time when the covariate shift is gradual. Due to the practical importance of this problem, there has been a significant amount of research in detecting covariate shift and modeling methodologies to address it [63, 44]. The problem that we study in this paper is at the juncture of the above two hard problems: ensuring fairness under covariate shift. 
While this question has not received much attention, some recent works like [45] have begun to address this problem. We also introduce a new variant of covariate shift called _Assymetric covariate shift_ where distribution of covariates of one group shifts significantly compared to the other groups. Asymmetric covariate shift is a very common practical situation when there is long tail of underrepresented groups in the training data. For example, consider the popular and important task of click through prediction of advertisements [33]. Small and medium sized advertisers have poorer representation in the training data because they do not spend as much as the large businesses on advertising. Therefore during inference SMB advertisement clicks will see significantly more co-variate shift compared to those clicks on ads from large advertisers. Also, due to the nature of the problem of covariate shift, access to large labeled test is often not possible. In summary, the problem we aim to tackle is "Provide a high fairness-accuracy tradeoff under both symmetric and asymmetric covariate shift while having acesss to a very small set of unlabeled test samples". To this end, we make three key contributions in this paper. 1. We introduce a composite objective to approximate the prediction loss on the unlabeled test that involves _a novel weighted entropy objective on the set of unlabeled test samples_ along with ERM objective on the labeled training samples. We optimize these weights using _min-max_ optimization that implicitly drives these weights to importance sampling ratios with no density estimation steps. We show that our proposed objective has _provably_ lower variance compared to the importance sampling based methods. This composite objective is then combined with a representation matching loss to train fair classifiers. (Section 5). 2. We introduce a new type of covariate shift called _asymmetric covariate shift_ wherein one protected group exhibits large covariate shift while the other does not. We highlight that fairness-accuracy tradeoff degrades under this case for existing methods (Section 4). We show empirically that the combination of our objective and representation matching achieves the best accuracy fairness-tradeoff even in this case. 3. By incorporating our proposed weighted entropy objective with the Wasserstein based representation matching across sub-groups, we empirically compare against a number of baseline methods on benchmark datasets. In particular, we achieve the best accuracy-equalized odds tradeoff in the _pareto sense_. ## 2 Related Work **Techniques for imposing fairness:**_Pre-processing_ techniques aim to transform the dataset [10, 58, 18, 27] followed by a standard training. _In-processing_ methods directly modify the learning algorithms using techniques, such as, adversarial learning [36, 66], [1, 15, 16, 19, 65, 11]. _Post-processing_ approaches, primarily focus on modifying the outcomes of the predictive models in order to make unbiased predictions [43, 71, 25]. **Distribution Shift:** Research addressing distribution shift in machine learning is vast and is growing. The general case considers a joint distribution shift between training and testing data [5, 6, 40] resulting in techniques like domain adaptation [21], distributionally robust optimization [46, 17] and invariant risk minimization and its variants [3, 30, 48]. A survey of various methods and their relative performance is discussed by [62]. 
We focus on the problem of _Covariate Shift_ where the _Conditional Label_ distribution is invariant while there is a shift in the marginal distribution of the covariates across training and test samples. This classical setup is studied by [49, 55, 23]. _Importance Weighting_ is one of the prominently used techniques for tackling covariate shifts [54, 32]. However, they are known to have high variance under minor shift scenarios [13]. Recently methods that emerged as the de-facto approaches to tackle distribution shifts include popular entropy minimization [59], pseudo-labeling [20; 64], batch normalization adaptation [47; 41], because of their wide applicability and superior performance. Our work provides a connection between a version of weighted entropy minimization and traditional importance sampling based loss which may be of independent interest. **Fairness under Distribution shift:** The work by [45] is by far the most aligned to ours as they propose a method that is robust to covariate shift while ensuring fairness when unlabeled test data is available. However, this requires the density estimation of training and test distribution that is not efficient at higher dimensions and small number of test samples. In contrast our method avoids density estimation and uses a weighted version of entropy minimization that is constrained suitably to reflect importance sampling ratios implicitly. [37] proposed a method for fair classification under the worst-case weighting of the data via an iterative procedure, but it is in the agnostic setting where test data is not available. [51] studied fairness under shifts through a causal lens but the method requires access to the causal graph, separating sets and other non-trivial data priors. [68] proposed FARF, an adaptive method for learning in an online setting under fairness constraints, but is clearly different from the static shift setting considered in our work. [52] proposed a MAML based algorithm to learn under fairness constraints, but it requires access to labeled test data. [2] propose a consistency regularization technique to ensure fairness under subpopulation and domain shifts under a specific model, while we consider covariate shift. ## 3 Problem Setup Let \(\mathcal{X}\subseteq\mathcal{R}^{d}\) be the \(d\) dimensional feature space for covariates, \(\mathcal{A}\) be the space of categorical _group_ attributes and \(\mathcal{Y}\) be the space of class labels. In this work, we consider \(\mathcal{A}=\{0,1\}\) and \(\mathcal{Y}=\{0,1\}\). Let \(X\in\mathcal{X},\mathrm{A}\in\mathcal{A},\;\mathrm{Y}\in\mathcal{Y}\) be realizations from the space. We consider a training dataset \(\mathcal{D}^{S}=\{(X_{i},\mathrm{A}_{i},\mathrm{Y}_{i})|i\in[n]\}\) where every tuple \((X_{i},\mathrm{A}_{i},\mathrm{Y}_{i})\in\mathcal{X}\times\mathcal{A}\times \mathcal{Y}\). We also have an _unlabeled_ test dataset, \(\mathcal{D}^{T}=\{X_{i},\mathrm{A}_{i}|i\in[m]\}\). We focus on the setup where \(m<<n\). The training samples \((X_{i},A_{i},Y_{i}\in\mathcal{D}^{S})\) are sampled i.i.d from distribution \(\mathbb{P}^{S}(X,\mathrm{Y},\mathrm{A})\) while the unlabeled test instances are sampled from \(\mathbb{P}^{T}(X,\mathrm{A})\). Let \(\mathcal{F}:\mathcal{X}\rightarrow[0,1]\) be the space of soft prediction models. 
In this work, we will consider \(\mathsf{F}\in\mathcal{F}\) of the form \(\mathsf{F}=h\circ g\) where \(g(X)\in\mathbb{R}^{k}\) (for some dimension \(k>0\)), is a representation that is being learnt while \(h(g(X))\in[0,1]\) provides the soft prediction. Note that we don't consider A as an input to \(\mathsf{F}\), as explained in the work of [69]. \(\mathsf{F}\) is assumed to be parametrized via \(\theta\). Instead of representing the network as \(\mathsf{F}_{\theta}\), we drop the subscript and simply use \(\mathsf{F}\) when its clear from the context. The class prediction probabilities from \(\mathsf{F}\) are denoted with \(P(\hat{\mathrm{Y}}=y|X_{i})\), where \(y\in\{0,1\}\). The supervised in-distribution training of \(\mathsf{F}\) is done by minimizing the _empirical risk_, \(\widehat{\mathsf{ER}}^{S}\) as the proxy for _population risk_, \(\mathcal{R}^{S}\). Both risk measures are computed using the _Cross Entropy (CE)_ loss for classification (correspondingly we use \(\widehat{\mathsf{ER}}^{T}\) and \(\mathcal{R}^{T}\) over the _test distribution_ for \(\mathsf{F}\)). \[\mathcal{R}^{S}=\mathbb{E}_{\mathbb{P}^{S}(X,\mathrm{A},Y)}\left(-\log P(\hat {\mathrm{Y}}=Y|X)\right),\widehat{\mathsf{ER}}^{S}=\frac{1}{n}\sum_{(X_{i},Y_ {i},A_{i})\in\mathcal{D}^{S}}\left(-\log P(\hat{\mathrm{Y}}=Y_{i}|X_{i})\right) \tag{1}\] ### Covariate Shift Assumption For our work, we adopt the _covariate shift_ assumption as in [49]. Covariate shift assumption implies that \(\mathbb{P}^{S}(\mathrm{Y}|X,\mathrm{A})=\mathbb{P}^{T}(\mathrm{Y}|X,\mathrm{ A})\). In other words, shift in distribution only affects the joint distribution of covariates and sensitive attribute, i.e. \(\mathbb{P}^{S}(X,\mathrm{A})\neq\mathbb{P}^{T}(X,\mathrm{A})\). We note that our setup is identical to a recent work of fairness under covariate shift by [45]. We also define and focus on a special case of covariate shift called _asymmetric covariate shift_. **Definition 3.1** (Asymmetric Covariate Shift).: Asymmetric covariate shift occurs when distribution of covariates of one group shifts while the other does not, i.e. \(\mathbb{P}^{T}(X|\mathrm{A}=1)\neq\mathbb{P}^{S}(X|\mathrm{A}=1)\) while \(\mathbb{P}^{T}(X|\mathrm{A}=0)=\mathbb{P}^{S}(X|\mathrm{A}=0)\) in addition to \(\mathbb{P}^{S}(\mathrm{Y}|X,\mathrm{A})=\mathbb{P}^{T}(\mathrm{Y}|X,\mathrm{ A})\) This type of covariate shift occurs when a sub-group is over represented (sufficiently capturing all parts of the domain of interest in the training data) while the other sub-group being under represented and observed only in one part of the domain. In the test distribution, covariates of the under-represented group assume a more drastic shift. ### Fairness Measure To quantify fairness, we follow [45] and use _Equalized Odds (EOdds)_, proposed by [24]: \(\Delta_{\mathrm{EOdds}}=\frac{1}{2}\sum_{y\in\{0,1\}}|P(\hat{\mathrm{Y}}=1| \mathrm{A}=0,\mathrm{Y}=y)-P(\hat{\mathrm{Y}}=1|\mathrm{A}=1,\mathrm{Y}=y)|\). EOdds requires parity in both true positive rates and false positive rates across the groups. [24] have raised several concerns regarding other widely used fairness metrics, e.g., Demographic Parity (DP) and Equalized Opportunity (EOpp). Therefore, we don't emphasize them in this work. Another way to interpret EOdds is that it requires \(\mathrm{I}(\hat{\mathrm{Y}};\mathrm{A}|\mathrm{Y})\) to be small, where \(\mathrm{I}(;|\cdot)\) is the _conditional mutual information_ measure. 
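For reference, the equalized-odds gap defined above can be estimated from hard predictions as in the following sketch (ours, not from the paper); `y_true`, `y_pred`, and `group` are illustrative names for the label, prediction, and sensitive-attribute arrays.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Empirical Delta_EOdds: average over y in {0, 1} of the absolute gap in
    P(Y_hat = 1 | A = a, Y = y) between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gap = 0.0
    for y in (0, 1):
        rates = []
        for a in (0, 1):
            mask = (group == a) & (y_true == y)
            rates.append(y_pred[mask].mean() if mask.any() else 0.0)
        gap += abs(rates[0] - rates[1])
    return gap / 2.0

# A group-symmetric predictor has zero gap on this toy example.
y  = np.array([0, 0, 1, 1, 0, 0, 1, 1])
a  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
yp = np.array([0, 1, 1, 1, 0, 1, 1, 1])
print(equalized_odds_gap(y, yp, a))  # -> 0.0
```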
Ideally, we are interested in a classifier, \(F\) that minimizes the objective: \(\mathcal{R}^{T}+\lambda\mathrm{I}_{T}(\hat{\mathrm{Y}};\mathrm{A}|\mathrm{Y})\); where \(\mathrm{I}_{T}(\cdot)\) is the mutual information measure with respect to the test distribution. However, EOdds metric requires the true labels \(Y\) from the test distribution. Therefore, to ground our work with appropriate theoretical justification, we consider optimizing for a related weaker notion, called _accuracy parity_, i.e. \(\Delta_{\mathrm{Apar}}=|P(\hat{\mathrm{Y}}\neq\mathrm{Y}|\mathrm{A}=0)-P(\hat {\mathrm{Y}}\neq\mathrm{Y}|\mathrm{A}=1)|\). In information theoretic terms, minimizing accuracy parity entails keeping \(\mathrm{I}_{T}(\hat{\mathrm{Y}}\neq Y;\mathrm{A})\) small. We now state the main goal of this work: \[\textbf{Objective}\quad\min_{\mathbf{F}_{\theta}}\mathcal{R}^{T}+\lambda \Delta_{\mathrm{Apar}}. \tag{2}\] ### Accuracy Parity via Representation Matching Our objective is to learn a highly accurate classifier on the test distribution while ensuring accuracy parity as in (2). Despite the lack of test labels, accuracy parity admits a simpler sufficient condition: Train a classifier \(\mathsf{F}=h\circ g(X)\) by matching representation \(g(X)\) across the protected sub groups and learning a classifier on top of that representation [70]. Several variants for representation matching loss have been proposed in the literature for both classification [26; 61] and regression [69; 12]. For implementation ease, we pick Wasserstein-2 metric to impose representation matching. We recall the definition of Wasserstein distance: **Definition 3.2**.: Let \((\mathcal{M},d)\) be a metric space and \(P_{p}(\mathcal{M})\) denote the collection of all probability measures \(\mu\) on \(\mathcal{M}\) with finite \(p^{th}\) moment. Then the \(p\)-th Wasserstein distance between measures \(\mu\) and \(\nu\) both \(\in P_{p}(\mathcal{M})\) is given by: \(\mathcal{W}_{p}(\mu,\nu)=\left(\inf_{\gamma}\int_{\mathcal{M}\times\mathcal{M }}d(x,y)^{p}d\gamma(x,y)\right)^{\frac{1}{p}}\); \(\gamma\in\Gamma(\mu,\nu)\), where \(\Gamma(\mu,\nu)\) denotes the collection of all measures on \(\mathcal{M}\times\mathcal{M}\) with marginals \(\mu\) and \(\nu\) respectively. We minimize the \(\mathcal{W}_{2}\) between the representation \(g(\cdot)\) of the test samples from both groups. Empirically, our representation matching loss is given by: \(\hat{\mathcal{L}}_{Wass}(\mathcal{D}^{T})=\mathcal{W}_{p}(\hat{\mu},\hat{\nu} ),\hat{\mu}=\frac{\sum_{(\hat{\mathrm{X}}_{i},\hat{A}_{i}=0)\in\mathcal{D}^{T }}\delta_{g}(X_{i})}{|(\hat{\mathrm{X}}_{i},\hat{A}_{i}=0)\in\mathcal{D}^{T}|}\), \(\hat{\nu}=\frac{\sum_{(\hat{\mathrm{X}}_{i},\hat{A}_{i}=1)\in\mathcal{D}^{T}} \delta_{g}(X_{i})}{|(\hat{\mathrm{X}}_{i},\hat{A}_{i}=1)\in\mathcal{D}^{T}|}\) We arrive at the following objective which is of central interest in the paper: \[\min_{F_{\theta}=h\circ g}\widehat{\mathsf{E}}\widehat{\mathsf{R}}^{T}+ \lambda\hat{\mathcal{L}}_{Wass}(\mathcal{D}^{T}) \tag{3}\] Figure 1: Asymmetric Shift Illustrated ## 4 Issues with Existing Methods **Naive Representation Matching:** Since we don't have labels for the test set one cannot implement the first term in (3). It is natural to optimize the following objective: \(\widehat{\mathsf{FR}}^{S}+\lambda\hat{\mathcal{L}}_{Wass}(\mathcal{D}^{T})\) where the first term optimizes prediction accuracy on labeled training data while the second term matches representation across groups in the unlabeled test. 
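For concreteness, below is a minimal sketch (ours; the paper does not prescribe an implementation) of the empirical Wasserstein term \(\hat{\mathcal{L}}_{Wass}(\mathcal{D}^{T})\) appearing in (3) and in the naive objective above, computed with the POT library's exact solver on uniform empirical measures over the two groups' test representations. A training implementation would need a differentiable surrogate (e.g., a sliced or entropic approximation); this only illustrates the quantity being matched.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def wasserstein2_matching(g_test, group):
    """Empirical W2 distance between the test representations of the two
    groups, with uniform weights on each group's samples."""
    z0, z1 = g_test[group == 0], g_test[group == 1]
    a = np.full(len(z0), 1.0 / len(z0))
    b = np.full(len(z1), 1.0 / len(z1))
    M = ot.dist(z0, z1)                 # pairwise squared Euclidean costs
    return np.sqrt(ot.emd2(a, b, M))    # exact OT cost, square root gives W2

# Toy usage on a 2-D representation space with a random group assignment.
rng = np.random.default_rng(0)
g_test = rng.normal(size=(60, 2))
group = (rng.random(60) < 0.5).astype(int)
print(wasserstein2_matching(g_test, group))
```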
We illustrate that under asymmetric covariate shift, this above objective is ineffective. The issue is illustrated best through Figures 0(a) and 0(b). \(A1\) and \(B1\) represent group \(1\) and \(0\) feature distributions in the training set. Under asymmetric covariate shift, \(B2\approx B1\) while group \(1\) shifts drastically to \(A2\). Now, representation matching loss on the test would map \(A2\) and \(B2\) to the same region in the range space of \(g(\cdot)\) as in Fig. 0(a). However, the classifier \(h\) would be exclusively trained on samples from group \(B\) (i.e. \(g(B1)\)) although both \(A2\) and \(B2\) overlap there as shown in Fig. 0(b). Training predictors on training samples but representation matching under test creates this issue. This highlights also the central issue we tackle in this work. Adversarial debiasing [67] is another method that in principle does representation matching and suffers drastically due to the same issue. **Distributional Robustness Methods:** Another option to implement (3) would be to use a distributional robust learner (DRO) on the source distribution simultaneously with the representation matching penalty for the target. We consider a very recent SOTA method RGD-Exp [31] that implements a form of DRO. We effectively replace \(\widehat{\mathsf{FR}}^{T}\) from eqn (3) with a robust loss term from the paper and perform the same optimization as us and notice that it does not achieve as good a accuracy-fairness tradeoff as our algorithm, thus establishing that trivially combining a SOTA distributionally robust method with Wasserstein Loss (2nd term from eqn. 3: \(\hat{\mathcal{L}}_{Wass}(\mathcal{D}^{T}))\) does not suffice to achieve fairness under shift and something more nuanced is required. **Importance Sampling/Density Ratio Estimation based methods:** Another way to implement (3) is to use importance sampled prediction loss on training samples to mimic the test loss (first term) in (3). For this, one estimates ratio between training and test density directly using KLIEP/LSIF losses ([55; 28]) or perform density estimation which does not scale in higher dimensions. Sample complexity of these techniques directly scales with importance sampling variance which is large with very few test samples. We show this via formal sample complexity bounds in Section 5.1 and empirically in Figure 7 in the appendix where we see large variances in accuracy for these methods. Robust Shift Fair Method of [45] also involves density estimation steps which suffer from the same issue. ## 5 Method and Algorithm Recall that the objective we are interested in is (3). One needs a proxy for the first term due to lack of labels. From considerations in the previous section, training has to be done in a manner that can tackle covariate shift despite using representation matching. Building over the analysis from the previous section, we derive a novel objective in Theorem 5.1 based on the weighted entropy over instances in \(\mathcal{D}^{T}\) along with empirical loss over \(\mathcal{D}^{S}\) and show that is an upper bound to \(\mathcal{R}^{T}\). **Theorem 5.1**.: _Suppose that \(\mathbb{P}^{T}(\cdot)\) and \(\mathbb{P}^{S}(\cdot)\) are absolutely continuous with respect to each other over domain \(\mathcal{X}\). Let \(\epsilon\in\mathbb{R}^{+}\) be such that \(\frac{\mathbb{P}^{T}(\mathrm{Y}=y|X)}{P(\hat{\mathrm{Y}}=y|X)}\leq\epsilon\), for \(y\in\{0,1\}\) almost surely with respect to distribution \(\mathbb{P}^{T}(X)\). 
Then, we can upper bound \(\mathcal{R}^{T}\) using \(\mathcal{R}^{S}\) along with an unsupervised objective over \(\mathbb{P}^{T}\) as:_ \[\mathcal{R}^{T}\leq\mathcal{R}^{S}+\epsilon\times\mathbb{E}_{ \mathbb{P}^{T}(X)}\left[e^{\left(-\frac{\mathbb{P}^{S}(X)}{\mathbb{P}^{T}(X)} \right)}\mathcal{H}(\hat{\mathrm{Y}}|\mathcal{X})\right] \tag{4}\] _where \(\mathcal{H}(\hat{\mathrm{Y}}|\mathcal{X})=\sum_{y\in\{0,1\}}-P(\hat{\mathrm{ Y}}=y|X)\log(P(\hat{\mathrm{Y}}=y|X))\) is the conditional entropy of the label given a sample \(X\)._ Proof.: The proof is relegated to the supplementary section A.1. **Note**: Sections marked as A.x, B.y and C.z refer to sections in the supplementary section. The mild assumption on \(\epsilon\) in the theorem is also justified via extensive experiments in section B.4.3. We _emphasize_ that this result also provides an important connection and a rationale for using entropy based objectives as an unsupervised adaptation objective from an importance sampling point of view that has been missing in the literature [59; 57]. Entropy objective is imposed on points that are more typical with respect to the test than the training. Conversely, in the region where samples are less likely with respect to the test distribution, since it has been optimized for label prediction as part of training, the entropy objective is not imposed strongly. The above bound however hinges on the assumption that pointwise in the domain \(\mathcal{X}\), \(\mathsf{F}\) approximates the true soft predictor by at most a constant factor \(\epsilon\). To ensure a small value of \(\epsilon\), we resort to pre-training \(\mathsf{F}\) with only \(\mathcal{D}^{S}\) samples for a few epochs before imposing any other type of regularization. ### Theoretical Analysis The most widely used objective to optimize for L.H.S of (4), i.e. \(\mathcal{R}^{T}\), leverages _importance sampling_[55], which we denote as \(\mathcal{R}_{IS}\) here for clarity. We denote R.H.S of (4) by \(\mathcal{R}_{WE}\). Our method is motivated by the R.H.S of (4). Here, we compare the generalization bounds for \(\mathcal{R}_{IS}\) and \(\mathcal{R}_{WE}\). We make the following assumptions to simplify the analysis as the task is to compare \(\mathcal{R}_{IS}\) against \(\mathcal{R}_{WE}\) only, however some of these can be relaxed trivially. **Assumption 5.2**.: * Let \(\Theta=\{\theta_{1}\dots\theta_{k}\}\) be finite parameter space. * Let the losses \(l_{1}(\cdot)=-log(P_{\theta}(\hat{Y}=Y|X))\) and \(l_{2}(\cdot)=\sum_{y\in\{0,1\}}-P_{\theta}(\hat{Y}=y|X)\log P_{\theta}(\hat{Y }=y|X)\) be bounded between \([0,1]\) in the domain \(\{0,1\}\times\mathcal{X}\) for all \(\theta\in\Theta\). This is not a heavy assumption and can be achieved via appropriate Lipschitz log loss over bounded domain. * Denoting the important weights \(z(X)=\frac{\mathbb{P}^{T}(X)}{\mathbb{P}^{S}(X)}\) (assuming we have access to exact importance weights), let \(\sup\limits_{X\in\mathcal{X}}z(X)=M\) and the variance of the weights with respect to the training distribution be \(\sigma^{2}\). 
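For reference, the importance-sampled objective referred to here is the standard one; the display below is our reconstruction from the definitions above (it is not quoted from the paper) and records the identity that motivates it under the covariate shift assumption:

\[\widehat{\mathcal{R}}_{IS}(\theta)=\frac{1}{|\mathcal{D}^{S}|}\sum_{(X_{i},Y_{i})\in\mathcal{D}^{S}}z(X_{i})\left(-\log P_{\theta}(\hat{\mathrm{Y}}=Y_{i}|X_{i})\right),\qquad\mathbb{E}_{\mathbb{P}^{S}}\left[z(X)\,\ell(X,Y)\right]=\mathbb{E}_{\mathbb{P}^{T}}\left[\ell(X,Y)\right],\]

where \(\ell\) denotes the prediction loss; the second equality holds because \(\mathbb{P}^{S}(\mathrm{Y}|X)=\mathbb{P}^{T}(\mathrm{Y}|X)\) under covariate shift, and it is precisely the spread of \(z(X)\) that drives the \(\sigma^{2}\) and \(M\) terms in the bound below.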
For the \(\mathcal{R}_{IS}\) objective, we have the following result.

**Theorem 5.3**.: _Under Assumption 5.2, with probability \(1-\delta\) over the draws of \(\mathcal{D}^{S}\sim\mathbb{P}^{S}\), we have \(\forall\theta\in\Theta\): \(\mathbb{E}_{\mathbb{P}^{S}}[\mathcal{R}_{IS}(\theta)]\leq\widehat{\mathcal{R}}_{IS}(\theta)+\frac{2M(\log|\Theta|+\log(1/\delta))}{3|\mathcal{D}^{S}|}+\sqrt{2\sigma^{2}\frac{(\log|\Theta|+\log(1/\delta))}{|\mathcal{D}^{S}|}}\)_

Whereas for our objective \(\mathcal{R}_{WE}\) (posing \(\epsilon\) as a hyperparameter \(\lambda\)),

**Theorem 5.4**.: _Under Assumption 5.2, we have that with probability \(1-2\delta\) over the draws of \(\mathcal{D}^{S}\sim\mathbb{P}^{S}\) and \(\mathcal{D}^{T}\sim\mathbb{P}^{T}\), we have \(\forall\theta\in\Theta\): \(\mathbb{E}_{\mathbb{P}^{S},\mathbb{P}^{T}}[\mathcal{R}_{WE}(\theta)]\leq\widehat{\mathcal{R}}_{WE}(\theta)+2\sqrt{\frac{2\log|\Theta|}{|\mathcal{D}^{S}|}}+2\lambda\sqrt{\frac{2\log|\Theta|}{|\mathcal{D}^{T}|}}+3\sqrt{\frac{\ln(2/\delta)}{2|\mathcal{D}^{S}|}}+3\lambda\sqrt{\frac{\ln(2/\delta)}{2|\mathcal{D}^{T}|}}\)_

The proofs can be found in section B.5. Comparing Theorem 5.3 and Theorem 5.4, we see that the generalization bound for the importance sampled objective, \(\mathcal{R}_{IS}\), depends on the variance of the importance weights, \(\sigma^{2}\), and also on the worst-case value \(M\). In contrast, our objective, \(\mathcal{R}_{WE}\), _does not_ depend on these parameters and thus does not suffer from high variance. These results are further justified empirically in section 6.2.

### Weighted Entropy Objective

Implementing the objective in (4) requires computation of \(\frac{\mathbb{P}^{S}(X)}{\mathbb{P}^{T}(X)}\). This is challenging when \(m\) (the number of unlabeled test samples) is small, and the typical route of density estimation in high dimensions is particularly hard. Therefore, we propose to estimate the ratio \(\frac{\mathbb{P}^{S}(X)}{\mathbb{P}^{T}(X)}\) by a parametrized network \(\mathsf{F}_{w}:\mathcal{X}\rightarrow\mathbb{R}\), where \(\mathsf{F}_{w}(X)\) shall satisfy the following constraints: \(\mathbb{E}_{X\sim\mathbb{P}^{T}(X)}[\mathsf{F}_{w}(X)]=1\ \text{and}\ \mathbb{E}_{X\sim\mathbb{P}^{S}(X)}[1/\mathsf{F}_{w}(X)]=1\). By definition, these constraints must be satisfied. Building on (4), we solve for the following upper bound in Theorem 5.1:

\[\min_{\theta}\max_{\mathsf{F}_{w}}\ \mathcal{R}^{S}+\epsilon\times\mathbb{E}_{\mathbb{P}^{T}(X)}\left[e^{(-\mathsf{F}_{w}(X))}\mathcal{H}(\hat{\mathrm{Y}}|\mathcal{X})\right]\]
\[\mathrm{s.t.}\ \mathbb{E}_{X\sim\mathbb{P}^{T}(X)}[\mathsf{F}_{w}(X)]=1,\ \mathbb{E}_{X\sim\mathbb{P}^{S}(X)}[1/\mathsf{F}_{w}(X)]=1 \tag{5}\]

Finally, we plug in the empirical risk estimator for \(\mathcal{R}^{S}\), approximate the expectation in the second term with the empirical version over \(\mathcal{D}^{T}\), posit \(\epsilon\) as a hyperparameter, and add the unfairness objective \(\hat{\mathcal{L}}_{Wass}(\mathcal{D}^{T})\). Furthermore, we utilize the output of the representation layer \(g\) (denoting \(\mathsf{F}=h\circ g\), where \(g\) is the encoder subnetwork and \(h\) is the classifier subnetwork, refer to figure 5) as input to \(\mathsf{F}_{w}\) rather than the raw input \(X\) (provable benefits of representation learning [4]).
The optimization objective thus becomes: \[\min_{\mathsf{F}_{\theta}}\max_{\mathsf{F}_{w}}\mathcal{L}( \mathsf{F}_{\theta},\mathsf{F}_{w})=\widehat{\mathsf{FR}}^{S}+\lambda_{1} \frac{1}{m}\sum_{X_{i}\in\mathcal{D}^{T}}\left[e^{(-\mathsf{F}_{w}(g(X_{i}))) }\mathcal{H}(\hat{\mathsf{Y}}|\mathfrak{X})\right]+\lambda_{2}\hat{\mathcal{ L}}_{Wass}(\mathcal{D}^{T})\] \[\mathrm{s.t.}\ \mathcal{C}_{1}=\frac{1}{m}\sum_{X_{i}\in\mathcal{D}^{T}} \mathsf{F}_{w}(g(X_{i}))=1,\ \mathrm{and}\ \mathcal{C}_{2}=\frac{1}{n}\sum_{X_{i}\in\mathcal{D}^{S}} \frac{1}{\mathsf{F}_{w}(g(X_{i}))}=1\] Here \(\lambda_{1}\) and \(\lambda_{2}\) are hyperparameters governing the objectives. \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) refer to the constraints. We use alternating gradient updates to solve the above min-max problem. Our entire learning procedure consists of _two stages_: (1) pre-training \(\mathsf{F}\) for some epochs with only \(\mathcal{D}^{S}\) and (2) further training \(\mathsf{F}\) with (6). The procedure is summarized in Algorithm 1 and a high level architecture is provided in Figure 5. ``` Input: Training data \(\mathcal{D}^{S}\), Unlabelled Test data \(\mathcal{D}^{T}\), model \(\mathsf{F}\), weight estimator \(\mathsf{F}_{w}\), decaying learning rate \(\eta_{t}\), number of pre-training steps \(\tilde{\mathcal{E}}\), number of training steps \(\mathcal{E}\) for eq 6, \(\lambda_{1},\lambda_{2}\) Output: Optimized parameters \(\theta^{*}\) of the model \(\mathsf{F}\) \(\theta^{0}\leftarrow\) random initialization for\(t\gets 1\)to\(\tilde{\mathcal{E}}\)do \(\theta^{t}\leftarrow\theta^{t-1}-\eta_{t}\nabla_{\theta^{t-1}}\widehat{ \mathsf{FR}}^{S}\) endfor \(w^{\mathcal{E}}\leftarrow\) random initialization for\(t\leftarrow\tilde{\mathcal{E}}+1\)to\(\mathcal{E}+\tilde{\mathcal{E}}\)do \(w^{t}\gets w^{t-1}+\eta_{t}\nabla_{w^{t-1}}\mathcal{L}(\theta^{t-1},w^{t-1 })\) ; /* subject to \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\)*/ \(\theta^{t}\leftarrow\theta^{t-1}-\eta_{t}\nabla_{\theta}\mathcal{L}(\theta^{t- 1},w^{t})\); /* gradient stopping is applied through \(\mathsf{F}_{w}\) in this step*/ endfor \(\theta^{*}\leftarrow\theta^{\mathcal{E}+\tilde{\mathcal{E}}}\) ``` **Algorithm 1** Gradient Updates for the proposed objective to learn fairly under covariate shift ## 6 Experiments We demonstrate our method on 4 widely used benchmarks in the fairness literature, i.e. Adult, Communities and Crime, Arrhythmia and Drug Datasets with detailed description in appendix A.2. The baseline methods used for comparison are: MLP, Adversarial Debias (AD) [66], Robust Fair (RF) [37], Robust Shift Fair (RSF) [45], Z-Score Adaptation (ZSA). Along these, we also compare against two popular Density ratio estimation techniques, [56] (KLIEP) and [28] (LSIF), that estimate the ratio \(\frac{\mathbb{P}^{T}(X)}{\mathbb{P}^{S}(X)}\) via a parametrized setup. The estimates are then used to compute the _importance weighted_ training loss \(\mathcal{R}_{IS}\) described previously. [38] analysed both these methods in a unifying framework. The detailed description for all the baselines is provided in appendix A.3. These baselines also cover the important works highlighted in Section 2. The implementation details of all the methods with relevant hyperparameters are provided in section A.4. The evaluation of our method against the baselines is done via the trade-off between fairness violation (using \(\Delta_{\mathrm{E}\mathrm{O}\mathrm{d}\mathrm{s}}\)) and error (which is \(100-\) accuracy). 
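To make Algorithm 1 concrete, below is a condensed PyTorch-style sketch (ours, with simplifications) of one alternating update of the objective in (6): the weight network \(\mathsf{F}_{w}\) takes a maximization step with the classifier frozen, then \(\mathsf{F}_{\theta}=h\circ g\) takes a minimization step with gradients stopped through \(\mathsf{F}_{w}\). The constraints \(\mathcal{C}_{1},\mathcal{C}_{2}\) are folded in as soft penalties purely for illustration (the excerpt does not spell out how they are enforced), and names such as `g`, `h`, `Fw`, `wass_loss`, `lam1`, `lam2`, and `mu` are placeholders.

```python
import torch

def weighted_entropy(logits, w):
    """lambda_1 term of Eq. (6): prediction entropy weighted by exp(-F_w)."""
    p = torch.softmax(logits, dim=1)
    ent = -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1)
    return (torch.exp(-w) * ent).mean()

def constraint_penalty(w_test, w_train):
    """Soft surrogate for C1 and C2 (F_w is assumed positive, e.g. via softplus)."""
    c1 = (w_test.mean() - 1.0) ** 2
    c2 = ((1.0 / w_train.clamp_min(1e-3)).mean() - 1.0) ** 2
    return c1 + c2

def training_step(g, h, Fw, opt_theta, opt_w, batch_S, batch_T,
                  lam1, lam2, mu, wass_loss):
    xS, yS, aS = batch_S                       # labeled training batch (A is not fed to F)
    xT, aT = batch_T                           # unlabeled test batch

    # Max step over F_w: encoder and classifier are frozen.
    with torch.no_grad():
        zS, zT = g(xS), g(xT)
        logits_T = h(zT)
    wT, wS = Fw(zT).squeeze(-1), Fw(zS).squeeze(-1)
    loss_w = -lam1 * weighted_entropy(logits_T, wT) + mu * constraint_penalty(wT, wS)
    opt_w.zero_grad(); loss_w.backward(); opt_w.step()

    # Min step over theta = (g, h); gradients are stopped through F_w.
    zS, zT = g(xS), g(xT)
    erm = torch.nn.functional.cross_entropy(h(zS), yS)
    with torch.no_grad():
        wT = Fw(zT).squeeze(-1)
    loss_theta = erm + lam1 * weighted_entropy(h(zT), wT) + lam2 * wass_loss(zT, aT)
    opt_theta.zero_grad(); loss_theta.backward(); opt_theta.step()
```

In practice, the pre-training stage and the decaying learning-rate schedule of Algorithm 1 would wrap around this step.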
All algorithms are run \(50\) times before reporting the mean and the standard deviation in the results. All experiments are run on a single NVIDIA Tesla V100 GPU. Apart from the primary results on standard and asymmetric shift below, extensive analyses across multiple settings are provided in the appendix (due to space limitations).

### Shift Construction

To construct the covariate shift in the datasets, i.e., to introduce \(\mathbb{P}^{S}(X,\text{A})\neq\mathbb{P}^{T}(X,\text{A})\), we utilize the following strategy akin to the works of [45; 22]. First, all the non-categorical features are normalized by _z-score_. We then obtain the _first principal component_ of the covariates and project the data onto it, denoting it by \(\mathcal{P}_{\mathcal{C}}\). We assign a score to each point \(\mathcal{P}_{\mathcal{C}}[i]\) using the density function \(\Xi:\mathcal{P}_{\mathcal{C}}[i]\to\mathrm{e}^{\gamma(\mathcal{P}_{\mathcal{C}}[i]-b)}/\mathcal{Z}\). Here, \(\gamma\) is a hyperparameter controlling the level of distribution shift under the split, \(b\) is the \(60^{th}\) percentile of \(\mathcal{P}_{\mathcal{C}}\), and \(\mathcal{Z}\) is the normalizing coefficient computed empirically. Using this, we sample \(40\%\) of the instances from the dataset as the test set and the remaining \(60\%\) as training. To construct the validation set, we further split the training subset to make the final train:validation:test ratio \(5:1:4\), where the test is distribution shifted. A similar procedure is used to construct the shifts for the asymmetric analysis in section 6.3. Note that for large values of \(\gamma\), all the points with \(\mathcal{P}_{\mathcal{C}}[i]>b\) will have high density, thereby increasing the probability of being sampled into the test set. This generates a sufficiently large distribution shift. Correspondingly, for smaller values of \(\gamma\), the probability of being sampled is not sufficiently high for these points, thereby leading to higher overlap between the train and test distributions.

### Fairness Accuracy Tradeoff

The experimental results for the shift constructed using the procedure in section 6.1 are shown in Figure 2. The results closer to the _bottom left_ corner in each plot are desirable. Our method provides better error and fairness tradeoffs against the baselines on all the benchmarks. For example, on the Adult dataset, we have the lowest error rate at around 15% with \(\Delta_{\mathrm{EOdds}}\) at almost \(0.075\), while the closest baselines MLP and RF fall short on either of the metrics. On Arrhythmia and Communities, our method achieves very low \(\Delta_{\mathrm{EOdds}}\) (best on Arrhythmia with a margin of \(\sim 30\%\)) with only marginally higher error as compared to MLP and RF respectively. On the Drug dataset, we achieve the best numbers for both metrics. For the same accuracy, we obtain 1.3x-2x improvements against the baseline methods on most of the benchmarks. Similarly, for the same \(\Delta_{\mathrm{EOdds}}\), we achieve up to 1.5x lower errors. It is also important to note that all the other unsupervised adaptation algorithms perform substantially worse and are highly unreliable. For example, ZSA performs well only on the Drug dataset, but shows extremely worse errors (even worse than _random predictions_) on Communities and Adult. The adaptation performed by ZSA is insufficient to handle covariate shift. The RSF baseline is consistently worse across the board.
This is because it tries to explicitly estimate \(\mathbb{P}^{S}(X)\) and \(\mathbb{P}^{T}(X)\) which is extremely challenging whereas we implicitly estimate the importance ratio. For KLIEP and LSIF, we equip both of these with the Wasserstein penalty term to provide a fair comparison, section A.3. First, we observe that our method consistently outperforms these algorithms across the datasets. The relative improvement of our method is as high as \(\sim 31\%\) in error on Adult dataset and \(\sim 32.5\times\) in \(\Delta_{\mathrm{Eodds}}\) on Drug dataset against LSIF. Similar non-trivial margins can be Figure 2: Fairness-Error Tradeoff Curves for our method (Pareto Frontier) against the optimal performance of the baselines. Our method provides better tradeoffs in all cases. (On Drug dataset, the performance is concentrated around the optimal point). All figures best viewed in colour. noted on other datasets. Second, as highlighted in the figure 7, the variance in error rates of the KLIEP and LSIF based importance is very high on the Drug dataset. Particularly, both KLIEP and LSIF exhibit up to \(20-40\) times higher variance in error and up to \(10-12\) times in \(\Delta_{\mathrm{EOdds}}\). We can attribute this difference to the phenomenon that in the small sample regime, importance weighted objective on training dataset alone may not bring any improvements for covariate shift due to variance issues and thus estimating the ratio can be insufficient. **Variance Details**: Figure 7 (in appendix) provides the detailed plots with variance corresponding to figure 2. In some cases, the standard deviation bars in the figure stretch beyond \(0\) in \(\mathbb{R}^{-}\) due to skewness when error bars are plotted, however numbers across all the runs are _positive_. Low variance results of our method are notable, as discussed in section 6.2 especially against KLEIP and LSIF. ### Results on Asymmetric Shift Here, we present empirical results for Asymmetric Covariate Shift where the degree of shift is substantially different across the groups. To construct data for this setting, we follow the same procedure as described in section 6.1, but operate on data for the two groups differently. The shift is introduced in one of the groups while for the other group, we resort to splitting it randomly into train-val-test. Figure 3 provides the results for the setup when shift is created in group \(\text{A}=0\). We again observe that our method provides better tradeoffs across the board. For the shift in group \(\text{A}=0\), we have substantially better results on Adult and Arrhythmia with up to \(\sim\) 2x improvements on \(\Delta_{\mathrm{EOdds}}\) for similar error and up to \(\sim\) 1.4x improvements in error for similar \(\Delta_{\mathrm{EOdds}}\). On the Communities dataset, MLP and AD show similar performance to ours, but much worse on the Drug dataset for both the metrics. ZSA performs comparably to our method only on Drug, but is substantially worse on other datasets. This confirms the inconsistency of the baselines under this setup as well. The results for shift in group \(\text{A}=1\) are plotted in figure 4 (relegated to the appendix) and shows analogous trends. We reiterate the improvements our method achieves even in the asymmetric shift case, without suffering from large variance issues in both cases, when shifting Group \(A=0\) and Group \(A=1\) more severely than the other respectively. 
As visible in figures 3 and 4, on the Drug dataset we are **10x** and **5x** better than the two importance sampling baselines on \(\Delta_{\mathrm{EOdds}}\), without the significant variance, and with lower error % as well. Even on other datasets we notice strong trends for our method with lower error and lower \(\Delta_{\mathrm{EOdds}}\) across the board. This shows that our method performs well in the Asymmetric Covariate Shift setting against importance sampling methods. It is also important to note that the errors are lower for all the methods as compared to figure 4, since only one group exhibits substantial shift while the degradation in equalized odds is higher. This is in line with the reasoning provided in section 3.3 based on theorem C.1.

Figure 4: Comparison of our method against the baselines under Asymmetric Covariate Shift for group \(\text{A}=1\)

Figure 3: Comparison of our method against the baselines under Asymmetric Covariate Shift for group \(\text{A}=0\).

## 7 Conclusion

In this work, we considered the problem of unsupervised test adaptation under covariate shift to achieve good fairness-error trade-offs using a small amount of unlabeled test data. We proposed a composite loss that, apart from the prediction loss on training, involves a representation matching loss along with a weighted entropy loss on the unsupervised test set. We experimentally demonstrated the efficacy of our formulation on diverse benchmarks.
2301.11416
Feature space exploration as an alternative for design space exploration beyond the parametric space
This paper compares the parametric design space with a feature space generated by the extraction of design features using deep learning (DL) as an alternative way for design space exploration. In this comparison, the parametric design space is constructed by creating a synthetic dataset of 15.000 elements using a parametric algorithm and reducing its dimensions for visualization. The feature space - reduced-dimensionality vector space of embedded data features - is constructed by training a DL model on the same dataset. We analyze and compare the extracted design features by reducing their dimension and visualizing the results. We demonstrate that parametric design space is narrow in how it describes the design solutions because it is based on the combination of individual parameters. In comparison, we observed that the feature design space can intuitively represent design solutions according to complex parameter relationships. Based on our results, we discuss the potential of translating the features learned by DL models to provide a mechanism for intuitive design exploration space and visualization of possible design solutions.
Tomas Cabezon Pedroso, Jinmo Rhee, Daragh Byrne
2023-01-26T21:03:51Z
http://arxiv.org/abs/2301.11416v1
Feature Space Exploration as an Alternative for Design Space Exploration Beyond the Parametric Space ###### Abstract This paper compares the parametric design space with a feature space generated by the extraction of design features using deep learning (DL) as an alternative way for design space exploration. In this comparison, the parametric design space is constructed by creating a synthetic dataset of 15.000 elements using a parametric algorithm and reducing its dimensions for visualization. The feature space -- reduced-dimensionality vector space of embedded data features -- is constructed by training a DL model on the same dataset. We analyze and compare the extracted design features by reducing their dimension and visualizing the results. We demonstrate that parametric design space is narrow in how it describes the design solutions because it is based on the combination of individual parameters. In comparison, we observed that the feature design space can intuitively represent design solutions according to complex parameter relationships. Based on our results, we discuss the potential of translating the features learned by DL models to provide a mechanism for intuitive design exploration space and visualization of possible design solutions. Deep Learning, VAE, Design Space, Feature Design Space, Parametric Design Space, Design Exploration. ## 1 Introduction Parametric modeling has acquired widespread acceptance among creative practitioners as it allows the synthesis of various design options and solutions. Changing the parameters in this modeling process, either manually or randomly, can rapidly create a vast set of design variations (Toulkeridou, 2019). Navigating the resulting _parametric design space_ -- where the design variants are topologically placed by their parameters -- is part of the _design exploration_ process -- a crucial step in the development of new alternatives and design solutions. Exploration of the parametric design space allows creative practitioners many benefits: to reach satisfying solutions, better define design problems, and understand the opportunities and limitations of the possible solutions. Despite these benefits, design exploration is laborious within the parametric space and challenged along two fronts: comparison and selection (Fuchkina et al., 2018). Parametric design exploration is an iterative process that focuses on the variation of these individual parameters, rather than on the relationship among them (Yamamoto and Nakakoji, 2005). Hence, comparing one design solution with others by their parameters alone does not always result in a superior solution; for example, the variants generated by the local combination of parameters might not match the design requirements. Moreover, infinite alternative design solutions can be generated by inputting new parameter values. Thus, the parametric design space consists of a huge amount of design variants that cannot be fully or sufficiently explored. We propose an alternative way to construct and examine the design space, by extracting features from a DL model. By comparing and analyzing how the DL _feature design space_ differs from the parametric design space, we illustrate the potential of feature design space for design practitioners during the design exploration process and provide a new way to compare, examine and select the design alternatives based on the exploration of a properly constrained design space. 
No previous approach to compare the parametric design space and the feature design space as design exploration tools has been found. To demonstrate how the feature space compares to the parametric space, we designed an experiment to construct both a parametric design space and a feature design space using the same dataset. The dataset consists of 15.000 synthetic 3D models produced by a parametric algorithm with five parameters. This parametric design space consists of five axes; each axis corresponds to one of the parameters that are used as inputs of the parametric algorithm. Subsequently, this same dataset is used to train a DL model to compress the data into a feature vector of 128 dimensions. Both the parametric space (five axes) and the feature space (128 axes) are not directly visualizable due to their high dimensionality. Nevertheless, as visual feedback plays an important role in design exploration (Bradner, Iorio and Davis, 2014), we apply a dimensionality reduction algorithm (t-SNE) to the design space. We are thus able to illustrate the design exploration space, showing how the data is distributed across both the parametric and feature design spaces. In the next section, we describe the generation of the dataset, as well as the construction of the parametric design space and its visualization. In Section 3., we illustrate how training a DL model resulted in a feature space for design exploration and comparison with the parametric approach. Then, in Section 4., we will compare, contrast, and discuss the characteristics of the DL feature space and the parametric space. (Figure 1.)

## 2 Constructing Parametric Design Space

### Dataset Generation

To conduct a design space comparison, a simple parametric modeling system was designed: a parametric algorithm for generating different styles of vessels. As in the handcraft of pottery wheel throwing, a simple Bezier curve with three control points was turned around an axis to generate each 3D digital vessel; the form of each vessel is specified by the five parameters that were used as inputs. These parameters, as can be seen in Figure 2, are: the height of the vessel, the width of the base, the width of the top opening, and the horizontal and vertical coordinates of the central control point of the Bezier curve that are used to create the curve of the form. The five parameters are represented as a vector, and each vector corresponds to a specific 3D model of a vessel. Using this system, we created a 3D vessel dataset by randomly generating a total of 15.000 different vessels. The total shape of the parametric representation of the vessel dataset is [15.000, 5]; however, as will be explained in the next section, only 3.000 vessels were used for the space exploration and visualization, so this will be a design space of size [3.000, 5].

Figure 1: The overall process of comparing parametric design space and feature space from deep learning

### Dimensionality Reduction

As a five-dimensional space makes it hard to compare models and to visualize and compare their characteristics, we employed a dimensionality reduction process to reduce the space to two dimensions and enable the objects to be plotted and compared to one another. Figure 3. shows the overall process of visualizing the space using the t-Distributed Stochastic Neighbour Embedding (t-SNE) algorithm (van der Maaten and Hinton, 2008). t-SNE is a popular dimensionality-reduction algorithm for visualizing high-dimensional data.
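For reference, this reduction step can be reproduced with an off-the-shelf t-SNE implementation; the sketch below (ours) uses scikit-learn, which the paper does not specify, with the hyper-parameter values reported next and a random stand-in for the parameter array.

```python
import numpy as np
from sklearn.manifold import TSNE

# Illustrative stand-in for the [3000, 5] array of parameter vectors
# spanning the parametric design space.
vessel_params = np.random.rand(3000, 5)

# scikit-learn's default number of optimization iterations is 1000,
# matching the value reported below.
tsne = TSNE(n_components=2, perplexity=30, learning_rate=200,
            init="pca", random_state=0)
embedding = tsne.fit_transform(vessel_params)   # shape (3000, 2), one point per vessel
```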
The hyper-parameters used for this reduction are: perplexity: 30; learning rate: 200; and iterations: 1.000. After dimensionality reduction, each point in the plot represents the corresponding embedding of a vessel in the parametric design space. Each point is expressed as a 2D image of the profile cut section of the corresponding vessel. Figure 3: Illustration of the dimensional reduction process for the 3D vessel dataset, and the construction of a parametric design space. Figure 2: Upper: An illustration of the dataset parameters. Lower: Three illustrative examples from the dataset with the parameters and the resulting 3D form side-by-side. Figure 4. represents the reduced parametric design space of the dataset. ## 3 Constructing the Feature Space To construct the design space based on the features and not the parameters, we used a Variational Autoencoder (VAE) as a tool for extracting the morphological features of the vessels. VAEs (Kingma and Welling, 2013) are a type of generative deep neural network used for statistical inference problems, as they generalize a probabilistic distribution of the given dataset and synthesize new data samples from that distribution. VAEs are composed of two modules: _encoder_ and _decoder_. The encoder abstracts the input data into smaller-dimensional vectors, the latent vectors, and the decoder reconstructs the latent vector back into a 3D shape. During the encoding process, the network captures and extracts the features of the input data. These features can be topologically placed in the data space, namely, the _latent space_. In the latent space, the distance between two data points represents the degree of resemblance of the data: the closer the points, the more they resemble each other. We treat this latent space as the feature space, offering an alternative way to explore the design space. ### Data Pre-Processing Different representations of 3D data have been used in DL research, like point clouds (Achlioptas et al., 2018), meshes (Ranjan et al., 2018), or _voxels_ (Wu et al., 2017). As the resolution of the data is less important for our purpose than the features extracted from it, and because the VAE we implement requires fixed-size inputs for its Convolutional Neural Networks (CNNs), we represent our 3D data with voxels. Voxels are discretized three-dimensional grids containing a binary value of volumetric occupancy of an object; they distinguish between the elements on the grid that are filled with material and those that are empty. The size of the voxel grid determines the number of divisions of the grid and, consequently, the resolution at which we represent our 3D models; the larger the grid, the more detailed the 3D models. Figure 4: A 2D visualization of the parametric design space of the vessel dataset. Inset image: a detailed section for a subset of the models. In this experiment, we used 32-sized voxels so that a 3D vessel model is represented by a 32x32x32 grid; the shape of the entire dataset is [15.000, 32, 32, 32]. Finally, the dataset was divided into two groups: 80% of the dataset (12.000 vessels) was used for training the DL model, and the remaining 20% (3.000 vessels) was used for testing the model and for the parametric and feature space analysis and comparison. ### Training For training the model, we adopted the VAE architecture implemented in 'Adversarial Generation of Continuous Implicit Shape Representations' (Kleineberg, Fey and Weichert, 2020). The encoder consists of four residual blocks. 
Each residual block is composed of a 3D convolution layer, followed by a batch normalization and a Leaky ReLu activation layer. The decoder, on the contrary, comprises four residual blocks. Each block starts with a batch normalization, followed by a Leaky ReLu activation layer, and finally a 3D transposed convolution layer. The following hyper-parameters are used for training the VAE with the voxelized vessel dataset: batch size 32, Adam optimizer (Kingma and Ba, 2015), learning rate 5e-. The model was trained in Google Colab Pro using the Nvidia Tesla T4 GPU. The model was trained for a total of 240 epochs. We early stopped the model before the model started to overfit, Figure 5. The loss function used during training was a combination of two losses. The first one, is the Kullback-Leibler divergence (KLD) loss (Kullback and Leibler, 1951), with a weight in the total loss formula of 1. This function is a measurement of the difference between two statistical distributions. The second loss is the Minimum Square Distance (MSE) loss (Sammut and Webb, 2010). It is used as the reconstruction loss and measures the error between the input voxels and the reconstructed output. Figure 6. shows the reasonable quality of the reconstruction of the training result after 240 epochs. To ensure the performance of the model, it was evaluated using the test set and showed that the model maintained the accuracy with the new dataset, which shows Figure 5: Training process losses. that the model generalizes well to new data and is able to encode never seen before 3D vessels. ### Dimensionality Reduction Once the VAE is trained, the encoder is used to extract the features of each vessel in the test dataset from 32.768 dimensions, the size of each voxelized vessel, into 128-dimensional vectors, the latent vectors. Consequently, the entire test dataset of the vessels is represented into vectors whose total shape is [3.000, 128]. Like in the parametric case, 128 dimensions are non-visualizable so the same process as in Section 2 is followed. t-SNE algorithm is used to reduce the dimensionality of each vector and plot the resulting two dimensions in an image with the section of each of the vessels (Figure 7.). The hyper-parameters used for this reduction are: perplexity: 50; learning rate: 700; and iterations: 3. Figure 8. shows the results of distributed feature vectors in the reduced dimensional space, the feature space. Figure 6: Two examples of reconstructions from the trained VAE: the section slides and 3D voxels of the ground truth (the top row of each example) and the reconstruction (bottom of each example). Figure 7: Feature space generation and visualization diagram. ## 4 Comparison Between the spaces Figure 8. shows that similar vessels have been clustered together. Thinner vessels are located at the top right of the image, in contrast to the opposite lower bottom corner with the bigger vessels. The figure illustrates how the VAE model is able to understand the relationship between the parameters and their influence on the output morphological shape. On the contrary, in the parametric space (Figure 4.), we can see how concave vessels were gathered at the bottom of the image, however, if the height of the vessels is considered, we can see that this parameter was not considered when clustering the vessels. Parametric space is based on each parameter independently, and not on the relationship among them. 
Therefore, we observe that parametric design space insufficiently expresses the final form characteristics of the vessels by the combinations of the parameters. On the contrary, in Figure 8., the feature space, a gradual change in the shape or concavity as well as height or width is observed. To further examine and compare the characteristics of both design spaces, we used a clustering, algorithm: a Density-Based Spatial Clustering of Applications with Noise (DBSCAN) (Ester M et al., 1996). It is one of the most common clustering algorithms that finds core samples of high density and expands clusters with them. Figure 9. shows the results of this clustering. The parametric design space has a total of seven clusters: three of them large, and four of them small. It shows how the parametric design space doesn't provide enough information to intuitively compare the design variants locally, this space shows extreme changes in vessel forms even in the same cluster. Figure 8: A 2D visualization of the feature design space of the vessel dataset. Inset image: a detailed section for a subset of the models. ## 5 Conclusion and Future work We constructed the parametric and the feature design spaces using a custom synthetic dataset and a VAE model. By comparing the parametric and feature design spaces, we observed improved distributions of design alternatives in the later. When the multi-dimensional parametric design space is projected into a 2D space (Figures 4. and 9. left), the clusters are insufficiently relevant to the morphological characteristics. On the other hand, when the multi-dimensional feature space is projected into a 2D space (Figures 8. and 9. right), the clusters show sufficient relevance to the features of the data they represent. Based on this comparison, we conclude that combination of individual parameters in the parametric design space is limited in representing the morphological characteristics of the shapes. However, we showed that DL models can be used to extract design features from 3D models and that the extracted features are more complex than the combinations of individual parameters. Hence, we conclude that the extracted features, that include information of the relationships between the parameters, can construct a well-distributed design space. For that reason, we propose feature design space as a tool for design space exploration that creative practitioners can use as a new way for looking at objects beyond the parametric design space. Figure 9: Final visualization and clusters of the parametric and feature design spaces with representative vessels of each group. Our results and implications are limited to a single dataset and DL model, however the results seem promising. Future work will expand on this study with more diverse datasets generated by more complex parametric algorithms. Accordingly, to perform the feature extraction, we would like to train other types of DL models to investigate different potentials of DL in design.
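As a supplementary illustration of the clustering comparison described in Section 4, the sketch below shows how DBSCAN could be applied to a 2D embedding of either design space. It is a hypothetical example: the `eps` and `min_samples` values and the file name are assumptions rather than the authors' settings.

```python
# Hypothetical sketch: clustering a 2D design-space embedding with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

embedding = np.load("design_space_embedding.npy")   # assumed file holding a (3000, 2) t-SNE output

labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(embedding)   # -1 marks noise points
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_clusters} clusters, {int(np.sum(labels == -1))} noise points")
```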
2302.10258
Neural Algorithmic Reasoning with Causal Regularisation
Recent work on neural algorithmic reasoning has investigated the reasoning capabilities of neural networks, effectively demonstrating they can learn to execute classical algorithms on unseen data coming from the train distribution. However, the performance of existing neural reasoners significantly degrades on out-of-distribution (OOD) test data, where inputs have larger sizes. In this work, we make an important observation: there are many different inputs for which an algorithm will perform certain intermediate computations identically. This insight allows us to develop data augmentation procedures that, given an algorithm's intermediate trajectory, produce inputs for which the target algorithm would have exactly the same next trajectory step. We ensure invariance in the next-step prediction across such inputs, by employing a self-supervised objective derived by our observation, formalised in a causal graph. We prove that the resulting method, which we call Hint-ReLIC, improves the OOD generalisation capabilities of the reasoner. We evaluate our method on the CLRS algorithmic reasoning benchmark, where we show up to 3$\times$ improvements on the OOD test data.
Beatrice Bevilacqua, Kyriacos Nikiforou, Borja Ibarz, Ioana Bica, Michela Paganini, Charles Blundell, Jovana Mitrovic, Petar Veličković
2023-02-20T19:41:15Z
http://arxiv.org/abs/2302.10258v2
# Neural Algorithmic Reasoning with Causal Regularisation ###### Abstract Recent work on neural algorithmic reasoning has investigated the reasoning capabilities of neural networks, effectively demonstrating they can learn to execute classical algorithms on unseen data coming from the train distribution. However, the performance of existing neural reasoners significantly degrades on out-of-distribution (OOD) test data, where inputs have larger sizes. In this work, we make an important observation: there are many _different_ inputs for which an algorithm will perform certain intermediate computations _identically_. This insight allows us to develop data augmentation procedures that, given an algorithm's intermediate trajectory, produce inputs for which the target algorithm would have _exactly_ the same next trajectory step. Then, we employ a causal framework to design a corresponding self-supervised objective, and we prove that it improves the OOD generalisation capabilities of the reasoner. We evaluate our method on the CLRS algorithmic reasoning benchmark, where we show up to 3\(\times\) improvements on the OOD test data. Machine Learning, ICML ## 1 Introduction Recent works advocate for building neural networks that can reason (Xu et al., 2020, 2021; Velickovic and Blundell, 2021; Velickovic et al., 2022). Therein, it is posited that combining the robustness of algorithms with the flexibility of neural networks can help us accelerate progress towards models that can tackle a wide range of tasks with real world impact (Davies et al., 2021; Deac et al., 2021; Velickovic et al., 2022; Bansal et al., 2022; Beurer-Kellner et al., 2022). The rationale is that, if a model learns how to reason, or learns to execute an algorithm, it should be able to apply that reasoning, or algorithm, to a completely novel problem, even in a different domain. Specifically, if a model has learnt an algorithm, it should be gracefully applicable on out-of-distribution (OOD) examples, which are substantially different from the examples in the training set, and return correct outputs for them. This is because an algorithm--and reasoning in general--is a sequential, step-by-step process, where a simple decision is made in each step based on outputs of the previous computation. Prior work (Diao and Loynd, 2022; Dudzik and Velickovic, 2022; Ibarz et al., 2022; Mahdavi et al., 2022) has explored this setup, using the CLRS-30 benchmark (Velickovic et al., 2022), and showed that while many algorithmic tasks can be learned by Graph Neural Network (GNN) processors in a way that generalises to larger problem instances, there are still several algorithms where this could not be achieved. Importantly, CLRS-30 also provides ground-truth _hints_ for every algorithm. Hints correspond to the state of different variables employed to solve the algorithm (e.g. positions, pointers, colouring of nodes) along its trace. Such hints can optionally be used during training, but are not available during evaluation. In previous work, they have mainly been used as auxiliary targets together with the algorithm output. The prevailing hypothesis is that gradients coming from predicting these additional relevant signals will help constrain the representations in the neural algorithmic executor and prevent overfitting. Predicted hints can also be optionally fed back into the model to provide additional context and aid their prediction at the next step. 
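The conventional use of hints described above can be summarised with a schematic training step. The sketch below is a simplified, hypothetical illustration and not the CLRS-30 code; all interface names (`encode`, `process`, `decode_hint`, and so on) are placeholders for whatever encoder, processor and decoders a concrete reasoner uses.

```python
# Schematic sketch (hypothetical interfaces) of hints as auxiliary targets:
# the processor is unrolled for one step per hint, each predicted hint is
# supervised, and the prediction is optionally fed back as extra context.
def rollout_loss(model, inputs, hint_targets, output_target):
    state = model.encode(inputs)
    hint_loss = 0.0
    for target in hint_targets:
        state = model.process(state, inputs)            # one step of the recurrent processor
        hint_pred = model.decode_hint(state)
        hint_loss += model.hint_criterion(hint_pred, target)
        state = model.absorb_hint(state, hint_pred)     # optional: feed the hint back in
    output_pred = model.decode_output(state)
    return model.output_criterion(output_pred, output_target) + hint_loss / len(hint_targets)
```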
In practice, while utilising hints in this way does lead to models that _follow_ the algorithmic trajectory better, they have had a less substantial impact on the accuracy of the predicted _final output_. This is likely due to the advent of powerful strategies such as recall (Bansal et al., 2022), wherein the input is fed back to the model at every intermediate step, constantly "reminding" the model of the problem that needs to be solved. The positive effect of recall on the final output accuracy has been observed on many occasions (Mahdavi et al., 2022), and outweighs the contribution from directly predicting hints and feeding them back. In this work, we propose a method, namely Hint-ReLIC, that decisively demonstrates an advantage to using hints. We base our work on the observation that there are many different inputs for which an algorithm will make _identical_ computations at a certain step (Figure 1). For example, applying the bubble sort algorithm from the left on \([2,1,3]\) or \([2,1,\underline{5},3]\) will result in the same first step computation: a comparison of \(2\) and \(1\), followed by swapping them. Conversely, the first step of execution would be different for inputs \([2,1,3]\) and \([2,\underline{5},1,3]\); the latter input would trigger a comparison of \(2\) and \(5\) without swapping them. This observation allows us to move beyond the conventional way of using hints, i.e. autoregressively predicting them (Velickovic et al., 2022). Instead, we design a novel way that learns more informative representations that enable the networks to more faithfully execute algorithms. Specifically, we learn representations that are similar for inputs that result in identical intermediate computation. First, we design a causal graph in order to formally model an algorithmic execution trajectory. Based on this, we derive a self-supervised objective for learning hint representations that are invariant across inputs having the same computational step. Moreover, we prove that this procedure will result in stronger causally-invariant representations. Contributions.Our three key contributions are as follows: 1. We design a _causal graph_ capturing the observation that the execution of an algorithm at a certain step is determined only by a subset of the input; 2. Motivated by our causal graph, we present a _self-supervised objective_ to learn representations that are _provably_ invariant to changes in the input subset that does not affect the computational step; 3. We test our model, dubbed Hint-ReLIC, on the CLRS-30 algorithmic reasoning benchmark (Velickovic et al., 2022), demonstrating a _significant improvement_ in out-of-distribution generalisation over the recently published state-of-the-art (Ibarz et al., 2022). ## 2 Related work GNNs and invariance to size shifts.Graph Neural Networks (GNNs) constitute a popular class of methods for learning representations of graph data, and they have been successfully applied to solve a variety of problems. We refer the reader to Bronstein et al. (2021); Jegelka (2022) for a thorough understanding of GNN concepts. While GNNs are designed to work on graphs of any size, recent work has empirically shown poor size-generalisation capabilities of standard methods, mainly in the context of molecular modeling (Gasteiger et al., 2022), graph property prediction (Corso et al., 2020), and in executing specific graph algorithms (Velickovic et al., 2020; Joshi et al., 2020). A theoretical study of failure cases has been recently provided in Xu et al. 
(2021), with a focus on a geometrical interpretation of OOD generalisation. In order to learn models performing equally well in- and out-of-distribution, Bevilacqua et al. (2021); Chen et al. (2022); Zhou et al. (2022) designed ad-hoc solutions satisfying assumed causal assumptions. However, these models are not applicable to our setting, as the assumptions on our data generation process are significantly different. With the same motivation, Buffelli et al. (2022) introduced a regularisation strategy to improve generalisation to larger sizes, while Yehudai et al. (2021) proposed a semi-supervised and a self-supervised objective that assume access to the test distribution. However, these models are not designed to work on algorithmic data, where OOD generalisation is still underexplored. Neural Algorithmic Reasoning.In order to learn to execute algorithmic tasks, a neural network must include a _recurrent_ component simulating the individual algorithmic steps. This component is applied a variable number of times, as required by the size of the input and the problem at hand. The recurrent component can be an LSTM (Gers and Schmidhuber, 2001), possibly augmented with a memory as in Neural Turing Machines (Graves et al., 2014, 2016); it could exploit spatial invariances in the algorithmic task through Figure 1: An illustration of the key observation of our work, on the depth-first search (DFS) algorithm as implemented in CLRS-30 (Velicković et al., 2022). On the left, the first four steps of DFS are visualised. At each step, DFS explores the unvisited neighbour with the smallest index, and backtracks if no unexplored neighbours exist. The next computational step—_assigning \(2\) as the parent of \(4\)_—is bound to happen, even under many _transformations_ of this graph. For example, if we were to insert new (dashed) nodes and edges into the graph, this step would still proceed as expected. Capturing this computational invariance property is the essence of our paper. a convolutional architecture (Bansal et al., 2022); it could be based on the transformer self-attentional architecture, as in the Universal Transformer (Dehghani et al., 2019); or it could be a Graph Neural Network (GNN). GNNs are particularly well suited for algorithmic execution (Velickovic et al., 2020; Xu et al., 2020), and they have been applied to algorithmic problems before with a focus on extrapolation capabilities (Palm et al., 2017; Selsam et al., 2019; Joshi et al., 2020; Tang et al., 2020). Recently, Velickovic and Blundell (2021) have proposed a general framework for algorithmic learning with GNNs. To reconcile different data encodings and provide a unified evaluation procedure, Velickovic et al. (2022) have presented a benchmark of algorithmic tasks covering a variety of areas. This benchmark, namely the CLRS algorithmic benchmark, represents data as graphs, showing that the graph formulation is general enough to include several algorithms, and not just the graph-based ones. On the CLRS benchmark, Ibarz et al. (2022) has recently presented several improvements in the architecture and learning procedure in order to obtain better performances. However, even the latest state-of-the-art models suffer from performance drops in certain algorithms when going out-of-distribution, an aspect we wish to improve upon here. Self-supervised learning.Recently, many self-supervised representation learning methods that achieve good performance on a wide range of downstream vision tasks without access to labels have been proposed. 
One of the most popular approaches relies on contrastive objectives that make use of data augmentations to solve the instance discrimination task (Wu et al., 2018; Chen et al., 2020; He et al., 2020; Mitrovic et al., 2021). Other approaches that rely on target networks and clustering have also been explored (Grill et al., 2020; Caron et al., 2020). Our work is similar in spirit to Mitrovic et al. (2021), which examines representation learning through the lens of causality and employs techniques from invariant prediction to make better use of data augmentations. This approach has been demonstrated to be extremely successful on vision tasks (Tomasev et al., 2022). In the context of graphs, You et al. (2020); Suresh et al. (2021); You et al. (2022) have studied how to learn contrastive representations, with particular attention paid to data augmentations. Moreover, Velickovic et al. (2019); Zhu et al. (2020) proposed novel objectives based on mutual information maximization in the graph domain to learn representations. Several other self-supervised methods (e.g. Thakoor et al. (2022)) have also been studied, and we refer the reader to Xie et al. (2022) for a review of existing literature. ## 3 Causal model for algorithmic trajectories An algorithm's execution trajectory is described in terms of the _inputs_, _outputs_ and _hints_, which represent intermediate steps in the execution. We consider a graph-oriented way of representing this data (Velickovic et al., 2022): inputs and outputs are presented as data on nodes and edges of a graph, and hints are encoded as node, edge or graph features changing over time steps. To better understand the data at hand, we propose to formalise the data generation process for an algorithmic trajectory using a _causal graph_. In such a causal graph, nodes represent random variables, and incoming arrows indicate that the node is a function of its parents (Pearl, 2009). The causal graph we use can be found in Figure 2. Note that this graph does not represent input data for the model, but a way of describing how any such data is generated. Let us consider the execution trajectory of a certain algorithm of interest, at a particular time step \(t\). Assume \(X_{1}\) to be the observed input, and let \(X_{t}\) be the random variable denoting the "snapshot" at step \(t\) of the algorithm execution on the input. For example, in bubble sort, \(X_{1}\) will be the initial (unsorted) array, and \(X_{t}\) the array after \(t\) steps of the sorting procedure (thus a partially-sorted array). The _key contribution_ of our causal graph is modelling the assumption that outcomes of a particular execution step depend only on a subset of the current snapshot, while the remainder of the snapshot can be arbitrarily different. Accordingly, we assume the snapshot \(X_{t}\) to be generated from _two_ random variables, \(X_{t}^{c}\) and \(X_{t}^{s}\), with \(X_{t}^{c}\) representing the part of the snapshot that does not influence the current execution step (what can be changed without affecting the execution), while \(X_{t}^{s}\) the one that determines it (what needs to be stable). Let us now revisit our bubble sort example from this perspective (see Figure 3). At each execution step, bubble sort compares two adjacent elements of the input list, and swaps them if they are not correctly ordered. 
Hence, in this par Figure 2: The causal graph formalising our assumption about the outcome of a step depends only on a subset \(X_{t}^{s}\) of the snapshot \(X_{t}\), while the remainder \(X_{t}^{c}\) of the snapshot can be arbitrarily different. ticular example, \(X_{t}^{s}\) constitutes these two elements being compared at step \(t\), while the remaining elements--which do not affect whether or not a swap is going to happen at time \(t\)--form \(X_{t}^{c}\). By definition this implies that the next algorithm state is a function of _only_\(X_{t}^{s}\). The data encoding used by Velickovic et al. (2022) prescribes that hints have values provided in _all_ relevant parts of the graph. That is, in a graph of \(n\) nodes, an \(m\)-dimensional node hint has shape \(\mathbb{R}^{n\times m}\), and an \(m\)-dimensional edge hint has shape \(\mathbb{R}^{n\times n\times m}\). However, in order to keep our causal model simple, we choose to track the next-step hint in _only one_ of those values, using an _index_, \(I_{t}\), to decide which. Specifically, \(I_{t}\in\{1,2,\ldots,n\}\) are possible indices for node-level hints, and \(I_{t}\in\{(1,1),(1,2),\ldots,(1,n),(2,1),\ldots,(2,n),\ldots,(n,n)\}\) are possible indices for edge-level hints. For the indexed node/edge only, our causal graph then tracks the next-step value of the hint (either no change from the previous step or the new value), which we denote by \(Y_{t+1}\). Returning once again to our bubble sort example: one specific hint being tracked by the algorithm is which two nodes in the input list are currently considered for a swap. If \(I_{2}=4\), then \(Y_{3}\) will track whether node \(4\) is being considered for a swap, immediately after two steps of the bubble sort algorithm have been executed. Once step \(t\) of the algorithm has been executed, a new snapshot \(X_{t+1}\) is produced, and it can be decomposed into \(X_{t+1}^{c}\) and \(X_{t+1}^{s}\), just as before. Note that the execution in CLRS-30 is assumed _Markovian_(Velickovic et al., 2022): the snapshot at step \(t\) contains all the information to determine the snapshot at the next step. Finally, the execution terminates after \(T\) steps, and the final output is produced. We can then represent the output in a particular node/edge--indexed by \(I_{T}\), just as before--by \(Y_{T+1}^{o}:=g(X_{T}^{s},I_{T})\), with \(g\) being the function producing the algorithm output. As can be seen in Figure 2, \(X_{t}^{s}\) has all the necessary information to predict \(Y_{t+1}\), since our causal model encodes the conditional independence assumption \(Y_{t+1}\perp X_{t}^{c}\,|\,X_{t}^{s}\). More importantly, using the independence of mechanisms (Peters et al., 2017) we can conclude that under this causal model, performing interventions on \(X_{t}^{c}\) by changing its value, does not change the conditional distribution \(P(Y_{t+1}\,|\,X_{t}^{s})\). 
Note that this is exactly the formalisation of our initial intuition: _the output of a particular step of the algorithm (i.e., \(Y_{t+1}\)) depends only on a subset of the current snapshot (i.e., \(X_{t}^{s}\)), and thus it is not affected by the addition of input items that do not interfere with it_ (which we formalise as an **intervention** on \(X_{t}^{c}\)).1 Therefore, given a step \(t\in[1\ldots T]\), for all \(x,x^{\prime}\in\mathcal{X}_{t}^{c}\), where \(\mathcal{X}_{t}^{c}\) denotes the domain of \(X_{t}^{c}\), we have that \(X_{t}^{s}\) is an _invariant_ predictor of \(Y_{t+1}\) under interventions on \(X_{t}^{c}\): Footnote 1: In bubble sort, adding sorted keys at the end of the array does not affect whether we are swapping the current entries. \[p^{\text{do}(X_{t}^{c})=x}(Y_{t+1}|X_{t}^{s})=p^{\text{do}(X_{t}^{c})=x^{ \prime}}(Y_{t+1}|X_{t}^{s}), \tag{1}\] where \(p^{\text{do}(X_{t}^{c})=x}\) denotes the distribution obtained from assigning \(X_{t}^{c}\) the value of \(x\), i.e. the interventional distribution. Note, however, that Equation (1) does not give us a practical way of ensuring that our neural algorithmic reasoner respects these causal invariances, because it only has access to the entirety of the current snapshot \(X_{t}\), without knowing its specific subsets \(X_{t}^{c}\) and \(X_{t}^{s}\). More precisely, it is generally not known _which_ input elements constitute \(X_{t}^{s}\). In the next section, we will describe how to ensure invariant predictions for our reasoner, leveraging only \(X_{t}\). ## 4 Size-invariance through self-supervision in neural algorithmic reasoning Given a step \(t\), to ensure invariant predictions of \(Y_{t+1}\) without access to \(X_{t}^{s}\), we construct a _refinement_ task \(Y_{t+1}^{R}\) and learn a representation \(f(X_{t},I_{t})\) to predict \(Y_{t+1}^{R}\), as originally proposed for images in Mitrovic et al. (2021). A refinement for a task (Chalupka et al., 2014) represents a more fine-grained version of the initial task. More formally, given two tasks \(R:\mathcal{A}\rightarrow\mathcal{B}\) and \(T:\mathcal{A}\rightarrow\mathcal{B}^{\prime}\), task \(R\) is more (or equally) fine-grained than task \(T\) if, for any two elements \(a,a^{\prime}\in\mathcal{A}\), \(R(a)=R(a^{\prime})\implies\) Figure 3: Example of values of \(X_{t}^{c}\) and \(X_{t}^{s}\) on an input array in the execution of the bubble sort algorithm. At every step of computation, bubble sort compares and possibly swaps exactly _two_ nodes—those nodes are the only ones determining the outcome of the current step, and hence they constitute \(X_{t}^{s}\). All other nodes are part of \(X_{t}^{c}\). \(T(a)=T(a^{\prime})\). We will use this concept to show that a representation learned on the refinement task can be effectively used in the original task. Note that, as for \(Y_{t+1}\), we assume \(f(X_{t},I_{t})\) to be the representation learned from \(X_{t}\) of a predefined hint value--indexed by \(I_{t}\)--for example, the representation of the predecessor of a specific element of the input list. Given a step \(t\), let \(Y_{t+1}^{R}\) be a refinement of \(Y_{t+1}\), and let \(f(X_{t},I_{t})\) be a representation learned from \(X_{t}\), used for the prediction of the refinement (see Figure 4). As we will formally prove, a representation that is invariant in the prediction of the refinement task across changes in \(X_{t}^{c}\) is also invariant in the prediction of the algorithmic step under these changes. 
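Returning to the bubble sort intuition formalised in Equation (1) and footnote 1, the claim can be checked with a toy example. The sketch below is purely illustrative (it executes the algorithm directly, which our method never needs to do), and the helper name is hypothetical.

```python
# Toy illustration: appending sorted keys to the tail (an intervention on X_t^c)
# does not change the decision taken at the current bubble-sort step (Y_{t+1}).
def bubble_step(arr, i):
    """Return whether the pass swaps positions i and i+1, and the resulting array."""
    arr = list(arr)
    swap = arr[i] > arr[i + 1]
    if swap:
        arr[i], arr[i + 1] = arr[i + 1], arr[i]
    return swap, arr

swap_orig, _ = bubble_step([2, 1, 3], 0)
swap_aug, _ = bubble_step([2, 1, 3, 5, 9], 0)   # valid augmentation: sorted keys at the tail
assert swap_orig == swap_aug                     # the step (swap 2 and 1) is identical
```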
Therefore, optimising \(f(X_{t},I_{t})\) to be an invariant predictor for the refinement task \(Y_{t+1}^{R}\) represents a _sufficient_ condition for the invariance in the prediction of the next algorithmic state, \(Y_{t+1}\). In the next subsection, we present how to learn \(f(X_{t},I_{t})\) so that it is an invariant predictor of \(Y_{t+1}^{R}\) under changes in \(X_{t}^{c}\). Then, we show that this represents a sufficient condition for \(f(X_{t},I_{t})\) to be an invariant predictor of \(Y_{t+1}\) across changes in \(X_{t}^{c}\). ### Learning an invariant predictor of the refinement We consider \(Y_{t+1}^{R}\) to be the most fine-grained refinement task, which corresponds to classifying each (hint) instance individually, that is, a contrastive learning objective where we want to distinguish each hint from all others. This represents the _most fine-grained refinement_, because \(Y_{t+1}^{R}(a)=Y_{t+1}^{R}(a^{\prime})\Longleftrightarrow a=a^{\prime}\), by definition. Our goal is to learn \(f(X_{t},I_{t})\) to be an invariant predictor of \(Y_{t+1}^{R}\) under changes (interventions) of \(X_{t}^{c}\). Thus, given a step \(t\in[1\dots T]\), for all \(x,x^{\prime}\in\mathcal{X}_{t}^{c}\), we want \(f(X_{t},I_{t})\) such that \[p^{\text{do}(X_{t}^{c})=x}(Y_{t+1}^{R}|f(X_{t},I_{t}))=p^{\text{do}(X_{t}^{c})=x^{\prime}}(Y_{t+1}^{R}|f(X_{t},I_{t})), \tag{2}\] where \(p^{\text{do}(X_{t}^{c})}\) is the interventional distribution and \(\mathcal{X}_{t}^{c}\) denotes the domain of \(X_{t}^{c}\). Since we do not have access to \(X_{t}^{c}\), as it is unobserved (see Figures 2 and 4), we cannot explicitly intervene on it. Thus, we simulate interventions on \(X_{t}^{c}\) through data augmentation. As we are interested in being invariant to appropriate size changes, we design a data augmentation procedure tailored for neural algorithmic reasoning, which mimics interventions changing the size of the input. Given a current snapshot of the algorithm on a given input, the data augmentation procedure should produce an augmented input which is larger, but on which the execution of the current step is going to proceed identically. For example, a valid augmentation in bubble sort at a certain step consists of adding new elements to the tail of the input list, since the currently-considered swap will occur (or not) regardless of any elements added there. Thus, the valid augmentations for the bubble sort algorithm at a given step are all possible ways of adding items such that the one-step execution is unaffected by the addition. To learn an encoder \(f(X_{t},I_{t})\) that satisfies Equation (2), we propose to explicitly enforce invariance under valid augmentations. Such augmentations, as discussed, provide us with diverse inputs that share an identical intermediate execution step. Specifically, we use the ReLIC objective (Mitrovic et al., 2021) as a regularisation term, which we adapt to our causal graph as follows. Consider a time step \(t\), and let \(\mathcal{D}_{t}\) be the dataset containing the snapshots at time \(t\) for all the inputs. Let \(i_{t},j_{t}\in I_{t}\) be two indices, and denote by \(a_{lk}=(a_{l},a_{k})\in\mathcal{A}_{x_{t}}\times\mathcal{A}_{x_{t}}\) a pair of augmentations, with \(\mathcal{A}_{x_{t}}\) the set of all possible valid augmentations at \(t\) for \(x_{t}\) (which simulate the interventions on \(X_{t}^{c}\)). The objective function to optimise becomes \[\mathcal{L}_{t}=-\sum_{x_{t}\in\mathcal{D}_{t}}\sum_{i_{t}}\sum_{a_{lk}}\log\frac{\exp\left(\phi\left(f(x_{t}^{a_{l}},i_{t}),f(x_{t}^{a_{k}},i_{t})\right)\right)}{\sum_{j_{t}}\exp\left(\phi\left(f(x_{t}^{a_{l}},i_{t}),f(x_{t}^{a_{k}},j_{t})\right)\right)}+\alpha\,\mathrm{KL}\left(p^{\text{do}(a_{l})}(Y_{t+1}^{R}\,|\,f(x_{t},i_{t}))\,\|\,p^{\text{do}(a_{k})}(Y_{t+1}^{R}\,|\,f(x_{t},i_{t}))\right), \tag{3}\] where the positive pair shares the same index \(i_{t}\) across the two augmented views and the negatives range over the remaining indices \(j_{t}\) of the augmented input. Note that both positive and negative examples are drawn from augmented views of the same input, which is different from standard contrastive objectives, where positive and negative examples are taken from the batch. Due to space constraints, we expand on the derivation of Equation (3) in Appendix A. In practice, we consider only one augmentation per graph, which is equivalent to setting \(a_{l}\) to the identity transformation. Moreover, we follow the standard setup in contrastive learning and implement \(\phi(f(x_{t}^{a_{l}},i_{t}),f(x_{t}^{a_{k}},i_{t}))=\langle\,h(f(x_{t}^{a_{l}},i_{t})),h(f(x_{t}^{a_{k}},i_{t}))\,\rangle/\tau\) with \(h\) a fully-connected neural network and \(\tau\) a temperature parameter. Finally, we use a KL penalty to ensure invariance in the probability distribution across augmentations. This is a requirement for satisfying the assumptions of our key theoretical result. Example.To better understand Equation (3), we provide an example illustrated in Figure 5. We will consider one of the algorithms in CLRS-30--Kosaraju's strongly connected component (SCC) algorithm (Aho et al., 1974)--which consists of two invocations of depth-first search (DFS). Let \(G=(V,E)\) be an input graph to the SCC algorithm. Further, assume that at step \(t\), the algorithm is visiting a node \(v\in V\). We will focus on the prediction of the _parent_ of \(v\): the node from which we have reached \(v\) in the current DFS invocation. Note that, in practice, this is a classification task where node \(v\) decides which of the other nodes is its parent. Accordingly, given a particular node \(v\), our model computes a representation for every other node \(u\in V\). This representation is then passed through a final classifier, outputting the (unnormalised) probability of \(u\) being the parent of \(v\). Now, consider any augmentation of \(G\)'s nodes and edges that does not disrupt the current step of the search algorithm, denoted by \(G^{a}=(V^{a},E^{a})\). For example, as the DFS implementation in CLRS-30 prefers nodes with a smaller id value, a valid augmentation can be obtained by adding nodes with a larger id than \(v\) to \(V^{a}\), and adding edges from them to \(v\) in \(E^{a}\). Note that this augmentation does not change the predicted parent of \(v\). We can enforce that our representations respect this constraint by using our regularisation loss in Equation (3). Given a node \(v\in V\), we denote the representation of its parent node \(\pi_{v}\in V\) by \(f(G,(v,\pi_{v}))\). This representation is contrasted to _all_ other representations of nodes \(w\in V^{a}\) in the augmented graph, that is \(f(G^{a},(v,w))\).2 Footnote 2: Note that, in this case, \(I_{t}\) is a two-dimensional index, choosing two nodes—i.e., an edge—at once. More precisely, the most similar representation of \(f(G,(v,\pi_{v}))\) is the representation _in the augmentation_ of the parent of \(v\), \(f(G^{a},(v,\pi_{v}))\), while the representations associated to all other nodes (including the added ones) represent the negative examples \(f(G^{a},(v,w))\), for \(w\neq\pi_{v}\). ### Implications of the invariance In the previous subsection, we have presented a self-supervised objective, justified by our assumed causal graph, in order to learn invariant predictors for a refinement task \(Y_{t+1}^{R}\) under changes of \(X_{t}^{c}\). However, our initial goal was to ensure invariance in the prediction of algorithmic hints \(Y_{t+1}\) across \(X_{t}^{c}\). Now we will bridge these two aims. 
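Before doing so, it may help to see how the per-step objective in Equation (3) could be computed in practice. The following is a simplified, hypothetical PyTorch sketch, not the authors' implementation: it assumes the per-index representations of the original and augmented views are already aligned row by row, the names `z_orig`, `z_aug`, `logits_orig` and `logits_aug` are placeholders, and the projection head \(h\), temperature \(\tau\) and KL weight \(\alpha\) follow the description above.

```python
# Simplified, hypothetical sketch of the per-step objective in Equation (3).
import torch
import torch.nn.functional as F

def hint_relic_step_loss(z_orig, z_aug, h, logits_orig, logits_aug, tau=0.1, alpha=1.0):
    p_orig = F.normalize(h(z_orig), dim=-1)          # h(f(x_t, i_t)) for the original view
    p_aug = F.normalize(h(z_aug), dim=-1)            # h(f(x_t^a, j_t)) for the augmented view

    sims = p_orig @ p_aug.t() / tau                  # phi(., .) for every index pair (i_t, j_t)
    targets = torch.arange(sims.size(0), device=sims.device)
    contrastive = F.cross_entropy(sims, targets)     # positive pair: the same index in both views

    # KL penalty enforcing invariance of the predicted hint distribution across views.
    kl = F.kl_div(F.log_softmax(logits_aug, dim=-1),
                  F.softmax(logits_orig, dim=-1),
                  reduction="batchmean")
    return contrastive + alpha * kl
```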
In the following, we show how learning a representation that is an invariant predictor of \(Y_{t+1}^{R}\) under changes of \(X_{t}^{c}\) represents a _sufficient_ condition for this representation to be invariant to \(X_{t}^{c}\) when predicting \(Y_{t+1}\). **Theorem 4.1**.: _Consider an algorithm and let \(t\in[1\dots T]\) be one of its steps. Let \(Y_{t+1}\) be the task representing a prediction of the algorithm step and let \(Y_{t+1}^{R}\) be a refinement of such task. If \(f(X_{t},I_{t})\) is an invariant representation for \(Y_{t+1}^{R}\) under changes in \(X_{t}^{c}\), then \(f(X_{t},I_{t})\) is an invariant representation for \(Y_{t+1}\) under changes in \(X_{t}^{c}\), that is, for all \(x,x^{\prime}\in\mathcal{X}_{t}^{c}\), the following holds:_ \[p^{do(X_{t}^{c})=x}(Y_{t+1}^{R}|f(X_{t},I_{t})) =p^{do(X_{t}^{c})=x^{\prime}}(Y_{t+1}^{R}|f(X_{t},I_{t}))\] \[\implies\] \[p^{do(X_{t}^{c})=x}(Y_{t+1}|f(X_{t},I_{t})) =p^{do(X_{t}^{c})=x^{\prime}}(Y_{t+1}|f(X_{t},I_{t})).\] We prove Theorem 4.1 in Appendix C. Note that this justifies our self-supervised objective: by learning invariant representations though a refinement task, we can also guarantee invariance in the hint prediction. In other words, we can _provably_ ensure that the prediction of an algorithm step is not affected by changes in the input that do not interfere with the current execution step. Since we can express these changes in the form of _addition_ of input nodes, we are ensuring that the hint prediction is the same on two inputs of different sizes, but identical current algorithmic step. Figure 5: Example of applying our data augmentation and contrastive loss, following the example in Figure 1. An input graph (left) is augmented by adding nodes and edges (right), such that the next step—making \(2\) the parent of \(4\), i.e. \(\pi_{4}=2\)—remains the same. The representation of the pair \((4,2)\) is hence contrasted against all other representations of pairs \((4,u)\) in the augmented graph. In other words, the green edge is the _positive_ pair to the blue edge, with other edges (in red) being _negative_ pairs to it. ## 5 Experiments We conducted an extensive set of experiments to answer the following main questions: 1. _Can our model, Hint-ReLIC, which relies on the addition of our causality-inspired self-supervised objective, outperform the corresponding base model in practice?_ 2. _What is the importance of such objective when compared to other changes made with respect to the previous state-of-the-art model?_ 3. _How does Hint-ReLIC compare to a model which does not leverage hints at all, directly predicting the output from the input? Are hints necessary?_ Model.As a base model, we use the Triplet-GMPNN architecture proposed by Ibarz et al. (2022), which consists of a fully-connected MPNN (Gilmer et al., 2017) where the input graph is encoded in the edge features, augmented with gating and triplet reasoning (Dudzik and Velickovic, 2022). We replace the loss for predicting the next-step hint in the base model with our regularisation objective (Equation (3)), which aims at learning hint representations that are _invariant to size changes that are irrelevant to the current step_ via constrastive and KL losses. We make an additional change with respect to the base model, consisting of including the _reversal_ of hints of pointer type. 
More specifically, given an input graph, if a node \(A\) points to another node \(B\) in the graph, we include an additional (edge-based) hint representing the pointer from \(B\) to \(A\). This change (which we refer to as **reversal** in the results) consists simply in the inclusion of these additional hints, and we study the impact of this addition in Section 5.1. The resulting model is what we call Hint-ReLIC. Data augmentations.To simulate interventions on \(X_{\tilde{t}}^{c}\) and learn invariant representations, we design augmentation procedures which construct augmented data given an input and an algorithm step, such that the step of the algorithm is the same on the original input and on the augmented data. We consider simple augmentations, which we describe in detail in Appendix D. To reduce the computational overhead, given an input graph, instead of sampling an augmentation at each algorithm step, we sample a single step, \(\tilde{t}\sim\mathcal{U}\{1,T\}\), and construct an augmentation only for the sampled step. Then, we use the (same) constructed augmentation in all the steps _until_ the sampled one, \(t\leq\tilde{t}\). This follows from the consideration that, if augmentations are carefully constructed, the execution of the algorithm is the same not only in the next step but in all steps leading up to that. Whenever possible, we relax the requirement of having the augmentation with _exactly_ the same execution, and we allow for approximate augmentations, in order to avoid over-engineering the methodology and obtain a more robust model. This results in more general and simpler augmentations, though we expect more tailored ones to perform better. We refer the reader to Appendix D for more details. We end this paragraph by stressing that we _never_ run the target algorithm on the augmented inputs: rather, we directly construct them to have the same next execution step as the corresponding inputs. As a result, our method does not require direct access to the algorithm used to generate the inputs. Furthermore, the number of nodes in our augmentations is at most one more than the number of nodes in the largest training input example. This means that, in all of our experiments, we still never significantly cross the intended test size distribution shift during training. Datasets.We run our method on a diverse subset of the algorithms present in the CLRS benchmark consisting of: 1. _DFS-based algorithms_ (Articulation Points, Bridges, Strongly Connected Components (Aho et al., 1974), Topological Sort (Knuth, 1973)); 2. _Other graph-based algorithms_ (Bellman-Ford (Bellman, 1958), BFS (Moore, 1959), DAG Shortest Paths, Dijkstra et al., 1959), Floyd-Warshall (Floyd, 1962), MST-Kruskal (Kruskal, 1956), MST-Prim (Prim, 1957)); 3. _Sorting algorithms_ (Bubble Sort, Heapsort (Williams, 1964), Insertion Sort, Quicksort (Hoare, 1962)); 4. _Searching algorithms_ (Binary-search, Minimum). This subset is chosen as it contains most algorithms suffering from out-of-distribution performance drops in current state-of-the-art; see Ibarz et al. (2022, Table 2). Results.Figure 6 compares the out-of-distribution (OOD) performances of the Triplet-GMPNN baseline, which we have re-trained and evaluated in our experiments, to our model Hint-ReLIC, as described above. Hint-ReLIC performs better or comparable to the existing state-of-the-art baseline, showcasing how the proposed procedure appears to be beneficial not only theoretically, but also in practice. 
The most significant improvements can be found in the sorting algorithms, where we obtain up to \(3\times\) increased performance. ### Ablation study In this section we study the contribution and importance of two main components of our methodology. First, we consider the impact of the change we made with respect to the original baseline proposed in Ibarz et al. (2022), namely the inclusion of the reversal of hint pointers. Second, as we propose a novel way to leverage hints through our self-supervised objective, which is different from the direct supervision in the baseline, one may wonder whether completely removing hints can achieve even better scores. Thus, we also study the performance when completely disregard ing hints and directly going from input to output. Finally, we refer the reader to Appendix E.1 for additional ablation experiments, including the removal of the KL component in Equation (3)--which is necessary for the theoretical results but may not always be needed in practice. The effect of the inclusion of pointers' reversal.As discussed above, pointers' reversal simply consists of adding an additional hint for each hint of pointer type (if any), such that a node not only has the information representing which other node it points to, but also from which nodes it is pointed by. We study the impact of this inclusion by running the baseline with these additional hints, and evaluate its performance against both the baseline and our Hint-ReLIC. Table 1 shows that this addition, which we refer to as **Baseline + reversal**, indeed leads to improved results for certain algorithms, but does not obtain the predictive performances we reached with our regularisation objective. The removal of hints.While previous works directly included the supervision on the hint predictions, we argue in favour of a novel way of leveraging hints. We use hints first to construct the augmentations representing the same algorithm step, and then we employ their representations in the self-supervised objective. An additional valid model might consist of a model that directly goes from input to output and completely ignores hints. In Table 2 we show that this **No Hints** model can achieve very good performances, but it is still generally outperformed by Hint-ReLIC. 
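As a small illustration of the pointers' reversal described above, the sketch below builds the reversed-pointer hint from a node-level pointer hint. It is hypothetical (not the CLRS-30 encoding), and the dense edge matrix is just one possible representation.

```python
# Hypothetical sketch of pointers' reversal: for a node-level pointer hint pi
# (pi[v] = node that v points to), build an edge-level hint marking, for every
# node, which nodes point *to* it.
import numpy as np

def reverse_pointers(pi):
    """pi: int array of length n with pi[v] = parent/pointer of node v."""
    n = len(pi)
    reversed_hint = np.zeros((n, n), dtype=np.float32)   # reversed_hint[u, v] = 1 iff pi[v] = u
    for v, u in enumerate(pi):
        reversed_hint[u, v] = 1.0
    return reversed_hint

pi = np.array([0, 0, 1, 1])            # nodes 1, 2, 3 point to 0, 1, 1; node 0 points to itself
print(reverse_pointers(pi))
```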
\begin{table} \begin{tabular}{l c c c} \hline \hline **Alg.** & **Baseline** & **Baseline + reversal** & **Hint-ReLIC (ours)** \\ \hline Articulation points & \(88.93\%\pm 1.92\) & \(91.04\%\pm 0.92\) & \(\mathbf{98.45\%\pm 0.60}\) \\ Badges & \(93.75\%\pm 2.73\) & \(97.70\%\pm 0.34\) & \(\mathbf{98.32\%\pm 0.09}\) \\ SCC & \(38.53\%\pm 0.45\) & \(31.40\%\pm 8.80\) & \(\mathbf{76.79\%\pm 3.04}\) \\ Topological sort & \(87.27\%\pm 2.67\) & \(88.83\%\pm 7.29\) & \(\mathbf{96.59\%\pm 0.20}\) \\ \hline Belman-Ford & \(\mathbf{96.67\%\pm 0.81}\) & \(95.02\%\pm 0.49\) & \(95.54\%\pm 1.06\) \\ BPS & \(99.64\%\pm 0.05\) & \(\mathbf{90.93\%\pm 0.03}\) & \(90.00\%\pm 0.21\) \\ DAG Short Paths & \(88.12\%\pm 5.70\) & \(96.61\%\pm 0.61\) & \(\mathbf{98.17\%\pm 0.26}\) \\ Dijkstra & \(93.41\%\pm 1.08\) & \(91.50\%\pm 1.85\) & \(\mathbf{97.44\%\pm 0.50}\) \\ Floyd-Warball & \(46.51\%\pm 1.30\) & \(46.28\%\pm 0.80\) & \(\mathbf{72.23\%\pm 4.84}\) \\ MST-Kruskal & \(91.81\%\pm 1.05\) & \(89.83\%\pm 0.43\) & \(\mathbf{90.51\%\pm 0.45}\) \\ MST-Prim & \(87.64\%\pm 1.79\) & \(86.95\%\pm 2.34\) & \(\mathbf{87.97\%\pm 2.94}\) \\ \hline Insertion sort & \(75.28\%\pm 5.62\) & \(87.21\%\pm 2.80\) & \(\mathbf{92.70\%\pm 1.29}\) \\ Bubble sort & \(79.87\%\pm 6.85\) & \(80.51\%\pm 9.10\) & \(\mathbf{92.94\%\pm 1.23}\) \\ Quicksort & \(70.53\%\pm 11.59\) & \(85.69\%\pm 4.53\) & \(\mathbf{93.30\%\pm 1.96}\) \\ Heaport & \(32.12\%\pm 5.20\) & \(49.13\%\pm 10.35\) & \(\mathbf{95.16\%\pm 1.27}\) \\ \hline Binary Search & \(74.60\%\pm 3.61\) & \(50.42\%\pm 8.45\) & \(\mathbf{98.68\%\pm 2.13}\) \\ Minimum & \(97.78\%\pm 0.63\) & \(98.43\%\pm 0.01\) & \(\mathbf{99.37\%\pm 0.20}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Effect of the inclusion of pointers’ reversal on each algorithm. The table shows mean and stderr of the OOD micro-F\({}_{1}\) score after \(10\),\(000\) training steps, across different seeds. \begin{table} \begin{tabular}{l c c} \hline \hline **Alg.** & **No Hints** & **Hint-ReLIC (ours)** \\ \hline Ariticulation points & \(81.97\%\pm 5.08\) & \(\mathbf{98.45\%\pm 0.60}\) \\ Badges & \(95.02\%\pm 1.03\) & \(\mathbf{93.25\%\pm 0.09}\) \\ SCC & \(57.63\%\pm 0.68\) & \(\mathbf{76.79\%\pm 3.04}\) \\ Topological sort & \(84.29\%\pm 1.16\) & \(\mathbf{96.59\%\pm 0.20}\) \\ \hline Belman-Ford & \(93.26\%\pm 0.04\) & \(\mathbf{95.54\%\pm 1.06}\) \\ BPS & \(\mathbf{90.90\%\pm 0.03}\) & \(99.00\%\pm 0.02\) \\ DAG Short Paths & \(97.62\%\pm 0.62\) & \(\mathbf{98.17\%\pm 0.26}\) \\ Dijkstra & \(95.01\%\pm 1.14\) & \(\mathbf{97.45\%\pm 0.50}\) \\ Floyd-Warball & \(40.80\%\pm 2.90\) & \(\mathbf{72.23\%\pm 4.84}\) \\ MST-Kruskal & \(92.28\%\pm 0.82\) & \(\mathbf{90.01\%\pm 0.45}\) \\ MST-Prim & \(85.33\%\pm 1.21\) & \(\mathbf{87.97\%\pm 2.94}\) \\ \hline Insertion sort & \(77.29\%\pm 7.42\) & \(\mathbf{92.70\%\pm 1.29}\) \\ Bubble sort & \(81.32\%\pm 6.50\) & \(\mathbf{92.94\%\pm 1.23}\) \\ Quicksort & \(71.60\%\pm 2.22\) & \(\mathbf{93.30\%\pm 1.96}\) \\ Heaport & \(68.50\%\pm 2.81\) & \(\mathbf{95.16\%\pm 1.27}\) \\ \hline Binary Search & \(\mathbf{93.21\%\pm 1.10}\) & \(89.68\%\pm 2.13\) \\ Minimum & \(99.24\%\pm 0.21\) & \(\mathbf{99.37\%\pm 0.20}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Importance of hint usage in the final performance. The table shows mean and stderr of the OOD micro-F\({}_{1}\) score after \(10\),\(000\) training steps, across different seeds. Figure 6: Per-algorithm comparison of the Triplet-GMPNN baseline (Ibarz et al., 2022) and our Hint-ReLIC. 
Error bars represent the standard error of the mean across three random seeds. The final column shows the average and standard error of the mean performances across the different algorithms. ## 6 Conclusions In this work, we propose a self-supervised learning objective that employs augmentations derived from available hints, which represent intermediate steps of an algorithm, as a way to better ground the execution of GNN-based algorithmic reasoners on the computation that the target algorithm performs. Our Hint-ReLIC model, based on this self-supervised objective, leads to algorithmic reasoners that produce more robust outputs of the target algorithms, especially compared to autoregressive hint prediction. In conclusion, hints can take you a long way, if used in the right way. ## Acknowledgements The authors would like to thank Andrew Dudzik and Daan Wierstra for valuable feedback on the paper. They would also like to show their gratitude to the Learning at Scale team at DeepMind for a supportive atmosphere.
2308.15390
Bayesian Integration of Information Using Top-Down Modulated WTA Networks
Winner-Take-All (WTA) circuits, a type of Spiking Neural Network (SNN), have been suggested as facilitating the brain's ability to process information in a Bayesian manner. Research has shown that WTA circuits are capable of approximating hierarchical Bayesian models via Expectation-Maximization (EM). So far, research in this direction has focused on bottom-up processes. This is contrary to neuroscientific evidence that shows that, besides bottom-up processes, top-down processes too play a key role in information processing by the human brain. Several functions ascribed to top-down processes include direction of attention, adjusting for expectations, facilitation of encoding and recall of learned information, and imagery. This paper explores whether WTA circuits are suitable for further integrating information represented in separate WTA networks. Furthermore, it explores whether, and under what circumstances, top-down processes can improve WTA network performance with respect to inference and learning. The results show that WTA circuits are capable of integrating the probabilistic information represented by other WTA networks, and that top-down processes can improve a WTA network's inference and learning performance. Notably, it is able to do this according to key neuromorphic principles, making it ideal for low-latency and energy-efficient implementation on neuromorphic hardware.
Otto van der Himst, Leila Bagheriye, Johan Kwisthout
2023-08-29T15:33:51Z
http://arxiv.org/abs/2308.15390v1
# Bayesian Integration of Information Using Top-Down Modulated Winner-Take-All Networks ###### Abstract Winner-Take-All (WTA) circuits -- a type of Spiking Neural Networks (SNN) -- have been suggested as facilitating the brain's ability to process information in a Bayesian manner. Research has shown that WTA circuits are capable of approximating hierarchical Bayesian models via Expectation-Maximization (EM). So far, research in this direction has focused on bottom-up processes. This is contrary to neuroscientific evidence that shows that, besides bottom-up processes, top-down processes too play a key role in information processing by the human brain. Several functions ascribed to top-down processes include direction of attention, adjusting for expectations, facilitation of encoding and recall of learned information, and imagery. This paper explores whether WTA circuits are suitable for further integrating information represented in separate WTA networks. Furthermore, it explores whether, and under what circumstances, top-down processes can improve WTA network performance with respect to inference and learning. The results show that WTA circuits are capable of integrating the probabilistic information represented by other WTA networks, and that top-down processes can improve a WTA network's inference and learning performance. Notably, it is able to do this according to key neuromorphic principles, making it ideal for low-latency and energy-efficient implementation on neuromorphic hardware. Neuromorphic Computing, Winner-take-all (WTA) circuit, hierarchical WTA network, Spiking Neural Network (SNN), spike-timing-dependent plasticity (STDP) learning, Bayesian inference, Top-Down Processes. ## I Introduction Bayesian inference is one of the most prominent computational techniques in Artificial Intelligence (AI), being applied in a broad range of areas including statistical machine learning [1, 2], causal discovery [3], automatic speech recognition [4], spam filtering [5], and clinical decision support systems [6]. Bayesian inference uses new data in order to update existing models or hypotheses. The method relies on the well-known Bayes' theorem: \[\overbrace{P(H|E)}^{Posterior}=\frac{\overbrace{P(E|H)}^{Likelihood}\,\overbrace{P(H)}^{Prior}}{\underbrace{P(E)}_{Evidence}} \tag{1}\] where \(H\) can be interpreted as the hypotheses or hidden causes, and \(E\) as the available evidence or data. In essence, this method can be used to continuously update the posterior probability over hypotheses based on the newly arriving evidence. While Bayesian inference in this manner yields an optimal estimation of the posterior, it quickly becomes intractable for all but the simplest environments. Addressing this issue, many methods have been conceived that approximate Bayesian inference. Such methods seek to obtain acceptable posterior estimations in a manner that is computationally tractable. Besides interest from a purely AI perspective, there is considerable evidence that suggests that the human brain processes information in a Bayesian manner [7, 8, 9, 10, 11]. Even so, it remains an open question how the brain represents probabilistic information, how it performs Bayesian inference, and how it learns to represent Bayesian models correctly. ### _The Brain as Inspiration: Neuromorphic Computing_ While on the one hand, answering these questions is interesting because it tells us more about how the human brain works, it also serves as useful inspiration for the field of neuromorphic computing. 
Recent years have seen rapid growth in this field [12], where the computational principles of the traditional Von Neumann computer are substituted by more brain-inspired approaches. One of the most common neuromorphic approaches is to rely on Spiking Neural Networks (SNNs). In its simplest form an SNN consists of (often sparsely) connected neurons. A neuron spikes when its membrane potential exceeds its threshold potential, causing a signal to be sent down its outgoing connections which excites or inhibits the connected neurons. This architecture can be adapted in many ways in order to change its behaviour and computational capabilities. The motivation for this can be to make the architecture more biologically plausible (e.g. by adopting a more complex neuron model), to make it more suitable for some practical purpose (e.g. by allowing neurons to send non-binary signals), or both. Some of the most important features that are strived for in neuromorphic designs are computation based on locally available information and the co-location of computation and memory. Computation with local information entails that each computational component (e.g. a neuron in an SNN) requires access exclusively to local information (e.g. a neuron's own membrane potential, and spikes coming into its dendrites via connected axons). While this may seem like an undesirable restriction, such an architecture eliminates the need for constant communication with a global memory unit, which can serve as a major bottleneck in Von Neumann computers. Further, co-location of computation and memory refers to the parameters of the network being represented by the computational units themselves. That is, an inherent part of each computational unit (such as a neuron or an axon) is its own set of dynamic parameters (such as its membrane potential or a connection weight). These parameters reflect the state of their corresponding computational unit, and combined reflect the state of the network as a whole. Having memory as close as possible to where it is needed is a desirable property that reduces the time required for memory retrieval. Relying on local rather than global information further opens the door for several additional desirable properties. For one, it facilitates event-based computation. In event-based computation, locality is leveraged such that each individual computational unit (e.g. a neuron) performs computations (e.g. spikes) exclusively when it receives some minimal amount of input. One might for example compare traditional cameras with the neuromorphic Dynamic Vision Sensor (DVS). When filming, a camera will shoot a given number of frames each second, whereas a DVS (similar to the human visual system) will only register changes in its visual field. The event-based approach of the DVS has several advantages. First of all, between frames the traditional camera is essentially blind, meaning that it risks missing information when filming rapidly changing scenes. Contrary to this, the DVS responds to change regardless of its timing, and thus has no such temporal blind spots. Secondly, a camera will typically record a lot of redundant information. In a mostly static environment, each new frame captured will have a lot of information in common with the preceding frame. The DVS has no such issues given that it only responds to change, allowing for a very natural way of ignoring redundant data, and significantly reducing computational costs in many settings.
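To make the threshold-and-event idea concrete, here is a minimal sketch (ours; all constants are arbitrary illustrative choices, not values from the paper) of a single leaky integrate-and-fire neuron that integrates input events and emits an output event only when its membrane potential crosses the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single leaky integrate-and-fire neuron simulated in 1 ms steps.
# All constants are illustrative and not taken from the paper.
threshold = 1.0     # emit an output event when the membrane potential exceeds this
leak = 0.9          # fraction of the membrane potential retained each step
weight = 0.3        # contribution of each incoming event
potential = 0.0

input_events = rng.random(300) < 0.3    # sparse, event-like input train
output_events = []

for t, event_in in enumerate(input_events):
    potential *= leak                   # passive decay, using only local information
    if event_in:                        # event-based: integration happens only on input events
        potential += weight
    if potential > threshold:           # threshold crossing -> output event (a spike)
        output_events.append(t)
        potential = 0.0                 # reset after the spike

print(f"{len(output_events)} output events in {len(input_events)} time steps")
```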
Local and event-based computation further facilitate a computer's capacity for parallel and asynchronous computation. This entails that each computational unit (e.g. a neuron) can act in parallel to many or all other computational units. Ideally, each unit acts purely based on local events that affect it (e.g. spikes travelling into a neuron's dendrites), and can completely ignore what happens outside of this, meaning that it can act or stay dormant without reference to some global computational unit. Under the right architecture, such a system is capable of processing many different stimuli at once, and of utilizing only those computational units that are necessary at a given time. In general, the primary benefits of neuromorphic computers are considered to be a potential decrease in energy costs and response latency by orders of magnitude [12]. This is reflected for example in the DVS when compared to a traditional camera: the DVS saves energy by responding only to changes, and reacts immediately to said changes rather than waiting for a frame to be shot. The hugely parallel and asynchronous potential of neuromorphic computers is also a key feature to be exploited. In order to fully utilize these potential benefits two things are needed. First of all hardware is needed which functions according to these neuromorphic principles. Recent years have seen an increase in the design of such systems, with notable examples including the Loihi [13], SpiNNaker, and BrainScaleS [14] chips. Note that, for a variety of reasons, most chips do not exhibit all the neuromorphic properties mentioned above, instead choosing to focus on a subset of them. Secondly, the algorithms run on this hardware must adhere to neuromorphic principles. For example, an algorithm which requires constant access to global information will not be suitable for neuromorphic hardware. Likewise it is desirable for an SNN algorithm to be designed to spike very sparsely, given that this sparsity is what underlies the energy efficiency of neuromorphic designs. ### _Neuromorphic solutions to Bayesian Inference_ Circling back to Bayesian inference, we will proceed by highlighting several neuromorphic solutions to this problem. One direction concerns neural sampling methods. [15] and [16] propose a model for neural sampling that is on the one hand consistent with the dynamics of spiking neurons, and which on the other hand can also be understood from the perspective of probabilistic inference through Markov chain Monte Carlo (MCMC) sampling. Their method is similar to sampling approaches which have already been applied extensively (e.g. Boltzmann machines); moreover, the model is more biologically realistic as it incorporates aspects of inherent temporal dynamics and spike-based communication of a network of spiking neurons. [17] further provides a concrete neural implementation of this model that is capable both of approximate Bayesian inference and learning for any hidden Markov model. Another direction relies on Winner-Take-All (WTA) circuits, a type of SNN that has been identified as a ubiquitous processing component in the brain [18, 19, 20, 21, 22, 23, 24]. A WTA circuit is a simple SNN that consists of a single layer of excitatory neurons that is connected to a population of inhibitory neurons. Whenever an excitatory neuron fires, the population of inhibitory neurons is activated such that it sends back a strong inhibitory signal to all excitatory neurons.
The excitatory neurons of the WTA circuit are themselves excited by a population of input neurons (e.g. sensory neurons). When combined with a Spike-Timing Dependent Plasticity (STDP) learning rule a WTA circuit can learn to distinguish between patterns of input activity. The intuition is that a WTA neuron firing in response to a particular input pattern will (due to STDP) become more likely to fire in response to this same input pattern in the future, while other neurons will not because they are inhibited and thus do not fire. Therefore if a WTA circuit is repeatedly exposed to structured input, a single neuron will become increasingly sensitive to a particular structure. This process has been proven to be an approximation of the Expectation-Maximization (EM) algorithm [19]. In this algorithm one can distinguish an expectation step and a maximization step. During the expectation step the network generates a posterior distribution over hidden causes of the current input given the current network weights. This is represented over time by the WTA excitatory neurons, where each excitatory spike represents a single stochastic sample from the posterior distribution encoded in the circuit. The maximization step maximizes the likelihood of the data in this distribution by updating the connection weights of the excitatory neurons according to an STDP learning rule. Thus, a WTA circuit is capable first of all of generating a probabilistic representation of multinomial variables through spikes of its excitatory neurons. Secondly, it is able to infer the state of such a hidden variable from the input it receives from other neurons. And finally, it is able to learn the relation between patterns of input and hidden variables. Notably, it is able to do all this whilst adhering to all the neuromorphic principles that we mentioned earlier. It has further been shown that multiple WTA circuits can be connected into a hierarchical WTA network to perform approximate mean field inference. [21] introduced this approach to extend the method to be able to implement inference and learning for hierarchical Bayesian models. This method employs a fully factorized model to approximate a complex Bayesian model. This is particularly suitable for tree-structured models, though the neural implementation of such models can extend to arbitrary Bayesian networks by merging variables. While WTA networks seem well suited not just for the processing of a single source of information, but also for the integration of multiple sources of information, the latter has not been explicitly addressed in earlier research. We show that networks of WTA circuits are capable of executing this task. Specifically, we interpret the spiking behaviour of a single WTA circuit (after learning according to STDP) as a compressed probabilistic representation of incoming information. As a WTA circuit can learn to distinguish between patterns of raw sensory neuron spikes, we show that a WTA circuit can learn to distinguish between patterns in the spiking behaviour of preceding WTA circuits, such that it integrates their information into a single probabilistic representation. As we add more layers to the WTA network, it becomes important to take into account not just bottom-up processes, but also top-down processes. Bottom-up processes concern flows of information starting with spikes from sensory neurons, such as from neurons in the retina.
In the human brain this bottom-up activation flows from the sensory neurons up through a hierarchy of processing layers (e.g. visual areas V1-V5); in addition to these bottom-up processes, however, there are top-down processes in which activation flows in the other direction (e.g. from V5 back to V1) [25, 26, 27, 28]. There are several functions ascribed to these top-down processes, including the direction of attention, adjusting for expectations, adjusting for the perceptual task, facilitation of encoding and recall of learned information, and imagery. Specifically for visual tasks these processes are thought to play an important role in perceptual grouping, perceptual constancies, contour integration, surface segmentation, and shape recognition [25]. In [19] only a single WTA circuit is considered, rather than a network of circuits. Brain research shows that top-down processes do not extend to sensory neurons, and since the design of [19] includes no other neuron layers, it cannot include top-down processes. In the work of [21], where an additional layer of WTA circuits is added to the network, top-down processes do become possible. Indeed [21] mentions that the connections between the two WTA layers are bidirectional, thereby allowing feedback; however, further explanation of the role or impact of this feature is still missing. In our work we extend the hierarchical network of [21] by adding a WTA circuit that integrates information from multiple hierarchical networks. The addition of this layer further increases the potential impact of top-down processes. The purpose of our work then is twofold. First we explore experimentally whether WTA circuits can chain together separate WTA networks into a larger WTA network, with beneficial consequences for its inference and learning capacities. We demonstrate that WTA circuits are in fact suitable for such a task. Secondly, we explore the role that top-down processes have in WTA networks. On the one hand this is done by demonstrating that top-down processes are able to improve a WTA network's capacity to represent variables, perform inference, and learn. On the other hand it is done by demonstrating that this effect is greater in larger WTA networks (i.e. in our integration design as compared to the hierarchical design of [21]). Together, this research highlights the feasibility of WTA networks as a fully neuromorphic (i.e. local, event-based, parallel, asynchronous) approach to performing Bayesian inference. The rest of the paper is organized as follows. Section II provides a formal definition of a WTA circuit and the underlying Bayesian model it represents. Section III describes our experimental setup and reports the experimental results. Finally, the conclusions are drawn in Section IV. ## II WTA network definition and underlying Bayesian model We will now define in formal terms what a WTA circuit is, how it can represent a Bayesian model, and how it can be extended to a WTA network consisting of multiple circuits. Table I provides an overview of the mathematical notation used throughout this paper. ### _WTA Circuit Definition_ In our discrete-time model, a WTA circuit consists of a layer \(\mathbf{z}=\{z_{1},...,z_{K}\}\) of \(K\) excitatory integrate-and-fire neurons. Like [19] we adopt a stochastic firing model in which the firing probability of each neuron \(z_{k}\) depends exponentially on the membrane potential \(\mu_{k}\) of said neuron.
The membrane potential \(\mu_{k}\) of each neuron \(z_{k}\) is updated at each time step as a function of its current state and of incoming excitatory and inhibitory signals. #### II-A1 Excitation Excitatory inputs consist of bottom-up inputs generated by a population of \(M\) neurons \(\mathbf{y}^{\uparrow}=\{y_{1}^{\uparrow},...,y_{M}^{\uparrow}\}\), and - if top-down processes are enabled - additionally include top-down inputs generated by a population of \(N\) neurons \(\mathbf{y}^{\downarrow}=\{y_{1}^{\downarrow},...,y_{N}^{\downarrow}\}\). In our work, neurons \(\mathbf{y}^{\uparrow}\) are either sensory neurons \(\mathbf{s}=\{s_{1},...,s_{W}\}\) or neurons \(\mathbf{z}^{\prime}\) from preceding WTA circuits, while neurons \(\mathbf{y}^{\downarrow}\) are exclusively neurons \(\mathbf{z}^{\prime}\) from successive WTA circuits. Input neurons \(y^{\uparrow}\) and \(y^{\downarrow}\) have outgoing excitatory connections leading to neurons \(\mathbf{z}\); the strength of the connections is expressed by weights \(\mathbf{w}^{\uparrow}=\{w_{km}^{\uparrow}|k\in\{1,...,K\},m\in\{1,...,M\}\}\) and \(\mathbf{w}^{\downarrow}=\{w_{kn}^{\downarrow}|k\in\{1,...,K\},n\in\{1,...,N\}\}\). If we denote \(y(t)\) to mean that input neuron \(y\) fired at time \(t\), then we can define the combined strength of the excitatory inputs for neuron \(z_{k}\) at time \(t\) to be: \[u_{k}(t)=\overbrace{\sum_{m=1}^{M}w_{km}^{\uparrow}y_{m}^{\uparrow}(t)}^{\text{Bottom-up excitation}}+\overbrace{\sum_{n=1}^{N}w_{kn}^{\downarrow}y_{n}^{\downarrow}(t)}^{\text{Top-down excitation}} \tag{2}\] #### II-A2 Inhibition At every point in time, each neuron \(z_{k}\) is further influenced by an identical (scalar) inhibitory signal \(I(t)=I^{l}(t)+I^{c}(t)\). The role of this inhibitory signal is twofold. First of all, inhibition signal \(I^{l}(t)\) installs a mechanism of lateral inhibition which drives competition between neurons \(\mathbf{z}\). Secondly, inhibition signal \(I^{c}(t)\) is used to exert control over the combined input signal \(u_{k}(t)-I(t)\), which has several purposes that will be elaborated on in section II-C; for now we will assume that \(I^{c}(t)=0\). Given that a single neuron in \(\mathbf{z}\) spikes at time \(t\), the conditional probability \(q_{k}(t)\) that this spike was generated by neuron \(z_{k}\) is: \[q_{k}(t)=\frac{r_{k}(t)\delta t}{R(t)\delta t}=\frac{e^{\mu_{k}(t)}}{\sum_{k^{\prime}=1}^{K}e^{\mu_{k^{\prime}}(t)}} \tag{8}\] where \(r_{k}(t)\propto e^{\mu_{k}(t)}\) is the instantaneous firing rate of neuron \(z_{k}\) and \(R(t)=\sum_{k^{\prime}=1}^{K}r_{k^{\prime}}(t)\) is the combined firing rate of the circuit. This holds in continuous time, where the probability of two neurons spiking at the exact same time is zero. In discrete time, where multiple neurons can spike simultaneously, additional conditions need to be satisfied in order to approximately arrive at equation 8; we elaborate on this in section II-C. We can interpret the spike distribution defined by \(q_{k}(t)\) as a generative model over multinomial observed variables \(\mathbf{x}=\{x_{1},...,x_{V}\}\) and hidden cause \(k\), parametrized by \(\theta\): \[p(k,\mathbf{x}|\theta)=p(k|\theta)\prod_{v=1}^{V}p(x_{v}|k,\theta) \tag{9}\] For one, such a model can be used to generate observable variables \(x_{1},...,x_{V}\) by sampling \(k\) from the prior distribution \(p(k|\theta)\). For another, by applying Bayes' rule, it can be used to approximate the posterior distribution: \[p(k|\mathbf{x},\theta)\propto p(k|\theta)p(\mathbf{x}|k,\theta) \tag{10}\] and thus infer the hidden cause \(k\) of the observation \(\mathbf{x}\).
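As a small numeric illustration (ours, with arbitrary weights and a random binary input pattern), and assuming for simplicity a uniform prior over hidden causes: the circuit's spike distribution of equation 8 is a softmax over membrane potentials, so counting spikes over time recovers the posterior of equation 10.

```python
import numpy as np

rng = np.random.default_rng(0)

K, M = 4, 12                        # K hidden causes, M binary input neurons
w = rng.normal(0.0, 1.0, (K, M))    # illustrative weights; in the circuit these are learned
y = rng.integers(0, 2, M)           # one binary input pattern

mu = w @ y                                   # membrane potentials (uniform prior assumed)
q = np.exp(mu - mu.max())                    # equation 8 as a softmax over membrane potentials
q /= q.sum()
print("posterior over hidden causes:", np.round(q, 3))

# Each excitatory spike is a sample from this posterior, so spike counts recover it.
samples = rng.choice(K, size=5000, p=q)
print("empirical spike fractions:  ", np.round(np.bincount(samples, minlength=K) / 5000, 3))
```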
In order to link equations 8 and 10, we define population codes to represent observable variables \(\mathbf{x}\) by a set of input neurons \(\mathbf{y}=\{\mathbf{y}^{\uparrow},\mathbf{y}^{\downarrow}\}\), and hidden variable \(k\) by circuit neurons \(\mathbf{z}\). Observable variables \(\mathbf{x}\) are encoded such that for each possible value of each variable \(x_{v}\), there is exactly one neuron in \(\mathbf{y}\) that encodes it. Likewise, each of the \(K\) possible values that hidden variable \(k\) can assume is represented by exactly one neuron in \(\mathbf{z}\). We define \(\mathbf{y}(t)\) to be the activation of neurons \(\mathbf{y}\) at time \(t\) (i.e. which input neurons fired at time \(t\)). Further, \(\hat{\mathbf{y}}\) represents a variable in the probabilistic model that models the distribution of \(\mathbf{y}(t)\) over all points in time. We can use the neuron population codes defining binary variable vectors \(\mathbf{y}\) and \(\mathbf{z}\) to reformulate the probabilistic model \(p(k,\mathbf{x}|\theta)\) as: \[p(\mathbf{z},\hat{\mathbf{y}}|\mathbf{w})=\frac{1}{Z}\sum_{k=1}^{K}z_{k}\exp\left(\sum_{m=1}^{M}w_{km}\hat{y}_{m}\right) \tag{11}\] Where \(z_{k}=1\) if the hidden cause is \(k\) and \(z_{k}=0\) otherwise, and where \(Z\) is the normalization constant. This generative probabilistic model can then be described in terms of a WTA circuit SNN model. Evaluating the network at each time point \(t^{f}\) at which a neuron in \(\mathbf{z}\) fires, we can compute the posterior probability of cause \(k\) by applying Bayes' rule to \(p(\mathbf{z},\hat{\mathbf{y}}|\mathbf{w})\), arriving at1: Footnote 1: [19] uses weights \(w_{k0}\), representing the excitability of a neuron \(z_{k}\), to model the prior probability of hidden cause \(k\). For brevity we instead assume at all times a uniform prior (which fits with the dataset used in our experiments). \[p(k|\mathbf{y}(\mathbf{t}^{\prime},t^{f}),\mathbf{w})=\frac{\overbrace{e^{\mu_{k}(t^{f}-1)+\sum_{m=1}^{M}w_{km}y_{m}(t^{f})-I(t^{f})}}^{\propto\ \text{likelihood}\ p(\mathbf{y}(\mathbf{t}^{\prime},t^{f})|k,\mathbf{w})}}{\sum_{k^{\prime}=1}^{K}e^{\mu_{k^{\prime}}(t^{f}-1)+\sum_{m=1}^{M}w_{k^{\prime}m}y_{m}(t^{f})-I(t^{f})}}=\frac{e^{\mu_{k}(t^{f})}}{\sum_{k^{\prime}=1}^{K}e^{\mu_{k^{\prime}}(t^{f})}}=q_{k}(t^{f}) \tag{12}\] where \(\mathbf{y}(\mathbf{t}^{\prime},t^{f})\) concerns the spiking history of neurons \(\mathbf{y}\) from \(t^{f}\) back to the time point following the most recent non-zero lateral inhibition signal. Thus at all time points \(t^{f}\) a spike from a neuron \(z_{k}\) can be seen as a sample from the posterior distribution \(p(k|\mathbf{y}(\mathbf{t}^{\prime},t^{f}),\mathbf{w})\). ### _Discrete time implementation_ In discrete time, spiking probabilities do not naturally follow equation 8 like they do in continuous time. In our model, each neuron \(z_{k}\) fires at time \(t\) with probability: \[p(z_{k}(t))=e^{\mu_{k}(t)-\mu^{max}} \tag{13}\] where \(\mu^{max}\) is the maximum membrane potential of neurons \(\mathbf{z}\), at which the probability of firing is one (see Fig. 1). Note that this means that the firing probability of each neuron \(z_{k}\) does not depend directly on the membrane potentials of the other neurons in \(\mathbf{z}\). Further, it does not include direct information about the firing rates of other neurons in \(\mathbf{z}\), nor about the desired combined firing rate of neurons \(\mathbf{z}\).
In order to still arrive approximately at the distribution described by equation 8, we pass on this information indirectly through inhibition signal \(I^{c}(t)\). A second role of inhibition signal \(I^{c}(t)\) is related to variations in the excitatory input signal. In discrete time it is possible for multiple neurons in \(\mathbf{y}\) to fire simultaneously. Furthermore, the number of neurons in \(\mathbf{y}\) that fire simultaneously can vary greatly between time steps. Given the exponential relation between a neuron's membrane potential and its spiking probability, sudden bursts in excitatory input can cause the spiking probability of multiple neurons in \(\mathbf{z}\) to go from close to zero to one, from one timestep to the next. In this scenario, no distinction is made between the excitatory signals being received by each of these neurons. In order to avoid this, inhibition signal \(I^{c}(t)\) is used to balance variation in the excitatory signals received over time. The role of inhibition signal \(I^{c}(t)\) is thus to pass on to neurons \(\mathbf{z}\) approximate information about the present firing rates of neurons \(\mathbf{z}\), as well as information about the desired combined firing rate of said neurons. In this manner, the signal can be used to exert control over the combined firing rate of neurons \(\mathbf{z}\), such that it remains stable over time and is independent of the fluctuation of activity in \(\mathbf{y}\). At each time step, the signal must be strong enough such that it prevents the spiking probabilities of neurons \(\mathbf{z}\) from exploding. Further, it must not be so strong that it completely overrides excitatory inputs, in which case information sent by neurons \(\mathbf{y}\) would be lost. This mechanism is included in our model through a second population of inhibitory neurons which at every timestep sends an inhibitory signal to all neurons \(\mathbf{z}\). The strength of this signal is defined to be: \[I^{c}(t)=\psi|\mathbf{y}(t)| \tag{14}\] Where \(|\mathbf{y}(t)|\) is the number of input neurons that fired at time \(t\). And where \(\psi\) is a scalar variable that changes every timestep as a function of the divergence of overall firing of \(\mathbf{z}\) from the desired firing rate. ### _WTA Network Definition_ Circuit neurons \(\mathbf{z}\) can be used as input for other circuits without changing the dynamics. Thus if we have a set of \(G\) WTA circuits with neurons \(\mathbf{\mathsf{Z}}=\{\mathbf{z}^{1},...,\mathbf{z}^{G}\}\), we can organize these in a hierarchical network such that each WTA circuit receives feedforward input from neurons \(\mathbf{z}\) from an arbitrary number of WTA circuits, and has its own neurons \(\mathbf{z}\) send feedforward activity to at most one WTA circuit. In addition we include feedback connections with their own separate weights to model top-down processes, thus allowing WTA circuits to furthermore receive input from other circuits that are one step higher in the hierarchy. Note that within a WTA network, a layer of neurons can have the role of excitatory neurons \(\mathbf{z}\) with respect to one WTA circuit, while playing the role of input neurons \(\mathbf{y}\) with respect to another WTA circuit.
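To summarise the discrete-time mechanics in one place, the following sketch (ours) implements a single circuit step along the lines of equations 13 and 14; the rate controller for \(\psi\) is deliberately simplified and all constants other than \(\mu^{max}\) are illustrative. The spikes returned by such a step could serve as the input \(\mathbf{y}\) of a circuit one level up, exactly as just described.

```python
import numpy as np

rng = np.random.default_rng(1)

K, M = 10, 98              # excitatory neurons z and bottom-up input neurons y (sizes illustrative)
mu_max = 19.2558           # membrane potential at which the firing probability is one (Table II)
target_rate = 0.05         # desired combined firing rate of z per time step (illustrative)
psi = 1.0                  # inhibition scale, adapted over time (simplified controller)
w_up = rng.normal(2.0, 0.5, (K, M))   # illustrative bottom-up weights
mu = np.zeros(K)

def circuit_step(y_t):
    """One discrete time step of a single WTA circuit (simplified sketch of eqs. 13-14)."""
    global mu, psi
    I_c = psi * y_t.sum()                           # equation 14: inhibition scales with input activity
    mu = mu + w_up @ y_t - I_c                      # integrate excitation minus control inhibition
    p_fire = np.exp(np.minimum(mu - mu_max, 0.0))   # equation 13, clipped at probability one
    z_t = (rng.random(K) < p_fire).astype(int)
    psi += 0.05 * (z_t.sum() - target_rate * K)     # crude stand-in for the paper's rate control
    if z_t.any():                                   # simplified lateral inhibition: reset after any spike
        mu[:] = 0.0
    return z_t

# Present one stimulus for 150 time steps; the spikes of z could feed a circuit one level up.
spike_counts = np.zeros(K, dtype=int)
for _ in range(150):
    spike_counts += circuit_step((rng.random(M) < 0.2).astype(int))
print(spike_counts)
```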
Since the probability distribution of a WTA circuit is solely dependent on input from neurons \(\mathbf{y}\), we can define a hierarchical WTA network by the joint probability distribution: \[p(\mathbf{k}|\mathbf{Y}(t^{f}),\mathbf{W})=\prod_{g=1}^{G}p(k^{g}|\mathbf{y}^{g}(t^{f}),\mathbf{w}^{g}) \tag{15}\] Where \(\mathbf{k}=\{k^{1},...,k^{G}\}\) are the underlying causes encoded respectively by neuron layers \(\mathbf{z}^{1},...,\mathbf{z}^{G}\), where \(\mathbf{Y}=\{\mathbf{y}^{1},...,\mathbf{y}^{G}\}\) is the combined inputs to all circuits, and where \(\mathbf{W}=\{\mathbf{w}^{1},...,\mathbf{w}^{G}\}\) is the combined weights of all circuits. Fig. 2 illustrates both in an abstract and concrete manner how, following this formulation, a Bayesian model can be represented by a network of WTA circuits. ### _Expectation Maximization through STDP_ We have shown how WTA networks can represent generative probabilistic models. In addition to this, WTA networks are capable of learning the parameters of such a model. As was mentioned earlier, the combination of lateral inhibition and a Spike-Timing-Dependent Plasticity (STDP) learning rule allows a neuron \(z_{k}\) to become distinctly sensitive to common input spiking patterns. As is described in [19] a WTA circuit approximates a stochastic, online version of the Expectation Maximization (EM) algorithm when it adopts a particular STDP rule. [19] names this principle SEM (spike-based EM). The rule adopted by [19], as well as by [21] and ourselves, is the biological STDP rule: \[\Delta w_{kn}(t)=\alpha(t^{\mathit{diff}})ce^{-w_{kn}}-1 \tag{16}\] Where \(c\) is a constant, where \(t^{\mathit{diff}}\) marks the time difference between the most recent post-synaptic spike of neuron \(z_{k}\) and pre-synaptic spike of neuron \(y_{n}\), and where \(\alpha(t^{\mathit{diff}})\) is an alpha-shaped kernel: \[\alpha(t^{\mathit{diff}})=\frac{1}{\tau_{f}-\tau_{s}}\left(\exp\left(-\frac{t^{\mathit{diff}}}{\tau_{f}}\right)-\exp\left(-\frac{t^{\mathit{diff}}}{\tau_{s}}\right)\right)\Theta(t^{\mathit{diff}}) \tag{17}\] An update \(\Delta w_{kn}(t)\) is finally weighted by an adaptive learning rate \(\eta_{k}(t)\) that over time diminishes the weight updates for connections coming into neuron \(z_{k}\), proportionally to how often \(z_{k}\) has spiked. In essence, this rule causes weights of connections going from pre-synaptic neurons \(\mathbf{y}\) to a neuron \(z_{k}\) to be updated every time \(z_{k}\) fires. If a pre-synaptic neuron fired within some brief timespan (regulated by constants \(\tau_{f}\) and \(\tau_{s}\)) before \(z_{k}\) fired, then the connection weight between the two is increased; if not, it is decreased. The strength and direction of the change are dependent on the relative timing of the pre- and post-synaptic spike, on the strength of the connection weight before the update, and on constants \(\tau_{f}\) and \(\tau_{s}\). The Heaviside step function \(\Theta(t^{\mathit{diff}})\) ensures that the function returns zero for negative values of \(t^{\mathit{diff}}\). Fig. 3 visualizes STDP weight updates for \(\tau_{f}=2\) and \(\tau_{s}=8\). This dynamic has the effect that when a neuron \(z_{k}\) fires, it becomes more sensitive to pre-synaptic neurons that fired in the right time window before \(z_{k}\) did, whereas it becomes less sensitive to pre-synaptic neurons that fired outside this time window.
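For illustration, the following sketch (ours) applies the update of equations 16 and 17 once, after a post-synaptic spike of a single neuron \(z_{k}\); \(\tau_{f}\), \(\tau_{s}\) and \(c\) follow Table II, while the weights and spike times are made-up values chosen so that both strengthening and weakening are visible.

```python
import numpy as np

tau_f, tau_s = 2.0, 8.0        # fast and slow STDP time constants (Table II)
c = 1e-8                       # constant in the weight update (Table II)

def alpha_kernel(t_diff):
    """Alpha-shaped kernel of equation 17; the Heaviside factor zeroes negative t_diff."""
    t_diff = np.asarray(t_diff, dtype=float)
    kernel = (np.exp(-t_diff / tau_f) - np.exp(-t_diff / tau_s)) / (tau_f - tau_s)
    return np.where(t_diff >= 0, kernel, 0.0)

def stdp_update(w_k, t_post, t_pre, n_spikes_k):
    """Equation 16: update all weights into neuron z_k after it spikes at time t_post.

    w_k: incoming weights of z_k; t_pre: most recent spike time of each pre-synaptic
    neuron; n_spikes_k: number of spikes of z_k so far, used for the adaptive learning
    rate eta_k = 1 / n_spikes_k**0.8 from Table II.
    """
    delta = alpha_kernel(t_post - t_pre) * c * np.exp(-w_k) - 1.0
    eta = 1.0 / max(n_spikes_k, 1) ** 0.8
    return w_k + eta * delta

# Illustrative weight values (chosen only so that the example shows both effects).
w_k = np.full(8, -22.0)
t_pre = np.array([1.0, 3.0, 4.0, 4.0, 5.0, -60.0, -80.0, -100.0])  # last pre-synaptic spikes (ms)
print(stdp_update(w_k, t_post=6.0, t_pre=t_pre, n_spikes_k=10))
```

Pre-synaptic neurons that spiked shortly before the post-synaptic spike receive a positive update, while the connections from long-silent neurons are weakened, which is the effect described next.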
Thus, in the future this neuron will be more likely to respond to a particular pattern of pre-synaptic spikes, and if it does again respond to the same pattern, this is reinforced once more. At the same time, lateral inhibition ensures that when a neuron \(z_{k}\) fires in response to some pattern of pre-synaptic spikes, others are inhibited and do not fire. As such they are forced to respond to a different pattern of pre-synaptic spikes, thus ensuring that each neuron becomes sensitive to its own distinct pattern (Fig. 5 visualizes this process). As shown by [19] this can be interpreted within the framework of EM, where one can identify the expectation step and the maximization step in the circuit dynamics. The expectation step is represented by spikes from neurons \(\mathbf{z}\), where each spike can be considered to be a sample from the currently encoded posterior distribution over hidden causes. The maximization step is represented by the STDP weight update following a spike. This change of connection weights optimizes the posterior distribution they encode according to evidence provided by the spike. In this manner a WTA circuit is interpreted as a stochastic online EM algorithm. ## III Experiments We performed two experiments. The first experiment concerns a comparison between the two-layer hierarchical network design of [21], and our own three-layer integration design. In the second experiment we include top-down processes, and observe how these impact network performance. ### _Experimental Setup_ Like [19] and [21] we assess the performance of network designs with respect to the MNIST dataset. In essence this becomes an unsupervised learning task, where the network learns solely through exposure to MNIST images, and the already described STDP dynamics, to distinguish between different input patterns, in this case black-and-white2 images of handwritten digits 0-9. Footnote 2: The MNIST dataset in fact contains grayscale images, which we convert to black-and-white by converting all values that are not completely white to the value black. The MNIST images are 28x28 pixel black-and-white images, which we encode in the same manner as [21]. Each pixel is encoded by two neurons: one neuron represents the pixel value black, the other white. Thus the input layer of (sensory) neurons consists of 28x28x2 neurons. An active neuron fires according to a Poisson process; an inactive neuron remains silent. Thus each MNIST image is encoded over time by 784 Poisson-spike trains which we consider to be sensory input. In our experiments we present each MNIST digit for 150 ms, and set each active sensory neuron to adopt a firing rate of 200 Hz.

Fig. 2: Abstract and concrete examples of Bayesian models and corresponding WTA network. (a) Simple abstract Bayesian model. (b) WTA network that encodes abstract hidden variables \(A\), \(B\), and \(C\). (c) Bayesian model of underlying causes of MNIST images. (d) WTA network that encodes underlying causes of MNIST images. Note that while the example models and networks are simple, the principles highlighted here hold for any tree-structured model, regardless of the number of layers and the number of variables per layer. Top-down processes are included by adding reversed connections with their own separate weights between each pair of connected WTA circuits.

In the design of [21] -- which we will refer to as the hierarchical design -- the first layer consists of 16 WTA circuits, each with \(K_{h}\) excitatory neurons.
Each of these circuits receives input from a separate 7x7x2 cube of sensory neurons. The activation of each of these WTA circuits is then fed forward to a single WTA circuit with \(K_{o}\) excitatory neurons that makes up the final layer. Our own design is an extension of the hierarchical design. In our design -- which we will refer to as the integration design -- the final layers of two identical hierarchical networks (\(\mathcal{H}_{a}\) and \(\mathcal{H}_{b}\)) are connected to one another by an additional WTA circuit with \(K_{f}\) excitatory neurons, which comprises the final layer of the design. This integration network (\(\mathcal{I}\)) is thus capable of separately processing two sets of stimuli (one by \(\mathcal{H}_{a}\) and the other by \(\mathcal{H}_{b}\)) before integrating the information into a single representation. The design is further illustrated in Fig. 4. When we assess the designs on MNIST, this includes first of all a learning phase, where the network is exposed once to each of the \(60\,000\) images that make up the MNIST training set. During this phase, the network weights evolve in an unsupervised manner according to STDP dynamics. After the learning phase, the network weights are frozen and performance is assessed against the full MNIST test set consisting of \(10\,000\) images. Fig. 5 visualizes how network weights evolve over time. Performance of the network is assessed according to three measures: accuracy, confidence, and confidence error. Given that the network is trained in a completely unsupervised manner, it does not actually have an explicit notion of the underlying hidden variables. Thus, while the network may learn to distinguish between digits, it does not actually know that it is digits that it is distinguishing between. As such, in order to determine classification accuracy we must determine which network output corresponds to which classification, which we do in the following fashion. After training the network, we determine for each neuron in the final layer to which digit it responds the most (over the entire test set); this digit is then considered to be this neuron's classification. Then when a stimulus (MNIST image) is presented to the network, we observe which neurons respond, and can derive from this a distribution over digits (e.g. 8 spikes for digit zero, 2 spikes for digit one,...). We consider the digit for which there were the most spikes to be the network's classification of the corresponding stimulus (where ties are broken at random). Further, we refer to the neurons generating the majority of spikes (in response to a specific stimulus) as the dominant neurons. Overall classification accuracy is the percentage of MNIST images that were classified correctly. Secondly we include a measure of how confident the network is with respect to its classifications. As was just described, exposure to a stimulus may generate spikes that represent evidence for different digits. Any spike from a non-dominant neuron can be considered a sign of uncertainty, and thus the more spikes are distributed among non-dominant neurons, the less confident the network is with regard to its classification. A network that performs well will both have a high confidence and be correct in that confidence. The former we define simply as the proportion of spikes generated by dominant neurons. The latter we can approximately measure by comparing the correctness of network classifications with the confidence the network has regarding said classifications.
More concretely, over all classifications of a specific digit we would expect the proportion of spikes from non-dominant neurons to correspond to the proportion of wrong classifications. For example, consider all the times the network classifies an MNIST image from the test set as the digit zero. If 20% of these images should actually have been classified as the digit nine, then we would expect that -- assuming the network is correct in its confidence -- classifications of the digit zero consist on average of 20% spikes from neurons corresponding to the digit nine. We define the deviation from this expected value to be the confidence error. Table II summarizes the relevant constants, as well as the values to which we set these in our experiments. Further, the code used to perform the experiments is available on [https://github.com/Grottoh/WTA-Network](https://github.com/Grottoh/WTA-Network), and the corresponding data will soon be available on [https://doi.org/10.34973/7fjz-va85](https://doi.org/10.34973/7fjz-va85).

Fig. 3: STDP learning curve. The graph shows how the strength of weight change \(\Delta w_{km}\) varies as a function of the difference \(t^{diff}\) between the most recent post- and pre-synaptic spike times.

### _Experiment 1: hierarchical vs integration_ #### Results The first experiment compares the performance of the hierarchical design of [21] against our integration design. We ran 10 experiments with the integration design. The integration design includes two hierarchical designs that function the same in the integration network as they would independently (in case of an absence of top-down processes). As a result, one assessment of the integration design (of \(\mathcal{I}\)) automatically includes two assessments of the hierarchical design (of \(\mathcal{H}_{a}\) and of \(\mathcal{H}_{b}\)). We thus end up with two assessments of the hierarchical design, and one of the integration design, each averaged over 10 runs. The hierarchical networks achieved an average accuracy of 85.36% and 85.51%, with standard deviations of 1.05% and 0.52% between runs. The integration network achieved an average accuracy of 92.84% with a standard deviation of 0.39% between runs. Further, the hierarchical networks achieved an average confidence of 88.98% and 89.06%, with standard deviations of 0.53% and 0.47% between runs. The integration network achieved an average confidence of 92.02% with a standard deviation of 0.44% between runs. Finally, the hierarchical networks achieved an average confidence error of 13.58% and 12.66%, with standard deviations of 2.15% and 1.24% between runs. The integration network achieved an average confidence error of 8.01% with a standard deviation of 1.28% between runs. These results are further displayed in Fig. 6. #### Interpretation Comparing the accuracy achieved by our replication of the hierarchical network of [21] (85.36% and 85.51%) to the accuracy actually reported by [21] themselves (84.89%), we can see that these lie close to one another. This is as one would expect of a replication, and we expect that the slight difference that is present is likely a result of minor differences in implementation and hyperparameters. Further, the results are in line with our hypothesis that WTA circuits can chain together separate WTA networks to improve inference and learning capacities.
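The following minimal sketch (ours, with made-up spike counts and a simplified per-digit aggregation; ties are ignored for brevity) shows how the three reported measures can be computed from final-layer spike counts under the decoding scheme described in the experimental setup.

```python
import numpy as np

# counts[i, d] = spikes from final-layer neurons assigned to digit d while test image i is shown.
# The spike counts and labels below are made up for illustration.
counts = np.array([[8, 0, 2, 0, 0, 0, 0, 0, 0, 0],   # confidently (and correctly) a zero
                   [1, 6, 0, 0, 0, 0, 0, 0, 0, 3],   # a one, with some evidence for nine
                   [0, 0, 2, 0, 0, 0, 0, 0, 0, 8]])  # classified as nine, but truly a four
labels = np.array([0, 1, 4])

predictions = counts.argmax(axis=1)            # the digit with the most spikes wins
accuracy = (predictions == labels).mean()      # here 2/3

dominant = counts.max(axis=1)
confidence = dominant.sum() / counts.sum()     # share of spikes coming from dominant neurons

# Simplified confidence error for one predicted digit (nine): the gap between the
# fraction of wrong classifications and the fraction of non-dominant spikes.
mask = predictions == 9
wrong_fraction = (labels[mask] != 9).mean()
nondominant_fraction = 1 - dominant[mask].sum() / counts[mask].sum()
confidence_error_nine = abs(wrong_fraction - nondominant_fraction)

print(accuracy, confidence, confidence_error_nine)
```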
The integration network performs better than the hierarchical network on all our measures, displaying a higher accuracy, a greater confidence, and a lower confidence error. These results are expected, given that the integration network has access to more information. The fact that WTA circuits are capable of processing probabilistic information represented by the spiking behaviour of other WTA circuits has several implications. Given that the manner in which information is encoded by a WTA circuit is independent of its source, these results suggest that WTA networks can be used to integrate information across multiple modalities (e.g. visual, haptic, auditory,...). Further, hardware that can properly take advantage of the parallel and asynchronous nature of WTA networks (such as the human brain, or neuromorphic chips) can use these designs to process many stimuli at once and integrate information from these at low computational costs. To illustrate this, consider the costs associated with expanding the network. Lateral inhibition causes each WTA circuit to generate only a single spike at a time, and other inhibition mechanisms ensure an appropriate firing rate of said neurons. This means that increasing the number of neurons per circuit will not increase the number of spikes generated by the circuit, and thus precludes the increased energy costs associated with an increase of spikes. Additionally, adding circuits to the same layer does not increase the processing time of this layer, given that all these circuits work in parallel with no direct dependence on one another. Indeed only the addition of layers increases the time it takes to propagate activation from sensory neurons to the final layer of the network, and that only by a single timestep. The main cost of expanding a WTA network thus lies in the number of neurons and connections required. At least for the human brain this cost seems surmountable, given the many billions of neurons and the many trillions of connections that it consists of. As such, WTA networks appear to be a versatile tool for processing information in a Bayesian manner, and efficient with respect to speed and energy costs.

\begin{table} \begin{tabular}{p{42.7pt} p{113.8pt} p{113.8pt}} \hline \hline Constant & Value & Description \\ \hline \(K_{h}\) & 38 & number of excitatory neurons per WTA circuit in layer one of the hierarchical and integration network \\ \(K_{o}\) & 99 & number of excitatory neurons per WTA circuit in layer two of the hierarchical and integration network \\ \(K_{f}\) & 98 & number of excitatory neurons in layer three of the integration network \\ — & 150 & The number of discrete timesteps for which a stimulus (MNIST image) is presented to the network; one timestep is considered to be one millisecond \\ — & 200 & The frequency (Hz) of the Poisson spike train generated by active sensory neurons \\ \(\tau_{f}\) & 2 & STDP fast time constant \\ \(\tau_{s}\) & 8 & STDP slow time constant \\ \(\mu^{max}\) & \(19.2558\) & maximum membrane potential of each neuron in \(\mathbf{z}\) \\ \(c\) & \(1e{-8}\) & constant that plays a role in weight updates \\ — & \(-\ln c\) & maximum strength of connection weights of neurons \(\mathbf{z}\) \\ \(\eta_{k}\) & \(\dfrac{1}{N(z_{k})^{0.8}}\) & adaptive learning rate weighting the weight update of neuron \(z_{k}\); \(N(z_{k})\) represents the number of spikes generated by neuron \(z_{k}\) over all stimuli up to time \(t\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Experiment constants

Fig. 4: Visualization and description of our proposed integration network design (\(\mathcal{I}\)) that processes two separate input sources through the integration of information fed forward by two hierarchical networks (\(\mathcal{H}_{a}\) and \(\mathcal{H}_{b}\)).

Fig. 5: Visualization of network weights after it has been exposed to respectively \(0\), \(1\,000\), and \(10\,000\) stimuli (MNIST images). Within each sub-figure, the top left image(s) display an MNIST image that is being presented to the network; the remaining images are visualizations of the weights of all bottom-up connections that lead to a single neuron \(z_{k}\). The red squares outline which neurons respond to the stimulus at hand. (a), (b), and (c) visualize bottom-up connection weights going from a 7x7x2 cube of sensory neurons (that encode a 7x7 square of the image) to a single WTA circuit from the first layer of the WTA network. Of the two images in the top left of each sub-figure, the first image represents the digit being presented to the first layer as a whole, while the second image represents the part of the image being presented to the WTA circuit of which the connection weights are visualized. The figures show how WTA circuits in the first layer become sensitive to simple features such as particularly oriented lines and curves.
(d), (e), and (f) visualize bottom-up connection weights going from all sixteen WTA circuits in the first layer to a single WTA circuit of the second layer. The figures show how WTA circuits in the second layer form a relatively clear representation of handwritten digits. (g), (h), and (i) visualize bottom-up connection weights going from both WTA circuits in the second layer to a single WTA circuit of the third and final layer. The figures show how WTA circuits in the final layer form a slightly more generic representation of handwritten digits.

### _Experiment 2: top-down processes_ #### Results In the second experiment we investigate whether top-down processes can facilitate improved classification accuracy and confidence, and reduce confidence error. To this end we add sets of top-down connections between each layer, to mirror the bottom-up connections that are already in place. For the most part, the top-down connections function and evolve in the same manner as the bottom-up connections. The only difference present in our experiments is their strength. Given that in our design the number of firing neurons tends to decrease as we proceed to new layers in the network, bottom-up activation carries a greater combined weight than top-down activation. As such we include runs where we increase the strength of top-down processes by a scalar factor. Further, drawing upon literature that observes that top-down processes tend to increase in strength over time [28], we include runs where this is the case as well. In order to properly illustrate the impact of top-down processes we report results of runs with several different parameters, which we label as follows. First of all, runs with label \(p\_no\text{-}td\) concern runs where top-down processes are disabled (runs from experiment 1). Secondly, runs with labels \(p\_td\times 1\), \(p\_td\times 2\), and \(p\_td\times 3\) concern runs where top-down activation is strengthened by a constant factor of 1, 2, and 3 respectively.
Finally, runs with label \(p\_td\times\phi\) concern runs where top-down processes strengthen over time. During these runs, top-down signals from a neuron \(z_{k}\) are strengthened by a factor \(\phi_{k}(t)=\max(1.5+0.3S(z_{k},t)^{1.3},3)\) (a product of fine-tuning), where \(S(z_{k},t)\) is the number of spikes generated by neuron \(z_{k}\) for the stimulus at hand at time \(t\). With respect to the impact of top-down processes we are interested in the following comparisons. First of all we want to know whether top-down processes are indeed capable of improving the performance of the integration network. And secondly, we are interested in the comparative performance of hierarchical networks trained as part of an integration network versus that of a hierarchical network trained in isolation.

Fig. 6: Results for experiment 1. The figures display the overall (a) accuracy, (b) confidence, and (c) confidence error, averaged over 10 runs for two hierarchical networks (\(\mathcal{H}_{a}\) and \(\mathcal{H}_{b}\)) and one integration network (\(\mathcal{I}\)).

The results for the first comparison are displayed in Fig. 7. These results concern integration network accuracy, confidence, and confidence error under the aforementioned parameters, averaged over ten runs. The results show that top-down processes can indeed improve performance. In particular, runs \(p\_td\times 2\) and \(p\_td\times\phi\) (which perform roughly the same) show superior performance on all measures.

Scaling such networks up may not be feasible on neuromorphic hardware in the near future, but at least in the human brain we have an example of 'neuromorphic hardware' with plenty of neurons and connections, showing that this is not an insurmountable obstacle. Future research in this area has several challenges to tackle. For one it will be necessary to move to more complex stimuli than MNIST digits. Further, considering more closely how to encode the stimuli will be vital. The encoding used in the present research violates several key neuromorphic principles in that it is neither sparse, nor event-based. Future research might instead encode visual data in a manner similar to for example a DVS. Additionally, in the future it will be interesting to extend this method to work with temporal data such as speech, and to see it implemented on neuromorphic hardware. Future research should be careful when comparing this method to traditional machine learning approaches. It will be tempting to compare it directly with e.g. a Convolutional Neural Network (CNN) when performing visual classification tasks, yet such a comparison may not fit. The danger is that with such comparisons the focus becomes fine-tuning and tweaking the approach to improve its performance on a small set of common baseline tasks (e.g. first MNIST, and then more complex image classification tasks). While this could indeed improve the method, we argue that the true potential of this approach lies not with optimized performance on very narrowly defined tasks such as traditional image classification, but with more complex and general tasks such as are more common in a real-world setting. For example, when a human attempts to 'classify' a digit they often have an abundance of additional information. They may for instance know that the number is even, because it is a house number on a particular side of the street. If they are unsure about their 'classification', they may additionally act to change the environment (e.g. by wiping off some dirt) or to change their perspective of it (by looking at it from a closer distance).
This integration of information, and these actions taken, require processing in different areas of the overall network, and communication between said areas. In order to do this in a quick and energy-efficient manner the event-based, parallel, and asynchronous nature of the algorithm proposed here will be useful, or even essential. Therefore we advise that future research keeps this in mind when designing tasks, and making performance comparisons.

## References

* [1] M. Tipping, "Bayesian inference: An introduction to principles and practice in machine learning," 01 2004, pp. 41-62.
* [2] S. Theodoridis, _Machine Learning: A Bayesian and Optimization Perspective_, 1st ed. USA: Academic Press, Inc., 2015.
* [3] D. Heckerman, C. Meek, and G. Cooper, _A Bayesian Approach to Causal Discovery_, 05 2006, vol. 19, pp. 1-28.
* [4] G. Zweig and S. J. Russell, "Speech recognition with dynamic bayesian networks," in _AAAI/IAAI_, 1998.
* [5] J. M. Gomez Hidalgo, G. C. Bringas, E. P. Sanz, and F. C. Garcia, "Content based SMS spam filtering," in _Proceedings of the 2006 ACM Symposium on Document Engineering_, ser. DocEng '06. New York, NY, USA: Association for Computing Machinery, 2006, pp. 107-114. [Online]. Available: [https://doi.org/10.1145/116610.116619](https://doi.org/10.1145/116610.116619)

Fig. 7: Results for experiment 2 regarding the influence of top-down processes on integration network performance. The figures display the overall (a) accuracy, (b) confidence, and (c) confidence error, averaged over 10 runs for five integration networks.
2305.04261
Resolution of a conjecture about linking ring structures
An LR-structure is a tetravalent vertex-transitive graph together with a special type of a decomposition of its edge-set into cycles. LR-structures were introduced in a paper by P. Poto\v{c}nik and S. Wilson, titled `Linking rings structures and tetravalent semisymmetric graphs', in Ars Math. Contemp. 7 (2014), as a tool to study tetravalent semisymmetric graphs of girth 4. In this paper, we use the methods of group amalgams to resolve some problems left open in the above-mentioned paper.
Marston Conder, Luke Morgan, Primož Potočnik
2023-05-07T12:44:31Z
http://arxiv.org/abs/2305.04261v2
# Resolution of a conjecture about linking ring structures ###### Abstract. An LR-structure is a tetravalent vertex-transitive graph together with a special type of a decomposition of its edge-set into cycles. LR-structures were introduced in a paper by P. Potocnik and S. Wilson, titled "Linking rings structures and tetravalent semisymmetric graphs', in _Ars Math. Contemp._**7** (2014), as a tool to study tetravalent semisymmetric graphs of girth \(4\). In this paper, we use the methods of group amalgams to resolve some problems left open in the above-mentioned paper. Key words and phrases:tetravalent, vertex-transitive, graph, linking rings, amalgam 2000 Mathematics Subject Classification: 20B25 We are grateful to the referee for their suggestions and corrections. The first author is grateful to New Zealand's Marsden Fund for its support via project UOA2030. The second author acknowledges the Australian Research Council grant DE160100081 and the Slovenian Research Agency research programme P1-0285 and research projects J1-1691, N1-0160, J1-2451, N1-0208. The third author gratefully acknowledges the support of Slovenian Research Agency, programme P1-0294 and research project N1-012 A much more complex situation arises combinatorially if \(G_{v}^{\Gamma(v)}\) has two orbits of length \(2\). In this case \(G_{v}^{\Gamma(v)}\) is permutation isomorphic to either the cyclic group \(\mathrm{C}_{2}=\langle(1,2)(3,4)\rangle\) or the Klein group \(\mathrm{V}_{4}=\langle(1,2),(3,4)\rangle\), in their respective intransitive actions on \(4\) points. In these two cases the group \(G\) has precisely two orbits on the arc-set \(\mathrm{A}(\Gamma)\), but can have either one or two orbits on the edge-set \(\mathrm{E}(\Gamma)\). If \(G\) has a single orbit on \(\mathrm{E}(\Gamma)\), then the graph \(\Gamma\) belongs to the widely-studied class of graphs admitting a half-arc-transitive group action, which has received much attention in the recent past; see [1, 5, 12, 20, 21, 22, 23, 26] for example. On the other hand, the case where \(G\) has two orbits on \(\mathrm{E}(\Gamma)\) has received much less attention so far. Here the analysis can again be split into two subcases, depending on whether \(G_{v}^{\Gamma(v)}\) is isomorphic to \(\mathrm{C}_{2}\) or to \(\mathrm{V}_{4}\). In the first subcase one can easily see that the vertex-stabiliser \(G_{v}\) is itself isomorphic to \(\mathrm{C}_{2}\), which forces the group \(G\) to be relatively small (indeed \(|G|=2|\mathrm{V}(\Gamma)|\), to be precise), which allows the use of a number of standard group-theoretical approaches. In this paper, we are interested in the remaining subcase, where \(G_{v}^{\Gamma(v)}\cong\mathrm{V}_{4}\), and \(G\) is intransitive on \(\mathrm{E}(\Gamma)\). Accordingly, we are interested in the situation covered by the following definition: **Definition 1.1**.: _Let \(G\) be a vertex-transitive group of automorphisms of a connected tetravalent graph \(\Gamma\) such that \(G_{v}^{\Gamma(v)}\) is permutation isomorphic to the Klein group \(\mathrm{V}_{4}\) in its faithful intransitive action on four points. If \(G\) has two orbits on \(\mathrm{E}(\Gamma)\), then we call the group \(G\) an LR-group of automorphisms of \(\Gamma\)._ _For a subgroup \(X\) of \(\mathrm{Aut}(\Gamma)\) that contains \(G\), we say that \(G\) is a maximal LR-subgroup of \(X\) if there is no LR-subgroup \(Y\) of \(X\) with \(G<Y<X\). 
If \(X=\mathrm{Aut}(\Gamma)\) we say that \(G\) is a maximal LR-group of automorphisms of \(\Gamma\)._ LR-groups of automorphisms were first introduced in [16] via the notion of an _LR-structure_. Although the definition given in [16] was for finite graphs only, our definition below works for finite _and_ infinite graphs, where in an infinite graph a 'cycle' is understood to be a set of edges inducing a connected \(2\)-regular subgraph. **Definition 1.2**.: _Let \(\Gamma\) be a connected tetravalent graph, let \(\mathcal{C}\) be a partition of \(E(\Gamma)\) into cycles, and let \(\{\mathcal{L},\mathcal{R}\}\) be a partition of \(\mathcal{C}\) such that every vertex of \(\Gamma\) is incident to one cycle in \(\mathcal{L}\) and one cycle in \(\mathcal{R}\). Define_ \[\mathrm{Aut}(\Gamma,\mathcal{C})=\{g\in\mathrm{Aut}(\Gamma)\mid\mathcal{C}^{ g}=\mathcal{C}\}\quad\text{and}\quad\mathrm{Aut}^{+}(\Gamma,\mathcal{C})=\{g\in \mathrm{Aut}(\Gamma,\mathcal{C})\mid\mathcal{L}^{g}=\mathcal{L}\text{ and }\mathcal{R}^{g}=\mathcal{R}\}.\] _Then the pair \((\Gamma,\mathcal{C})\) is called an LR-structure, and \(\mathcal{C}\) is called an LR-decomposition of \(\Gamma\), provided that_ 1. _the group_ \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\) _acts transitively on_ \(\mathrm{V}(\Gamma)\)_, and_ 2. _for every_ \(v\in\mathrm{V}(\Gamma)\) _and for every cycle_ \(C\in\mathcal{C}\) _passing through_ \(v\)_, some_ \(g\in\mathrm{Aut}^{+}(\Gamma,\mathcal{C})_{v}\) _acts as a reflection on_ \(C\) _and fixes every vertex of the other cycle in_ \(\mathcal{C}\) _passing through_ \(v\)_._ _An LR-structure \((\Gamma,\mathcal{C})\) is called self-dual provided that \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\) is a proper subgroup of \(\mathrm{Aut}(\Gamma,\mathcal{C})\) and is non-self-dual if \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})=\mathrm{Aut}(\Gamma,\mathcal{C})\)._ **Remark 1.3**.: If there exists a partition \(\{\mathcal{L},\mathcal{R}\}\) of \(\mathcal{C}\) satisfying the conditions of the above definition, then by connectedness of \(\Gamma\) it is unique, and so \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\) is well-defined. Furthermore, observe that the connectedness of \(\Gamma\) implies also that every element of \(\mathrm{Aut}(\Gamma,\mathcal{C})\) either preserves each of the sets \(\mathcal{L}\) and \(\mathcal{R}\) (and hence belongs to \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\)), or takes cycles in \(\mathcal{L}\) to cycles in \(\mathcal{R}\) and vice-versa. In particular, the index of \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\) in \(\mathrm{Aut}(\Gamma,\mathcal{C})\) is at most \(2\), and so an LR-structure \((\Gamma,\mathcal{C})\) is self-dual if and only if there exists some \(g\in\mathrm{Aut}(\Gamma,\mathcal{C})\) such that \(\mathcal{L}^{g}=\mathcal{R}\) and \(\mathcal{R}^{g}=\mathcal{L}\). **Remark 1.4**.: Note that condition (b) of Definition 1.2 implies that the permutation group \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})_{v}^{\Gamma(v)}\) contains the involutions \(x:=(w_{1}\,w_{2})\) and \(y:=(u_{1}\,u_{2})\) swapping the neighbours \(w_{1}\) and \(w_{2}\) of \(v\) along the unique cycle in \(\mathcal{L}\) passing through \(v\), and swapping the neighbours \(u_{1}\) and \(u_{2}\) of \(v\) along the unique cycle in \(\mathcal{R}\) passing through \(v\). In particular, \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})_{v}^{\Gamma(v)}\) contains the Klein \(4\)-group \(\langle x,y\rangle\). 
But furthermore, since by definition there is no element in \(\operatorname{Aut}^{+}(\Gamma,\mathcal{C})\) mapping a cycle from \(\mathcal{L}\) to a cycle to \(\mathcal{R}\), we see that \(\{w_{1},w_{2}\}\) and \(\{u_{1},u_{2}\}\) are orbits of \(\operatorname{Aut}^{+}(\Gamma,\mathcal{C})_{v}^{\Gamma(v)}\), and therefore \(\operatorname{Aut}^{+}(\Gamma,\mathcal{C})_{v}^{\Gamma(v)}=\langle x,y\rangle\). **Remark 1.5**.: If \((\Gamma,\mathcal{C})\) is an LR-structure, and \(G:=\operatorname{Aut}^{+}(\Gamma,\mathcal{C})\), and \(C\) is a cycle in \(\mathcal{C}\), then every element \(g\in G\) taking an edge of \(C\) to an edge of \(C\) must preserve the cycle \(C\) setwise. Moreover, by vertex-transitivity of \(G\) and the existence of automorphisms guaranteed by condition (b) of Definition 1.2, the setwise stabiliser of \(C\) in \(G\) acts as the full automorphism group of the cycle \(C\), and in particular, induces an arc-transitive group on \(C\). Then since all cycles in \(\mathcal{L}\) lie in the same orbit of \(G\), as do all the cycles in \(\mathcal{R}\), we see that \(G\) has precisely two orbits on \(A(\Gamma)\): one consists of the arcs of the cycles in \(\mathcal{L}\), the other consists of the arcs of the cycles in \(\mathcal{R}\). In Section 2 we will show that the notions of LR-group and LR-structure are equivalent in some sense. More precisely, as stated in Lemma 2.3, every LR-group of automorphisms \(G\) determines a unique LR-structure \((\Gamma,\mathcal{C})\) for which \(G\leq\operatorname{Aut}^{+}(\Gamma,\mathcal{C})\), and conversely, if \((\Gamma,\mathcal{C})\) is an LR-structure then \(\operatorname{Aut}^{+}(\Gamma,\mathcal{C})\) is a maximal LR-group of automorphisms of \(\Gamma\). There is no obvious reason why one would expect an LR-structure on a given graph to be unique (up to the action of the automorphism group of a graph). The lack of examples of graphs admitting several non-equivalent LR-structures, however, encouraged the authors of [16] to pose the following: **Question 1.6**.: _[_16_, Question 1]_ _If \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) are two distinct LR-decompositions of a finite tetravalent graph \(\Gamma\), is it true that there exists \(g\in\operatorname{Aut}(\Gamma)\) such that \(\mathcal{C}^{g}=\mathcal{C}^{\prime}\)?_ For an LR-structure \((\Gamma,\mathcal{C})\) to be self-dual, it is necessary that the cycles in \(\mathcal{L}\) and in \(\mathcal{R}\) have the same length. Remarkably, it was shown in [16, Theorem 8.2] that this necessary condition holds provided that \(\operatorname{Aut}(\Gamma)\neq\operatorname{Aut}(\Gamma,\mathcal{C})\). Under the same hypothesis, the authors of [16] conjectured that the necessary condition is _also_ sufficient: **Conjecture 1.7**.: _[_16_, Conjecture 8.1]_ _If \((\Gamma,\mathcal{C})\) is a finite LR-structure for which \(\operatorname{Aut}^{+}(\Gamma,\mathcal{C})\) is a proper subgroup of \(\operatorname{Aut}(\Gamma)\), then \((\Gamma,\mathcal{C})\) is self-dual._ The aim of this paper is to resolve the status of both the question and the conjecture above. **Theorem 1.8**.: _The answer to Question 1.6 is affirmative, and Conjecture 1.7 is correct._ Our approach to proving Theorem 1.8 is based on the following observation. 
If \((\Gamma,\mathcal{C})\) is a non-self-dual LR-structure for which \(\operatorname{Aut}^{+}(\Gamma,\mathcal{C})\) is a proper subgroup of \(\operatorname{Aut}(\Gamma)\), then there exists a second cycle decomposition \(\mathcal{C}^{\prime}\) for which \((\Gamma,\mathcal{C}^{\prime})\) is an LR-structure different from \((\Gamma,\mathcal{C})\); indeed \(\mathcal{C}^{g}\) is such a cycle decomposition whenever \(g\in\operatorname{Aut}(\Gamma)\setminus\operatorname{Aut}(\Gamma,\mathcal{C})\). Hence both Question 1.6 and Conjecture 1.7 concern the situation where \(\Gamma\) admits two distinct LR-structures. On the other hand, we prove in Lemma 2.2 that this situation forces the graph \(\Gamma\) to be \(2\)-arc-transitive. (Recall that a \(2\)-arc in a graph is a walk \((u,v,w)\) of length \(2\) such that \(u\neq w\), and then a vertex-transitive graph \(\Gamma\) is \(2\)-arc-transitive if \(\operatorname{Aut}(\Gamma)\) acts transitively on the set of \(2\)-arcs of \(\Gamma\), which is equivalent to requiring that \(\operatorname{Aut}(\Gamma)_{v}\) acts doubly transitively on \(\Gamma(v)\).) This allows us to employ structural theory of \(2\)-arc-transitive groups of automorphisms of tetravalent graphs. In particular, we will deduce Theorem 1.8 in Section 5 from the following theorem, which we will prove in Section 4. For this, we note that a group of automorphisms \(G\) of a graph \(\Gamma\) is called _discrete_ if the stabiliser \(G_{v}\) is finite for every vertex \(v\in\operatorname{V}(\Gamma)\); see [3], for example. **Theorem 1.9**.: _Let \(A\) be a discrete \(2\)-arc-transitive group of automorphisms of a connected tetravalent graph \(\Gamma\) such that \(A_{v}^{\Gamma(v)}\cong\operatorname{Sym}(4)\). If \(A\) contains an LR-subgroup, then \(A\) cannot act transitively on the \(5\)-arcs of \(\Gamma\). Also, if \(G\) is a maximal LR-subgroup of \(A\), then there exists an arc-transitive subgroup \(X\) of \(A\) containing \(G\) as a subgroup of index \(2\), and every maximal LR-subgroup of \(A\) is conjugate to \(G\)._ In view of the correspondence between LR-structures and maximal LR-groups of automorphisms, the above theorem has the following corollary. **Corollary 1.10**.: _If \((\Gamma,\mathcal{C})\) is an LR-structure such that \(\operatorname{Aut}^{+}(\Gamma,\mathcal{C})\) is contained in a discrete arc-transitive group of automorphisms of \(\Gamma\), then \((\Gamma,\mathcal{C})\) is self-dual._ It is not known to us whether Theorem 1.9 remains valid if the condition on discreteness of the \(2\)-arc-transitive group \(A\) is dropped. In any case, as our approach depends heavily on the classification of the discrete \(2\)-arc-transitive groups of automorphisms of tetravalent graphs, an entirely different method may need to be used to analyse the more general situation. ## 2. Additional observations The following lemma is a variation of the well-known fact that an arc-transitive group \(G\) of automorphisms of a connected graph \(\Gamma\) is generated by the stabiliser \(G_{v}\) of a vertex \(v\) and an element \(g\) that reverses an arc incident with \(v\). Its proof can be derived from the proof of the more general phenomenon of generation of a group of graph automorphisms, as in [15, Theorem 34]; but for the sake of completeness, we give an independent proof for our specific context. 
**Lemma 2.1**.: _Let \(G\) be an LR-group of automorphisms of a connected tetravalent graph \(\Gamma\), let \(v\) be a vertex of \(\Gamma\), and let \(u\) and \(w\) be two neighbours of \(v\) belonging to distinct orbits of \(G_{v}\). If \(a\) and \(b\) are elements of \(G_{\{v,u\}}\backslash G_{vu}\) and \(G_{\{v,w\}}\backslash G_{vw}\) respectively, then \(G=\langle G_{v},a,b\rangle\)._

Proof.: Let \(H:=\langle G_{v},a,b\rangle\). Observe that every edge incident to \(v\) can be reversed by \(a\) or \(b\) or one of their \(G_{v}\)-conjugates, implying that \(\Gamma(v)\subseteq v^{H}=\{v^{h}:h\in H\}\). Now suppose \(H\) is not transitive on \(\operatorname{V}(\Gamma)\). Then, because \(\Gamma\) is connected, there exists an edge \(\{x,y\}\) of \(\Gamma\) with \(x\in v^{H}\), say \(x=v^{h}\) where \(h\in H\), while \(y\not\in v^{H}\). Since \(\Gamma(v)\subseteq v^{H}\), it follows that \(\Gamma(x)=\Gamma(v^{h})=\Gamma(v)^{h}\subseteq(v^{H})^{h}=v^{H}\), and so \(y\in v^{H}\), a contradiction. Thus \(H\) is transitive on \(\operatorname{V}(\Gamma)\), and therefore \(G=G_{v}H=H\).

**Lemma 2.2**.: _Suppose that \(\Gamma\) is a connected tetravalent graph admitting two distinct LR-decompositions \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\). Then the group \(A:=\langle\operatorname{Aut}^{+}(\Gamma,\mathcal{C}),\operatorname{Aut}^{+}(\Gamma,\mathcal{C}^{\prime})\rangle\) acts transitively on the \(2\)-arcs of \(\Gamma\), and \(A_{v}^{\Gamma(v)}\cong\operatorname{Sym}(4)\) for all \(v\in V(\Gamma)\)._

Proof.: Let \(G:=\operatorname{Aut}^{+}(\Gamma,\mathcal{C})\) and \(H:=\operatorname{Aut}^{+}(\Gamma,\mathcal{C}^{\prime})\). As \(\mathcal{C}\neq\mathcal{C}^{\prime}\), there exists a vertex \(v\) of \(\Gamma\) and cycles \(C\in\mathcal{C}\) and \(C^{\prime}\in\mathcal{C}^{\prime}\) passing through \(v\), sharing precisely one of the edges incident with \(v\), \(\{v,u\}\) say, and accordingly, \(C(v)=\{u,w\}\) and \(C^{\prime}(v)=\{u,z\}\) for three different neighbours \(u,w\) and \(z\) of \(v\) in \(\Gamma\). Now let \(x\) be the fourth neighbour of \(v\). By our observations in Remark 1.4, we know that \(G_{v}^{\Gamma(v)}=\langle(u\,w),(z\,x)\rangle\) and \(H_{v}^{\Gamma(v)}=\langle(u\,z),(w\,x)\rangle\). Thus \(\langle G_{v}^{\Gamma(v)},H_{v}^{\Gamma(v)}\rangle\cong\operatorname{Sym}(4)\), and since \(A_{v}^{\Gamma(v)}\) contains \(H_{v}^{\Gamma(v)}\) and \(G_{v}^{\Gamma(v)}\), we find that \(A_{v}^{\Gamma(v)}\cong\operatorname{Sym}(4)\) also. Finally, since \(G\) is vertex-transitive, so is \(A\). Hence the \(2\)-transitivity of \(A_{v}^{\Gamma(v)}\) implies that \(A\) is transitive on the set of \(2\)-arcs of \(\Gamma\).

**Lemma 2.3**.: _If \(G\) is an LR-group of automorphisms of a connected tetravalent graph \(\Gamma\), then there exists a unique LR-decomposition \(\mathcal{C}\) of \(\Gamma\) such that \(G\leq\operatorname{Aut}^{+}(\Gamma,\mathcal{C})\). Conversely, if \((\Gamma,\mathcal{C})\) is an LR-structure, then \(\operatorname{Aut}^{+}(\Gamma,\mathcal{C})\) is a maximal LR-group of automorphisms of \(\Gamma\)._

Proof.: Suppose first that \(G\) is an LR-group of automorphisms of \(\Gamma\). By definition, \(G\) has two orbits on \(\operatorname{E}(\Gamma)\), say \(E_{1}\) and \(E_{2}\). So now let \(X_{1}\) and \(X_{2}\) be the graphs with vertex-set \(\operatorname{V}(\Gamma)\) and edge-sets \(E_{1}\) and \(E_{2}\), respectively.
By connectivity of \(\Gamma\), there exists a vertex \(v\in\operatorname{V}(\Gamma)\) which is incident to an edge in \(E_{1}\) as well as to one in \(E_{2}\), and since \(G\) is vertex-transitive, it follows that this is true for every vertex of \(\Gamma\). Furthermore, since \(G_{v}^{\Gamma(v)}\) has two orbits of length \(2\), it follows that every vertex of \(v\) is incident to two edges in \(E_{1}\) and two edges in \(E_{2}\), implying that \(X_{1}\) and \(X_{2}\) are both spanning \(2\)-regular subgraphs of \(\Gamma\). Let \(\mathcal{L}\), respectively, \(\mathcal{R}\), be the set consisting of the sets of edges of the cycles in \(X_{1}\), respectively, \(X_{2}\) and let \(\mathcal{C}=\mathcal{L}\cup\mathcal{R}\). Then \(\mathcal{C}\) is clearly a \(G\)-invariant decomposition of \(\mathrm{E}(\Gamma)\) into cycles, with each vertex of \(\Gamma\) incident to one cycle in \(\mathcal{L}\) and to one cycle in \(\mathcal{R}\). Moreover, since \(E_{1}\) and \(E_{2}\) are orbits of \(G\) on \(\mathrm{E}(\Gamma)\), we see that \(G\) is a subgroup of \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\), with respect to the partition \(\{\mathcal{L},\mathcal{R}\}\) of \(\mathcal{C}\). In particular, \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\) acts transitively on \(\mathrm{V}(\Gamma)\). Next, to show that condition (2) of Definition 1.2 holds, consider the cycles \(C\in\mathcal{L}\) and \(D\in\mathcal{R}\) passing through a vertex \(v\), and let \(C(v)=\{u_{1},w_{1}\}\) and \(D(v)=\{u_{2},w_{2}\}\) denote the neighbourhoods of \(v\) in these two cycles. By the construction of \(\mathcal{L}\) and \(\mathcal{R}\), we see that \(G_{v}^{\Gamma(v)}\) preserves each of \(C(v)\) and \(D(v)\) setwise, which together with the requirement that \(G_{v}^{\Gamma(v)}\cong\mathrm{V}_{4}\) implies that \(G_{v}^{\Gamma(v)}=\langle(u_{1}\,w_{1}),(u_{2}\,w_{2})\rangle\). Now observe that an element of \(G_{v}\) inducing the permutation \((u_{1}\,w_{1})\) on \(\Gamma(v)\) preserves both \(C\) and \(D\) setwise, and moreover, as it fixes three consecutive vertices on the cycle \(D\), it fixes \(D\) point-wise. Similarly, since this element fixes \(v\) but swaps the two \(C\)-neighbours of \(v\), it reflects \(C\) at \(v\). By applying an analogous argument to the permutation \((u_{2}\,w_{2})\), we see that the condition (2) of Definition 1.2 is indeed fulfilled, completing the proof that \((\Gamma,\mathcal{C})\) is an LR-structure with \(G\leq\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\). Suppose now that \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) are LR-decompositions of \(\Gamma\) for which \(G\leq\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\) and \(G\leq\mathrm{Aut}^{+}(\Gamma,C^{\prime})\). If \(\mathcal{C}\neq\mathcal{C}^{\prime}\), then there is some vertex \(v\in V(\Gamma)\) and some \(C\in\mathcal{C}\), \(C^{\prime}\in\mathcal{C}^{\prime}\) such that for \(x,y,z\in\Gamma(v)\) we have \(x,v,y\in C\) and \(x,v,z\in C^{\prime}\). Since \(G\leq\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\), the set \(\{x,y\}\) is an orbit of \(G_{v}\). On the other hand, since \(G\leq\mathrm{Aut}^{+}(\Gamma,\mathcal{C}^{\prime})\), the set \(\{x,z\}\) is an orbit of \(G_{v}\). This is a contradiction, and hence \(\mathcal{C}=\mathcal{C}^{\prime}\). Conversely, let \((\Gamma,\mathcal{C})\) be an LR-structure and let \(G=\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\). Then \(G_{v}^{\Gamma(v)}\) is permutation isomorphic to the Klein \(4\)-group \(\mathrm{V}_{4}\) in its intransitive action on \(4\) points, by our observations in Remark 1.4. 
On the other hand, \(G\) is vertex-transitive (by definition), but \(G\) is not edge-transitive as it preserves the sets \(\mathcal{L}\) and \(\mathcal{R}\). Also vertex-transitivity and the fact that \(G_{v}\) induces \(\mathrm{V}_{4}\) on \(\Gamma(v)\) imply that \(G\) is transitive on the set of edges contained in the cycles in \(\mathcal{L}\), as well as on the set of edges contained in \(\mathcal{R}\). In particular, \(G\) has two orbits on \(\mathrm{E}(\Gamma)\). This shows that \(G\) is an LR-group of automorphisms of \(\Gamma\). Moreover, if \(X\) is another LR-group of automorphisms of \(\Gamma\) such that \(G\leq X\), then from the first paragraph of this proof we know there exists a unique LR-decomposition \(\mathcal{C}^{\prime}\) of \(\Gamma\) for which \(X\leq\mathrm{Aut}^{+}(\Gamma,\mathcal{C}^{\prime})\). But then \(G\leq\mathrm{Aut}^{+}(\Gamma,\mathcal{C}^{\prime})\), and again by the uniqueness of the LR-decomposition \(\mathcal{C}\) for which \(G\leq\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\), we find that \(\mathcal{C}=\mathcal{C}^{\prime}\). It follows that \(X\leq\mathrm{Aut}^{+}(\Gamma,\mathcal{C})=G\), and hence \(X=G\). This shows that \(G\) is a maximal LR-group of automorphisms of \(\Gamma\), and completes the proof. **Corollary 2.4**.: _If \((\Gamma,\mathcal{C})\) is an LR-structure, and \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})<X\leq\mathrm{Aut}(\Gamma)\), then \(X\) acts transitively on the arcs of \(\Gamma\)._ Proof.: By Lemma 2.3, we see that \(G:=\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\) is a maximal LR-group of automorphisms of \(\Gamma\), and so \(X\) is not an LR-group of automorphisms. Now suppose that \(X\) is not transitive on \(\mathrm{A}(\Gamma)\). Then \(X_{v}^{\Gamma(v)}\) is not transitive, and so \(X_{v}^{\Gamma(v)}=G_{v}^{\Gamma(v)}\cong\mathrm{V}_{4}\), and since \(X\) is not an LR-group, \(X\) must be transitive on the edges of \(\Gamma\). In view of Remark 1.5, it follows that \(G\) acts transitively on the arcs underlying each of the two edge-orbits of \(G\). Since these two \(G\)-edge-orbits are merged into a single \(X\)-edge-orbit, this implies that \(X\) is arc-transitive on \(\Gamma\) after all, a contradiction. **Lemma 2.5**.: _Let \((\Gamma,\mathcal{C})\) be an LR-structure, and let \(G:=\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\). Then the following claims are equivalent\(\,\):_ 1. \((\Gamma,\mathcal{C})\) _is self-dual_\(;\)__ 2. _there exists a group_ \(X\) _such that_ \(G\leq X\leq\mathrm{Aut}(\Gamma)\) _and_ \(|X:G|=2\,\)__ 3. _there exists an arc-transitive but not_ \(2\)_-arc-transitive group_ \(X\leq\mathrm{Aut}(\Gamma)\) _that contains_ \(G\) _as a normal subgroup._ Proof.: Suppose first that (i) holds. In this case, let \(\{\mathcal{L},\mathcal{R}\}\) be the partition of \(\mathcal{C}\) as in Definition 1.2, and let \(X=\mathrm{Aut}(\Gamma,\mathcal{C})\). Then clearly the partition \(\{\mathcal{L},\mathcal{R}\}\) is \(X\)-invariant, and \(G\) is the kernel of the induced action of \(X\) on \(\{\mathcal{L},\mathcal{R}\}\), so \(|X:G|=2\), and hence (ii) holds. Next, suppose that (ii) holds. Then by Corollary 2.4, \(X\) is arc-transitive. Moreover, as both \(X\) and \(G\) are vertex-transitive, we find that \(|X_{v}:G_{v}|=|X:G|=2\), which implies that the \(G_{v}\)-orbits on \(\Gamma(v)\) are blocks of imprimitivity for \(X_{v}^{\Gamma(v)}\). In particular, \(X_{v}^{\Gamma(v)}\) cannot be doubly transitive, and so \(X\) is not \(2\)-arc-transitive. This proves that (iii) holds. Finally, suppose that (iii) holds. 
Since \(G\) is normal in \(X\) and since both \(\mathcal{L}\) and \(\mathcal{R}\) are orbits of the action of \(G\) on \(\mathrm{E}(\Gamma)\), the partition \(\{\mathcal{L},\mathcal{R}\}\) is \(X\)-invariant. Since \(X\) is arc-transitive, this implies that there exists \(g\in X\) interchanging \(\mathcal{L}\) and \(\mathcal{R}\), and therefore \((\Gamma,\mathcal{C})\) is self-dual, that is, (i) holds.

## 3. Amalgams and a reduction to trees

We begin this section by recalling some basic facts about finite group amalgams of rank \(2\) and their relationship with discrete groups acting arc-transitively on graphs. Let \(\Gamma\) be a connected \(d\)-valent graph, and let \(G\) be a discrete arc-transitive group of automorphisms of \(\Gamma\). Also let \(u\) and \(v\) be two adjacent vertices in \(\Gamma\), let \(L=G_{v}\), \(R=G_{\{u,v\}}\), \(B=G_{uv}\) be the stabilisers of the vertex \(v\), edge \(\{u,v\}\) and arc \((u,v)\), respectively. Then it is well known that the following hold (see [14], for example):

1. \(B=L\cap R\) is a finite group,
2. \(|L:B|=d\) and \(|R:B|=2\),
3. if \(K\leq B\) and \(K\) is normal in \(L\) and in \(R\), then \(K=1\), and
4. \(G=\langle L,R\rangle\).

Furthermore, if \(a\) is an arbitrary element of \(R\setminus B\) (so that \(a\) reverses the arc \((u,v)\)), then \(\Gamma\) is isomorphic to the _Schreier coset graph_ \(\mathrm{Cos}(G;L,a)\) whose vertices are the right cosets of \(L\) in \(G\) and whose edges are of the form \(\{Lx,Lax\}\) for \(x\in G\). This graph can also be denoted by \(\mathrm{Cos}(G;L,B,R)\), with arcs being the cosets of \(B\) in \(G\) (with the initial vertex of an arc \(Bx\) being \(Lx\), and the reverse of \(Bx\) being \(Bax\)). Moreover, via this isomorphism between \(\Gamma\) and \(\mathrm{Cos}(G;L,B,R)\), the action of \(G\) on the right cosets of \(L\) and \(B\) by right multiplication corresponds to the original action of \(G\) on the vertices and arcs of \(\Gamma\).

Conversely, suppose that \(B\), \(L\), \(R\) and \(G\) are arbitrary groups satisfying conditions (1) to (4) above. In this case we say that \((L,B,R)\) is a _finite faithful amalgam of index \((d,2)\)_, and that \(G\) is a _completion_ of the amalgam. Correspondingly, \(\Gamma=\mathrm{Cos}(G;L,B,R)\) is a connected regular \(d\)-valent graph, upon which \(G\) acts by right multiplication as a group of automorphisms. Moreover, this action is faithful, and transitive on the arcs of \(\Gamma\). Note that the groups \(L\), \(R\) and \(B\) play the roles of the vertex-stabiliser, edge-stabiliser and the arc-stabiliser of a mutually incident vertex-edge-arc triple in \(\Gamma\).

For a given finite faithful amalgam \((L,B,R)\), there exists a _universal completion_ \(\tilde{G}\), denoted by \(L*_{B}R\) and called the free product of \(L\) and \(R\) amalgamated over \(B\). This has the property that for every completion \(G\) of the given amalgam, there exists an epimorphism \(\pi\colon\tilde{G}\to G\) whose kernel intersects the groups \(L\) and \(R\) trivially (so that we may identify the subgroups \(L,R,B\leq\tilde{G}\) with their \(\pi\)-images in \(G\)). Furthermore, \(\tilde{\Gamma}:=\mathrm{Cos}(\tilde{G};L,B,R)\) is an infinite \(d\)-valent graph, called _the universal cover of \(\Gamma\)_, and \(\pi\) induces a covering projection \(\tilde{\Gamma}\to\Gamma\).
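To make the coset graph construction concrete, the following minimal sketch builds \(\mathrm{Cos}(G;L,a)\) directly from the definition above; the particular choice \(G=\operatorname{Sym}(5)\), \(L\) the stabiliser of a point and \(a\) a transposition moving that point is only an illustrative toy example (it yields the complete graph \(K_{5}\), a connected tetravalent arc-transitive graph), and is not taken from the sources cited here.

```python
from itertools import permutations

# Schreier coset graph Cos(G; L, a), as described above:
# vertices are the right cosets of L in G, edges are {Lx, Lax} for x in G.
# Toy data: G = Sym(5), L = stabiliser of the point 0, a = transposition (0 1).

n = 5
G = [tuple(p) for p in permutations(range(n))]   # the 120 elements of Sym(5)

def mul(p, q):
    """Composition of permutations: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(n))

L = [p for p in G if p[0] == 0]                  # point stabiliser, a copy of Sym(4)
a = (1, 0, 2, 3, 4)                              # the transposition swapping 0 and 1

def right_coset(x):
    """The right coset L*x, represented canonically as a frozenset of elements."""
    return frozenset(mul(l, x) for l in L)

vertices = {right_coset(x) for x in G}
edges = {frozenset({right_coset(x), right_coset(mul(a, x))}) for x in G}

print(len(vertices), len(edges))                 # prints "5 10": the graph is K5
```

The same recipe, applied to any finite completion \(G\) of a finite faithful amalgam of index \((4,2)\), produces a connected tetravalent graph on which \(G\) acts arc-transitively, exactly as described above.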
Let us now focus on the situation arising in Theorem 1.9 and prove the following statement: **Lemma 3.1**.: _Theorem 1.9 holds provided that it holds in the case where \(\Gamma\) is a tetravalent tree._ Proof.: Suppose Theorem 1.9 holds for the case in which the graph in question is a tetravalent tree. Now let \(\Gamma\) be any connected tetravalent graph admitting a discrete \(2\)-arc-transitive group \(A\) of automorphisms with \(A_{v}^{\Gamma(v)}\cong\mathrm{Sym}(4)\), and \(A\) contains an LR-subgroup \(G\). Also let \(\{u,v\}\) be an edge of \(\Gamma\), and let \(L:=A_{v}\), \(R:=A_{\{u,v\}}\) and \(B:=A_{uv}\). Then we may identify \(\Gamma\) with \(\mathrm{Cos}(A;L,B,R)\), and the action of \(A\) on the vertices, arcs and edges of \(\Gamma\) with the action of \(A\) on cosets of \(L\), \(B\) and \(R\), respectively. Next, let \(\tilde{A}:=L*_{B}R\) be the universal completion of the amalgam \((L,B,R)\), let \(\pi\colon\tilde{A}\to A\) be the corresponding epimorphism, and let \(\mathrm{T}_{4}:=\mathrm{Cos}(\tilde{A};L,B,R)\). Then \(\mathrm{T}_{4}\) is a \(4\)-valent tree upon which \(\tilde{A}\) acts arc-transitively. Moreover, since the groups \(L\), \(B\) and \(R\) are the stabiliser of an incident vertex, arc and edge (respectively), we see that in the actions of the group \(\tilde{A}\) on \(\mathrm{T}_{4}=\mathrm{Cos}(\tilde{A};L,B,R)\)) and the group \(A\) on \(\Gamma=\mathrm{Cos}(A;L,B,R)\)), the vertex-stabiliser \(\tilde{A}_{\tilde{v}}=L\) acts on the neighbourhood \(\tilde{\Gamma}(\tilde{v})\) in the same way as the vertex-stabiliser \(A_{v}=L\) on the neighbourhood \(\Gamma(v)\). Thus \(\tilde{A}_{\tilde{v}}^{\mathrm{T}_{4}(\tilde{v})}\) is permutation isomorphic to \(A_{v}^{\Gamma(v)}\cong\mathrm{Sym}(4)\). Also, because \(A\) is \(2\)-arc-transitive and discrete, so is \(\tilde{A}\), and \(\tilde{A}_{\tilde{v}}^{\mathrm{T}_{4}(\tilde{v})}\cong\mathrm{Sym}(4)\). Moreover, using the fact that \(\pi\) induces a covering projection \(\mathrm{T}_{4}\to\Gamma\), one can also show that \(A\) acts \(s\)-arc transitively on \(\Gamma\) if and only if \(\tilde{A}\) acts \(s\)-arc transitively on \(\mathrm{T}_{4}\). (This fact is well-known; for a proof see [10, Lemma 3.2].) Now let \(\tilde{G}\) be the \(\pi\)-preimage of the LR-subgroup \(G\) of \(A\), and observe that the stabilisers of some incident vertex, arc and edge of \(\tilde{G}\) are equal to the intersections \(\tilde{G}\cap L\), \(\tilde{G}\cap B\) and \(\tilde{G}\cap R\), respectively. By an argument similar to the one above, we see that the action of \(\tilde{G}_{\tilde{v}}\) on \(\mathrm{T}_{4}(\tilde{v})\) is isomorphic to the action of \(G_{v}\) on \(\Gamma(v)\), and therefore \(\tilde{G}_{\tilde{v}}^{\mathrm{T}_{4}(\tilde{v})}\) is permutation isomorphic to \(G_{v}^{\Gamma(v)}\cong\mathrm{V}_{4}\). Then since the action of \(\tilde{G}\) on the edge-set \(\mathrm{E}(\mathrm{T}_{4})\) is isomorphic to the action of \(\tilde{G}\) on right cosets of the copy of \(R\) in \(\tilde{A}\) by right multiplication, which in turn is isomorphic to the action of \(G\) on right cosets of \(R\) in \(A\) by right multiplication, we see that the number of edge-orbits of \(\tilde{G}\) on \(\mathrm{T}_{4}\) is equal to the number of edge-orbits of \(G\) on \(\Gamma\). In particular, since \(G\) is an LR-group of automorphisms, it follows that so is \(\tilde{G}\). 
Finally, observe that the epimorphism \(\pi\) induces a bijection between the lattice of subgroups of \(\tilde{A}\) that contain \(\ker(\pi)\) and the lattice of subgroups of \(A\). It follows that there exists a subgroup \(\tilde{X}\) of \(\tilde{A}\) containing \(\tilde{G}\) as a subgroup of index \(2\) if and only if there exists a subgroup \(X\) of \(A\) containing \(G\) as a subgroup of index \(2\). Similarly, \(\tilde{G}\) is a maximal LR-subgroup of \(\tilde{A}\) if and only if \(G\) is a maximal LR-subgroup of \(A\). We may now use our assumption that Theorem 1.9 holds for trees to conclude that \(\tilde{A}\) does not act \(5\)-arc-transitively on \(\mathrm{T}_{4}\), and hence that \(A\) does not act \(5\)-arc-transitively on \(\Gamma\). Also if \(G\) is a maximal LR-subgroup of \(A\), then \(\tilde{G}\) is a maximal LR-subgroup of \(\tilde{A}\), and hence by our assumption, there exists an arc-transitive subgroup \(\tilde{X}\) of \(\tilde{A}\) containing \(\tilde{G}\) as a subgroup of index \(2\). Thus \(X:=\pi(\tilde{X})\) is an arc-transitive subgroup of \(A\) containing \(G\) as a subgroup of index \(2\). Moreover, if \(A\) contains another maximal LR-subgroup \(H\), then by the same argument as above, \(\pi^{-1}(H)\) is a maximal LR-subgroup of \(\tilde{A}\), and hence (by our assumption) is conjugate in \(\tilde{A}\) to \(\tilde{G}\). But then \(H\) is conjugate to \(G\) in \(A\), and this completes the proof.

## 4. LR-subgroups of \(2\)-arc-transitive groups

The structure of discrete \(2\)-arc-transitive groups of automorphisms of the infinite tetravalent tree \(\mathrm{T}_{4}\) is well understood, thanks to the classical work of Gardiner [8] and Weiss [25], and a description of these groups in terms of generators and relators was given explicitly in [14, Table 1]. It follows from this work that up to conjugacy in \(\mathrm{Aut}(\mathrm{T}_{4})\), there are exactly nine possibilities for the group \(A\), with six of those satisfying the additional condition \(A_{v}^{\mathrm{T}_{4}(v)}\cong\mathrm{Sym}(4)\). These six groups, together with the corresponding stabilisers \((L,B,R)\) of mutually incident vertex-arc-edge triples in the graph, are given in the second column of Table 1. Moreover, presentations for the subgroups \(L\), \(B\) and \(R\) can be read conveniently from the presentation of the corresponding group \(A\) in Table 1, by simply taking all the relators that involve only the generators of the subgroup in question. Note that the names of the first four of the groups in Table 1 are chosen so as to reflect the isomorphism type of the vertex-stabiliser. For example, the group \(A\) named \(S_{4}\) has vertex-stabiliser \(L\) isomorphic to \(\mathrm{Sym}(4)\), while for the groups named \(C_{3}\rtimes S_{4}\) and \(C_{3}\rtimes S_{4}^{*}\), the vertex-stabiliser \(L\) is isomorphic to a semidirect product \(C_{3}\rtimes\mathrm{Sym}(4)\), and for the group named \(S_{3}\times S_{4}\) the vertex-stabiliser is isomorphic to the direct product \(\mathrm{Sym}(3)\times\mathrm{Sym}(4)\). Each of these four possibilities for the group \(A\) acts on \(\mathrm{T}_{4}\) either \(2\)-arc-transitively or \(3\)-arc-transitively (but not \(4\)-arc-transitively). The last two groups, named \(4\)-AT and \(7\)-AT, act \(4\)-arc-transitively (but not \(5\)-arc-transitively) and \(7\)-arc-transitively, respectively.
We will now prove that Theorem 1.9 holds for the case where \(\Gamma\) is the tetravalent tree \(\mathrm{T}_{4}\), which, by Lemma 3.1, proves Theorem 1.9 in full generality. In fact, we prove something slightly stronger:

**Lemma 4.1**.: _Let \(A\) be a discrete \(2\)-arc-transitive group of automorphisms of \(\mathrm{T}_{4}\). If \(A\) is one of the groups in the first five rows of Table 1 (that is, if \(A\) has type \(S_{4}\), \(C_{3}\rtimes S_{4}\), \(C_{3}\rtimes S_{4}^{*}\), \(S_{3}\times S_{4}\) or 4-AT), then \(A\) contains a maximal LR-subgroup \(G\), unique up to conjugation in \(A\), given in the third column of Table 1. Also the normaliser \(\mathrm{N}_{A}(G)\) in \(A\) of the maximal LR-subgroup \(G\) contains \(G\) as a subgroup of index \(2\), and is shown in the third column of Table 1 too. Finally, if \(A\) has type \(7\)-AT (as in the sixth row), then \(A\) contains no LR-subgroup._

[Table 1 (omitted): its six rows give, for the types \(S_{4}\), \(C_{3}\rtimes S_{4}\), \(C_{3}\rtimes S_{4}^{*}\), \(S_{3}\times S_{4}\), \(4\)-AT and \(7\)-AT, a presentation of the group \(A\) (from which presentations of \(L\), \(B\) and \(R\) can be read off) and, in the third column, the maximal LR-subgroup \(G\) together with its normaliser \(\mathrm{N}_{A}(G)\).]

Proof.: Let \(\{v,u\}\) be an edge of \(\Gamma:=\mathrm{T}_{4}\), and recall that we may assume that \((A,A_{v},A_{vu},A_{\{v,u\}})\) is one of the quadruples \((A,L,B,R)\) given in Table 1, and that the vertex-set, arc-set and edge-set of \(\Gamma\) can be identified with the right coset spaces \((A\!:\!A_{v})\), \((A\!:\!A_{vu})\) and \((A\!:\!A_{\{v,u\}})\), respectively, with the action of \(A\) on vertices, arcs and edges of \(\Gamma\) coinciding with the actions of \(A\) on \((A\!:\!A_{v})\), \((A\!:\!A_{vu})\) and \((A\!:\!A_{\{v,u\}})\) by right multiplication. We proceed using a combination of theoretical and computational methods. Let \(\Omega\) be the right coset space \((A_{v}\!:\!A_{vu})\), let \(\rho\!:\!A_{v}\to\mathrm{Sym}(\Omega)\) be the natural action of \(A_{v}\) on \(\Omega\), and let \(A_{v}^{\Omega}\) be the permutation group \(\rho(A_{v})\) induced by this action on \(\Omega\). Since \(A_{vu}\) is the stabiliser of \(u\) in the transitive action of \(A_{v}\) on \(\Gamma(v)\), we may identify the elements of \(\Gamma(v)\) with the elements of \(\Omega\) in such a way that the coset action \(\rho\) of \(A_{v}\) on \(\Omega\) corresponds to the action of \(A_{v}\) on \(\Gamma(v)\), and that \(A_{v}^{\Gamma(v)}\) corresponds to \(A_{v}^{\Omega}\). In particular, \(A_{v}^{\Omega}=\mathrm{Sym}(\Omega)\).
By the definition of an LR-group, \(G_{v}\) is a 2-subgroup of \(A_{v}\) such that \(G_{v}^{\Gamma(v)}\) is permutation isomorphic to the intransitive Klein 4-group \(\mathrm{V}_{4}\). With the above-described identification of \(\Gamma(v)\) with \(\Omega\), we see that \(G_{v}^{\Omega}:=\rho(G_{v})\) is one of the three intransitive subgroups of \(\mathrm{Sym}(\Omega)\) isomorphic to \(\mathrm{V}_{4}\), and hence that \(G_{v}\) belongs to the set \(\mathcal{X}\) of all 2-subgroups of \(A_{v}\) with \(\rho(X)\cong\mathrm{V}_{4}\). Because \(A_{v}\) is a finite group (with the presentation given in Table 1), one can use a computer algebra system such as Magma[2] to determine the set \(\mathcal{X}\) for each of the six possible types of the group \(A\). If \(A\) has type \(S_{4}\), \(C_{3}\rtimes S_{4}\) or \(C_{3}\rtimes S_{4}^{*}\), then the set \(\mathcal{X}\) consists of a single conjugacy class of subgroups of \(A_{v}\), with the class representative being the Klein 4-subgroup generated by \(xy\) and \(t\). If \(A\) has type \(S_{3}\times S_{4}\), then \(\mathcal{X}\) is the disjoint union of four \(A_{v}\)-conjugacy classes, the representatives of which are \(\langle xy,s\rangle\), \(\langle xy,sr\rangle\), \(\langle rxy,s\rangle\) and \(\langle xy,r,s\rangle\), with the first three isomorphic to \(\mathrm{V}_{4}\), and the fourth being an elementary abelian group of order 8. If \(A\) has type 4-AT, then \(\mathcal{X}\) consists of a unique \(A_{v}\)-conjugacy class represented by the group \(\langle x,y,t\rangle\), isomorphic to the dihedral group \(\mathrm{D}_{4}\) of order 8. Finally, if \(A\) is of type 7-AT, then \(\mathcal{X}\) is the conjugacy class in \(A_{v}\) of \(\langle pcq,(pcq)^{h}\rangle\), which is also isomorphic to \(\mathrm{D}_{4}\). Next, since \(G\) acts transitively on \(\mathrm{V}(\Gamma)\), we see that \(A=GA_{v}\), and so \(G\) has finite index in \(A\), indeed \(|A:G|=|A_{v}:G_{v}|\). Hence the group \(G\) is a member of the set \[\mathcal{T}=\bigcup_{X\in\mathcal{X}}\{\,T\,:\,X\leq T\leq A\,\,\,\mathrm{ with}\,\,\,|A:T|=|A_{v}:X|\,\}.\] We computed this set \(\mathcal{T}\) for each of the first five types of the group \(A\), using the LowIndexSubgroups routine in Magma. (The computation in these cases takes only a few seconds on an average laptop.) We were unable to do the same in the case where \(A\) has type 7-AT, however, due to the computational complexity of the LowIndexSubgroups algorithm, we will explain how we dealt with this case later. Of course the set \(\mathcal{T}\) might contain some subgroups of \(A\) that are not vertex-transitive, or are vertex-transitive but act half-arc-transitively on \(\Gamma\), and so we can restrict our attention to the subset \(\mathcal{T}^{*}\) consisting of all \(T\in\mathcal{T}\) that are LR-groups of automorphisms of \(\Gamma\). In determining this set \(\mathcal{T}^{*}\), we observe that an LR-group \(T\in\mathcal{T}^{*}\) has two orbits on \(\mathrm{E}(\Gamma)\) and \(\mathrm{A}(\Gamma)\), and hence that each of \(A_{\{v,u\}}\) and \(A_{vu}\) has two orbits in its action on the coset space \((A\!:\!T)\) by right multiplication. Moreover, as an LR-group \(T\) is vertex-transitive, we see that \(A_{v}\) is transitive in its action on \((A\!:\!T)\), and hence a group \(T\in\mathcal{T}\) belongs to \(\mathcal{T}^{*}\) if and only if the stabilisers \(A_{\{v,u\}}\), \(A_{vu}\) and \(A_{v}\) have two orbits, two orbits and one orbit, respectively, in the their actions on \((A\!:\!T)\). 
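In the smallest of these cases (type \(S_{4}\), where \(A_{v}\cong\mathrm{Sym}(4)\) acts faithfully on \(\Omega\)), the claim that \(\mathcal{X}\) forms a single conjugacy class amounts to the elementary fact that the intransitive Klein \(4\)-subgroups of \(\mathrm{Sym}(4)\) are exactly three in number and mutually conjugate. The following brute-force sketch checks this directly; it is an illustrative aside only, working with \(\mathrm{Sym}(4)\) as a concrete permutation group rather than with the presentations of Table 1 and the Magma computations used in the proof.

```python
from itertools import permutations

# Brute-force check in Sym(4): the Klein 4-subgroups acting intransitively on
# {0,1,2,3} are exactly 3 in number and form a single conjugacy class.
# (Illustration only; the proof performs the analogous computations in Magma,
# starting from the presentations of the vertex-stabilisers in Table 1.)

S4 = [tuple(p) for p in permutations(range(4))]
e = (0, 1, 2, 3)                                  # the identity permutation

def mul(p, q):                                    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

def inv(p):
    r = [0] * 4
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def generated(gens):
    """Closure of gens under multiplication (sufficient in a finite group)."""
    elems = {e, *gens}
    changed = True
    while changed:
        changed = False
        for x in list(elems):
            for y in list(elems):
                z = mul(x, y)
                if z not in elems:
                    elems.add(z)
                    changed = True
    return frozenset(elems)

involutions = [p for p in S4 if p != e and mul(p, p) == e]

# Every Klein 4-subgroup is generated by two distinct commuting involutions.
kleins = {generated([x, y]) for x in involutions for y in involutions
          if x != y and mul(x, y) == mul(y, x)}

# Intransitive means the orbit {h(0) : h in H} is a proper subset of {0,1,2,3}.
intransitive = [H for H in kleins if len({h[0] for h in H}) < 4]

rep = intransitive[0]
conjugates = {frozenset(mul(mul(inv(g), h), g) for h in rep) for g in S4}

print(len(intransitive), conjugates == set(intransitive))   # prints "3 True"
```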
Next, because every group \(T\in\mathcal{T}\) has finite (and relatively small) index in \(A\), it is easy check each group \(T\in\mathcal{T}\) against the latter condition computationally, and find that, up to conjugacy in \(A\), the set \(\mathcal{T}^{*}\) consists of a unique LR-group in all cases, except when \(A\) has type \(S_{3}\times S_{4}\) or possibly when it has type 7-AT (in which case the above approach is computationally too difficult). In all but those two exceptional cases, the LR-groups that satisfy the condition are precisely the groups listed in the third column of Table 1. If \(A\) has type \(S_{3}\times S_{4}\), then \(\mathcal{T}^{*}\) consists of two conjugacy classes, with representatives \(\langle xy,sr,a,a^{x}\rangle\) and \(\langle xy,a,a^{x},s,r\rangle\), and as the former is clearly a subgroup of the latter, we find that (again) \(A\) contains a unique maximal LR-subgroup up to conjugacy. Also, a direct computation shows that in each case (except possibly when \(A\) is of type 7-AT), the normaliser \(\mathrm{N}_{A}(G)\) in the unique maximal LR-subgroup \(G\) of \(A\) is as stated in Table 1 and contains \(G\) as a subgroup of index 2. This proves the statement of the lemma in all cases except when \(A\) is of type 7-AT. Finally, let us assume that \(A\) is of type \(7\)-AT, and hence that \(G_{v}=\langle\alpha,\alpha^{h}\rangle\cong\mathrm{D}_{4}\) where \(\alpha=pcq\). Recall that \(\rho\) is the action of \(A_{v}\) on the set of right cosets of \(A_{vu}\) (\(=B\)), and that \(\alpha\) fixes \(u\). Let \(x\) be the other vertex in \(\Gamma(v)\) fixed by \(\alpha\), and let \(w\) and \(z\) be vertices in \(\Gamma(v)\) that are interchanged by \(\alpha\). Now since \(|G_{v}|=|G_{vu}|\,|u^{G_{v}}|=2|G_{vu}|\), we find that \(|G_{vu}|=4\). Also we see that \(\rho(h)\) is the double transposition \((u\,w)(x\,z)\), and hence \(\alpha\in G_{vu}\) and \(\alpha^{h}\in G_{vw}\). Moreover, one can see that the element \((\alpha^{h}\alpha)^{2}\) of order \(2\) lies in the kernel of \(\rho\), and it follows that \((\alpha^{h}\alpha)^{2}\) fixes the neighbourhood \(\Gamma(v)\) pointwise, and therefore \(\alpha\neq(\alpha^{h}\alpha)^{2}\neq\alpha^{h}\). Also because \(|G_{vu}|=|G_{vw}|=4\), we find that \(G_{vu}=\langle(\alpha^{h}\alpha)^{2},\alpha\rangle\) and \(G_{vw}=\langle(\alpha^{h}\alpha)^{2},\alpha^{h}\rangle\). We will now determine the edge-stabiliser \(G_{\{v,u\}}\). To do this, we first observe that \((\alpha^{h}\alpha)^{2}=q^{2}k\), which can be verified easily since it involves only elements in the finite group \(G_{v}\), and from this we find that \(G_{vu}=\langle pcq,q^{2}k\rangle\). Then since the elements \(pcq\) and \(q^{2}k\) involve only generators of the group \(R=A_{\{v,u\}}\), we deduce that the edge-stabiliser \(G_{\{v,u\}}\) is precisely the normaliser \(\mathrm{N}_{A_{\{v,u\}}}(G_{vu})\), and this gives \(G_{\{v,u\}}=\langle G_{vu},a\rangle\cong\mathrm{D}_{4}\). Recall that \(\Gamma(v)=\{u,x,w,z\}\) and that \(\rho(G_{v})=\langle(w\,z),(u\,x)\rangle\). This means \(u\) and \(w\) are in distinct orbits of \(G_{v}\) on \(\Gamma(v)\). Thus, by Lemma 2.1, there exist \(\mu\in G_{\{v,u\}}\setminus G_{vu}\) and \(\nu\in G_{\{v,w\}}\setminus G_{vw}\) such that \(G=\langle G_{v},\mu,\nu\rangle\). Now \(\mu=\mu^{\prime}a\) for some \(\mu^{\prime}\in G_{vu}\) and since \(G_{\{v,w\}}=(G_{\{v,u\}})^{h}\), also \(\nu=\nu^{\prime}a^{h}\) for some \(\nu^{\prime}\in G_{vw}\). Hence \(G=\langle G_{v},a,a^{h}\rangle=\langle(pcq),(pcq)^{h},a,a^{h}\rangle\). 
A quick computation in Magma shows that \(G=A\), which is a contradiction to our assumption that \(G\) is an LR-group. This shows that if \(A\) is of type \(7\)-AT (or equivalently, is \(7\)-arc-transitive), then \(A\) contains no LR-groups.

## 5. Proof of Theorem 1.8 and Corollary 1.10

We conclude this paper by deducing Theorem 1.8 from what we found in the previous sections and then proving Corollary 1.10.

Let us begin by answering Question 1.6, which asks whether, for any two distinct LR-decompositions \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) of a finite tetravalent graph \(\Gamma\), there exists \(g\in\mathrm{Aut}(\Gamma)\) such that \(\mathcal{C}^{g}=\mathcal{C}^{\prime}\). Let \(\Gamma\), \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) be as above and let \(G:=\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\) and \(G^{*}:=\mathrm{Aut}^{+}(\Gamma,\mathcal{C}^{\prime})\). Then, by Lemma 2.2, the group \(A:=\langle G,G^{*}\rangle\) acts \(2\)-arc-transitively on \(\Gamma\) with \(A_{v}^{\Gamma(v)}\cong\mathrm{Sym}(4)\). Moreover, by Lemma 2.3, the groups \(G\) and \(G^{*}\) are maximal LR-subgroups of \(A\). By Theorem 1.9, \(G\) and \(G^{*}\) are conjugate within \(A\), so \(G^{*}=G^{g}\) for some \(g\in A\). It follows that \(\mathcal{C}^{g}\) is an LR-decomposition of \(\Gamma\) with \(G^{*}=\mathrm{Aut}^{+}(\Gamma,\mathcal{C}^{g})\), and now Lemma 2.3 implies that \(\mathcal{C}^{g}=\mathcal{C}^{\prime}\). Hence the answer to Question 1.6 is affirmative.

Next, we verify Conjecture 1.7, namely that if \((\Gamma,\mathcal{C})\) is a finite LR-structure for which \(\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\) is a proper subgroup of \(\mathrm{Aut}(\Gamma)\), then \((\Gamma,\mathcal{C})\) is self-dual. Suppose that \((\Gamma,\mathcal{C})\) is a counterexample to Conjecture 1.7. Then \(\mathrm{Aut}(\Gamma,\mathcal{C})=\mathrm{Aut}^{+}(\Gamma,\mathcal{C})=G\), and there exists an element \(h\in\mathrm{Aut}(\Gamma)\setminus G\). Now \(\mathcal{C}^{h}\) is an LR-decomposition of \(\Gamma\) distinct from \(\mathcal{C}\), and by Lemmas 2.2 and 2.3, the group \(A:=\langle G,G^{h}\rangle\) acts \(2\)-arc-transitively on \(\Gamma\) with \(A_{v}^{\Gamma(v)}\cong\mathrm{Sym}(4)\) and with \(G\) being a maximal LR-subgroup of \(A\). By Theorem 1.9 there exists a group \(N\) with \(G\leq N\leq A\) and \(|N:G|=2\), and so by Lemma 2.5, \((\Gamma,\mathcal{C})\) is self-dual, a contradiction. This shows that no counterexample exists, and Conjecture 1.7 holds.

We now turn to Corollary 1.10. Suppose that \((\Gamma,\mathcal{C})\) is an LR-structure such that \(\{\mathcal{L},\mathcal{R}\}\) is the (unique) partition of \(\mathcal{C}\) satisfying Definition 1.2, and that \(G=\mathrm{Aut}^{+}(\Gamma,\mathcal{C})\) is contained in a discrete arc-transitive subgroup \(Y\) of \(\mathrm{Aut}(\Gamma)\). If \(G\) is normal in \(Y\), then the edge sets of the cycles in \(\mathcal{L}\) and in \(\mathcal{R}\) are the two orbits of \(G\) on \(\mathrm{E}(\Gamma)\) and hence form a \(Y\)-invariant partition of \(\mathrm{E}(\Gamma)\); since \(Y\) is edge-transitive, there exists \(g\in Y\) such that \(\mathcal{L}^{g}=\mathcal{R}\) and \(\mathcal{R}^{g}=\mathcal{L}\), and hence \((\Gamma,\mathcal{C})\) is self-dual. If \(G\) is not normal in \(Y\), then as above, for \(y\in Y\setminus N_{Y}(G)\) the group \(\langle G,G^{y}\rangle\) is a \(2\)-arc-transitive discrete subgroup of \(\mathrm{Aut}(\Gamma)\), and then Theorem 1.9 shows that there is an arc-transitive subgroup \(X\) of \(\mathrm{Aut}(\Gamma)\) such that \(G\) is a normal subgroup of \(X\) of index \(2\); hence, by Lemma 2.5, \((\Gamma,\mathcal{C})\) is self-dual.
2306.02340
Solving the cohomological equation for locally Hamiltonian flows, part II -- global obstructions
Continuing the research initiated in \cite{Fr-Ki2}, we study the existence of solutions and their regularity for the cohomological equations $X u=f$ for locally Hamiltonian flows (determined by the vector field $X$) on a compact surface $M$ of genus $g\geq 1$. We move beyond the case studied so far by Forni in \cite{Fo1,Fo3}, when the flow is minimal over the entire surface and the function $f$ satisfies some Sobolev regularity conditions. We deal with the flow restricted to any its minimal component and any smooth function $f$ whenever the flow satisfies the Full Filtration Diophantine Condition (FFDC) (this is a full measure condition). The main goal of this article is to quantify optimal regularity of solutions. For this purpose we construct a family of invariant distributions $\mathfrak{F}_{\bar t}$, $\bar{t}\in\mathscr{TF}^*$ that play the roles of the Forni's invariant distributions introduced in \cite{Fo1,Fo3} by using the language of translation surfaces. The distributions $\mathfrak{F}_{\bar t}$ are global in nature (as emphasized in the title of the article), unlike the distributions $\mathfrak{d}^k_{\sigma,j}$, $(\sigma,k,j)\in\mathscr{TD}$ and $\mathfrak{C}^k_{\sigma,l}$, $(\sigma,k,l)\in\mathscr{TC}$ introduced in \cite{Fr-Ki2}, which are defined locally. All three families are used to determine the optimal regularity of the solutions for the cohomological equation, see Theorem 1.1 and 1.2. As a by-product, we also obtained, interesting in itself, a spectral result (Theorem 1.3) for the Kontsevich-Zorich cocycle acting on functional spaces arising naturally at the transition to the first-return map.
Krzysztof Frączek, Minsung Kim
2023-06-04T12:19:01Z
http://arxiv.org/abs/2306.02340v1
# Solving the cohomological equation for locally Hamiltonian flows, part II - global obstructions ###### Abstract. Continuing the research initiated in [12], we study the existence of solutions and their regularity for the cohomological equations \(Xu=f\) for locally Hamiltonian flows (determined by the vector field \(X\)) on a compact surface \(M\) of genus \(g\geq 1\). We move beyond the case studied so far by Forni in [6, 8], when the flow is minimal over the entire surface and the function \(f\) satisfies some Sobolev regularity conditions. We deal with the flow restricted to any its minimal component and any smooth function \(f\) whenever the flow satisfies the Full Filtration Diophantine Condition (FFDC) (this is a full measure condition). The main goal of this article is to quantify optimal regularity of solutions. For this purpose we construct a family of invariant distributions \(\mathfrak{F}_{\bar{t}}\), \(\bar{t}\in\mathscr{T}\mathscr{F}^{*}\) that play the roles of the Forni's invariant distributions introduced in [6, 8] by using the language of translation surfaces. The distributions \(\mathfrak{F}_{\bar{t}}\) are global in nature (as emphasized in the title of the article), unlike the distributions \(\mathfrak{d}_{\sigma,j}^{k}\), \((\sigma,k,j)\in\mathscr{T}\mathscr{D}\) and \(\mathfrak{E}_{\sigma,l}^{k}\), \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) introduced in [12], which are defined locally. All three families are used to determine the optimal regularity of the solutions for the cohomological equation, see Theorem 1.1 and 1.2. As a by-product, we also obtained, interesting in itself, a spectral result (Theorem 1.3) for the Kontsevich-Zorich cocycle acting on functional spaces arising naturally at the transition to the first-return map. Key words and phrases:locally Hamiltonian flows, cohomological equation, invariant distributions 2000 Mathematics Subject Classification: 37E35, 37A10, 37C40, 37C83, 37J12 ## 1. Introduction Let \(M\) be a smooth compact connected orientable surface of genus \(g\geq 1\). Our primary focus is on smooth flows \(\psi_{\mathbb{R}}=(\psi_{t})_{t\in\mathbb{R}}\) on \(M\) preserving a smooth positive measure \(\mu\), i.e. such that for any (orientable) local coordinates \((x,y)\) we have \(d\mu=V(x,y)dx\wedge dy\) with \(V\) positive and smooth. Denote by \(X:M\to TM\) the associated vector field. Then for (orientable) local coordinates \((x,y)\) such that \(d\mu=V(x,y)dx\wedge dy\), the flow \(\psi_{\mathbb{R}}\) is (locally) a solution to the Hamiltonian equation \[\frac{dx}{dt}=\frac{\frac{\partial H}{\partial y}(x,y)}{V(x,y)},\quad\frac{dy }{dt}=-\frac{\frac{\partial H}{\partial x}(x,y)}{V(x,y)}\] for a smooth real-valued locally defined function \(H\). The flows \(\psi_{\mathbb{R}}\) are usually called _locally Hamiltonian flows_ or _multivalued Hamiltonian flows_. For general introduction to locally Hamiltonian flows on surfaces, we refer readers to [14, 11, 22, 24]. The main goal of the article is to fully understand the problem of existence of the solution \(u:M\to\mathbb{R}\) and its regularity for the cohomological equation \(Xu=f\), if \(f:M\to\mathbb{R}\) is any smooth observable (recall that \(Xu(x)=\frac{d}{dt}u(\psi_{t}x)|_{t=0}\)). We always assume that all fixed points of \(\psi_{\mathbb{R}}\) are isolated. Then the set of fixed points \(\operatorname{Fix}(\psi_{\mathbb{R}})\) is finite. As \(\psi_{\mathbb{R}}\) is area-preserving, every fixed point is either a center or a saddle. 
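In particular, the local Hamiltonian form of the equations makes the \(\psi_{\mathbb{R}}\)-invariance of \(\mu\) transparent; the following routine verification is recorded only for the reader's convenience. Writing \(X=\big(\tfrac{1}{V}\tfrac{\partial H}{\partial y},-\tfrac{1}{V}\tfrac{\partial H}{\partial x}\big)\) in a chart as above, we have

\[\operatorname{div}(VX)=\frac{\partial}{\partial x}\Big(\frac{\partial H}{\partial y}\Big)+\frac{\partial}{\partial y}\Big(-\frac{\partial H}{\partial x}\Big)=0,\]

so the Lie derivative of \(d\mu=V(x,y)\,dx\wedge dy\) along \(X\) vanishes and the flow \(\psi_{\mathbb{R}}\) preserves \(\mu\).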
In what follows, we deal only with _perfect_ (_harmonic_) saddles defined as follows: a fixed point \(\sigma\in\operatorname{Fix}(\psi_{\mathbb{R}})\) is a perfect saddle of multiplicity \(m_{\sigma}\geq 2\) if there exists a chart \((x,y)\) in a neighborhood \(U_{\sigma}\) of \(\sigma\) such that \(d\mu=V(x,y)dx\wedge dy\) and \(H(x,y)=\Im(x+iy)^{m_{\sigma}}\). We call \((x,y)\) _a singular chart_. We denote by \(\operatorname{Sd}(\psi_{\mathbb{R}})\) the set of perfect saddles of \(\psi_{\mathbb{R}}\). We call a _saddle connection_ an orbit of \(\psi_{\mathbb{R}}\) running from a saddle to a saddle. A _saddle loop_ is a saddle connection joining a saddle to itself. We deal only with flows such that all their saddle connections are loops. If every fixed point is isolated then \(M\) splits into a finite number of components (\(\psi_{\mathbb{R}}\)-invariant surfaces with boundary) so that every component is either a _minimal component_ (every orbit, except fixed points and saddle loops, is dense in the component) or a _periodic component_ (filled by periodic orbits, fixed points and saddle loops).

The problem of existence and regularity of solutions for the cohomological equation \(Xu=f\) was essentially solved in two seminal articles [6, 8] by Forni when the flow \(\psi_{\mathbb{R}}\) is minimal over the whole surface \(M\) (has no saddle connection) and the function \(f\) belongs to a certain weighted Sobolev space \(H^{s}_{W}(M)\), \(s\geq 1\). Let us mention that being an element of a weighted Sobolev space enforces significant constraints on the behavior of the function \(f\) around saddles, even for smooth functions, as described in [8]. In [6, 8], for a.e. locally Hamiltonian flow, Forni proved the existence of fundamental invariant distributions on \(H^{s}_{W}(M)\) which are responsible for the degree of smoothness of the solution of \(Xu=f\) for \(f\in H^{s}_{W}(M)\). If all Forni's distributions vanish at \(f\in H^{s}_{W}(M)\), then there exists a solution \(u\in H^{s^{\prime}}_{\omega}(M)\) for some \(0<s^{\prime}<s\). The problem of solving cohomological equations for other classes of smooth dynamical systems of parabolic nature and the regularity of solutions using invariant distributions was also studied in [1, 2, 4, 5, 9, 15, 16, 23, 27].

### Invariant distributions and the main results when saddle loops exist

The main goal of this article is to go beyond the case of a minimal flow on the whole surface \(M\) and beyond the case of the function \(f\) belonging to weighted Sobolev spaces. We deal with locally Hamiltonian flows restricted to any minimal component \(M^{\prime}\subset M\) and with an arbitrary smooth function \(f:M\to\mathbb{R}\). The main novelty of the proposed approach is that it is used to study the regularity of the solution \(u\) when the flow has saddle loops, which has not been systematically studied before. The study of locally Hamiltonian flows in such a context gives rise to new invariant distributions, which, unlike Forni's distributions, are local in nature. Two families of such local functionals \(\mathfrak{C}^{k}_{\sigma,l}\) and \(\mathfrak{d}^{k}_{\sigma,j}\) were introduced by the authors in [12]. As shown in [12], both families play an important role in understanding the regularity of the solution of the cohomological equation when \(f\) is an arbitrary smooth function. Throughout the article we use the notation \(x\lor y=\max\{x,y\}\) and \(x\wedge y=\min\{x,y\}\) for any pair of real numbers \(x,y\).
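Returning to the local model of a perfect saddle, the two simplest cases already give a useful picture (a standard illustration, added here for concreteness). For \(m_{\sigma}=2\), in a singular chart the Hamiltonian is \(H(x,y)=\Im(x+iy)^{2}=2xy\), so the orbits of \(\psi_{\mathbb{R}}\) follow the level curves \(xy=\mathrm{const}\) and the four separatrices lie on the coordinate axes. For \(m_{\sigma}=3\) one obtains \(H(x,y)=\Im(x+iy)^{3}=3x^{2}y-y^{3}\), a 'monkey saddle' with six separatrices. In general, a perfect saddle of multiplicity \(m_{\sigma}\) has \(2m_{\sigma}\) separatrices, which bound the \(2m_{\sigma}\) angular sectors used below.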
Denote by \(\mathscr{T}\mathscr{D}\) the set of triples \((\sigma,k,j)\in(\operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime})\times \mathbb{Z}_{\geq 0}\times\mathbb{Z}_{\geq 0}\) such that \(0\leq j\leq k\wedge(m_{\sigma}-2)\) and \(j\neq k-(m_{\sigma}-1)\operatorname{mod}m_{\sigma}\). For every \((\sigma,k,j)\in\mathscr{T}\mathscr{D}\) we define the functional \(\mathfrak{d}^{k}_{\sigma,j}:C^{k}(M)\to\mathbb{C}\) as follows: \[\mathfrak{d}^{k}_{\sigma,j}(f)=\sum_{0\leq n\leq\frac{k-j}{m_{\sigma}}}\frac{ \binom{k}{j+nm_{\sigma}}\binom{(m_{\sigma}-1)-j}{m_{\sigma}}}{\binom{(k-j)-( m_{\sigma}-1)}{m_{\sigma}}}\frac{\partial^{k}(f\cdot V)}{\partial z^{j+nm_{ \sigma}}\partial\overline{z}^{k-j-nm_{\sigma}}}(0,0). \tag{1.1}\] The real number \(\widehat{\mathfrak{d}}(\mathfrak{d}^{k}_{\sigma,j})=\widehat{\mathfrak{d}}( \sigma,k)=k-(m_{\sigma}-2)\) we call the _hat-order_ of \(\mathfrak{d}^{k}_{\sigma,j}\). For any \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\) its neighbourhood \(U_{\sigma}\) splits into \(2m_{\sigma}\) angular sectors bounded by separatrices. In singular coordinates \(z=(x,y)\) they are of the form \[U_{\sigma,l}:=\{z\in U_{\sigma}:\operatorname{Arg}z\in(\tfrac{\pi l}{m_{\sigma}},\tfrac{\pi(l+1)}{m_{\sigma}})\}\text{ for }0\leq l<2m_{\sigma}.\] Denote by \(\mathscr{T}\mathscr{C}\) the set of triples \((\sigma,k,l)\in(\operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime})\times \mathbb{Z}_{\geq 0}\times\mathbb{Z}_{\geq 0}\) such that \(0\leq l<2m_{\sigma}\) and \(U_{\sigma,l}\subset M^{\prime}\). For every \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) we define the functional \(\mathfrak{C}^{k}_{\sigma,l}:C^{k}(M)\to\mathbb{C}\) as follows: \[\mathfrak{C}^{k}_{\sigma,l}(f):=\sum_{\begin{subarray}{c}0\leq i\leq k\\ i\neq m_{\sigma}-1\operatorname{mod}m_{\sigma}\\ i\neq k-(m_{\sigma}-1)\operatorname{mod}m_{\sigma}\end{subarray}}\theta^{l(2i- k)}_{\sigma}\binom{k}{i}\mathfrak{B}(\tfrac{(m_{\sigma}-1)-i}{m_{\sigma}}, \tfrac{(m_{\sigma}-1)-k+i}{m_{\sigma}})\frac{\partial^{k}(f\cdot V)}{\partial z ^{i}\partial\overline{z}^{k-i}}(0,0),\] where \(\theta_{\sigma}\) is the principal \(2m_{\sigma}\)-th root of unity. The (beta-like) function \(\mathfrak{B}(x,y)\) is defined for any pair \(x,y\) of real numbers such that \(x,y\notin\mathbb{Z}\) as follows: \[\mathfrak{B}(x,y)=\frac{\pi e^{i\frac{\pi}{2}}(y-x)}{2^{x+y-2}}\frac{\Gamma(x +y-1)}{\Gamma(x)\Gamma(y)},\] where we adopt the convention \(\Gamma(0)=1\) and \(\Gamma(-n)=1/(-1)^{n}n!\). For the real number \(\mathfrak{o}(\mathfrak{C}^{k}_{\sigma,l})=\mathfrak{o}(\sigma,k)=\frac{k-(m_{ \sigma}-2)}{m_{\sigma}}\), we call it the _order_ of \(\mathfrak{C}^{k}_{\sigma,l}\). In this paper, for a.e. locally Hamiltonian flow (satisfying the Full Filtration Diophantine Condition (FFDC) defined in Section 3.2), we define the third family of distributions \(\mathfrak{F}_{\bar{t}}\) which have global nature and are smooth version of Forni's invariant distributions introduced in [6, 8]. We should emphasize that the definition of \(\mathfrak{F}_{\bar{t}}\) (unlike Forni's approach) does not use tools from translational surface theory. Such techniques cannot be used due to the existence of saddle loops. Our approach is based on the use of a (modified by us) correction operator invented by Marmi-Moussa-Yoccoz in [18] (see also [19] and [20]) in its simplest version and later extended in [13] and [11]. Let \(g\geq 1\) be the genus of \(M^{\prime}\) and let \(\gamma\) be the number of saddles in \(M^{\prime}\). 
Denote by \(\mathscr{T}\mathscr{F}^{*}\) the set of triples of the form \((k,+,i)\), \((k,0,s)\) or \((k,-,j)\) for \(k\geq 0\), \(1\leq i,j\leq g\) and \(1\leq s<\gamma\). Let \(\mathscr{T}\mathscr{F}\) be the subset of triples in \(\mathscr{T}\mathscr{F}^{*}\) after removing all triples of the form \((k,-,1)\) for \(k\geq 0\). Denote by \(0<\lambda_{g}<\ldots<\lambda_{2}<\lambda_{1}\) the positive Lyapunov exponents associated to a flow satisfying FFDC (see again Section 3.2). In Section 7.1, for every triple \(\bar{t}\in\mathscr{T}\mathscr{F}^{*}\) we define a corresponding functional \(\mathfrak{F}_{\bar{t}}\). For the real number \[\mathfrak{o}(\mathfrak{F}_{\bar{t}})=\mathfrak{o}(\bar{t})=\left\{\begin{array} []{cl}k-\frac{\lambda_{i}}{\lambda_{1}}&\text{ if }\bar{t}=(k,+,i)\\ k&\text{ if }\bar{t}=(k,0,s)\\ k+\frac{\lambda_{j}}{\lambda_{1}}&\text{ if }\bar{t}=(k,-,j),\end{array}\right.\] we call the _order_ of \(\mathfrak{F}_{\bar{t}}\). Let \(m\) be the maximal multiplicity of saddles in \(\operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\). Following [12], for every \(r>0\) let \[k_{r}=\left\{\begin{array}{cl}\lceil mr+(m-1)\rceil&\text{if }m=2\text{ and }r\leq\frac{1}{2}\\ \lceil mr+(m-2)\rceil&\text{otherwise.}\end{array}\right.\] Recall that \[\max\{k\geq 0:\exists_{\sigma\in\operatorname{Sd}(\psi_{ \mathbb{R}})\cap M^{\prime}}\mathfrak{o}(\sigma,k)<r\}+1=\lceil mr+(m-2) \rceil\leq k_{r}\] \[\max\{k\geq 0:\exists_{\sigma\in\operatorname{Sd}(\psi_{ \mathbb{R}})\cap M^{\prime}}\widehat{\mathfrak{o}}(\sigma,k)<r\}+1=\lceil r+(m-2) \rceil\leq k_{r}.\] Then for every flow \(\psi_{\mathbb{R}}\) restricted to its minimal component \(M^{\prime}\) and satisfying FFDC and for every \(\bar{t}\in\mathscr{T}\mathscr{F}^{*}\), the corresponding functional \(\mathfrak{F}_{\bar{t}}\) is defined on \(C^{k_{\mathfrak{o}}(\bar{t})+1}(M)\). The following main two results show how the three families of invariant distributions influence on the regularity of the solution for the cohomological equation \(Xu=f\) with smooth \(u\) defined on the end compactification \(M^{\prime}_{e}\) of \(M^{\prime}\setminus\operatorname{Sd}(\psi_{\mathbb{R}})\) considered in [12]. **Theorem 1.1**.: _Let \(\psi_{\mathbb{R}}\) be a locally Hamiltonian flow such that its restriction to a minimal component \(M^{\prime}\) satisfies FFDC. Let \(r\in\mathbb{R}_{>0}\setminus(\{\mathfrak{o}(\sigma,k):k\geq 0,\sigma\in \operatorname{Sd}(\psi_{\mathbb{R}})\cap M^{\prime}\}\cup\{\mathfrak{o}(\bar{t }):\bar{t}\in\mathscr{T}\mathscr{F}\})\). Suppose that \(f\in C^{k_{r}}(M)\) and_ * \(\mathfrak{d}^{k}_{\sigma,j}(f)=0\) _for all_ \((\sigma,k,j)\in\mathscr{T}\mathscr{D}\) _with_ \(\widehat{\mathfrak{o}}(\mathfrak{d}^{k}_{\sigma,j})<r\)_;_ * \(\mathfrak{C}^{k}_{\sigma,l}(f)=0\) _for all_ \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) _with_ \(\mathfrak{o}(\mathfrak{C}^{k}_{\sigma,l})<r\)_;_ * \(\mathfrak{F}_{\bar{t}}(f)=0\) _for all_ \(\bar{t}\in\mathscr{T}\mathscr{F}\) _with_ \(\mathfrak{o}(\mathfrak{F}_{\bar{t}})<r\)_._ _Then there exists \(u\in C^{r}(M^{\prime}_{e})\) such that \(Xu=f\) on \(M^{\prime}_{e}\). Moreover, there exists \(C_{r}>0\) such that \(\|u\|_{C^{r}(M^{\prime}_{e})}\leq C_{r}\|f\|_{C^{k_{r}}(M)}\)._ **Theorem 1.2** (optimal regularity).: _Let \(\psi_{\mathbb{R}}\) be a locally Hamiltonian flow such that its restriction to a minimal component \(M^{\prime}\) satisfies FFDC. For any \(r>0\) suppose that \(f\in C^{k_{r}}(M)\) and there exists \(u\in C^{r}(M^{\prime}_{e})\) such that \(Xu=f\) on \(M^{\prime}_{e}\). 
Then_ * \(\mathfrak{d}^{k}_{\sigma,j}(f)=0\) _for all_ \((\sigma,k,j)\in\mathscr{T}\mathscr{D}\) _with_ \(\widehat{\mathfrak{o}}(\mathfrak{d}^{k}_{\sigma,j})<r\)_;_ * \(\mathfrak{C}^{k}_{\sigma,l}(f)=0\) _for all_ \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) _with_ \(\mathfrak{o}(\mathfrak{C}^{k}_{\sigma,l})<r\)_;_ * \(\mathfrak{F}_{\bar{t}}(f)=0\) _for all_ \(\bar{t}\in\mathscr{T}\mathscr{F}\) _with_ \(\mathfrak{o}(\mathfrak{F}_{\bar{t}})<r\)_._ ### Cohomological equations over IETs and a spectral result Let us consider the restriction of a locally Hamiltonian flow \(\psi_{\mathbb{R}}\) on \(M\) to its minimal component \(M^{\prime}\subset M\) and let \(I\subset M^{\prime}\) be a transversal smooth curve. We always assume that each end of \(I\) is the first meeting point of a separatrix (that is not a saddle connection) emanating from a saddle (incoming or outgoing) with the curve \(I\). By minimality, \(I\) is a global transversal and the first return map \(T:I\to I\) is an interval exchange transformation (IET) in so-called standard coordinates on \(I\). Denote by \(I_{\alpha}\), \(\alpha\in\mathcal{A}\) the intervals exchanged by \(T\) and by \(\tau:I\to\mathbb{R}_{>0}\cup\{+\infty\}\) the first return time map to the curve \(I\), also called the roof function. The roof function \(\tau:I\to\mathbb{R}_{>0}\cup\{+\infty\}\) is smooth on the interior of each exchanged interval and has _singularities_ at discontinuities of \(T\). For any continuous observable \(f:M\to\mathbb{C}\) we deal with the corresponding map \(\varphi_{f}:I\to\mathbb{C}\cup\{\infty\}\) given by \[\varphi_{f}(x)=\int_{0}^{\tau(x)}f(\psi_{t}x)dt.\] If \(u\) is a solution of the cohomological equation \(Xu=f\) then \[v(Tx)-v(x)=\varphi_{f}(x)\text{ on }I, \tag{1.2}\] where \(v\) is the restriction of \(u\) to the curve \(I\). Therefore the existence and regularity of a solution to the cohomological equation (1.2) is an obvious necessary condition for the existence of a solution to \(Xu=f\) of the same regularity. As shown in [12] (Theorem 1.2), this is also a sufficient condition under additional assumptions related to the vanishing of certain distributions \(\mathfrak{C}^{k}_{\sigma,l}\) and \(\mathfrak{d}^{k}_{\sigma,j}\) on \(f\). Moreover, the regularity of the solution \(u\) depends on the regularity of the solution \(v\) and on the vanishing of all the mentioned distributions up to some level of their order or hat-order. For this reason, in the present paper, we primarily focus on the cohomological equation \(v\circ T-v=\varphi_{f}\). The regularity of \(\varphi_{f}\) was completely understood in [12]. It was shown there that \(\varphi_{f}\in C^{n+\mathrm{P}_{a}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), i.e. it is piecewise \(C^{n+1}\) and its \(n\)-th derivative has polynomial (of degree at most \(a\), \(0<a<1\)) or logarithmic (if \(a=0\)) singularities at discontinuities of the IET \(T\). The degree of smoothness \(n\) depends on the maximal order of vanishing of the distributions \(\mathfrak{C}^{k}_{\sigma,l}\), see Theorem 1.1 in [12]. For any \(k\in\mathbb{N}\cup\{\infty\}\) denote by \(\Phi^{k}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) the space of functions \(\varphi_{f}\) for \(f\in C^{k}(M)\). The main tool used to solve the cohomological equation (1.2) is a spectral analysis of the functional version (on \(\Phi^{k}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\)) of the Kontsevich-Zorich cocycle \(S(j)\) (see Section 4.2 for the definition).
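Before turning to the renormalization cocycle \(S(j)\), let us record, for the reader's convenience, the elementary computation behind (1.2) (it is implicit in the definition of \(\varphi_{f}\)): if \(Xu=f\) and \(v\) denotes the restriction of \(u\) to \(I\), then for every \(x\in I\), \[\varphi_{f}(x)=\int_{0}^{\tau(x)}Xu(\psi_{t}x)\,dt=\int_{0}^{\tau(x)}\frac{d}{dt}u(\psi_{t}x)\,dt=u(\psi_{\tau(x)}x)-u(x)=v(Tx)-v(x),\] since \(\psi_{\tau(x)}x=Tx\) by the definition of the first return time.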
Some kind of spectral analysis (for positive Lyapunov exponents) of the cocycle \(S(j)\) was used in [13] and [11] to fully understand the deviation for ergodic integrals of smooth observables for a.a. locally Hamiltonian flows. Our techniques are motivated by correction operators invented by Marmi-Moussa-Yoccoz in [18] (see also [19] and [20]) in its simplest version (without singularities) and later extended in [13] and [11]. To represent formally the main spectral result, let us consider an equivalence relation \(\sim\) on the set of triples \(\mathscr{T}\!\mathscr{C}\), introduced in [12]. Two triples \((\sigma,k,l),(\sigma,k,l^{\prime})\in\mathscr{T}\!\mathscr{C}\) are equivalent with respect to the equivalence relation \(\sim\) if the angular sectors \(U_{\sigma,l}\) and \(U_{\sigma,l^{\prime}}\) are connected through a chain of saddle loops emanating from the saddle \(\sigma\). For every equivalence class \([(\sigma,k,l)]\in\mathscr{T}\!\mathscr{C}/\sim\) let \[\mathfrak{C}_{[(\sigma,k,l)]}(f):=\sum_{(\sigma,k,l^{\prime})\sim(\sigma,k,l) }\mathfrak{C}^{k}_{\sigma,l}(f).\] For any \(k\geq 0\) let \(\Gamma_{k}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) be the space of functions which are polynomials of degree at most \(k\) on any interval \(I_{\alpha}\), \(\alpha\in\mathcal{A}\). This space plays an important role in solving cohomological equations in [10]. We will define two families of functions \(\{h_{\bar{t}}:\bar{t}\in\mathscr{T}\!\mathscr{F}^{*}\}\) and \(\{\xi_{[(\sigma,k,l)]}:[(\sigma,k,l)]\in\mathscr{T}\!\mathscr{C}/\sim\}\), which are the keys for understanding the spectral properties of the Kontsevich-Zorich cocycle \(S(j)\). First, they meet the following properties: \[h_{\bar{t}}\in\Gamma_{k}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}) \text{ if }\bar{t}=(k,\,\cdot\,,\,\cdot\,);\] \[\lim_{j\to\infty}\frac{1}{j}\log\|S(j)h_{\bar{t}}\|_{\sup}=\lim_{ j\to\infty}\frac{1}{j}\log\|S(j)h_{\bar{t}}\|_{L^{1}}=-\lambda_{1}\mathfrak{o}( \bar{t});\] \[\xi_{[(\sigma,k,l)]}\in C^{n+\mathrm{P}_{a}\mathrm{G}}(\sqcup_{ \alpha\in\mathcal{A}}I_{\alpha})\text{ with }n=\lceil\mathfrak{o}(\sigma,k)\rceil,\,\,a= \mathfrak{o}(\sigma,k)-n; \tag{1.4}\] \[\lim_{j\to\infty}\frac{1}{j}\log\|S(j)\xi_{[(\sigma,k,l)]}\|_{L^ {1}}=-\lambda_{1}\mathfrak{o}(\sigma,k);\] (1.5) \[\lim_{j\to\infty}\frac{1}{j}\log\|S(j)\xi_{[(\sigma,k,l)]}\|_{ \sup}=-\lambda_{1}\mathfrak{o}(\sigma,k)\text{ if }\mathfrak{o}(\sigma,k)>0. \tag{1.3}\] Then the main spectral result is as follows. **Theorem 1.3** (spectral theorem).: _Let \(\psi_{\mathbb{R}}\) be a locally Hamiltonian flow such that its restriction to a minimal component \(M^{\prime}\) satisfies FFDC. For every \(r>-\frac{m-2}{m}\) and \(f\in C^{k_{r}}(M)\) we have_ \[\varphi_{f}=\sum_{\begin{subarray}{c}\bar{t}\in\mathscr{T}\!\mathscr{F}^{*}\\ \mathfrak{o}(t)<r\end{subarray}}\mathfrak{F}_{\bar{t}}(f)h_{\bar{t}}+\sum_{ \begin{subarray}{c}[(\sigma,k,l)]\in\mathscr{T}\!\mathscr{C}/\sim\\ \mathfrak{o}(\sigma,k)<r\end{subarray}}\mathfrak{C}_{[(\sigma,k,l)]}(f)\xi_{[( \sigma,k,l)]}+\mathfrak{r}_{r}(f)\] so that_ \[\begin{split}&\limsup_{j\to\infty}\frac{1}{j}\log\|S(j)\mathbf{r}_{r }(f)\|_{\sup}\leq-\lambda_{1}r\text{ if }r>0\text{ and }\\ &\limsup_{j\to\infty}\frac{1}{j}\log\|S(j)\mathbf{r}_{r}(f)\|_{L^ {1}}\leq-\lambda_{1}r\text{ if }r\leq 0.\end{split} \tag{1.6}\] This theorem can be seen as a counterpart to spectral results from [3] in general (non-pseudo-Anosov) setting. 
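To give a feel for the exponents appearing above (an illustrative remark only): for \(\bar{t}=(0,+,1)\) one has \(\mathfrak{o}(\bar{t})=-\lambda_{1}/\lambda_{1}=-1\), so the displayed asymptotics give \(\lim_{j\to\infty}\frac{1}{j}\log\|S(j)h_{\bar{t}}\|_{\sup}=\lambda_{1}\), i.e. \(h_{\bar{t}}\) grows at the same exponential rate as the top unstable direction of the classical Kontsevich-Zorich cocycle, while for \(\bar{t}=(1,0,s)\) one has \(\mathfrak{o}(\bar{t})=1\) and the corresponding norms decay at rate \(\lambda_{1}\).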
However, the most important advantage of Theorem 1.3 is that it (more precisely its preceding version, Theorem 5.6) is used (in Section 6.2) to determine the regularity of solutions to the cohomological equation \(v\circ T-v=\varphi_{f}\) (see Theorem 6.8). In Section 6, we modify techniques developed by Marmi-Yoccoz in [20] to study the regularity of solutions in the Hölder scale. ### A new family of invariant distributions via extended correction operators In [12], the authors defined two families of invariant distributions \(\mathfrak{C}^{k}_{\sigma,l}\) and \(\mathfrak{d}^{k}_{\sigma,j}\) inspired by a local analysis of higher order derivatives of \(\varphi_{f}\) around the ends of intervals exchanged by \(T\). In the current paper, we introduce a new family \(\mathfrak{f}_{\bar{t}}\), \(\bar{t}\in\mathscr{TF}^{*}\) of invariant distributions over IETs and transport them to the level of the surface \(M\) by composing with the operator \(f\mapsto\varphi_{f}\). The resulting distributions \(\mathfrak{F}_{\bar{t}}\), \(\bar{t}\in\mathscr{TF}^{*}\) generalize (emulate) the notion of Forni's invariant distributions (associated with Lyapunov exponents of the Kontsevich-Zorich cocycle), but the method of construction is completely different from the original one. The invariant distributions \(\mathfrak{f}_{\bar{t}}\) over IETs are defined on the space of \(C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}\)-functions (if \(\mathfrak{o}(\bar{t})<n-a\)). In [11], the authors constructed invariant distributions, for \(n=0\), using correction through piecewise constant functions. They constructed so-called correction operators \(\mathfrak{h}_{j}\), \(1\leq j\leq g\), but the construction was limited to the unstable subspace (corresponding to positive Lyapunov exponents) of the Kontsevich-Zorich cocycle. The original idea of correcting smooth functions was introduced by Marmi-Moussa-Yoccoz in [18] and then developed in [13] and [11]. In this paper, three new types of functionals arise from the other parts of the Oseledets splitting (\(+/-/0\) denoting unstable/stable/central, resp.) associated to Lyapunov exponents (see Section 5.3). Their construction is based on the use of new correction operators \(\mathfrak{h}_{-j,i}\), \(\mathfrak{h}^{*}_{j}\), \(\mathfrak{h}_{0}\) and their higher-order derivatives. The new correction operators allow us to correct \(\varphi_{f}\) by piecewise constant functions related not only to unstable vectors, as before, but also to central and stable vectors. The construction of these three new types of correction operators is the most important technical novelty of the article; it allows us to define counterparts of Forni's invariant distributions for flows with saddle loops. Together with the previously defined local invariant distributions \(\mathfrak{C}^{k}_{\sigma,l}\) and \(\mathfrak{d}^{k}_{\sigma,j}\), they give a complete and optimal description of the regularity of solutions in the Hölder scale. This optimality of regularity seems to be the most important overall novelty of the article. ### Structure of the paper In SS 2, we recall some basic notions related to IETs, Rauzy-Veech induction and accelerations of the Kontsevich-Zorich cocycle. In SS 3, we review the Oseledets filtration of accelerated KZ-cocycles and formulate the corresponding Full Filtration Diophantine Condition (FFDC). We also set up new infinite series that are necessary for constructing the extended correction operators in the next section.
In SS 4, extended correction operators \(\mathfrak{h}_{-j,i}\), \(\mathfrak{h}^{*}_{j}\), \(\mathfrak{h}_{0}\) are constructed and their basic properties are proven. In SS 5, we compute Lyapunov exponents of renormalization cocycle \(S(j)\) for piecewise polynomial function \(h_{i,l}\), \(c_{s,l}\), \(h_{-j,l}\). These three classes of functions are then used to construct the functionals \(\mathfrak{f}_{\bar{t}}\). The culmination of this section is the proof of the spectral result (Theorem 5.6) which is the main component of the proof of the Theorem 1.3. Cohomological equations for IET and the regularity of their solutions are studied in SS 6. Finally, in SS 7, we conclude the regularity of solutions to cohomological equations for locally Hamiltonian flows. The main results are obtained from main theorems in [12] and results of SS 6. ## 2. Interval exchange transformations (IET) Let \(\mathcal{A}\) be a \(d\)-element alphabet and let \(\pi=(\pi_{0},\pi_{1})\) be a pair of bijections \(\pi_{\varepsilon}:\mathcal{A}\to\{1,\dots,d\}\) for \(\varepsilon=0,1\). For every \(\lambda=(\lambda_{\alpha})_{\alpha\in\mathcal{A}}\in\mathbb{R}_{>0}^{ \mathcal{A}}\) let \(|\lambda|:=\sum_{\alpha\in\mathcal{A}}\lambda_{\alpha}\), \(I:=[0,|\lambda|)\) and for every \(\alpha\in\mathcal{A}\), \[I_{\alpha}:=[l_{\alpha},r_{\alpha}),\text{ where }l_{\alpha}=\sum_{\pi_{0}( \beta)<\pi_{0}(\alpha)}\lambda_{\beta},\ \ \ r_{\alpha}=\sum_{\pi_{0}(\beta)\leq\pi_{0}(\alpha)} \lambda_{\beta}.\] Denote by \(\mathcal{S}_{\mathcal{A}}^{0}\) the subset of _irreducible_ pairs, i.e. \(\pi_{1}\circ\pi_{0}^{-1}\{1,\dots,k\}\neq\{1,\dots,k\}\) for \(1\leq k<d\). We will always assume that \(\pi\in\mathcal{S}_{\mathcal{A}}^{0}\). An _interval exchange transformation_\(T=T_{(\pi,\lambda)}:I\to I\) is a piecewise translation determined by the data \((\pi,\lambda)\), so that \(T_{(\pi,\lambda)}\) translates the interval \(I_{\alpha}\) for each \(\alpha\in\mathcal{A}\) so that \(T(x)=x+w_{\alpha}\) for \(x\in I_{\alpha}\), where \(w=\Omega_{\pi}\lambda\) and \(\Omega_{\pi}\) is the matrix \([\Omega_{\alpha}\beta]_{\alpha,\beta\in\mathcal{A}}\) given by \[\Omega_{\alpha\,\beta}=\left\{\begin{array}{ll}+1&\text{ if }\pi_{1}( \alpha)>\pi_{1}(\beta)\text{ and }\pi_{0}(\alpha)<\pi_{0}(\beta),\\ -1&\text{ if }\pi_{1}(\alpha)<\pi_{1}(\beta)\text{ and }\pi_{0}(\alpha)>\pi_{0}( \beta),\\ 0&\text{ in all other cases.}\end{array}\right.\] An IET \(T_{(\pi,\lambda)}\) satisfies the _Keane condition_ (see [17]) if \(T_{(\pi,\lambda)}^{m}l_{\alpha}\neq l_{\beta}\) for all \(m\geq 1\) and for all \(\alpha,\beta\in\mathcal{A}\) with \(\pi_{0}(\beta)\neq 1\). ### Rauzy-Veech induction Rauzy-Veech induction [21] and its accelerations are standard renormalization procedures for IETs. For general background, we refer the readers to the lecture notes by Yoccoz [28, 29] or Viana [26]. Let \(T=T_{(\pi,\lambda)}\) be an interval exchange transformation satisfying Keane's condition. Let \(\widetilde{I}:=\left[0,\max(l_{\pi_{0}^{-1}(d)},l_{\pi_{1}^{-1}(d)})\right)\) and denote by \(\mathcal{R}(T)=\widetilde{T}:\widetilde{I}\to\widetilde{I}\) the first return map of \(T\) to the interval \(\widetilde{I}\). 
Let \[\epsilon=\epsilon(\pi,\lambda)=\left\{\begin{array}{ll}0&\text{ if }\ \ \lambda_{\pi_{0}^{-1}(d)}>\lambda_{\pi_{1}^{-1}(d)},\\ 1&\text{ if }\ \ \lambda_{\pi_{0}^{-1}(d)}<\lambda_{\pi_{1}^{-1}(d)}\end{array}\right.\] and \[A(T)=A(\pi,\lambda)=Id+E_{\pi_{\epsilon}^{-1}(d)\,\pi_{1-\epsilon}^{-1}(d)} \in SL_{\mathcal{A}}(\mathbb{Z}),\] where \(Id\) is the identity matrix and \((E_{ij})_{kl}=\delta_{ik}\delta_{jl}\), using the Kronecker delta notation. Then, by Rauzy (see [21]), \(\widetilde{T}\) is also an IET on \(d\)-intervals satisfying Keane's condition and \(\widetilde{T}=T_{(\widetilde{\pi},\widetilde{\lambda})}\) for some \(\widetilde{\pi}=(\widetilde{\pi}_{0},\widetilde{\pi}_{1})\in\mathcal{S}_{ \mathcal{A}}^{0}\) and \(\tilde{\lambda}=A^{-1}(\pi,\lambda)\lambda\). Moreover, the renormalized version of the matrix \(\Omega_{\widetilde{\pi}}\) is of the form \[\Omega_{\widetilde{\pi}}=A^{t}(\pi,\lambda)\cdot\Omega_{\pi}\cdot A(\pi, \lambda).\] Thus taking \(H(\pi)=\Omega_{\pi}(\mathbb{R}^{\mathcal{A}})\), we have \(H(\tilde{\pi})=A^{t}(\pi,\lambda)H(\pi)\). ### Kontsevich-Zorich cocycle and its accelerations Let \(T=T_{(\pi,\lambda)}\) be an IET satisfying Keane's condition. For every \(n\geq 1\), \[A^{(n)}(T)=A(T)\cdot A(\mathcal{R}(T))\cdot\ldots\cdot A(\mathcal{R}^{n-1}(T)) \in SL_{\mathcal{A}}(\mathbb{Z}).\] This defines a multiplicative cocycle \(A\) over the transformation \(\mathcal{R}\) and it is called the _Kontsevich-Zorich cocycle_. Let \((n_{k})_{k\geq 0}\) be an increasing sequence of integers with \(n_{0}=0\) called an _accelerating sequence_. For every \(k\geq 0\), let \(T^{(k)}:=\mathcal{R}^{n_{k}}(T):I^{(k)}\to I^{(k)}\). Then \(T^{(k)}:I^{(k)}\to I^{(k)}\) is the first return map of \(T:I\to I\) to the interval \(I^{(k)}\subset I\). The sequence of IETs \((T^{(k)})_{k\geq 0}\) gives an _acceleration_ of the Rauzy-Veech renomalization procedure associated with the accelerating sequence \((n_{k})_{k\geq 0}\). Let \((\pi^{(k)},\lambda^{(k)})\) be the pair defining \(T^{(k)}\) and let \(I^{(k)}_{\alpha}\), \(\alpha\in\mathcal{A}\) be intervals exchanged by \(T^{(k)}\). Then \(\lambda^{(k)}=(\lambda^{(k)}_{\alpha})_{\alpha\in\mathcal{A}}\), where \(\lambda^{(k)}_{\alpha}=|I^{(k)}_{\alpha}|\) for \(\alpha\in\mathcal{A}\). For every \(k\geq 0\) let \(Z(k+1):=A^{(n_{k+1}-n_{k})}(\mathcal{R}^{n_{k}}(T))^{t}\). We then have \[\lambda^{(k)}=Z(k+1)^{t}\lambda^{(k+1)},\quad k\geq 0.\] By following notations from [18], for each \(0\leq k<l\) let \[Q(k,l)=Z(l)\cdot Z(l-1)\cdot\ldots\cdot Z(k+2)\cdot Z(k+1)=A^{(n_{l}-n_{k})}( \mathcal{R}^{n_{k}}(T))^{t}.\] Then, \(Q(k,l)\in SL_{\mathcal{A}}(\mathbb{Z})\) and \(\lambda^{(k)}=Q(k,l)^{t}\lambda^{(l)}\). We write \(Q(k)=Q(0,k)\). ### Rokhlin towers related to accelerations Note that \(Q_{\alpha\beta}(k)\) is the time spent by any point of \(I^{(k)}_{\alpha}\) in \(I_{\beta}\) until it returns to \(I^{(k)}\). Then \(Q_{\alpha}(k)=\sum_{\beta\in\mathcal{A}}Q_{\alpha\beta}(k)\) is the first return time of points of \(I^{(k)}_{\alpha}\) to \(I^{(k)}\). Then the IET \(T:I\to I\) splits into a set of \(d\)_Rokhlin tower_ of the form \[\big{\{}T^{i}(I^{(k)}_{\alpha}),\ 0\leq i<Q_{\alpha}(k)\big{\}},\quad\alpha\in \mathcal{A}\] so that \(Q_{\alpha}(k)\) floors of the \(\alpha\)-th tower are pairwise disjoint intervals. ## 3. Diophantine conditions for IETs In this section we introduce a new Diophantine condition for IETs which is a full measure condition on the set of IETs. 
The Diophantine condition is a modified version of the previously introduced one in [11] (see also [14]), so called Filtration Diophantine condition (FDC). It is improved by extending the Oseledets filtration to stable and central subspaces. Based on this condition, we show that certain series involving matrices of the accelerated cocycle grows in a controlled way. ### Oseledets filtration Fix \(\pi\in\mathcal{S}^{0}_{\mathcal{A}}\). Suppose that there exist \(\lambda_{1}>\ldots>\lambda_{g}>\lambda_{g+1}=0\) such that for a.e. IET \((\pi,\lambda)\) there exists a filtration of linear subspaces (Oseledets filtration) \[\begin{split}\{0\}&=E_{0}(\pi,\lambda)\subset E_{-1 }(\pi,\lambda)\subset\ldots\subset E_{-g}(\pi,\lambda)\subset E_{cs}(\pi, \lambda)\\ &=E_{g+1}(\pi,\lambda)\subset E_{g}(\pi,\lambda)\subset\ldots \subset E_{1}(\pi,\lambda)=\Gamma:=\mathbb{R}^{\mathcal{A}}\end{split} \tag{3.1}\] such that for every \(1\leq i\leq g\) we have \[\begin{split}&\lim_{n\to+\infty}\frac{\log\|Q(n)h\|}{n}=\lambda_{-i }:=-\lambda_{i}\text{ for all }h\in E_{-i}(\pi,\lambda)\setminus E_{-i+1}(\pi,\lambda)\\ &\lim_{n\to+\infty}\frac{\log\|Q(n)h\|}{n}=0\text{ for all }h\in E_{cs}(\pi,\lambda)\setminus E_{-g}(\pi,\lambda)\\ &\lim_{n\to+\infty}\frac{\log\|Q(n)h\|}{n}=\lambda_{i}\text{ for all }h\in E_{i}(\pi,\lambda)\setminus E_{i+1}(\pi,\lambda)\\ &\dim E_{-i}(\pi,\lambda)-\dim E_{-i+1}(\pi,\lambda)=\dim E_{i} (\pi,\lambda)-\dim E_{i+1}(\pi,\lambda)=1.\end{split} \tag{3.2}\] Suppose that there exists a filtration of linear subspaces which is complementary to the Oseledets filtration (3.1): \[\begin{split}&\{0\}=U_{1}\subset U_{2}\subset\ldots\subset U_{g} \subset U_{g+1}\subset U_{-g}\subset\ldots\subset U_{-1}\subset U_{0}= \Gamma\\ &\text{ such that }U_{g+1}\subset H(\pi)\text{ and }E_{j}(\pi, \lambda)\oplus U_{j}=\Gamma\text{ for }-g\leq j\leq g+1.\end{split} \tag{3.3}\] As \(E_{-g}\oplus U_{g+1}=H(\pi)\), \(U_{j+1}=U_{j}\oplus(U_{j+1}\cap E_{j})\) and \(\dim(U_{j+1}\cap E_{j})=1\), for every \(j\in\pm\{1,\ldots,g\}\) there exists \(h_{j}\in U_{j+1}\cap E_{j}\) such that \[h_{j}\in H(\pi),\quad U_{j+1}=U_{j}\oplus\mathbb{R}h_{j}\text{ and }\lim_{n\to+ \infty}\frac{\log\|Q(n)h_{j}\|}{n}=\lambda_{j}.\] Let \(c_{1},\ldots,c_{\gamma-1}\) be a basis of \(U_{-g}\cap E_{g+1}\). Then for every \(2\leq j\leq g+1\) the linear subspace \(U_{j}\subset\Gamma\) is generated by \(h_{1},\ldots,h_{j-1}\) and for every \(0\leq j\leq g\) the linear subspace \(U_{-j}\subset\Gamma\) is generated by \(h_{1},\ldots,h_{g}\), \(c_{1},\ldots,c_{\gamma-1}\) and \(h_{-g},\ldots,h_{-j-1}\). Moreover, \[\text{ if }0\neq h\in U_{j}\text{ then }\lim_{n\to+\infty}\frac{\log\|Q(n)h\|}{n} \geq\lambda_{j-1}, \tag{3.4}\] where \(\lambda_{-g-1}=-\lambda_{g+1}=0\). For every \(k\geq 0\) and \(-g\leq j\leq g+1\) let \(E_{j}^{(k)}:=Q(k)E_{j}\) and \(U_{j}^{(k)}:=Q(k)U_{j}\). ### Rokhlin Tower Condition and Filtration Diophantine Condition The following Rokhlin Towers Condition (RTC) was introduced in [14]. _Definition 1_ (Rtc).: An IET \(T_{(\pi,\lambda)}\) together with an acceleration satisfies RTC if there exists a constant \(0<\delta<1\) such that (RT) \[\begin{split}&\text{ for any }k\geq 1\text{ there exists number }0<p_{k}\leq\min_{\alpha\in\mathcal{A}}Q_{\alpha}(k)\text{ such that}\\ &\{T^{i}I^{(k)}:0\leq i<p_{k}\}\text{ is a Rokhlin of intervals with measure }\geq\delta|I|.\end{split}\] For any sequence \((r_{n})_{n\geq 0}\) of real numbers and for all \(0\leq k\leq l\), we will use the notation \(r(k,l):=\sum_{k\leq j<l}r_{j}\). 
_Definition 2_ (Ffdc).: An IET \(T:I\to I\) satisfying Keane's condition and Oseledets generic (i.e. there is a filtration of linear subspaces (3.1) satisfying (3.2)), satisfies the _Full Filtration Diophantine Condition (FFDC)_ if for every \(\tau>0\) there exist constants \(C,\kappa\geq 1\), an accelerating sequence \((n_{k})_{k\geq 0}\), a sequence of natural numbers \((r_{n})_{n\geq 0}\) with \(r_{0}=0\) and a complementary filtration \((U_{j})_{-g\leq j\leq g+1}\) (satisfying (3.3)) such that (RT) holds and \[\lim_{n\to+\infty}\frac{r(0,n)}{n}\in(1,1+\tau) \tag{3.6}\] \[\left\|Q\right|_{E_{j}^{(k)}}(k,l)\right\|\leq Ce^{(\lambda_{j}+ \tau)r(k,l)}\text{ for all }0\leq k<l\text{ and }1\leq j\leq g+1\] (3.7) \[\left\|Q\right|_{E_{-j}^{(k)}}(k,l)\right\|\leq Ce^{(-\lambda_{j}+ \tau)r(k,l)}\text{ for all }0\leq k<l\text{ and }1\leq j\leq g\] (3.8) \[\left\|Q\right|_{U_{j}^{(k)}}(k,l)^{-1}\right\|\leq Ce^{(-\lambda_ {j-1}+\tau)r(k,l)}\text{ for all }0\leq k<l\text{ and }2\leq j\leq g+1\] (3.9) \[\left\|Q\right|_{U_{-j}^{(k)}}(k,l)^{-1}\right\|\leq Ce^{(\lambda_ {j+1}+\tau)r(k,l)}\text{ for all }0\leq k<l\text{ and }0\leq j\leq g\] (3.10) \[\left\|Z(k+1)\right\|\leq Ce^{rk}\text{ for all }k\geq 0\] (3.11) \[C^{-1}e^{\lambda_{1}k}\leq\|Q(k)\|\leq Ce^{\lambda_{1}(1+\tau)k }\text{ for all }k\geq 0\] (3.12) \[\max_{\alpha\in A}\frac{|I^{(k)}|}{|I_{\alpha}^{(k)}|}\leq\kappa \text{ for all }k\geq 0\] (3.13) \[\left|\sin\angle\big{(}E_{j}^{(k)},U_{j}^{(k)}\big{)}\right|\geq c \left\|Q(k)\right\|^{-\tau}\text{ for all }k\geq 0\text{ and }-g\leq j\leq g+1. \tag{3.5}\] _Definition 3_.: A locally Hamiltonian flow \(\psi_{\mathbb{R}}\) on \(M\) with isolated fixed points and restricted to its minimal component \(M^{\prime}\subset M\) satisfies the _Full Filtration Diophantine Condition (FFDC)_ if there exists a transversal \(I\subset M^{\prime}\) such that the corresponding IET \(T:I\to I\) satisfies the FFDC. **Theorem 3.1**.: _Almost every IET satisfies FFDC._ Proof.: Most of the proof of Theorem follows similarly from the proof of Theorem 3.2 in [11]. In addition to the proof of FDC condition in [11], it suffices to slightly modify the construction of the full measure set \(\Xi\) (coming from [11]) to show that every \((\pi,\lambda)\in\Xi\) satisfies (3.7), (3.9) and (3.13) not only on the non-negative part of the filtration (as shown in [11]) but also on its negative part, i.e. on the \(E_{-j}^{(k)}\) and \(U_{-j}^{(k)}\) for \(1\leq j\leq g\). Since this modification is straightforward, we omit the details. _Remark 3.2_.: In view of Theorem 3.1, almost every (with respect to the Katok fundamental class) locally Hamiltonian flow \(\psi_{\mathbb{R}}\) on \(M\) with isolated fixed points and restricted to its minimal component \(M^{\prime}\subset M\) satisfies the FFDC. _Remark 3.3_.: As \(1=|I|\leq|I^{(n)}|\|Q(n)\|\leq|I|/\kappa=\kappa^{-1}\), by (3.11), we have \[|I^{(n)}|^{-1}\leq\|Q(n)\|\leq Ce^{(\lambda_{1}+\tau)n}\text{ and }|I^{(n)}|\leq\kappa^{-1}\|Q(n)\|^{-1}\leq\kappa^{-1}Ce^{-\lambda_{1}n}. \tag{3.14}\] As \(\lim_{n\to+\infty}n/r(0,n)>1/(1+\tau)>1-\tau\), there exists \(c>0\) such that \[(1-\tau)r(0,n)-c\leq n\leq r(0,n)\text{ for all }n\geq 0. \tag{3.15}\] _Remark 3.4_.: Let us consider the map \(\bar{\xi}:I\to\mathbb{R}\) given by \(\bar{\xi}(x)=x\) and the corresponding coboundary \(\bar{\xi}\circ T-\bar{\xi}\). 
Then \(\bar{\xi}\circ T-\bar{\xi}\in\Gamma\) and for every \(k\geq 0\), \[Q(k)(\bar{\xi}\circ T-\bar{\xi})_{\alpha}=\bar{\xi}(T^{Q_{\alpha}(k)}x)-\bar{ \xi}(x)=T^{Q_{\alpha}(k)}x-x\text{ for any }x\in I_{\alpha}^{(k)}.\] Therefore \(\|Q(k)(\bar{\xi}\circ T-\bar{\xi})\|\leq|I^{(k)}|\leq\kappa^{-1}Ce^{-\lambda_ {1}k}\). By (3.2), \(\bar{\xi}\circ T-\bar{\xi}\in E_{-1}(\pi,\lambda)\). Since the space \(E_{-1}(\pi,\lambda)\) is one-dimensional, we have \(h_{-1}=c(\bar{\xi}\circ T-\bar{\xi})\) for some \(c\neq 0\). For any \(k\geq 0\) and \(-g\leq j\leq g+1\) denote by \(P_{E_{j}^{(k)}}:\mathbb{R}^{\mathcal{A}}\to E_{j}^{(k)}\) and \(P_{U_{j}^{(k)}}:\mathbb{R}^{\mathcal{A}}\to U_{j}^{(k)}\) the corresponding projections, i.e. \(P_{E_{j}^{(k)}}+P_{U_{j}^{(k)}}=Id_{\mathbb{R}^{\mathcal{A}}}\). In view of (3.13), using the arguments of the proof of Lemma 3.5 in [11], for any \(\tau>0\) there exists \(C>0\) such that for all \(k\geq 0\) and \(-g\leq j\leq g+1\), \[\|P_{E_{j}^{(k)}}\|\leq C\,\|Q(k)\|^{\tau}\,,\quad\|P_{U_{j}^{(k)}}\|\leq C\, \|Q(k)\|^{\tau}\,. \tag{3.16}\] Moreover, by definition, for any pair \(0\leq k<l\) and any \(-g\leq j\leq g+1\) we have \[Q(k,l)\circ P_{E_{j}^{(k)}}=P_{E_{j}^{(l)}}\circ Q(k,l)\text{ and }Q(k,l) \circ P_{U_{j}^{(k)}}=P_{U_{j}^{(l)}}\circ Q(k,l).\] ### Diophantine series For every \(a\geq 0\) and \(s\geq 1\), let \(\langle s\rangle^{a}=s^{a}\) if \(a>0\) and \(\langle s\rangle^{a}=1+\log s\) if \(a=0\). _Definition 4_.: For every IET \(T:I\to I\) satisfying Keane's condition, any \(0\leq a<1\), any \(2\leq i\leq g+1\), any \(\tau>0\) and any accelerating sequence we define sequences \((K_{k}^{a,i,\tau}(T))_{k\geq 0},(C_{k}^{a,i,\tau}(T))_{k\geq 0}\) so that \[K_{k}^{a,i,\tau}(T) :=\sum_{l\geq k}\|Q\|_{U_{i}^{(k)}}(k,l+1)^{-1}\|\|Z(l+1)\|\langle \|Q(l)\|\rangle^{a}\|Q(l+1)\|^{\tau},\] \[C_{k}^{a,i,\tau}(T) :=\sum_{0\leq l<k}\|Q\|_{E_{i}^{(l+1)}}(l+1,k)\|\|Z(l+1)\|\langle \|Q(l)\|\rangle^{a}\|Q(l+1)\|^{\tau}.\] **Proposition 3.5**.: _[_11_, Proposition 3.6]_ _Let \(T:I\to I\) be an IET satisfying FFDC and let \(0\leq a<1\). Suppose that \(2\leq i\leq g+1\) is chosen such that \(a\lambda_{1}<\lambda_{i-1}\). Then for every \(0<\tau<\frac{\lambda_{i-1}-\lambda_{1}a}{3(1+\lambda_{1})}\) the sequences \((K_{k}^{a,i,\tau})_{k\geq 0},(C_{k}^{a,i,\tau})_{k\geq 0}\) are well defined and_ \[K_{k}^{a,i,\tau}(T) \leq C_{\tau}e^{(\lambda_{1}a+5\tau(1+\lambda_{1}))r(0,k)},\] \[C_{k}^{a,i,\tau}(T) \leq C_{\tau}e^{(\max\{\lambda_{i},\lambda_{1}a\}+3\tau(1+\lambda_ {1}))r(0,k)}. \tag{3.17}\] The series was originally designed to construct some correction operators (see [11, SS6]) on \(C^{0+\mathrm{P_{a}}}\). We now present a new type of series for the similar purpose on \(C^{n+\mathrm{P_{a}}}\). _Definition 5_.: For every IET \(T:I\to I\) satisfying Keane's condition, any \(2\leq j\leq g+1\), any non-negative sequence \(\bar{s}=(s_{k})_{k\geq 0}\), any \(\tau>0\) and any accelerating sequence, we define sequences \((V_{k}^{j,\tau}(T,\bar{s}))_{k\geq 0},(W_{k}^{j,\tau}(T,\bar{s}))_{k\geq 0}\) so that \[V_{k}^{j,\tau}(T,\bar{s}) :=\sum_{l\geq k}\|Q\|_{U_{-j}^{(k)}}(k,l+1)^{-1}\|\|Q(l+1)\|^{\tau }\|Z(l+1)\|s_{l},\] \[W_{k}^{j,\tau}(T,\bar{s}) :=\sum_{0\leq l<k}\|Q\|_{E_{-j}^{(l+1)}}(l+1,k)\|\|Q(l+1)\|^{\tau }\|Z(l+1)\|s_{l}.\] **Proposition 3.6**.: _Let \(T:I\to I\) be an IET satisfying FFDC. Fix \(0\leq j\leq g\), \(\lambda_{j+1}<\rho\) and \(0<\tau<\frac{\rho-\lambda_{j+1}}{\lambda_{1}+3}\). 
Then there exists \(C_{\tau}>0\) such that for any non-negative sequence \(\bar{s}=(s_{k})_{k\geq 0}\) with \(s_{k}\leq De^{-\rho r(0,k+1)}\) for all \(k\geq 0\) we have_ \[V_{k}^{j,\tau}(T,\bar{s}) \leq C_{\tau}De^{(-\rho+(\lambda_{1}+2)\tau)r(0,k)}, \tag{3.19}\] \[W_{k}^{j,\tau}(T,\bar{s}) \leq C_{\tau}De^{(\max\{-\rho,-\lambda_{j}\}+(\lambda_{1}+3)\tau )r(0,k)}. \tag{3.18}\] Proof.: By Definition 2, \[V_{k}^{j,\tau} \leq\sum_{l\geq k}C^{3}De^{(\lambda_{j+1}+\tau)r(k,l+1)}e^{(\lambda_ {1}+\tau)\tau(l+1)}e^{\tau(l+1)}e^{-\rho r(0,l+1)}\] \[\leq\sum_{l\geq k}C^{3}De^{(\lambda_{j+1}+\tau)r(k,l+1)}e^{(-\rho+ \tau(\lambda_{1}+2))r(0,l+1)}\] \[=De^{(-\rho+\tau(\lambda_{1}+2))r(0,k)}\sum_{l\geq k}C^{3}e^{(- \rho+\lambda_{j+1}+\tau(\lambda_{1}+3))r(k,l+1)}\] \[\leq De^{(-\rho+\tau(\lambda_{1}+2))r(0,k)}\sum_{l\geq k}C^{3}e^{( -\rho+\lambda_{j+1}+\tau(\lambda_{1}+3))(l+1-k)}\] \[=De^{(-\rho+\tau(\lambda_{1}+2))r(0,k)}\sum_{l\geq 1}C^{3}e^{(- \rho+\lambda_{j+1}+\tau(\lambda_{1}+3))l}.\] As \(-\rho+\lambda_{j+1}+\tau(\lambda_{1}+3)<0\), the above series is convergent, so we get (3.18). Moreover, again by Definition 2, \[W_{k}^{j,\tau} \leq\sum_{0\leq l<k}C^{3}De^{(-\lambda_{j}+\tau)r(l+1,k)}e^{( \lambda_{1}+\tau)\tau(l+1)}e^{\tau(l+1)}e^{-\rho r(0,l+1)}\] \[\leq\sum_{1\leq l\leq k}C^{3}De^{(-\lambda_{j}+\tau)r(l,k)}e^{(- \rho+\tau(\lambda_{1}+2))r(0,l)}\leq C^{3}Dke^{(\max\{-\lambda_{j},-\rho\}+ \tau(\lambda_{1}+2))r(0,k)}\] \[\leq C^{3}DC^{\prime}e^{(\max\{-\lambda_{j},-\rho\}+\tau(\lambda_ {1}+3))r(0,k)},\] which gives (3.19). ## 4. Extended correction operators In this section we define three types of new correction operators \(\mathfrak{h}_{j}^{*},\mathfrak{h}_{-j,i}\) and \(\mathfrak{h}_{0}\) for \(2\leq i\leq g+1\) and \(0\leq j\leq g\). These operators are motivated by the correction operator \(\mathfrak{h}_{i}\) previously defined on \(C^{0+\mathrm{Pa}}\), the space of functions with polynomial singularities. In [11, SS6], the maps from \(C^{0+\mathrm{Pa}}\) were corrected by piecewise constant functions coming from the unstable subspace. Our new operators are constructed to correct piecewise smooth functions whose higher order derivatives have polynomial singularities (elements of \(C^{n+\mathrm{Pa}\mathrm{G}}\)) by piecewise polynomial functions. ### \(C^{n+\mathrm{Pa}\mathrm{G}}\) space Fix \(0\leq a<1\) and an integer \(n\geq 0\). Following [12, SS2], \(C^{n+\mathrm{Pa}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) is the space of \(C^{n+1}\)-functions on \(\bigcup_{\alpha\in\mathcal{A}}\mathrm{Int}\,I_{\alpha}\) such that \[p_{a}(D^{n}\varphi):=\max_{\alpha\in\mathcal{A}}\sup_{x\in(l_{\alpha},r_{ \alpha})}\max\{|D^{n+1}\varphi(x)(x-l_{\alpha})^{1+a}|,|D^{n+1}\varphi(x)(r_{ \alpha}-x)^{1+a}|\}\] is finite and \[C^{a,+}_{\alpha,n}(\varphi) =(-1)^{n}C^{+}_{\alpha}(D^{n}\varphi):=(-1)^{n+1}\lim_{x\searrow l _{\alpha}}D^{n+1}\varphi(x)(x-l_{\alpha})^{1+a},\] \[C^{a,-}_{\alpha,n}(\varphi) =C^{-}_{\alpha}(D^{n}\varphi):=\lim_{x\nearrow r_{\alpha}}D^{n+1} \varphi(x)(r_{\alpha}-x)^{1+a}\] exist. The space \(C^{n+\mathrm{Pa}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) is a Banach space equipped with the norm \[\|\varphi\|_{C^{n+\mathrm{Pa}}}:=\sum_{k=0}^{n}\|D^{k}\varphi\|_{L^{1}(I)}+p_{ a}(D^{n}\varphi).\] We denote by \(C^{n+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\subset C^{n+ \mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) the space of functions with _geometric type_, i.e. 
such that \[C^{a,-}_{\pi_{0}^{-1}(d),n}\cdot C^{a,-}_{\pi_{1}^{-1}(d),n}=0\quad\text{and} \quad C^{a,+}_{\pi_{0}^{-1}(1),n}\cdot C^{a,+}_{\pi_{1}^{-1}(1),n}=0.\] ### Special Birkhoff sums Assume that an IET \(T:I\to I\) satisfies Keane's condition. For any \(0\leq k<l\) and any measurable map \(\varphi:I^{(k)}\to\mathbb{R}\) over the IET \(T^{(k)}:I^{(k)}\to I^{(k)}\), denote by \(S(k,l)\varphi:I^{(l)}\to\mathbb{R}\) the renormalized map over \(T^{(l)}\) given by \[S(k,l)\varphi(x)=\sum_{0\leq i<Q_{\beta}(k,l)}\varphi((T^{(k)})^{i}x)\text{ for }x\in I^{(l)}_{\beta}.\] Sums of this form are called _special Birkhoff sums_. In convention, we write \(S(k)\varphi\) for \(S(0,k)\varphi\) and \(S(k,k)\varphi=\varphi\). If \(\varphi\) is integrable then \[\|S(k,l)\varphi\|_{L^{1}(I^{(l)})}\leq\|\varphi\|_{L^{1}(I^{(k)})}\quad\text{ and}\quad\int_{I^{(l)}}S(k,l)\varphi(x)\,dx=\int_{I^{(k)}}\varphi(x)\,dx. \tag{4.1}\] If additionally \(\varphi\in\mathrm{BV}(\sqcup_{\alpha\in\mathcal{A}}I^{(k)}_{\alpha})\) (is of bounded variation), then \[\mathrm{Var}\,S(k,l)\varphi\leq\mathrm{Var}\,\varphi\ \text{ and }\ \|S(k,l)\varphi\|_{\sup}\leq\|Q(k,l)\|\|\varphi\|_{\sup}, \tag{4.2}\] where \(\mathrm{Var}\,\varphi\) is the sum of variations of \(\varphi\) restricted to \(\mathrm{Int}\,I_{\alpha}\) for \(\alpha\in\mathcal{A}\). Denote by \(\Gamma^{(k)}\) the set of functions on \(I^{(k)}\) which are constant on all \(I^{(k)}_{\alpha}\), \(\alpha\in\mathcal{A}\). Clearly, \(S(k,l)\Gamma^{(k)}=\Gamma^{(l)}\) and \(S(k,l)\) is the linear automorphism of \(\mathbb{R}^{\mathcal{A}}\) whose matrix in the canonical basis is \(Q(k,l)\). _Remark 4.1_.: In view of SS5 in [11], \(S(k,l):C^{n+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I^{(k)}_{\alpha}) \to C^{n+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I^{(l)}_{\alpha})\). Moreover, for every IET \(T\) satisfying FFDC, there exists \(C\geq 1\) such that for all \(0\leq k\leq l\) and for every function \(\varphi\in C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I^{(k)}_{\alpha})\) \[\begin{split}& p_{a}(S(k,l)\varphi)\leq Cp_{a}(\varphi)\text{ if }0<a<1,\\ & p_{a}(S(k,l)\varphi)\leq C(1+\log\|Q(k,l)\|)p_{a}(\varphi)\text{ if }a=0.\end{split} \tag{4.3}\] ### Correction operator on \(C^{0+\mathrm{P_{a}G}}\) For any integrable map \(f:I\to\mathbb{R}\) and any subinterval \(J\subset I\), let \(m(f,J)\) stand for the mean value of \(f\) on \(J\), that is \[m(f,J)=\frac{1}{|J|}\int_{J}f(x)\,dx.\] For the IET \(T^{(k)}\) let \(\mathcal{M}^{(k)}:L^{1}(I^{(k)})\to\Gamma^{(k)}\) be the corresponding mean value _projection operator_ given by \[\mathcal{M}^{(k)}(f)=\sum_{\alpha\in\mathcal{A}}m(f,I^{(k)}_{\alpha})\chi_{I^{ (k)}_{\alpha}}.\] This operator projects any map onto a piecewise constant function, whose values are equal to the mean value of \(f\) on the exchanged intervals \(I^{(k)}_{\alpha}\), \(\alpha\in\mathcal{A}\). **Theorem 4.2**.: _[_11_, Theorem 6.1]_ _Assume that \(T\) satisfies FFDC. For any \(0\leq a<1\), take \(2\leq j\leq g+1\) so that \(\lambda_{1}a<\lambda_{j-1}\). 
There exists a bounded linear operator \(\mathfrak{h}_{j}:C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to U _{j}\) such that for any \(\tau>0\) there exists a constant \(C=C_{\tau}\geq 1\) such that for every \(\varphi\in C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) with \(\mathfrak{h}_{j}(\varphi)=0\) we have_ \[\|\mathcal{M}^{(k)}(S(k)\varphi)\|\leq C\left(\big{(}K^{a,j,\tau}_{k}+C^{a,j, \tau}_{k}\big{)}p_{a}(\varphi)+\|Q_{E_{j}}(k)\|\frac{\|\varphi\|_{L^{1}(I^{(0)} )}}{|I^{(0)}|}\right). \tag{4.4}\] The operator \(\mathfrak{h}_{j}:C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}) \to U_{j}\subset H(\pi)\) called the _correction operator_ is given by \[\mathfrak{h}_{j}(\varphi) =\lim_{k\to\infty}Q(0,k)^{-1}\circ P_{U_{j}^{(k)}}\circ\mathcal{ M}^{(k)}\circ S(k)(\varphi)\] \[=\sum_{l\geq 0}Q(0,l)^{-1}\circ P_{U_{j}^{(l)}}\circ\big{(} \mathcal{M}^{(l)}\circ S(l)-Z(l)\circ\mathcal{M}^{(l-1)}\circ S(l-1)\big{)}(\varphi) \tag{4.5}\] where \(\mathcal{M}^{(-1)}=0\). _Remark 4.3_.: Note that for all \(2\leq j^{\prime}\leq j\leq g+1\) we have \(P_{U_{j^{\prime}}^{(0)}}\circ P_{U_{j}^{(0)}}=P_{U_{j^{\prime}}^{(0)}}\). It follows that \(P_{U_{j^{\prime}}^{(0)}}\circ\mathfrak{h}_{j}=\mathfrak{h}_{j^{\prime}}\), hence \(\mathfrak{h}_{j}(\varphi)=0\) implies \(\mathfrak{h}_{j^{\prime}}(\varphi)=0\). Moreover, by definition, \(\mathfrak{h}_{j}(h)=0\) for every \(h\in E_{j}\) and \(\mathfrak{h}_{j}(h)=h\) for every \(h\in U_{j}\), in particular \(\mathfrak{h}_{j}\circ\mathfrak{h}_{j}=\mathfrak{h}_{j}\). ### First step: correction operator \(\mathfrak{h}_{j}^{*}\) on \(\mathbf{BV}\) As a first step, we construct an initial extended correction operator \(\mathfrak{h}_{j}^{*}\) on the space of bounded variation functions taking value in the space \(U_{-j}\) from the complimentary filtration. By definition, for every \(\varphi\in\mathrm{BV}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}^{(k)})\) we have \[\big{\|}\mathcal{M}^{(k)}(\varphi)\big{\|}\leq\|\varphi\|_{\sup}\text{ and }\big{\|}\varphi-\mathcal{M}^{(k)}(\varphi)\big{\|}_{\sup}\leq\operatorname{ Var}\varphi. \tag{4.6}\] Let \(P_{0}^{(k)}:L^{1}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}^{(k)})\to L^{1}( \sqcup_{\alpha\in\mathcal{A}}I_{\alpha}^{(k)})\) be a linear operator given by \[P_{0}^{(k)}(\varphi)=\varphi-\mathcal{M}^{(k)}(\varphi).\] If \(\varphi\in BV(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), then \[\|P_{0}^{(k)}(S(k)\varphi)\|_{\sup}\leq\operatorname{Var}(S(k)\varphi). \tag{4.7}\] By SS6.1 in [11], for every \(0\leq a<1\) and \(\varphi\in C^{0+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}^{(k)})\), \[\big{\|}\mathcal{M}^{(k)}(\varphi)\big{\|}_{L^{1}(I^{(k)})}\leq 2\,\|\varphi\|_{L^{1}(I^{(k)})} \tag{4.8}\] \[\big{\|}\varphi-\mathcal{M}^{(k)}(\varphi)\big{\|}_{L^{1}(I^{(k)})}\leq\frac{2 ^{2+a}d}{1-a}p_{a}(\varphi)|I^{(k)}|^{1-a}. \tag{4.9}\] Therefore for \(\varphi\in C^{0+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) we obtain \[\frac{\|S(k)\varphi\|_{L^{1}(I^{(k)})}}{|I^{(k)}|}\leq\big{\|}\mathcal{M}^{(k )}(S(k)\varphi)\big{\|}+p_{a}(S(k)\varphi)\frac{2^{2+a}}{(1-a)|I^{(k)}|^{a}}. \tag{4.10}\] As \[\frac{|I^{(k)}|\,\|h\|}{\kappa}\leq\min_{\beta\in\mathcal{A}}|I_{\beta}^{(k)}| \,\|h\|\leq\|h\|_{L^{1}(I^{(k)})}\leq|I^{(k)}|\,\|h\|\,\text{ for every }h\in\Gamma^{(k)}, \tag{4.11}\] by (4.8), for every \(\varphi\in C^{0+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), \[\|\mathcal{M}^{(k)}(\varphi)\|\leq\frac{2\kappa}{|I^{(k)}|}\,\|\varphi\|_{L^{1 }(I^{(k)})}\,. 
\tag{4.12}\] **Lemma 4.4**.: _Let \(0\leq j\leq g\) and \(\varphi\in BV(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) be such that_ \[\sum_{l\geq 1}\|Q\|_{U_{-j}^{(0)}}(l)^{-1}\|\|Q(l)\|^{\tau}\|Z(l)\|\operatorname{ Var}(S(l-1)\varphi)<+\infty. \tag{4.13}\] _Then the limit_ \[\mathfrak{h}_{j}^{*}(\varphi)=\lim_{l\to\infty}Q(0,l)^{-1}\circ P_{U_{-j}^{(0) }}\circ\mathcal{M}^{(l)}\circ S(l)(\varphi)\in U_{-j} \tag{4.14}\] exists and there exists a universal constant \(C>0\) such that_ \[\|\mathfrak{h}_{j}^{*}(\varphi)\|\leq C\Big{(}\,\|\varphi\|_{\sup}+\sum_{l\geq 1 }\|Q\|_{U_{-j}^{(0)}}(l)^{-1}\|\|Q(l)\|^{\top}\|Z(l)\|\operatorname{Var}(S(l-1) \varphi)\Big{)}. \tag{4.15}\] _Moreover, for every \(k\geq 1\) we have_ \[\begin{split}&\big{\|}\mathcal{M}^{(k)}(S(k)(\varphi-\mathfrak{h}_{ j}^{*}(\varphi)))\big{\|}\\ &\leq C\Big{(}\sum_{l>k}\|Q\|_{U_{-j}^{(k)}}(k,l)^{-1}\|\|Q(l)\|^{ \top}\|Z(l)\|\operatorname{Var}(S(l-1)\varphi)\\ &+\sum_{1\leq l\leq k}\|Q\|_{E_{-j}^{(l)}}(l,k)\|\|Q(l)\|^{\top} \|Z(l)\|\operatorname{Var}(S(l-1)\varphi)+\|Q_{E_{-j}^{(0)}}(k)\|\,\|\varphi\| _{\sup}\,\Big{)}.\end{split} \tag{4.16}\] Proof.: Let \(v_{k}:=\mathcal{M}^{(k)}\circ S(k)(\varphi)\). Direct calculation shows that \[\begin{split}\big{(}S(k,k+1)&\circ P_{0}^{(k)}\circ S (k)(\varphi)-P_{0}^{(k+1)}\circ S(k,k+1)\circ S(k)(\varphi)\big{)}\\ &=-S(k,k+1)\circ\mathcal{M}^{(k)}\circ S(k)(\varphi)+\mathcal{M} ^{(k+1)}\circ S(k+1)(\varphi)\\ &=-Z(k+1)v_{k}+v_{k+1}.\end{split} \tag{4.17}\] Then, by (4.7) and (4.2), \[\begin{split}\|S(k,k+1)\circ P_{0}^{(k)}\circ S(k)(\varphi)\|_{ \sup}&\leq\|Z(k+1)\|\|P_{0}^{(k)}\circ S(k)(\varphi)\|_{\sup}\\ &\leq\|Z(k+1)\|\operatorname{Var}(S(k)\varphi)\end{split}\] and \[\|P_{0}^{(k+1)}\circ S(k+1)(\varphi)\|_{\sup}\leq\operatorname{Var}(S(k+1) \varphi)\leq\operatorname{Var}(S(k)\varphi).\] This gives \[\|Z(k+1)v_{k}-v_{k+1}\|\leq 2\|Z(k+1)\|\operatorname{Var}(S(k)\varphi). \tag{4.18}\] For any sequence \((x_{k})_{k\geq 0}\) in \(\mathbb{R}^{\mathcal{A}}\), let \(\Delta x_{k+1}=x_{k+1}-Z(k+1)x_{k}\) for \(k\geq 0\) and \(\Delta x_{0}=x_{0}\). Then, by telescoping, \[x_{k}=\sum_{j=0}^{k}Q(j,k)\Delta x_{j}. \tag{4.19}\] By (4.6) and (4.18), \[\|\Delta v_{0}\|\leq\|\varphi\|_{\sup}\text{ and }\|\Delta v_{k+1}\|\leq 2\|Z(k+1) \|\operatorname{Var}(S(k)\varphi). \tag{4.20}\] For every \(k\geq 0\) let \(e_{k}=P_{E_{-j}^{(k)}}v_{k}\in E_{-j}^{(k)}\) and \(u_{k}=P_{U_{-j}^{(k)}}v_{k}\in U_{-j}^{(k)}\). Then \(v_{k}=u_{k}+e_{k}\). Since \(Z(k+1)(E_{-j}^{(k)})=E_{-j}^{(k+1)}\) and \(Z(k+1)(U_{-j}^{(k)})=U_{-j}^{(k+1)}\) we have \[\begin{split}&\Delta u_{k+1}=u_{k+1}-Z(k+1)u_{k}=P_{U_{-j}^{(k+1)} }\Delta v_{k+1},\\ &\Delta e_{k+1}=e_{k+1}-Z(k+1)e_{k}=P_{E_{-j}^{(k+1)}}\Delta v_{k +1},\\ &\Delta u_{0}=u_{0}=P_{U_{-j}^{(0)}}\Delta v_{0},\ \Delta e_{0}=e_{0}=P_{E_{-j}^{(0)}}\Delta v_{0}.\end{split} \tag{4.21}\] In view of (3.16) and (4.20), we have \[\|\Delta u_{0}\|\leq C\,\|\Delta v_{0}\|\leq C\,\|\varphi\|_{\sup}\,,\quad\| \Delta e_{0}\|\leq C\,\|\Delta v_{0}\|\leq C\,\|\varphi\|_{\sup} \tag{4.22}\] and for every \(k\geq 1\) we have \[\begin{split}\|\Delta u_{k}\|&\leq 2C\|Q(k)\|^{ \tau}\|Z(k)\|\operatorname{Var}(S(k-1)\varphi),\\ \|\Delta e_{k}\|&\leq 2C\|Q(k)\|^{\tau}\|Z(k)\| \operatorname{Var}(S(k-1)\varphi).\end{split} \tag{4.23}\] Let us consider the infinite series \(v:=\sum_{l\geq 0}Q(l)^{-1}\Delta u_{l}\). 
Since \[\begin{split}&\sum_{l\geq 0}\lVert Q\rVert_{U_{-j}^{(0)}}(l)^{-1} \|\|\Delta u_{l}\|\\ &\quad\leq C\big{(}\,\|\varphi\|_{\sup}+2\sum_{l\geq 1}\|Q\|_{U_{-j}^{ (0)}}(l)^{-1}\|\|Q(l)\|^{\tau}\|Z(l)\|\operatorname{Var}(S(l-1)\varphi)\big{)} \end{split} \tag{4.24}\] is finite, \(v\in U_{-j}\) is well defined. In view of (4.17) and (4.21), \[\begin{split}& Q(l)^{-1}\Delta u_{l}=Q(l)^{-1}\circ P_{U_{-j}^{ (l)}}(\mathcal{M}^{(l)}\circ S(l)(\varphi)-S(l-1,l)\circ\mathcal{M}^{(l-1)} \circ S(l-1)(\varphi))\\ &\quad=Q(l)^{-1}\circ P_{U_{-j}^{(l)}}\circ\mathcal{M}^{(l)} \circ S(l)(\varphi)-Q(l-1)^{-1}\circ P_{U_{-j}^{(l-1)}}\circ\mathcal{M}^{(l-1) }\circ S(l-1)(\varphi).\end{split}\] It follows that \(\mathfrak{h}_{j}^{*}(\varphi)\) is well defined and \(\mathfrak{h}_{j}^{*}(\varphi)=v\), so by (4.24), we obtain (4.15). By the definition of \(v\), (4.19) and (4.23), for every \(k\geq 0\) we have \[\begin{split}\|Q(k)v-u_{k}\|&=\Big{\|}\sum_{l>k}Q \rvert_{U_{-j}^{(k)}}(k,l)^{-1}\Delta u_{l}\Big{\|}\leq\sum_{l>k}\|Q\rvert_{U_ {-j}^{(k)}}(k,l)^{-1}\|\|\Delta u_{l}\|\\ &\leq 2C\sum_{l>k}\|Q\rvert_{U_{-j}^{(k)}}(k,l)^{-1}\|\|Q(l)\|^{\tau}\| Z(l)\|\operatorname{Var}(S(l-1)\varphi).\end{split} \tag{4.25}\] To obtain the bound of norm of \(e_{k}\in E_{-j}^{(k)}\), we apply (4.19), (4.23) and (4.22), \[\begin{split}&\|e_{k}\|\leq\sum_{0\leq l\leq k}\|Q(l,k)\Delta e_{l} \|\leq\sum_{0\leq l\leq k}\|Q\rvert_{E_{-j}^{(l)}}(l,k)\|\|\Delta e_{l}\|\\ &\leq C\big{(}\|Q\|_{E_{-j}^{(0)}}(k)\|\|\varphi\|_{\sup}+2\sum_{ 1\leq l\leq k}\|Q\|_{E_{-j}^{(l)}}(l,k)\|\|Q(l)\|^{\tau}\|Z(l)\|\operatorname{ Var}(S(l-1)\varphi)\big{)}.\end{split}\] Combining with (4.25), we conclude \[\begin{split}&\|Q(k)v-v_{k}\|\leq 2C\Big{(}\sum_{l>k}\|Q\|_{U_{-j}^{ (k)}}(k,l)^{-1}\|\|Q(l)\|^{\tau}\|Z(l)\|\operatorname{Var}(S(l-1)\varphi)\\ &\quad+\sum_{1\leq l\leq k}\|Q\|_{E_{-j}^{(l)}}(l,k)\|\|Q(l)\|^{ \tau}\|Z(l)\|\operatorname{Var}(S(l-1)\varphi)+\|Q_{E_{-j}^{(0)}}(k)\|\,\| \varphi\|_{\sup}\Big{)}.\end{split}\] Since \(\mathcal{M}^{(k)}(S(k)(\mathfrak{h}_{j}^{*}(\varphi)))=Q(k)v\), this gives (4.16). _Remark 4.5_.: Suppose that \(0\leq j\leq j^{\prime}\leq g\). Then the operator \(\mathfrak{h}_{j^{\prime}}^{*}\) is well defined and \(P_{U_{-j^{\prime}}^{(0)}}\circ\mathfrak{h}_{j}^{*}=\mathfrak{h}_{j^{\prime}}^ {*}\). Hence \(\mathfrak{h}_{j}^{*}(\varphi)=0\) implies \(\mathfrak{h}_{j^{\prime}}^{*}(\varphi)=0\). In view of (4.5) and (4.14), the same arguments show that for every \(2\leq l\leq g+1\) we have \(P_{U_{l}^{(0)}}\circ\mathfrak{h}_{j}^{*}=\mathfrak{h}_{l}\). Hence \(\mathfrak{h}_{j}^{*}(\varphi)=0\) implies \(\mathfrak{h}_{l}(\varphi)=0\). Moreover, by definition, \(\mathfrak{h}_{j}^{*}(h)=0\) for every \(h\in E_{-j}\) and \(\mathfrak{h}_{j}^{*}(h)=h\) for every \(h\in U_{-j}\). ### Second step: correction operator \(\mathfrak{h}_{-j,i}\) Now we introduce second type correction operators \(\mathfrak{h}_{-j,i}:C^{1+\mathrm{P}_{\mathrm{a}}G}(\sqcup_{\alpha\in\mathcal{ A}}I_{\alpha})\to U_{-j}\) for \(2\leq i\leq g+1\) and \(1\leq j\leq g\). They extend previous (standard) correction operators \(\mathfrak{h}_{i}\) to the complement of the stable part of the Oseledets filtration. For this purpose, we use a certain modification of the operator \(\mathfrak{h}_{j}^{*}\), which we link with the derivative of \(\mathfrak{h}_{i}\). 
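All three correction operators introduced in this section ultimately rest on the telescoping identity (4.19) used in the proof of Lemma 4.4. The following short script is a purely numerical sanity check of that identity for an arbitrary matrix cocycle and an arbitrary sequence of vectors; it is an illustration added for convenience (assuming NumPy) and plays no role in the arguments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 4, 6

# Z[k] plays the role of Z(k) for k = 1, ..., K (Z[0] is unused so that indices match the text).
Z = [None] + [rng.integers(-3, 4, size=(d, d)).astype(float) for _ in range(K)]
x = [rng.normal(size=d) for _ in range(K + 1)]  # an arbitrary sequence x_0, ..., x_K

def Q(j, k):
    """Q(j, k) = Z(k) Z(k-1) ... Z(j+1), with Q(k, k) = Id."""
    M = np.eye(d)
    for l in range(j + 1, k + 1):
        M = Z[l] @ M
    return M

# increments: Delta x_0 = x_0 and Delta x_j = x_j - Z(j) x_{j-1} for j >= 1
Delta = [x[0]] + [x[j] - Z[j] @ x[j - 1] for j in range(1, K + 1)]

# telescoping identity (4.19): x_k = sum_{j=0}^{k} Q(j, k) Delta x_j
for k in range(K + 1):
    assert np.allclose(x[k], sum(Q(j, k) @ Delta[j] for j in range(k + 1)))
print("identity (4.19) verified for k = 0, ...,", K)
```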
For all \(0\leq a<1\) and \(2\leq i\leq g+1\) let \[C_{i}^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})=\{\varphi\in C ^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}):\mathfrak{h}_{i}( D\varphi)=0\}.\] Let us consider the sequence \(\bar{s}=(s_{k})_{k\geq 0}\) given by \(s_{k}:=|I^{(k)}|(K_{k}^{a,i,\tau}+C_{k}^{a,i,\tau})\). **Theorem 4.6**.: _Assume that \(T\) satisfies FFDC. Let \(0\leq a<1\), \(2\leq i\leq g+1\) and \(1\leq j\leq g\) so that \(a\lambda_{1}<\lambda_{i-1}\) and \(\max\{a\lambda_{1},\lambda_{i}\}<\lambda_{1}-\lambda_{j+1}\). Then the linear operator \(\mathfrak{h}_{j}^{*}:C_{i}^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I _{\alpha})\to U_{-j}\) is well defined and bounded. Moreover, for any \(0<\tau<\frac{\max\{a\lambda_{1},\lambda_{i}\}-\lambda_{1}+\lambda_{j+1}}{9(1+ \lambda_{1})}\) there exists a constant \(C=C_{\tau}\geq 1\) such that for any \(\varphi\in C_{i}^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) with \(\mathfrak{h}_{j}^{*}(\varphi)=0\) we have_ \[\mathrm{Var}(S(k)\varphi) \leq Cs_{k}\left\|D\varphi\right\|_{C^{0+\mathrm{P_{a}}}} \tag{4.27}\] \[\left\|\mathcal{M}^{(k)}(S(k)\varphi)\right\| \leq C\big{(}\big{(}W_{k}^{j,\tau}(\bar{s})+V_{k}^{j,\tau}(\bar{s}) \big{)}\left\|D\varphi\right\|_{C^{0+\mathrm{P_{a}}}}+\left\|Q_{E_{-j}^{(0)}} (k)\right\|\left\|\varphi\right\|_{\sup}\big{)} \tag{4.26}\] _with_ \[s_{k} =O(e^{(\max\{\lambda_{i},\lambda_{1}a\}-\lambda_{1}+6\tau(1+ \lambda_{1}))r(0,k+1)})\] \[V_{k}^{j,\tau}(T,\bar{s}) =O(e^{(\max\{\lambda_{i},\lambda_{1}a\}-\lambda_{1}+8\tau(1+ \lambda_{1}))r(0,k)})\] \[W_{k}^{j,\tau}(T,\bar{s}) =O(e^{(\max\{\lambda_{i}-\lambda_{1},\lambda_{1}a-\lambda_{1},- \lambda_{j}\}+9\tau(1+\lambda_{1}))r(0,k)}).\] Proof.: As \(\varphi\in C_{i}^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), we have \(D\varphi\in C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) and \(\mathfrak{h}_{i}(D\varphi)=0\). By (4.10), (4.3), (3.14) and Theorem 4.2, \[\mathrm{Var}(S(k)\varphi)=\left\|S(k)(D\varphi)\right\|_{L^{1}(I^{ (k)})}\] \[\quad\leq|I^{(k)}|\Big{(}\left\|\mathcal{M}^{(k)}(S(k)D\varphi) \right\|+p_{a}(S(k)D\varphi)\frac{2^{a+2}}{(1-a)||I^{(k)}|^{a}}\Big{)}\] \[\quad\leq C_{a}|I^{(k)}|\left(\big{(}K_{k}^{a,i,\tau}+C_{k}^{a,i, \tau}\big{)}p_{a}(D\varphi)+\left\|Q_{E_{i}}(k)\right\|\frac{\left\|D\varphi \right\|_{L^{1}(I^{(0)})}}{|I^{(0)}|}\right)\] \[\quad\leq C_{a}|I^{(k)}|\big{(}K_{k}^{a,i,\tau}+C_{k}^{a,i,\tau} \big{)}\left\|D\varphi\right\|_{C^{0+\mathrm{P_{a}}}}\leq C_{\tau}s_{k}\left\| D\varphi\right\|_{C^{0+\mathrm{P_{a}}}},\] which gives (4.26). 
It follows that \[\|Q\|_{U_{-j}^{(k)}}(k,l)^{-1}\|\|Q(l)\|^{\tau}\|Z(l)\|\,\mathrm{ Var}(S(l-1)\varphi)\] \[\quad=O\big{(}\|Q\|_{U_{-j}^{(k)}}(k,l)^{-1}\|\|Q(l)\|^{\tau}\|Z(l )\|s_{l-1}\left\|D\varphi\right\|_{C^{0+\mathrm{P_{a}}}}\big{)},\] \[\|Q\|_{E_{-j}^{(l)}}(l,k)\|\|Q(l)\|^{\tau}\|Z(l)\|\,\mathrm{Var}(S(l -1)\varphi)\] \[\quad=O\big{(}\|Q\|_{E_{-j}^{(l)}}(l,k)\|\|Q(l)\|^{\tau}\|Z(l)\| s_{l-1}\left\|D\varphi\right\|_{C^{0+\mathrm{P_{a}}}}\big{)}.\] In view of (3.17), (3.14) and (3.15), we have \[s_{k} =O(e^{-\lambda_{1}k}e^{(\max\{\lambda_{i},\lambda_{1}a\}+5\tau(1+ \lambda_{1}))r(0,k)})\] \[=O(e^{-\lambda_{1}(1-\tau)r(0,k+1)}e^{(\max\{\lambda_{i}, \lambda_{1}a\}+5\tau(1+\lambda_{1}))r(0,k)})\] \[=O(e^{(\max\{\lambda_{i},\lambda_{1}a\}-\lambda_{1}+6\tau(1+ \lambda_{1}))r(0,k+1)}).\] As \(\max\{\lambda_{i},\lambda_{1}a\}+\lambda_{j+1}-\lambda_{1}+6\tau(1+\lambda_{1} )+\tau(3+\lambda_{1})<0\), by Proposition 3.6, \[V_{k}^{j,\tau}(T,\bar{s}) =O(e^{(\max\{\lambda_{i},\lambda_{1}a\}-\lambda_{1}+8\tau(1+ \lambda_{1}))r(0,k)})\] \[W_{k}^{j,\tau}(T,\bar{s}) =O(e^{(\max\{\lambda_{i}-\lambda_{1},\lambda_{1}a-\lambda_{1},- \lambda_{j}\}+9\tau(1+\lambda_{1}))r(0,k)}).\] As \(V_{0}^{j,\tau}(T,\bar{s})\) is finite, the series (4.13) is convergent. By Lemma 4.4 (see (4.15)), the operator \(\mathfrak{h}_{j}^{*}:C_{i}^{1+\mathrm{Pa}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A }}I_{\alpha})\to U_{-j}\) is well defined and bounded. Moreover, in view of (4.16), this also gives (4.27). For every \(\varphi\in L^{1}(I)\) denote by \(\widetilde{\varphi}\in AC(I)\) its primitive integral \(\widetilde{\varphi}(x)=\int_{0}^{x}\varphi(y)dy\). **Corollary 4.7**.: _Assume that \(T\) satisfies FFDC. Let \(0\leq a<1\), \(2\leq i\leq g+1\) and \(1\leq j\leq g\) so that \(a\lambda_{1}<\lambda_{i-1}\) and \(\max\{a\lambda_{1},\lambda_{i}\}<\lambda_{1}-\lambda_{j+1}\). There exists a bounded operator \(\mathfrak{h}_{-j,i}:C^{1+\mathrm{Pa}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A }}I_{\alpha})\to U_{-j}\) such that for every \(\varphi\in C^{1+\mathrm{Pa}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{ \alpha})\) with \(\mathfrak{h}_{-j,i}(\varphi)=0\) and \(\mathfrak{h}_{j}(D\varphi)=0\) we have_ \[\|S(k)\varphi\|_{\mathrm{sup}}\leq O(e^{(\max\{\lambda_{i}-\lambda_{1}, \lambda_{1}a-\lambda_{1},-\lambda_{j}\}+\tau)r(0,k)})\|\varphi\|_{C^{1+ \mathrm{Pa}}}\text{ for every }\tau>0. \tag{4.28}\] Proof.: Let \(K_{i}:C^{1+\mathrm{Pa}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}) \to C_{i}^{1+\mathrm{Pa}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) be the bounded operator defined by \(K_{i}(\varphi)=\varphi-\mathfrak{h}_{i}(D\varphi)\). Since \(\mathfrak{h}_{i}(DK_{i}(\varphi))=\mathfrak{h}_{i}(D\varphi)-\mathfrak{h}_{i} (\mathfrak{h}_{i}(D\varphi))=0\), we really have \(K_{i}(\varphi)\in C_{i}^{1+\mathrm{Pa}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A }}I_{\alpha})\). We can use Theorem 4.6 to define \(\mathfrak{h}_{-j,i}:C^{1+\mathrm{Pa}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I _{\alpha})\to U_{-j}\) as \(\mathfrak{h}_{-j,i}:=\mathfrak{h}_{j}^{*}\circ K_{i}\). Suppose that \(\varphi\in C^{1+\mathrm{Pa}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) is such that \(\mathfrak{h}_{-j,i}(\varphi)=0\) and \(\mathfrak{h}_{i}(D\varphi)=0\). Then \(\mathfrak{h}_{j}^{*}(K_{i}(\varphi))=\mathfrak{h}_{-j,i}(\varphi)=0\) and \(\varphi=K_{i}(\varphi)+\widehat{\mathfrak{h}_{i}(D\varphi)}=K_{i}(\varphi)\), so \(\mathfrak{h}_{j}^{*}(\varphi)=0\). 
In view of (4.7) and Theorem 4.6, \[\|S(k)(\varphi)\|_{\mathrm{sup}}\leq\|\mathcal{M}^{(k)}(S(k)\varphi)\|+\mathrm{Var}(S(k)\varphi)=O(e^{(\max\{\lambda_{i}-\lambda_{1},\lambda_{1}a-\lambda_{1},-\lambda_{j}\}+\tau)r(0,k)}).\] _Remark 4.8_.: Suppose that \(1\leq j\leq j^{\prime}\leq g\) and \(2\leq i^{\prime}\leq i\leq g+1\). Then the operator \(\mathfrak{h}_{-j^{\prime},i^{\prime}}\) is well defined. By Remarks 4.3 and 4.5, \(\mathfrak{h}_{-j,i}(\varphi)=0\) and \(\mathfrak{h}_{i}(D\varphi)=0\) imply \(\mathfrak{h}_{-j^{\prime},i^{\prime}}(\varphi)=0\), \(\mathfrak{h}_{i^{\prime}}(D\varphi)=0\) and \(\mathfrak{h}_{l}(\varphi)=0\) for every \(2\leq l\leq g+1\). Moreover, by the same remarks, we also have \(\mathfrak{h}_{-j,i}(h)=0\) for every \(h\in E_{-j}\) and \(\mathfrak{h}_{-j,i}(h)=h\) for every \(h\in U_{-j}\), in particular \(\mathfrak{h}_{-j,i}\circ\mathfrak{h}_{-j,i}=\mathfrak{h}_{-j,i}\). ### Third step: correction operator \(\mathfrak{h}_{0}\) The last correction operator \(\mathfrak{h}_{0}:C^{2+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to U_{0}=\Gamma\) plays the same role as \(\mathfrak{h}_{-j,i}\) but for the parameter \(j=0\). As in the construction of \(\mathfrak{h}_{-j,i}\), we also use the operator \(\mathfrak{h}_{j}^{*}\) (for \(j=0\)), but we need to link it with the derivative of \(\mathfrak{h}_{-g,2}\) and the second derivative of \(\mathfrak{h}_{2}\). **Theorem 4.9**.: _Assume that \(T\) satisfies FFDC. Let \(0\leq a<1\). There exists a bounded operator \(\mathfrak{h}_{0}:C^{2+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to U_{0}\) such that if \(\varphi\in C^{2+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) satisfies_ \[\mathfrak{h}_{0}(\varphi)=0,\quad\mathfrak{h}_{-g,2}(D\varphi)=0,\quad\mathfrak{h}_{2}(D^{2}\varphi)=0\text{ and }\] \[\|S(k)D\varphi\|_{\mathrm{sup}}=O(e^{-\rho r(0,k)})c(D\varphi)\text{ for some }\rho>0,\] _then for every \(0<\tau<\min\{\lambda_{1}-\lambda_{2},\lambda_{1}(1-a),\lambda_{g},\rho\}/3(1+\max\{\lambda_{1},\rho\})\), we have_ \[\|S(k)\varphi\|_{\mathrm{sup}}=O(e^{(-\rho-\lambda_{1}+2\tau(\lambda_{1}+\rho+1))r(0,k)})c(D\varphi). \tag{4.29}\] Proof.: Let us consider \[C^{2+\mathrm{P_{a}G}}_{-g,2}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})=\{\varphi\in C^{2+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}):\mathfrak{h}_{-g,2}(D\varphi)=0,\mathfrak{h}_{2}(D^{2}\varphi)=0\}.\] By Corollary 4.7, for every \(\varphi\in C^{2+\mathrm{P_{a}G}}_{-g,2}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) we have \[\|S(k)(D\varphi)\|_{\mathrm{sup}}=O(e^{(\max\{\lambda_{2}-\lambda_{1},\lambda_{1}a-\lambda_{1},-\lambda_{g}\}+\tau)r(0,k)})\|D\varphi\|_{C^{1+\mathrm{P_{a}}}}=O(e^{-\rho_{0}r(0,k)})\|D\varphi\|_{C^{1+\mathrm{P_{a}}}}\] with \(\rho_{0}:=\min\{\lambda_{1}-\lambda_{2},\lambda_{1}(1-a),\lambda_{g}\}-\tau>0\).
As \[\operatorname{Var}(S(k)\varphi)=\|S(k)D\varphi\|_{L^{1}(I^{(k)})}\leq|I^{(k)}| \|S(k)D\varphi\|_{\sup},\] it follows that for \(l>k\) we have \[\|Q|_{U_{0}^{(k)}}(k,l)^{-1}\|\|Q(l)\|^{\tau}\|Z(l)\|\operatorname {Var}(S(l-1)\varphi)\] \[\qquad\leq\|Q(k,l)^{-1}\|\|Q(l)\|^{\tau}\|Z(l)\|\|I^{(l-1)}\|\|S( l-1)D\varphi\|_{\sup}\] \[\qquad=O(e^{(\lambda_{1}+\tau)r(k,l)}e^{\tau(\lambda_{1}+\tau)l} e^{\tau l}e^{-\lambda_{1}(l-1)}e^{-\rho_{0}r(0,l-1)})\|D\varphi\|_{C^{1+\mathrm{P_{ a}}}}.\] By (3.15), it follows that \[\|Q|_{U_{0}^{(k)}}(k,l)^{-1}\|\|Q(l)\|^{\tau}\|Z(l)\|\operatorname {Var}(S(l-1)\varphi)\] \[\qquad=O(e^{(\lambda_{1}+\tau)r(k,l)}e^{\tau(\lambda_{1}+2)r(0,l )}e^{-(1-\tau)\lambda_{1}r(0,l)}e^{-(1-\tau)\rho_{0}r(0,l)})\|D\varphi\|_{C^{1+ \mathrm{P_{a}}}}\] \[\qquad=O(e^{(-\lambda_{1}-\rho_{0}+\tau(3\lambda_{1}+2))r(0,k)}e^ {(-\rho_{0}+\tau(3\lambda_{1}+3))r(k,l)})\|D\varphi\|_{C^{1+\mathrm{P_{a}}}}. \tag{4.30}\] The same arguments show that if additionally \(\|S(k)D\varphi\|_{\sup}=O(e^{-\rho r(0,k)})c(D\varphi)\) then \[\|Q|_{U_{0}^{(k)}}(k,l)^{-1}\|\|Q(l)\|^{\tau}\|Z(l)\|\operatorname {Var}(S(l-1)\varphi)\] \[\qquad=O(e^{(-\lambda_{1}-\rho+\tau(2\lambda_{1}+\rho+2))r(0,k)}e ^{(-\rho+\tau(2\lambda_{1}+\rho+3))r(k,l)})c(D\varphi). \tag{4.31}\] As \(-\rho_{0}+3\tau(\lambda_{1}+1)<0\), by (4.30), the series (4.13) is convergent for \(j=0\). By Lemma 4.4, the operator \(\mathfrak{h}_{0}^{*}:C^{2+\mathrm{P_{a}G}}_{-g,2}(\sqcup_{\alpha\in\mathcal{A }}I_{\alpha})\to U_{0}=\Gamma\) is well defined and if \(\mathfrak{h}_{0}^{*}(\varphi)=0\) then \[\big{\|}\mathcal{M}^{(k)}(S(k)\varphi)\big{\|} \leq C\sum_{l>k}\|Q|_{U_{0}^{(k)}}(k,l)^{-1}\|\|Q(l)\|^{\tau}\|Z( l)\|\operatorname{Var}(S(l-1)\varphi).\] Therefore \[\|S(k)\varphi\|_{\sup} \leq\big{\|}\mathcal{M}^{(k)}(S(k)\varphi)\big{\|}+\operatorname {Var}(S(k)\varphi)\] \[\leq 2C\sum_{l>k}\|Q|_{U_{0}^{(k)}}(k,l)^{-1}\|\|Q(l)\|^{\tau}\|Z( l)\|\operatorname{Var}(S(l-1)\varphi). \tag{4.32}\] Let \(K:C^{2+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to C^{2+ \mathrm{P_{a}G}}_{-g,2}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) be the bounded operator defined by \[K(\varphi):=\varphi-\widetilde{\mathfrak{h}_{2}(D^{2}\varphi)}-\widetilde{ \mathfrak{h}_{-g,2}(D\varphi)}+\widetilde{\mathfrak{h}_{-g,2}(\mathfrak{h}_{2} (D^{2}\varphi))}.\] Then \[DK(\varphi)=D\varphi-\widetilde{\mathfrak{h}_{2}(D^{2}\varphi)}-\mathfrak{h}_ {-g,2}(D\varphi-\widetilde{\mathfrak{h}_{2}(D^{2}\varphi)}),\quad D^{2}K( \varphi)=D^{2}\varphi-\mathfrak{h}_{2}(D^{2}\varphi).\] Since \(\mathfrak{h}_{2}(D^{2}K(\varphi))=\mathfrak{h}_{2}(D^{2}\varphi)-\mathfrak{h}_ {2}(\mathfrak{h}_{2}(D^{2}\varphi))=0\) and \[\mathfrak{h}_{-g,2}(DK(\varphi))=\mathfrak{h}_{-g,2}(D\varphi-\widetilde{ \mathfrak{h}_{2}(D^{2}\varphi)})-\mathfrak{h}_{-g,2}(\mathfrak{h}_{-g,2}(D \varphi-\widetilde{\mathfrak{h}_{2}(D^{2}\varphi)}))=0,\] we really have \(K(\varphi)\in C^{2+\mathrm{P_{a}G}}_{-g,2}(\sqcup_{\alpha\in\mathcal{A}}I_{ \alpha})\). Finally we define \(\mathfrak{h}_{0}:C^{2+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}) \to U_{0}\) as \(\mathfrak{h}_{0}=\mathfrak{h}_{0}^{*}\circ K\). Suppose that \(\varphi\in C^{2+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) is such that \(\mathfrak{h}_{0}(\varphi)=0\), \(\mathfrak{h}_{-g,2}(D\varphi)=0\), \(\mathfrak{h}_{2}(D^{2}\varphi)=0\) and \(\|S(k)D\varphi\|_{\sup}=O(e^{-\rho r(0,k)})c(D\varphi)\). 
Then \(\mathfrak{h}_{0}^{*}(K(\varphi))=\mathfrak{h}_{0}(\varphi)=0\) with \(K(\varphi)=\varphi\), so \(\mathfrak{h}_{0}^{*}(\varphi)=0\). In view of (4.32) and (4.31), this gives \[\|S(k)(\varphi)\|_{\sup}\leq e^{(-\lambda_{1}-\rho+\tau(2\lambda_{1}+\rho+2))r(0,k)}O\Big{(}\sum_{l>k}e^{(-\rho+\tau(2\lambda_{1}+\rho+3))r(k,l)}\Big{)}c(D \varphi).\] As \[\sum_{l>k}e^{(-\rho+\tau(2\lambda_{1}+\rho+3))r(k,l)}\leq\sum_{l\geq 1}e^{(-\rho+ \tau(2\lambda_{1}+\rho+3))l}<+\infty,\] this gives (4.29). _Remark 4.10_.: Using Remark 4.5 and 4.8, we obtain that \(\mathfrak{h}_{0}(\varphi)=0\), \(\mathfrak{h}_{-g,2}(D\varphi)=0\) and \(\mathfrak{h}_{2}(D^{2}\varphi)=0\) imply \(\mathfrak{h}_{-j,i}(\varphi)=0\) for any pair \(i,j\) and \(\mathfrak{h}_{l}(\varphi)=0\) for any \(2\leq l\leq g+1\). By Remark 4.5, we also have \(\mathfrak{h}_{0}(h)=h\) for every \(h\in\Gamma\), in particular \(\mathfrak{h}_{0}\circ\mathfrak{h}_{0}=\mathfrak{h}_{0}\). Finally, we prove a fast decay of special Birkhoff sums for \(\varphi\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{ A}}I_{\alpha})\) under some vanishing conditions for derivatives of previously defined correction operators. This result is a key step in the construction of invariant distributions \(\mathfrak{f}_{\bar{t}}\) and the proof of the spectral theorem. **Theorem 4.11**.: _Assume that \(T\) satisfies FFDC. Let \(0\leq a<1\), \(2\leq i\leq g+1\) and \(1\leq j\leq g\) with \(a\lambda_{1}<\lambda_{i-1}\) and \(\max\{a\lambda_{1},\lambda_{i}\}<\lambda_{1}-\lambda_{j+1}\). Let \(n\geq 1\). Suppose that \(\varphi\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{ A}}I_{\alpha})\) is such that \(\mathfrak{h}_{-j,i}(D^{n-1}\varphi)=0\), \(\mathfrak{h}_{i}(D^{n}\varphi)=0\) and \(\mathfrak{h}_{0}(D^{l}\varphi)=0\) for all \(0\leq l<n-1\). Then for every small enough \(\tau>0\), we have_ \[\|S(k)\varphi\|_{\mathrm{sup}}\leq O(e^{(-n\lambda_{1}+\max\{\lambda_{i}, \lambda_{1}a,\lambda_{1}-\lambda_{j}\}+\tau)r(0,k)})\|\varphi\|_{C^{n+\mathrm{ P}_{\mathrm{a}}}}. \tag{4.33}\] Proof.: First we show that \(\mathfrak{h}_{-g,2}(D^{l+1}\varphi)=0\), \(\mathfrak{h}_{2}(D^{l+2}\varphi)=0\) for all \(0\leq l<n-1\). As \(\mathfrak{h}_{-j,i}(D^{n-1}\varphi)=0\) and \(\mathfrak{h}_{i}(D^{n}\varphi)=0\), by Remark 4.8 applied to \(D^{n-1}\varphi\), we have \(\mathfrak{h}_{-g,2}(D^{n-1}\varphi)=0\), \(\mathfrak{h}_{2}(D^{n}\varphi)=0\) and \(\mathfrak{h}_{2}(D^{n-1}\varphi)=0\). This gives our claim for \(l=n-2\). As \(\mathfrak{h}_{0}(D^{n-2}\varphi)=0\), by Remark 4.10 applied to \(D^{n-2}\varphi\), we obtain \(\mathfrak{h}_{-g,2}(D^{n-2}\varphi)=0\). Together with \(\mathfrak{h}_{2}(D^{n-1}\varphi)=0\) this gives our claim for \(l=n-3\). Repeating the same arguments for lower-order derivatives and using induction, we get our claim for every \(0\leq l<n-1\). The proof of (4.33) is also done by induction on \(n\). The base case \(n=1\) follows directly from Corollary 4.7. Assume that the induction hypothesis (4.33) holds for a particular \(n\geq 1\). Suppose that \(\varphi\in C^{n+1+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in \mathcal{A}}I_{\alpha})\) is such that \(\mathfrak{h}_{-j,i}(D^{n}\varphi)=0\), \(\mathfrak{h}_{i}(D^{n+1}\varphi)=0\) and \(\mathfrak{h}_{0}(D^{l}\varphi)=0\) for all \(0\leq l<n\). 
By the induction hypothesis, applied to \(D\varphi\), for every small enough \(\tau>0\), we have \[\|S(k)D\varphi\|_{\mathrm{sup}}\leq O(e^{(-n\lambda_{1}+\max\{\lambda_{i},\lambda_{1}a,\lambda_{1}-\lambda_{j}\}+\tau)r(0,k)})\|D\varphi\|_{C^{n+\mathrm{P}_{\mathrm{a}}}}.\] By assumption and the first part of the proof, \(\mathfrak{h}_{0}(\varphi)=0\), \(\mathfrak{h}_{-g,2}(D\varphi)=0\), \(\mathfrak{h}_{2}(D^{2}\varphi)=0\). In view of Theorem 4.9 applied to \(\rho=n\lambda_{1}-\max\{\lambda_{i},\lambda_{1}a,\lambda_{1}-\lambda_{j}\}-\tau\), we get \[\|S(k)\varphi\|_{\mathrm{sup}}\leq O(e^{(-(n+1)\lambda_{1}+\max\{\lambda_{i},\lambda_{1}a,\lambda_{1}-\lambda_{j}\}+2(n+1)(\lambda_{1}+1)\tau)r(0,k)})\|\varphi\|_{C^{n+1+\mathrm{P}_{\mathrm{a}}}}.\]

## 5. Spectrum of the functional KZ-cocycles

The special Birkhoff sums cocycle \(S(k)\) is an infinite-dimensional extension of the KZ-cocycle. In this section we compute the Lyapunov exponents of the cocycle \(S(k)\) on \(C^{n+\mathrm{P}_{\mathrm{a}}}\). We construct a finite set of piecewise polynomial functions that form the basis for the spectral Theorem 5.6. These piecewise polynomials are obtained by applying the correction operators constructed in the previous section, and their Lyapunov exponents correspond to the Lyapunov exponents of the standard KZ-cocycle.

### Lyapunov exponents for piecewise polynomials

For every \(l\geq 0\) denote by \(\mathbb{R}_{l}[x]\) the linear space of polynomials of degree not greater than \(l\). Since every linear operator defined on a finite dimensional linear space is bounded, for every \(l\geq 0\) there exists a constant \(c_{l}>0\) such that for every \(f\in\mathbb{R}_{l}[x]\) we have \(c_{l}\|D^{l}f\|_{C^{0}([0,1])}\leq\|f\|_{L^{1}([0,1])}\). Therefore, for every interval \(I=[a,b]\subset\mathbb{R}\) we obtain \[\frac{\|f\|_{L^{1}(I)}}{|I|}=\|f(a+|I|(\,\cdot\,))\|_{L^{1}([0,1])}\geq c_{l}\|\frac{d^{l}}{dx^{l}}f(a+|I|x)\|_{C^{0}([0,1])}=c_{l}|I|^{l}\|D^{l}f\|_{C^{0}(I)}.\] For every \(l\geq 0\) denote by \(\Gamma_{l}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) the space of maps \(f:I\to\mathbb{R}\) such that for every \(\alpha\in\mathcal{A}\) the restriction of \(f\) to \(I_{\alpha}\) belongs to \(\mathbb{R}_{l}[x]\). Then \(\Gamma_{0}=\Gamma\) and for every \(f\in\Gamma_{l}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) we have \(D^{l}f\in\Gamma\) and \[\frac{1}{|I|}\left\|f\right\|_{L^{1}(I)}\geq c_{l}\Big{(}\min_{\alpha\in\mathcal{A}}|I_{\alpha}|\Big{)}^{l}\|D^{l}f\|. \tag{5.1}\] Let \(h_{1},\ldots,h_{g},c_{1},\ldots,c_{\gamma-1},h_{-g},\ldots,h_{-1}\) be a basis of \(\Gamma\) described in Section 3.1. Then \[\lim_{k\to\infty}\frac{\log\|Q(k)h_{i}\|}{k}=\lambda_{i}\text{ for }1\leq|i|\leq g,\ \lim_{k\to\infty}\frac{\log\|Q(k)c_{s}\|}{k}=0\text{ for }1\leq s<\gamma. \tag{5.2}\] For every \(2\leq i\leq g+1\) choose \(1\leq j_{i}\leq g\) such that \(\lambda_{1}-\lambda_{j_{i}}\leq\lambda_{i}<\lambda_{1}-\lambda_{j_{i}+1}\) and for every \(1\leq j\leq g\) choose \(2\leq i_{j}\leq g+1\) such that \(\lambda_{i_{j}}\leq\lambda_{1}-\lambda_{j}<\lambda_{i_{j}-1}\).
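As a quick sanity check on the constant \(c_{l}\) in (5.1) (this example is ours and not part of the original argument), take \(l=1\): for \(f(x)=ax+b\in\mathbb{R}_{1}[x]\), the \(L^{1}\)-norm over \([0,1]\) is minimized in \(b\) at \(b=-a/2\), so \[\|f\|_{L^{1}([0,1])}\geq\int_{0}^{1}|a|\,\Big{|}x-\tfrac{1}{2}\Big{|}\,dx=\frac{|a|}{4}=\frac{1}{4}\|Df\|_{C^{0}([0,1])},\] and one may take \(c_{1}=\tfrac{1}{4}\).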
_Definition 6_.: For every \(l\geq 0\) let \(h_{i,l}\) for \(1\leq i\leq g\), \(c_{s,l}\) for \(1\leq s<\gamma\), and \(h_{-j,l}\) for \(1\leq j\leq g\) be elements of \(\Gamma_{l}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) defined inductively as follows: \[h_{i,0}=h_{i},\ h_{i,1}=\widetilde{h_{i}}-\mathfrak{h}_{-j_{i},i}(\widetilde{h_{i}}),\ h_{i,l+1}=\widetilde{h_{i,l}}-\mathfrak{h}_{0}(\widetilde{h_{i,l}})\text{ for }l\geq 1\text{ if }2\leq i\leq g,\] \[h_{1,0}=h_{1},\ h_{1,1}=\widetilde{h_{1}}-\mathfrak{h}_{g+1}(\widetilde{h_{1}}),\ h_{1,2}=\widetilde{h_{1,1}}-\mathfrak{h}_{-1,g+1}(\widetilde{h_{1,1}}),\] \[h_{1,l+1}=\widetilde{h_{1,l}}-\mathfrak{h}_{0}(\widetilde{h_{1,l}})\text{ for }l\geq 2,\] \[c_{s,0}=c_{s},\ c_{s,1}=\widetilde{c_{s}}-\mathfrak{h}_{-1,g+1}(\widetilde{c_{s}}),\ c_{s,l+1}=\widetilde{c_{s,l}}-\mathfrak{h}_{0}(\widetilde{c_{s,l}})\text{ for }l\geq 1,\] \[h_{-j,0}=h_{-j},\ h_{-j,l+1}=\widetilde{h_{-j,l}}-\mathfrak{h}_{0}(\widetilde{h_{-j,l}})\text{ for }l\geq 0.\]

Since \(\mathfrak{h}_{0}\circ\mathfrak{h}_{0}=\mathfrak{h}_{0}\), \(\mathfrak{h}_{g+1}\circ\mathfrak{h}_{g+1}=\mathfrak{h}_{g+1}\) and \(\mathfrak{h}_{-1,g+1}\circ\mathfrak{h}_{-1,g+1}=\mathfrak{h}_{-1,g+1}\), we obtain \[D^{n}h_{i,l}=h_{i,l-n},\ D^{n}c_{s,l}=c_{s,l-n},\ D^{n}h_{-j,l}=h_{-j,l-n}\text{ if }0\leq n\leq l, \tag{5.3}\] \[\mathfrak{h}_{-j_{i},i}(h_{i,1})=0,\ \mathfrak{h}_{0}(h_{i,l})=0\text{ for }l\geq 2\text{ if }2\leq i\leq g, \tag{5.4}\] \[\mathfrak{h}_{g+1}(h_{1,1})=0,\ \mathfrak{h}_{-1,g+1}(h_{1,2})=0,\ \mathfrak{h}_{0}(h_{1,l})=0\text{ for }l\geq 3, \tag{5.5}\] \[\mathfrak{h}_{-1,g+1}(c_{s,1})=0,\ \mathfrak{h}_{0}(c_{s,l})=0\text{ for }l\geq 2, \tag{5.6}\] \[\mathfrak{h}_{0}(h_{-j,l})=0\text{ for }l\geq 1. \tag{5.7}\]

In view of (5.3), \(h_{i,l}\) for \(1\leq|i|\leq g\), \(0\leq l\leq n\) together with \(c_{s,l}\) for \(1\leq s<\gamma\), \(0\leq l\leq n\) form a basis of the space \(\Gamma_{n}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\). Hence every \(h\in\Gamma_{n}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) has a unique decomposition \[h=\sum_{0\leq l\leq n}\Big{(}\sum_{1\leq|i|\leq g}d(h,h_{i,l})h_{i,l}+\sum_{1\leq s<\gamma}d(h,c_{s,l})c_{s,l}\Big{)}.\] The Lyapunov exponents of \(S(k)\) for \(h_{i,l}\), \(c_{s,l}\) are computed from the inductive definitions together with Theorem 4.11; the lower bounds are obtained from the FFDC condition on \(T\).

**Proposition 5.1**.: _Assume that \(T\) satisfies FFDC. Then for every \(l\geq 0\),_ \[\begin{split}\lim_{k\to\infty}\frac{\log\left\|S(k)h_{i,l}\right\|_{\sup}}{k}&=\lim_{k\to\infty}\frac{\log(\left\|S(k)h_{i,l}\right\|_{L^{1}(I^{(k)})}/|I^{(k)}|)}{k}=\lambda_{i}-l\lambda_{1}\\ \lim_{k\to\infty}\frac{\log\left\|S(k)c_{s,l}\right\|_{\sup}}{k}&=\lim_{k\to\infty}\frac{\log(\left\|S(k)c_{s,l}\right\|_{L^{1}(I^{(k)})}/|I^{(k)}|)}{k}=-l\lambda_{1}\end{split} \tag{5.8}\] _for \(i\in\pm\{1,\ldots,g\}\) and for \(1\leq s<\gamma\). Moreover, for every \(h\in\Gamma_{n}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\),_ \[\begin{split}\lim_{k\to\infty}\frac{\log\left\|S(k)h\right\|_{\sup}}{k}=\max\big{(}&\{\lambda_{i}-l\lambda_{1}:0\leq l\leq n,1\leq|i|\leq g,d(h,h_{i,l})\neq 0\}\\ &\cup\{-l\lambda_{1}:0\leq l\leq n,1\leq s<\gamma,d(h,c_{s,l})\neq 0\}\big{)}.\end{split} \tag{5.9}\]

Proof.: If \(l=0\) then (5.8) follows directly from (5.2). Suppose that \(\varphi=h_{-j,l}\) for some \(l\geq 1\).
Then \(\varphi\in C^{l+1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) with \(a=0\), \(\mathfrak{h}_{-j,i_{j}}(D^{l}\varphi)=0\), \(\mathfrak{h}_{i_{j}}(D^{l+1}\varphi)=0\) and \(\mathfrak{h}_{0}(D^{p}\varphi)=0\) for all \(0\leq p<l\). Indeed, as \(D^{l}\varphi=h_{-j}\in E_{-j}\), by Remark 4.8, we have \(\mathfrak{h}_{-j}(D^{l}\varphi)=\mathfrak{h}_{-j}(h_{-j})=0\) and \(D^{l+1}\varphi=Dh_{-j}=0\). In view of Theorem 4.11, this gives \[\limsup_{k\to\infty}\frac{\log\left\|S(k)\varphi\right\|_{\sup}}{k}\leq-(l+1)\lambda_{1}+\max\{\lambda_{i_{j}},\lambda_{1}a,\lambda_{1}-\lambda_{j}\}=-l\lambda_{1}-\lambda_{j}.\]

Suppose that \(\varphi=h_{i,l}\) for some \(2\leq i\leq g\) and \(l\geq 1\). Then \(\varphi\in C^{l+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) with \(a=0\), \(\mathfrak{h}_{-j_{i},i}(D^{l-1}\varphi)=0\), \(\mathfrak{h}_{i}(D^{l}\varphi)=0\) and \(\mathfrak{h}_{0}(D^{p}\varphi)=0\) for all \(0\leq p<l-1\). Indeed, as \(D^{l}\varphi=h_{i}\in E_{i}\), by Remark 4.3, we have \(\mathfrak{h}_{i}(D^{l}\varphi)=0\). Moreover, by definition, \(\mathfrak{h}_{-j_{i},i}(D^{l-1}h_{i,l})=0\). In view of Theorem 4.11, this gives \[\limsup_{k\to\infty}\frac{\log\left\|S(k)\varphi\right\|_{\sup}}{k}\leq-l\lambda_{1}+\max\{\lambda_{i},\lambda_{1}a,\lambda_{1}-\lambda_{j_{i}}\}=-l\lambda_{1}+\lambda_{i}.\]

Suppose that \(\varphi=h_{1,1}\). Then \(\varphi\in C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) with \(a=0\) and \(\mathfrak{h}_{g+1}(\varphi)=\mathfrak{h}_{g+1}(h_{1,1})=0\). In view of Theorem 4.2 and Proposition 3.5, for every \(\tau>0\) small enough, \(\|\mathcal{M}^{(k)}(S(k)\varphi)\|=O(e^{\tau k})\). As \(\varphi=h_{1,1}\) is of bounded variation, we also have \(\mathrm{Var}(S(k)\varphi)\leq\mathrm{Var}(\varphi)\). Since \(\|S(k)\varphi\|_{\sup}\leq\|\mathcal{M}^{(k)}(S(k)\varphi)\|+\mathrm{Var}(S(k)\varphi)\), this gives \[\limsup_{k\to\infty}\frac{\log\|S(k)\varphi\|_{\sup}}{k}\leq 0=-\lambda_{1}+\lambda_{1}.\]

Suppose that \(\varphi=h_{1,l}\) for some \(l\geq 2\). Then \(\varphi\in C^{l-1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) with \(a=0\), \(\mathfrak{h}_{-1,g+1}(D^{l-2}\varphi)=0\), \(\mathfrak{h}_{g+1}(D^{l-1}\varphi)=0\) and \(\mathfrak{h}_{0}(D^{p}\varphi)=0\) for all \(0\leq p<l-2\). In view of Theorem 4.11, this gives \[\limsup_{k\to\infty}\frac{\log\left\|S(k)\varphi\right\|_{\sup}}{k}\leq-(l-1)\lambda_{1}+\max\{\lambda_{g+1},\lambda_{1}a,\lambda_{1}-\lambda_{1}\}=-l\lambda_{1}+\lambda_{1}.\]

Suppose that \(\varphi=c_{s,l}\) for some \(l\geq 1\). Then \(\varphi\in C^{l+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) with \(a=0\), \(\mathfrak{h}_{-1,g+1}(D^{l-1}\varphi)=0\), \(\mathfrak{h}_{g+1}(D^{l}\varphi)=0\) and \(\mathfrak{h}_{0}(D^{p}\varphi)=0\) for all \(0\leq p<l-1\). Indeed, as \(D^{l}\varphi=c_{s}\in E_{g+1}\), by Remark 4.3, we have \(\mathfrak{h}_{g+1}(D^{l}\varphi)=0\). In view of Theorem 4.11, this gives \[\limsup_{k\to\infty}\frac{\log\left\|S(k)\varphi\right\|_{\sup}}{k}\leq-l\lambda_{1}+\max\{\lambda_{g+1},\lambda_{1}a,\lambda_{1}-\lambda_{1}\}=-l\lambda_{1}.\]

In summary, for every \(\varphi\in\Gamma_{l}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) of the form \(h_{i,l}\), \(c_{s,l}\) or \(h_{-j,l}\) we have \(D^{l}\varphi\in\Gamma\) and \[\limsup_{k\to\infty}\frac{\log\lVert S(k)\varphi\rVert_{\sup}}{k}\!\leq\!-l\lambda_{1}+\lambda(D^{l}\varphi)\text{ for }\lambda(D^{l}\varphi)\!=\!\lim_{k\to\infty}\frac{\log\bigl{\lVert}Q(k)D^{l}\varphi\bigr{\rVert}}{k}.
\tag{5.10}\] It follows that (5.10) holds also for any \(\varphi\in\Gamma_{l}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\). On the other hand, if additionally \(D^{l}\varphi\neq 0\) then, by (5.1), (3.12) and (3.14), \[\frac{1}{|I^{(k)}|}\left\lVert S(k)\varphi\right\rVert_{L^{1}(I^{(k)})}\geq c_ {l}\kappa^{l}|I^{(k)}|^{l}\left\lVert S(k)D^{l}\varphi\right\rVert\geq c_{l} \kappa^{l}C^{-l}e^{-(\lambda_{1}+\tau)lk}\left\lVert Q(k)D^{l}\varphi\right\rVert.\] It follows that \[\liminf_{k\to\infty}\frac{\log(\lVert S(k)\varphi\rVert_{L^{1}(I^{(k)})}/|I^ {(k)}|)}{k}\geq-l\lambda_{1}+\lambda(D^{l}\varphi),\] so \[\lim_{k\to\infty}\frac{\log\lVert S(k)\varphi\rVert_{\sup}}{k}=\lim_{k\to \infty}\frac{\log(\lVert S(k)\varphi\rVert_{L^{1}(I^{(k)})}/|I^{(k)}|)}{k}=-l \lambda_{1}+\lambda(D^{l}\varphi).\] This completes the proof. ### New functionals arising from correcting operators In this section, we develop the idea of constructing invariant distributions by decomposing correction operators with respect to the base elements, introduced in [14] and [11, SS9.1]. The original idea is to decompose the operator \(\mathfrak{h}_{i}:C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha} )\to U_{i}\) relative to its base elements \(h_{1},\dots,h_{i-1}\) of \(U_{i}\). We extend this idea by taking the decomposition of correction operators \(\mathfrak{h}_{-j,i}\) and \(\mathfrak{h}_{0}\). Using an inductive procedure, we get a new family of functionals defined on \(C^{n+\mathrm{P_{a}}}\), which in Section 5.3 are adjusted to define invariant distributions \(\mathfrak{f}_{\bar{\imath}}\). For every \(0\leq a<1\) let \(2\leq i_{a}\leq g+1\) and \(1\leq j_{a}\leq g\) such that \(\lambda_{i_{a}}\leq\lambda_{1}a<\lambda_{i_{a}-1}\) and \(\lambda_{1}-\lambda_{j_{a}}\leq\lambda_{1}a<\lambda_{1}-\lambda_{j_{a}+1}\). Let us consider the bounded operators \(d_{i,0}^{+}:C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha}) \to\mathbb{R}\) for \(1\leq i<i_{a}\) such that for every \(\varphi\in C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), \[\mathfrak{h}_{i_{a}}(\varphi)=\sum_{1\leq i<i_{a}}d_{i,0}^{+}(\varphi)h_{i}. \tag{5.11}\] Since \(\mathfrak{h}_{i_{a}}:C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{ \alpha})\to U_{i_{a}}\) is bounded and \(h_{1},\dots,h_{i_{a}-1}\) is a basis of \(U_{i_{a}}\), they are well defined and bounded. Next let us consider the bounded operators \(d_{i,1}^{+}:C^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to \mathbb{R}\) for \(1\leq i\leq g\), \(d_{s,1}^{0}:C^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to \mathbb{R}\) for \(1\leq s<\gamma\), \(d_{-j,1}^{-}:C^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to \mathbb{R}\) for \(j_{a}<j\leq g\), such that for every \(\varphi\in C^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), \[\begin{split}\mathfrak{h}_{-j_{a},i_{a}}\Bigl{(}& \varphi-\sum_{1\leq i<i_{a}}d_{i,0}^{+}(D\varphi)h_{i,1}\Bigr{)}\\ &=\sum_{1\leq i\leq g}d_{i,1}^{+}(\varphi)h_{i}+\sum_{1\leq s< \gamma}d_{s,1}^{0}(\varphi)c_{s}+\sum_{j_{a}<j\leq g}d_{-j,1}^{-}(\varphi)h_{- j}.\end{split} \tag{5.12}\] Since \(\mathfrak{h}_{-j_{a},i_{a}}:C^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}} I_{\alpha})\to U_{-j_{a}}\) is bounded and \(h_{1},\dots,h_{g},c_{1},\dots c_{s},h_{-g},\dots,\)\(h_{-j_{a}+1}\) is a basis of \(U_{-j_{a}}\), they are well defined and bounded. 
Next let us consider the bounded operators \(d_{i,2}^{+}:C^{2+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{R}\) for \(1\leq i\leq g\), \(d_{s,2}^{0}:C^{2+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{R}\) for \(1\leq s<\gamma\), \(d_{-j,2}^{-}:C^{2+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{R}\) for \(1\leq j\leq g\), such that for every \(\varphi\in C^{2+\mathrm{P}_{\mathrm{a}}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), \[\begin{split}\mathfrak{h}_{0}&\Big{(}\varphi-\sum_{1\leq i<i_{a}}d_{i,0}^{+}(D^{2}\varphi)h_{i,2}-\sum_{1\leq i\leq g}d_{i,1}^{+}(D\varphi)h_{i,1}-\sum_{1\leq s<\gamma}d_{s,1}^{0}(D\varphi)c_{s,1}\Big{)}\\ &=\sum_{1\leq i\leq g}d_{i,2}^{+}(\varphi)h_{i}+\sum_{1\leq s<\gamma}d_{s,2}^{0}(\varphi)c_{s}+\sum_{1\leq j\leq g}d_{-j,2}^{-}(\varphi)h_{-j}.\end{split} \tag{5.13}\]

For any \(l\geq 3\) let us consider the bounded operators \(d_{i,l}^{+}:C^{l+\mathrm{P}_{\mathrm{a}}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{R}\) for \(1\leq i\leq g\), \(d_{s,l}^{0}:C^{l+\mathrm{P}_{\mathrm{a}}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{R}\) for \(1\leq s<\gamma\), \(d_{-j,l}^{-}:C^{l+\mathrm{P}_{\mathrm{a}}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{R}\) for \(1\leq j\leq g\) such that for every \(\varphi\in C^{l+\mathrm{P}_{\mathrm{a}}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), \[\begin{split}\mathfrak{h}_{0}&\Big{(}\varphi-\sum_{1\leq i\leq g}d_{i,l-2}^{+}(D^{2}\varphi)h_{i,2}-\sum_{1\leq i\leq g}d_{i,l-1}^{+}(D\varphi)h_{i,1}-\sum_{1\leq s<\gamma}d_{s,l-1}^{0}(D\varphi)c_{s,1}\Big{)}\\ &=\sum_{1\leq i\leq g}d_{i,l}^{+}(\varphi)h_{i}+\sum_{1\leq s<\gamma}d_{s,l}^{0}(\varphi)c_{s}+\sum_{1\leq j\leq g}d_{-j,l}^{-}(\varphi)h_{-j}.\end{split} \tag{5.14}\]

The following lemma is necessary for proving lower bounds on the growth of the cocycle \(S(k)\) in the \(L^{1}\)-norm.

**Lemma 5.2**.: _Assume that \(T\) satisfies FFDC. Let \(0\leq a<1\) and \(n\geq 0\). Then for every \(\varphi\in C^{n+\mathrm{P}_{\mathrm{a}}G}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) with \(\sum_{\alpha\in\mathcal{A}}(|C^{a,+}_{\alpha,n}(\varphi)|+|C^{a,-}_{\alpha,n}(\varphi)|)>0\), we have_ \[\liminf_{k\to\infty}\frac{\log(\left\|S(k)(\varphi)\right\|_{L^{1}(I^{(k)})}/\left|I^{(k)}\right|)}{k}\geq(a-n)\lambda_{1}. \tag{5.15}\]

Proof.: By the proof of Theorem 1.1 (see Part V) in [11], if \(C^{\pm}_{\alpha}(D^{n}\varphi)\neq 0\) then there exists \(\varepsilon>0\) and a sequence of intervals \(\widehat{J}^{(k)}\subset I^{(k)}_{\alpha}\), \(k\geq 1\) such that \[|\widehat{J}^{(k)}|\geq\frac{\varepsilon|I^{(k)}_{\alpha}|}{4}\text{ and }\ |(S(k)D^{n}\varphi)(x)|\geq\frac{|C^{\pm}_{\alpha}|}{|I^{(k)}_{\alpha}|^{a}}\text{ for all }x\in\widehat{J}^{(k)}\text{ and }k\geq 1. \tag{5.16}\] An elementary argument shows that if \(f:I\to\mathbb{R}\) is a \(C^{1}\) function such that \(|Df(x)|\geq a>0\) for all \(x\in I\), then there exists a subinterval \(J\subset I\) such that \(|J|\geq|I|/4\) and \(|f(x)|\geq a|I|/4\) for all \(x\in J\) (see [11, Lemma 4.7]). It follows that for every \(n\geq 1\) if \(f:I\to\mathbb{R}\) is a \(C^{n}\) function such that \(|D^{n}f(x)|\geq a>0\) for all \(x\in I\), then there exists a subinterval \(J\subset I\) such that \(|J|\geq|I|/4^{n}\) and \(|f(x)|\geq a|I|^{n}/4^{n(n+1)/2}\) for all \(x\in J\).
In view of (5.16), it follows that there exists a sequence of intervals \(J^{(k)}\subset\widehat{J}^{(k)}\subset I^{(k)}_{\alpha}\), \(k\geq 1\) such that \[|J^{(k)}|\geq\frac{\varepsilon|I^{(k)}_{\alpha}|}{4^{n+1}}\text{ and }\ |(S(k) \varphi)(x)|\geq\varepsilon^{n}\frac{|C^{\pm}_{\alpha}|}{|I^{(k)}_{\alpha}|^{a} }\frac{|I^{(k)}_{\alpha}|^{n}}{4^{n(n+3)/2}}\text{ for all }x\in J^{(k)}\text{ and }k\geq 1.\] Therefore, \[\frac{1}{|I^{(k)}|}\left\|S(k)(\varphi)\right\|_{L^{1}(I^{(k)})}\geq\varepsilon ^{n+1}\frac{1}{|I^{(k)}|}\frac{|C^{\pm}_{\alpha}|}{|I^{(k)}_{\alpha}|^{a}} \frac{|I^{(k)}_{\alpha}|^{n+1}}{4^{(n+1)^{2}}}.\] By (3.12) and (3.14), \[\frac{|I^{(k)}_{\alpha}|^{n+1-a}}{|I^{(k)}|}\geq\kappa^{n+1-a}|I^{(k)}|^{n-a} \geq\kappa^{n+1-a}C^{a-n}e^{-(\lambda_{1}+\tau)(n-a)k}.\] This gives (5.15). In the following theorem, we prove the first version of the spectral result for the cocycle \(S(k)\) on \(C^{n+\mathrm{P_{a}G}}\). Any map \(\varphi\in C^{n+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) is decomposed with respect to the base elements \(h_{i,l}\), \(c_{s,l}\), \(h_{-j,l}\) with weights determined by the derivatives of the functionals defined at the beginning of the subsection. The main tool of the proof is again Theorem 4.11. **Theorem 5.3**.: _Assume that \(T\) satisfies FFDC. For any \(0\leq a<1\) and \(n\geq 1\) there exists a bounded operator \(\mathfrak{r}_{a,n}:C^{n+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{ \alpha})\to C^{n+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) such that for every \(\varphi\in C^{n+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\),_ \[\varphi=\mathfrak{r}_{a,n}(\varphi)+\sum_{1\leq i<i_{a}}d_{i,0}^{+ }(D^{n}\varphi)h_{i,n}\] \[+\sum_{1\leq i\leq g}d_{i,1}^{+}(D^{n-1}\varphi)h_{i,n-1}+\sum_{ 1\leq s<\gamma}d_{s,1}^{0}(D^{n-1}\varphi)c_{s,n-1}+\sum_{j_{a}<j\leq g}d_{-j, 1}^{-}(D^{n-1}\varphi)h_{-j,n-1}\] \[+\sum_{2\leq l\leq n}\Bigl{(}\sum_{1\leq i\leq g}d_{i,l}^{+}(D^{ n-l}\varphi)h_{i,n-l}+\sum_{1\leq s<\gamma}d_{s,l}^{0}(D^{n-l}\varphi)c_{s,n-l}+ \sum_{1\leq j\leq g}d_{-j,l}^{-}(D^{n-l}\varphi)h_{-j,n-l}\Bigr{)}\] _and for any \(\tau>0\),_ \[\left\|S(k)\mathfrak{r}_{a,n}(\varphi)\right\|_{\sup}\leq O(e^{\lambda_{1}(a- n+\tau)k})\|\mathfrak{r}_{a,n}(\varphi)\|_{C^{n+\mathrm{P_{a}}}}. \tag{5.17}\] _If additionally \(\sum_{\alpha\in\mathcal{A}}(|C^{a,+}_{\alpha,n}(\varphi)|+|C^{a,-}_{\alpha,n}( \varphi)|)>0\) then_ \[\lim_{k\to\infty}\frac{\log\left\|S(k)\mathfrak{r}_{a,n}(\varphi)\right\|_{ \sup}}{k}=\lim_{k\to\infty}\frac{\log\frac{\left\|S(k)\mathfrak{r}_{a,n}( \varphi)\right\|_{L^{1}(l^{k})}}{|I^{(k)}|}}{k}=(a-n)\lambda_{1}. \tag{5.18}\] Proof.: In view of (5.3), for every \(0\leq m\leq n-1\), \[D^{m}\mathfrak{r}_{a,n}(\varphi)=D^{m}\varphi-\sum_{1\leq i<i_{a }}d_{i,0}^{+}(D^{n}\varphi)h_{i,n-m}\] \[-\sum_{1\leq i\leq g}d_{i,1}^{+}(D^{n-1}\varphi)h_{i,n-1-m}-\sum_ {1\leq s<\gamma}d_{s,1}^{0}(D^{n-1}\varphi)c_{s,n-1-m}-\sum_{j_{a}<j\leq g}d_{ -j,1}^{-}(D^{n-1}\varphi)h_{-j,n-1-m}\] \[-\sum_{2\leq l\leq n-m}\Bigl{(}\sum_{1\leq i\leq g}d_{i,l}^{+}(D^ {n-l}\varphi)h_{i,n-l-m}-\sum_{1\leq s<\gamma}d_{s,l}^{0}(D^{n-l}\varphi)c_{s, n-l-m}-\sum_{1\leq j\leq g}d_{-j,l}^{-}(D^{n-l}\varphi)h_{-j,n-l-m}\Bigr{)}.\] Suppose that \(0\leq m\leq n-3\). 
Since \(\mathfrak{h}_{0}(h_{i,l})=0\) for \(l\geq 3\) (see (5.5)), \(\mathfrak{h}_{0}(c_{s,l})=0\) for \(l\geq 2\) (see (5.6)), \(\mathfrak{h}_{0}(h_{-j,l})=0\) for \(l\geq 1\) (see (5.7)) and \(\mathfrak{h}_{0}(h)=h\) for \(h\in\Gamma\) (see Remark 4.10), it follows that \[\mathfrak{h}_{0}(D^{m}\mathfrak{r}_{a,n}(\varphi))=\mathfrak{h}_{ 0}\Bigl{(}D^{m}\varphi-\sum_{1\leq i\leq g}d_{i,n-m-2}^{+}(D^{m+2}\varphi)h_{ i,2}\] \[-\sum_{1\leq i\leq g}d_{i,n-m-1}^{+}(D^{m+1}\varphi)h_{i,1}-\sum_ {1\leq s<\gamma}d_{s,n-m-1}^{0}(D^{m+1}\varphi)c_{s,1}\Bigr{)}\] \[-\sum_{1\leq i\leq g}d_{i,n-m}^{+}(D^{m}\varphi)h_{i}-\sum_{1\leq s <\gamma}d_{s,n-m}^{0}(D^{m}\varphi)c_{s}-\sum_{1\leq j\leq g}d_{-j,n-m}^{-}(D^ {m}\varphi)h_{-j}.\] In view of (5.14), this gives \(\mathfrak{h}_{0}(D^{m}\mathfrak{r}_{a,n}(\varphi))=0\). The same arguments show that \[\mathfrak{h}_{0}(D^{n-2}\mathfrak{r}_{a,n}(\varphi))=\mathfrak{h}_{0 }\Big{(}D^{n-2}\varphi-\sum_{1\leq i<i_{a}}d_{i,0}^{+}(D^{n}\varphi)h_{i,2}\\ -\sum_{1\leq i\leq g}d_{i,1}^{+}(D^{n-1}\varphi)h_{i,1}-\sum_{1 \leq s<\gamma}d_{s,1}^{0}(D^{n-1}\varphi)c_{s,1}\Big{)}\\ -\sum_{1\leq i\leq g}d_{i,2}^{+}(D^{n-2}\varphi)h_{i}-\sum_{1 \leq s<\gamma}d_{s,2}^{0}(D^{n-2}\varphi)c_{s}-\sum_{1\leq j\leq g}d_{-j,2}^{-} (D^{n-2}\varphi)h_{-j}.\] In view of (5.13), this gives \(\mathfrak{h}_{0}(D^{n-2}\mathfrak{r}_{a,n}(\varphi))=0\). Next we pass to the \(n-1\)-th derivative, \[D^{n-1}\mathfrak{r}_{a,n}(\varphi)=D^{n-1}\varphi-\sum_{1\leq i <i_{a}}d_{i,0}^{+}(D^{n}\varphi)h_{i,1}\\ -\sum_{1\leq i\leq g}d_{i,1}^{+}(D^{n-1}\varphi)h_{i}-\sum_{1 \leq s<\gamma}d_{s,1}^{0}(D^{n-1}\varphi)c_{s}-\sum_{j_{a}<j\leq g}d_{-j,1}^{- }(D^{n-1}\varphi)h_{-j}.\] In view of (5.12), this gives \(\mathfrak{h}_{-j_{a},i_{a}}(D^{n-1}\mathfrak{r}_{a,n}(\varphi))=0\). Finally we pass to the \(n\)-th derivative, \[D^{n}\mathfrak{r}_{a,n}(\varphi)=D^{n}\varphi-\sum_{1\leq i<i_{a}}d_{i,0}^{+} (D^{n}\varphi)h_{i}.\] In view of (5.11), this gives \(\mathfrak{h}_{i_{a}}(D^{n}\mathfrak{r}_{a,n}(\varphi))=0\). Since \(\max\{\lambda_{i_{a}},a\lambda_{1},\lambda_{1}-\lambda_{j_{a}}\}=a\lambda_{1}\), by Theorem 4.11, for any \(\tau>0\), \[\|S(k)\mathfrak{r}_{a,n}(\varphi)\|_{\sup}=O(e^{\lambda_{1}(a-n+\tau)k})\| \mathfrak{r}_{a,n}(\varphi)\|_{C^{n+\mathrm{P}_{a}}}.\] The final lower bound in (5.18) follows directly from Lemma 5.2. _Remark 5.4_.: Theorem 5.3 remains true also in the case when \(n=0\), except that in formulas (5.17) and (5.18) we must replace the sup norm by the \(L^{1}\) norm. Here, \(\mathfrak{r}_{a,0}:C^{0+\mathrm{P}_{a}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A }}I_{\alpha})\to C^{0+\mathrm{P}_{a}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A }}I_{\alpha})\) is given by \[\mathfrak{r}_{a,0}(\varphi)=\varphi-\sum_{1\leq i<i_{a}}d_{i,0}^{+}(\varphi)h _{i},\] so \(\mathfrak{h}_{i_{a}}(\mathfrak{r}_{a,0}(\varphi))=0\) for every \(\varphi\in C^{0+\mathrm{P}_{a}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{ \alpha})\). By Theorem 4.2 and Proposition 3.5, for any \(\tau>0\), \[\|\mathcal{M}^{(k)}(S(k)(\mathfrak{r}_{a,0}(\varphi)))\|=O(e^{(a\lambda_{1}+ \tau)k})\|\mathfrak{r}_{a,0}(\varphi)\|_{C^{0+\mathrm{P}_{a}}}.\] In view of (4.10) and (4.3), it follows that \[\|S(k)(\mathfrak{r}_{a,0}(\varphi))\|_{L^{1}(I^{(k)})}/|I^{(k)}|=O(e^{(a \lambda_{1}+\tau)k})\|\mathfrak{r}_{a,0}(\varphi)\|_{C^{0+\mathrm{P}_{a}}}.\] The lower bound follows again directly from Lemma 5.2. 
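The effect of the correction operators in Theorem 5.3 — removing the components of \(\varphi\) along the slowly decaying directions so that the remainder \(\mathfrak{r}_{a,n}(\varphi)\) enjoys the fastest available decay — has a simple finite-dimensional analogue. The following sketch is ours and purely illustrative (a constant \(2\times 2\) matrix stands in for the cocycle \(S(k)\); the exponents, matrices and variable names are made up): subtracting the component of a vector along the expanding eigendirection changes its growth rate from the top exponent to the second one.

```python
import numpy as np

# Toy analogue (not from the paper) of the correction mechanism: for a constant
# linear cocycle A^k, a generic vector grows at the top exponent lam1, while the
# vector obtained by removing its component along the expanding eigendirection
# grows at the smaller exponent lam2.
lam1, lam2 = 0.7, -0.4                      # stand-ins for Lyapunov exponents
P = np.array([[1.0, 1.0],
              [0.3, -0.8]])                 # columns: expanding / contracting directions
A = P @ np.diag([np.exp(lam1), np.exp(lam2)]) @ np.linalg.inv(P)

v = np.array([1.0, 2.0])
coords = np.linalg.solve(P, v)              # coordinates of v in the eigenbasis
v_corr = v - coords[0] * P[:, 0]            # subtract the expanding component

for k in (5, 10, 20, 40):
    Ak = np.linalg.matrix_power(A, k)
    print(k,
          np.log(np.linalg.norm(Ak @ v)) / k,       # tends to lam1
          np.log(np.linalg.norm(Ak @ v_corr)) / k)  # tends to lam2
```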
### Invariant distributions on \(C^{n+\mathrm{P}_{a}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\)

For every \(0\leq a<1\) and \(n\geq 0\) denote by \(\mathscr{T}_{a,n}^{*}\) (\(\mathscr{T}_{a,n}\) resp.) the subset of triples \(\bar{t}\in\mathscr{T}\mathscr{F}^{*}\) (\(\mathscr{T}\mathscr{F}\) resp.) of the form \((l,+,i)\), \((l,0,s)\) or \((l,-,j)\) such that \(0\leq l\leq n\) with the additional restriction that

* if \(l=n\) then we deal only with \((n,+,i)\) for \(1\leq i<i_{a}\);
* if \(l=n-1\) then we deal only with \((n-1,+,i)\) for all \(1\leq i\leq g\), \((n-1,0,s)\) for all \(1\leq s<\gamma\) and \((n-1,-,j)\) for \(j_{a}<j\leq g\).

Recall that \(\mathscr{T}\mathscr{F}\) is the subset of triples in \(\mathscr{T}\mathscr{F}^{*}\) after removing all triples of the form \((l,-,1)\).

_Remark 5.5_.: By definition, \[\bar{t}\in\mathscr{T}^{*}_{a,n}\Longleftrightarrow\mathfrak{o}(\bar{t})\leq(n-\tfrac{\lambda_{i_{a}-1}}{\lambda_{1}})\vee(n-1+\tfrac{\lambda_{j_{a}+1}}{\lambda_{1}}).\] As \(\lambda_{i_{a}}\leq\lambda_{1}a<\lambda_{i_{a}-1}\) and \(\lambda_{j_{a}+1}<\lambda_{1}(1-a)\leq\lambda_{j_{a}}\), it follows that \[\bar{t}\in\mathscr{T}^{*}_{a,n}\Longleftrightarrow\mathfrak{o}(\bar{t})<n-a. \tag{5.19}\]

_Definition 7_.: For every \(\bar{t}\in\mathscr{T}^{*}_{a,n}\) let \(\mathfrak{f}_{\bar{t}}:C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{C}\) and \(h_{\bar{t}}\in\Gamma_{n}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) be defined as follows:

* \(\mathfrak{f}_{\bar{t}}=d^{+}_{i,n-l}\circ D^{l}\) and \(h_{\bar{t}}:=h_{i,l}\) if \(\bar{t}=(l,+,i)\);
* \(\mathfrak{f}_{\bar{t}}=d^{0}_{s,n-l}\circ D^{l}\) and \(h_{\bar{t}}:=c_{s,l}\) if \(\bar{t}=(l,0,s)\);
* \(\mathfrak{f}_{\bar{t}}=d^{-}_{-j,n-l}\circ D^{l}\) and \(h_{\bar{t}}:=h_{-j,l}\) if \(\bar{t}=(l,-,j)\).

**Theorem 5.6**.: _Assume that \(T\) satisfies FFDC. Then given \(0\leq a<1\) and \(n\geq 0\), every \(\varphi\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) is decomposed as follows:_ \[\varphi=\sum_{\bar{t}\in\mathscr{T}^{*}_{a,n}}\mathfrak{f}_{\bar{t}}(\varphi)h_{\bar{t}}+\mathfrak{r}_{a,n}(\varphi), \tag{5.20}\] _so that for any \(\tau>0\) and for all \(0\leq l<n\),_ \[\|S(k)(D^{l}\mathfrak{r}_{a,n}(\varphi))\|_{\sup}=O(e^{(-\lambda_{1}(n-l-a)+\tau)k})\|D^{l}\mathfrak{r}_{a,n}(\varphi)\|_{C^{n-l+\mathrm{P}_{\mathrm{a}}}}, \tag{5.21}\] \[\|S(k)(D^{n}\mathfrak{r}_{a,n}(\varphi))\|_{L^{1}(I^{(k)})}/|I^{(k)}|=O(e^{(\lambda_{1}a+\tau)k})\|D^{n}\mathfrak{r}_{a,n}(\varphi)\|_{C^{0+\mathrm{P}_{\mathrm{a}}}}\text{ and } \tag{5.22}\] \[\lim_{k\to\infty}\frac{1}{k}\log\left\|S(k)\sum_{\bar{t}\in\mathscr{T}^{*}_{a,n}}a_{\bar{t}}h_{\bar{t}}\right\|_{\sup}=-\lambda_{1}\min\{\mathfrak{o}(\bar{t}):\bar{t}\in\mathscr{T}^{*}_{a,n},a_{\bar{t}}\neq 0\}. \tag{5.23}\] _If additionally \(\sum_{\alpha\in\mathcal{A}}(|C^{a,+}_{\alpha,n}(\varphi)|+|C^{a,-}_{\alpha,n}(\varphi)|)>0\) then_ \[\lim_{k\to\infty}\frac{1}{k}\log\|S(k)(D^{l}\mathfrak{r}_{a,n}(\varphi))\|_{\sup}=-\lambda_{1}(n-l-a)\text{ for }0\leq l<n\text{ and } \tag{5.24}\] \[\lim_{k\to\infty}\frac{1}{k}\log\left(\|S(k)(D^{l}\mathfrak{r}_{a,n}(\varphi))\|_{L^{1}(I^{(k)})}/|I^{(k)}|\right)=-\lambda_{1}(n-l-a)\text{ for }0\leq l\leq n. \tag{5.25}\] _Moreover, for each \(\bar{t}\in\mathscr{T}_{a,n}\) the functional \(\mathfrak{f}_{\bar{t}}:C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{C}\) is invariant, i.e.
for every \(\varphi\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) such that \(\varphi=v\circ T-v\) for some \(v\in C^{r}(I)\) with \(\mathfrak{o}(\bar{t})<r\leq n-a\), we have \(\mathfrak{f}_{\bar{t}}(\varphi)=0\). Also, the functionals \(C^{a,\pm}_{\alpha,n}:C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{C}\) are invariant, i.e. if \(\varphi=v\circ T-v\) for some \(v\in C^{r}(I)\) with \(n-a<r\), then \(C^{a,\pm}_{\alpha,n}(\varphi)=0\) for every \(\alpha\in\mathcal{A}\)._

Proof.: All claims of the theorem, in addition to invariance, are derived directly from Proposition 5.1, Theorem 5.3 and Remark 5.4, so we focus only on invariance. Suppose that \(\varphi=v\circ T-v\) for some \(v\in C^{r}(I)\) with \(r\leq n-a\). Let \(r=m+b\) with an integer \(0\leq m<n\) and \(0<b\leq 1\). By (5.20), for every \(0\leq j\leq n\), \[D^{j}(\varphi-\mathfrak{r}_{a,n}(\varphi))=\sum_{\bar{t}\in\mathscr{T}^{*}_{a,n}}\mathfrak{f}_{\bar{t}}(\varphi)D^{j}h_{\bar{t}}. \tag{5.26}\] Then for every \(0\leq j\leq m\) we have \(D^{j}v\in C^{m-j+b}(I)\) and for every \(x\in I^{(k)}_{\alpha}\), \[|S(k)D^{j}\varphi(x)|=|D^{j}v(T^{Q_{\alpha}(k)}(x))-D^{j}v(x)|\leq\left\{\begin{array}{ll}\|D^{j}v\|_{C^{1}}|I^{(k)}|&\text{if }0\leq j<m\\ \|D^{m}v\|_{C^{b}}|I^{(k)}|^{b}&\text{if }j=m.\end{array}\right.\] This also gives \[\Big{|}\int_{I_{\alpha}^{(k)}}S(k)D^{m+1}\varphi(x)\,dx\Big{|}=|S(k)D^{m}\varphi(r_{\alpha}^{(k)})-S(k)D^{m}\varphi(l_{\alpha}^{(k)})|\leq 2\|v\|_{C^{m+b}}|I^{(k)}|^{b}.\] As \(|I^{(k)}|=O(e^{-\lambda_{1}k})\), \(|I_{\alpha}^{(k)}|^{-1}=O(|I^{(k)}|^{-1})\) and \(|I^{(k)}|^{-1}=O(e^{(\lambda_{1}+\tau)k})\) for every \(\tau>0\), we obtain \[\limsup_{k\to\infty}\frac{1}{k}\log\|S(k)D^{j}\varphi\|_{\sup}\leq-\lambda_{1}\text{ if }j<m;\] \[\limsup_{k\to\infty}\frac{1}{k}\log\|S(k)D^{m}\varphi\|_{\sup}\leq-b\lambda_{1};\] \[\limsup_{k\to\infty}\frac{1}{k}\log\|\mathcal{M}^{(k)}(S(k)D^{m+1}\varphi)\|\leq(1-b)\lambda_{1}.\] As \(m<n\), in view of (5.21) and (5.22), it follows that \[\limsup_{k\to\infty}\frac{1}{k}\log\|S(k)(D^{j}(\varphi-\mathfrak{r}_{a,n}(\varphi)))\|_{\sup}\leq-\lambda_{1}\text{ if }0\leq j<m; \tag{5.27}\] \[\limsup_{k\to\infty}\frac{1}{k}\log\|S(k)(D^{m}(\varphi-\mathfrak{r}_{a,n}(\varphi)))\|_{\sup}\leq-b\lambda_{1}; \tag{5.28}\] \[\limsup_{k\to\infty}\frac{1}{k}\log\|\mathcal{M}^{(k)}(S(k)(D^{m+1}(\varphi-\mathfrak{r}_{a,n}(\varphi))))\|\leq(1-b)\lambda_{1}. \tag{5.29}\] In view of (5.26), \(\widetilde{\varphi}=D^{m+1}(\varphi-\mathfrak{r}_{a,n}(\varphi))\in\Gamma_{n-m-1}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\). Therefore, by (4.6) and (4.2), \[\|S(k)\widetilde{\varphi}\|_{\sup}\leq\|\mathcal{M}^{(k)}(S(k)\widetilde{\varphi})\|+\operatorname{Var}(S(k)\widetilde{\varphi})\leq\|\mathcal{M}^{(k)}(S(k)\widetilde{\varphi})\|+\operatorname{Var}\widetilde{\varphi}.\] In view of (5.29), this gives \[\limsup_{k\to\infty}\frac{1}{k}\log\|S(k)(D^{m+1}(\varphi-\mathfrak{r}_{a,n}(\varphi)))\|_{\sup}\leq(1-b)\lambda_{1}.
\tag{5.30}\]

On the other hand, by (5.26) and (5.9), \[\lim_{k\to\infty}\frac{1}{k}\log\|S(k)(D^{j}(\varphi-\mathfrak{r}_{a,n}(\varphi)))\|_{\sup}=\lim_{k\to\infty}\frac{1}{k}\log\|S(k)\sum_{\bar{t}\in\mathscr{T}_{a,n}^{*}}\mathfrak{f}_{\bar{t}}(\varphi)D^{j}h_{\bar{t}}\|_{\sup}\] \[=\lambda_{1}\max\left\{-\mathfrak{o}(\bar{t})+j:\bar{t}\in\mathscr{T}_{a,n}^{*},\mathfrak{f}_{\bar{t}}(\varphi)\neq 0,D^{j}h_{\bar{t}}\neq 0\right\}.\] In view of (5.27), (5.28), (5.30), this yields \[\min\left\{\mathfrak{o}(\bar{t}):\bar{t}\in\mathscr{T}_{a,n}^{*},\mathfrak{f}_{\bar{t}}(\varphi)\neq 0,D^{l}h_{\bar{t}}\neq 0\right\}\geq l+1\text{ if }0\leq l<m, \tag{5.31}\] \[\min\left\{\mathfrak{o}(\bar{t}):\bar{t}\in\mathscr{T}_{a,n}^{*},\mathfrak{f}_{\bar{t}}(\varphi)\neq 0,D^{m}h_{\bar{t}}\neq 0\right\}\geq m+b, \tag{5.32}\] \[\min\left\{\mathfrak{o}(\bar{t}):\bar{t}\in\mathscr{T}_{a,n}^{*},\mathfrak{f}_{\bar{t}}(\varphi)\neq 0,D^{m+1}h_{\bar{t}}\neq 0\right\}\geq m+b. \tag{5.33}\]

Let \(\bar{t}\in\mathscr{T}_{a,n}\) be any triple such that \(\mathfrak{o}(\bar{t})<r=m+b\). By definition, \(\mathfrak{f}_{\bar{t}}\), \(h_{\bar{t}}\) and \(\mathfrak{o}(\bar{t})\) are of the form: \[\mathfrak{f}_{\bar{t}}=d_{i,n-l}^{+}\circ D^{l},\quad h_{\bar{t}}=h_{i,l}\text{ and }\mathfrak{o}(\bar{t})=l-\frac{\lambda_{i}}{\lambda_{1}}\text{ or}\] \[\mathfrak{f}_{\bar{t}}=d_{s,n-l}^{0}\circ D^{l},\quad h_{\bar{t}}=c_{s,l}\text{ and }\mathfrak{o}(\bar{t})=l\text{ or}\] \[\mathfrak{f}_{\bar{t}}=d_{-j,n-l}^{-}\circ D^{l},\quad h_{\bar{t}}=h_{-j,l}\text{ and }\mathfrak{o}(\bar{t})=l+\frac{\lambda_{j}}{\lambda_{1}}\text{ with }j\neq 1\] for \(0\leq l\leq m+1\). If \(0\leq l<m\), then \(D^{l}h_{\bar{t}}\neq 0\) and \(\mathfrak{o}(\bar{t})\leq l+\lambda_{2}/\lambda_{1}<l+1\). Then, by (5.31), \(\mathfrak{f}_{\bar{t}}(\varphi)=0\). If \(l=m\) or \(m+1\), then \(D^{l}h_{\bar{t}}\neq 0\) and \(\mathfrak{o}(\bar{t})<m+b\). Then, by (5.32) and (5.33), \(\mathfrak{f}_{\bar{t}}(\varphi)=0\) as well. This completes the proof of invariance for the functionals \(\mathfrak{f}_{\bar{t}}\), \(\bar{t}\in\mathscr{T}_{a,n}\).

Suppose that \(\varphi=v\circ T-v\) for some \(v\in C^{r}(I)\) with \(r>n-a\). Assume that \(0<a<1\). Then \(D^{n-1}\varphi=D^{n-1}v\circ T-D^{n-1}v\) with \(D^{n-1}v\in C^{1-a+\tau}(I)\), where \(0<\tau<(r-n+a)\wedge a\). Therefore, \(D^{n-1}\varphi\) is \((1-a+\tau)\)-Hölder on any interval \(I_{\alpha}\), \(\alpha\in\mathcal{A}\). Suppose, contrary to our claim, that \(C^{+}_{\alpha}(D^{n}\varphi)=C^{a,+}_{\alpha,n}(\varphi)\neq 0\).
Then there exists \(\varepsilon>0\) such that \[0<c:=|C^{+}_{\alpha}(D^{n}\varphi)|/2\leq|D^{n+1}\varphi(x)||x-l_{\alpha}|^{1 +a}\text{ for }x\in(l_{\alpha},l_{\alpha}+\varepsilon].\] Hence, for every \(x\in(l_{\alpha},l_{\alpha}+\varepsilon]\), \[\Big{|}\frac{c}{a(x-l_{\alpha})^{a}}-\frac{c}{a\varepsilon^{a}} \Big{|} =\int_{x}^{l_{\alpha}+\varepsilon}\frac{c}{(s-l_{\alpha})^{1+a}} ds\leq\Big{|}\int_{x}^{l_{\alpha}+\varepsilon}D^{n+1}\varphi(s)ds\Big{|}\] \[\leq|D^{n}\varphi(x)-D^{n}\varphi(l_{\alpha}+\varepsilon)|.\] It follows that there exists \(0<\delta<\varepsilon\) such that \[\frac{c}{2a(x-l_{\alpha})^{a}}\leq|D^{n}\varphi(x)|\text{ for }x\in(l_{\alpha},l_{ \alpha}+\delta].\] Hence, for every \(x,y\in(l_{\alpha},l_{\alpha}+\delta]\), \[\frac{c}{2a(1-a)} |(y-l_{\alpha})^{1-a}-(x-l_{\alpha})^{1-a}|=\int_{x}^{y}\frac{c}{2 a(s-l_{\alpha})^{a}}ds\leq\Big{|}\int_{x}^{y}D^{n}\varphi(s)ds\Big{|}\] \[\leq|D^{n-1}\varphi(x)-D^{n-1}\varphi(y)|\leq\|D^{n-1}\varphi\|_ {C^{1-a+\tau}}|(y-l_{\alpha})-(x-l_{\alpha})|^{1-a+\tau}.\] It follows that \(c\leq 2a(1-a)\|D^{n-1}\varphi\|_{C^{1-a+\tau}}s^{\tau}\) for every \(s\in(0,\delta]\), contrary to \(|C^{+}_{\alpha}(D^{n}\varphi)|=2c>0\). This gives \(C^{a,+}_{\alpha,n}(\varphi)=C^{+}_{\alpha}(D^{n}\varphi)=0\) and the same arguments also show that \(C^{a,-}_{\alpha,n}(\varphi)=C^{-}_{\alpha}(D^{n}\varphi)=0\). If \(a=0\) then the proof runs in the same way. In this case \(D^{n}\varphi=D^{n}v\circ T-D^{n}v\) with \(D^{n}v\in C^{\tau}(I)\), where \(0<\tau<(r-n)\wedge 1\). Therefore, \(D^{n}\varphi\) is \(\tau\)-Holder on any \(I_{\alpha}\), \(\alpha\in\mathcal{A}\). Suppose that \(C^{a,+}_{\alpha,n}(\varphi)\neq 0\). As in the previous case, there exists \(\varepsilon>0\) such that \[0<c:=|C^{+}_{\alpha}(D^{n}\varphi)|/2\leq|D^{n+1}\varphi(x)||x-l_{\alpha}| \text{ for }x\in(l_{\alpha},l_{\alpha}+\varepsilon].\] Hence, for every \(x,y\in(l_{\alpha},l_{\alpha}+\varepsilon]\), \[c|\log(y-l_{\alpha})-\log(x-l_{\alpha})|=\int_{x}^{y}\frac{c}{s-l_{\alpha}} ds\leq\Big{|}\int_{x}^{y}D^{n+1}\varphi(s)ds\Big{|}\] \[\leq|D^{n}\varphi(x)-D^{n}\varphi(y)|\leq\|D^{n}\varphi\|_{C^{ \tau}}|(y-l_{\alpha})-(x-l_{\alpha})|^{\tau}.\] It follows that \(c\log 2\leq\|D^{n}\varphi\|_{C^{\tau}}s^{\tau}\) for every \(s\in(0,\varepsilon/2]\), contrary to \(|C^{+}_{\alpha}(D^{n}\varphi)|=2c>0\). This completes the proof. **Lemma 5.7**.: _The decomposition (5.20) is unique, i.e. if_ \[\varphi=\sum_{\bar{t}\in\mathscr{T}_{a,n}^{*}}a_{\bar{t}}h_{\bar{t}}+\widetilde {\varphi}\quad\text{with}\quad\limsup_{k\to\infty}\frac{1}{k}\log\|S(k) \widetilde{\varphi}\|_{\sup}\leq-\lambda_{1}(n-a),\] _then \(a_{\bar{t}}=\mathfrak{f}_{\bar{t}}(\varphi)\) for every \(\bar{t}\in\mathscr{T}_{a,n}^{*}\). 
In particular, \(\mathfrak{f}_{\bar{t}}(\mathfrak{r}_{a,n}(\varphi))=0\) for every \(\bar{t}\in\mathscr{T}_{a,n}^{*}\)._

Proof.: By assumption, \(\widetilde{\varphi}-\mathfrak{r}_{a,n}(\varphi)=\sum_{\bar{t}\in\mathscr{T}_{a,n}^{*}}(\mathfrak{f}_{\bar{t}}(\varphi)-a_{\bar{t}})h_{\bar{t}}\) and \[\limsup_{k\to\infty}\frac{1}{k}\log\|S(k)(\widetilde{\varphi}-\mathfrak{r}_{a,n}(\varphi))\|_{\sup}\leq-\lambda_{1}(n-a).\] On the other hand, by (5.23), \[\lim_{k\to\infty}\frac{1}{k}\log\Big{\|}S(k)\sum_{\bar{t}\in\mathscr{T}_{a,n}^{*}}(\mathfrak{f}_{\bar{t}}(\varphi)-a_{\bar{t}})h_{\bar{t}}\Big{\|}_{\sup}=-\lambda_{1}\min\{\mathfrak{o}(\bar{t}):\bar{t}\in\mathscr{T}_{a,n}^{*},\mathfrak{f}_{\bar{t}}(\varphi)\neq a_{\bar{t}}\}.\] In view of (5.19), both give \(a_{\bar{t}}=\mathfrak{f}_{\bar{t}}(\varphi)\) for every \(\bar{t}\in\mathscr{T}_{a,n}^{*}\).

_Remark 5.8_.: Let us consider two pairs \((n_{1},a_{1})\), \((n_{2},a_{2})\) such that \(n_{1}-a_{1}<n_{2}-a_{2}\). Then \(C^{n_{2}+\mathrm{P}_{a_{2}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\subset C^{n_{1}+\mathrm{P}_{a_{1}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) and \(\mathscr{T}_{a_{1},n_{1}}^{*}\subset\mathscr{T}_{a_{2},n_{2}}^{*}\). Suppose that \(\bar{t}_{1}\in\mathscr{T}_{a_{1},n_{1}}^{*}\), \(\bar{t}_{2}\in\mathscr{T}_{a_{2},n_{2}}^{*}\) are such that \(\bar{t}_{1}=\bar{t}_{2}\). By Lemma 5.7, \(\mathfrak{f}_{\bar{t}_{1}}:C^{n_{1}+\mathrm{P}_{a_{1}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{C}\) is an extension of \(\mathfrak{f}_{\bar{t}_{2}}:C^{n_{2}+\mathrm{P}_{a_{2}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\mathbb{C}\).

## 6. Solving cohomological equations on IET

Given \(\varphi\in C^{n+\mathrm{P}_{a}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), we provide a smooth solution \(v\) (some derivative of which is Hölder) of the cohomological equation \(v\circ T-v=\varphi\) for the IET \(T\), provided that the sequence \(S(k)\varphi\) decays fast enough. Combining this with the spectral result (Theorem 5.6), we get regularity of the solutions depending on the vanishing of the invariant distributions \(\mathfrak{f}_{\bar{t}}\). The main regularity estimates are carried out using the decompositions of orbits and of the space invented by Marmi-Moussa-Yoccoz [18, §2.2.3] and [20, §3.7-8].

_Time decomposition._

* Let \(T\) be an IET satisfying Keane's condition, \(x\in I\) and \(N\geq 1\). Let \(y\) be the point of the orbit \((T^{j}x)_{0\leq j<N}\) which is closest to \(0\).
* We split the orbit into positive/negative parts \((T^{j}y)_{0\leq j<N^{+}}\) and \((T^{j}y)_{N^{-}\leq j<0}\), where \(N=N^{+}-N^{-}\).
* Let \(k\geq 0\) be the largest number such that at least one element of \((T^{j}y)_{0<j<N^{+}}\) belongs to \(I^{(k)}\).
* Let \(y,T^{(k)}y,\ldots,{(T^{(k)})}^{q(k)}y\) be all points of \((T^{j}y)_{0\leq j<N^{+}}\) that belong to \(I^{(k)}\) for some \(q(k)>0\). Let \(y(k):=y\).
* We define \(y(l),q(l)\) inductively backward for \(0\leq l<k\). Let \(y(k-1)=(T^{(k)})^{q(k)}(y)\) and let \(y(l)=T^{N(l)}(y)\) be the last point of the orbit \((T^{j}y)_{0\leq j<N}\) which belongs to \(I^{(l+1)}\). Let \(y(l),T^{(l)}(y(l)),\ldots,{(T^{(l)})}^{q(l)}(y(l)):=y(l-1)\) be all points of \((T^{j}y)_{N(l)\leq j<N^{+}}\) that belong to \(I^{(l)}\) for some \(q(l)\geq 0\).

Then, \[\sum_{0\leq j<N^{+}}\varphi(T^{j}y)=\sum_{l=0}^{k}\sum_{0\leq j<q(l)}S(l)\varphi((T^{(l)})^{j}(y(l)))\text{ with }q(l)\leq\|Z(l+1)\|. \tag{6.1}\] The negative part of the orbit is divided in a similar way.
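The time decomposition can be tested numerically in the simplest case of a circle rotation (a \(2\)-IET), where the return times to the renormalized intervals are the continued-fraction denominators \(q_{k}\). The sketch below is ours and only illustrative (the observable, the rotation number and all names are chosen for the example): it splits a Birkhoff sum of length \(N\) into blocks of lengths \(q_{k}\), in analogy with (6.1), and shows that for a mean-zero observable of bounded variation each block sum stays bounded by the variation (Denjoy-Koksma), which is what makes such decompositions effective.

```python
import numpy as np

# Illustration (not part of the paper): the time decomposition in the simplest
# case of a circle rotation x -> x + alpha.  A Birkhoff sum of length N is split
# into blocks whose lengths are continued-fraction denominators q_k; each block
# sum of a mean-zero BV observable is bounded (Denjoy-Koksma inequality).

alpha = (np.sqrt(5.0) - 1.0) / 2.0          # golden rotation: q_k are Fibonacci numbers
f = lambda x: (x % 1.0) - 0.5               # bounded variation, zero mean

def birkhoff(x, n):
    return sum(f(x + j * alpha) for j in range(n))

def denominators(a, kmax=20):
    """Denominators q_k of the continued fraction expansion of a."""
    q_prev, q = 1, 0
    qs = []
    for _ in range(kmax):
        d = int(np.floor(a))
        q_prev, q = q, d * q + q_prev
        qs.append(q)
        a = 1.0 / (a - d)
    return qs

qs = denominators(alpha)
x, N = 0.1234, 10_000

blocks, n = [], N                           # greedy decomposition of N into the scales q_k
for q in reversed(qs):
    while 0 < q <= n:
        blocks.append(q)
        n -= q

total, y = 0.0, x
for q in blocks:
    s = birkhoff(y, q)                      # one block sum of length q
    print(f"block length {q:6d}   block sum {s:+.4f}")   # each block sum is O(1)
    total += s
    y += q * alpha

print("direct sum:", birkhoff(x, N), "   via blocks:", total)
```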
_Space decomposition._ Recall the partition into Rokhlin towers in § 2.3 \[I=\bigcup_{\alpha\in\mathcal{A}}\bigcup_{i=0}^{Q_{\alpha}(k)-1}T^{i}(I_{\alpha}^{(k)}).\]

* For any pair \(x_{-}<x_{+}\) of points in \(I\), let \(k\geq 0\) be the smallest integer such that \((x_{-},x_{+})\) contains at least one interval of the \(k\)-th partition.
* Let \(J^{(k)}(1),\ldots,J^{(k)}(q(k))\) be all intervals of the \(k\)-th partition contained in \((x_{-},x_{+})\). Then \(0<q(k)\leq\|Z(k)\|\).
* For every \(l\geq k\), let \(x_{+}(l)<x_{+}\) be the largest end point of an interval of the \(l\)-th partition. Then \(x_{+}(l)\geq x_{+}(l-1)\) for any \(l>k\).
* For any \(l>k\) the interval \((x_{+}(l-1),x_{+}(l))\) is the union of intervals \(J^{(l)}_{+}(1),\ldots,J^{(l)}_{+}(q_{+}(l))\) of the \(l\)-th partition for some \(0\leq q_{+}(l)\leq\|Z(l)\|\).
* The point \(x_{-}(l)\), \(0\leq q_{-}(l)\leq\|Z(l)\|\) and intervals \(J^{(l)}_{-}(1),\ldots,J^{(l)}_{-}(q_{-}(l))\) of the \(l\)-th partition are defined in a similar way.

This yields the following decomposition of \((x_{-},x_{+})\): \[(x_{-},x_{+})=\bigcup_{1\leq q\leq q(k)}J^{(k)}(q)\cup\bigcup_{l>k}\bigcup_{\epsilon=\pm 1}\bigcup_{1\leq q\leq q_{\epsilon}(l)}J^{(l)}_{\epsilon}(q). \tag{6.2}\]

### Hölder solutions

In this section solutions of the cohomological equation \(v\circ T-v=\varphi\) are obtained by applying standard Gottschalk-Hedlund arguments for \(\varphi\in C^{1+\mathrm{P}_{\mathrm{a}}\mathrm{G}}\). Hölder regularity of the solutions follows from exponential decay of \(S(k)\varphi\) and some bounds on the growth of \(S(k)D\varphi\).

**Lemma 6.1**.: _Suppose that \(0\leq a<1\) and \(\varphi\in C^{1+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in A}I_{\alpha})\) is such that for any \(\tau>0\) we have \(\|S(k)\varphi\|_{\mathrm{sup}}=O(e^{(-\lambda_{1}(1-a)+\tau)k})c_{1}(\varphi)\). Then there exists a continuous solution \(v\in C^{0}(I)\) of the cohomological equation \(\varphi=v\circ T-v\) such that \(v(0)=0\) and_ \[\sup\{|v(x)-v(y)|:x,y\in I\}\leq 2\sum_{l=0}^{\infty}\|Z(l+1)\|\,\|S(l)\varphi\|_{\mathrm{sup}}\,. \tag{6.3}\]

Proof.: In view of (6.1), for any \(n\in\mathbb{N}\), \[\left\|\varphi^{(n)}\right\|_{\mathrm{sup}}\leq 2\sum_{l=0}^{\infty}\|Z(l+1)\|\,\|S(l)\varphi\|_{\mathrm{sup}}\,.\] As \(\|Z(l+1)\|=O(e^{\tau l})\) and \(\|S(l)\varphi\|_{\mathrm{sup}}=O(e^{(-\lambda_{1}(1-a)+\tau)l})c_{1}(\varphi)\), the series on the right side of the inequality converges and the \(n\)-th Birkhoff sums of \(\varphi\) are uniformly bounded. By classical Gottschalk-Hedlund type arguments (see [19, Theorem 3.4]), the cohomological equation has a continuous solution \(v\). Moreover, for any \(x\in I\) and \(n\geq 1\), \[|v(T^{n}x)-v(x)|=|\varphi^{(n)}(x)|\leq 2\sum_{l=0}^{\infty}\|Z(l+1)\|\,\|S(l)\varphi\|_{\mathrm{sup}}\,.\] As the orbit \(\{T^{n}x\}_{n\geq 0}\) is dense and \(v\) is continuous, this gives (6.3). Since the function \(v\) is unique up to an additive constant, it can always be chosen so that \(v(0)=0\). In what follows, we will always deal with solutions satisfying \(v(0)=0\). For any interval \(J\subset I\), let \(\mathrm{osc}(v,J):=\sup\{|v(x)-v(y)|:x,y\in J\}\).

**Corollary 6.2**.: _Let \(\varphi\in C^{1+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in A}I_{\alpha})\) be such that for any \(\tau>0\) we have \(\|S(k)\varphi\|_{\mathrm{sup}}=O(e^{(-\lambda_{1}(1-a)+\tau)k})c_{1}(\varphi)\). Then for every \(\tau>0\),_ \[\mathrm{osc}(v,I^{(k)})=O(e^{(-\lambda_{1}(1-a)+\tau)k})c_{1}(\varphi).
\tag{6.4}\] Proof.: As \(\varphi=v\circ T-v\), for every \(k\geq 0\) we have \(S(k)\varphi=v\circ T^{(k)}-v\) on \(I^{(k)}\). Then, by (6.3) applied to \(T^{(k)}:I^{(k)}\to I^{(k)}\), we have \[\operatorname{osc}(v,I^{(k)})=\sup\{|v(x)-v(y)|:x,y\in I^{(k)}\}\leq 2\sum_{l \geq k}^{\infty}\|Z(l+1)\|\,\|S(l)\varphi\|_{\sup}\,.\] As \(\|Z(l+1)\|=O(e^{\tau l})\) and \(\|S(l)\varphi\|_{\sup}=O(e^{(-\lambda_{1}(1-a)+\tau)l})c_{1}(\varphi)\), this gives (6.4). The following elementary calculations will be used in estimating \(\operatorname{osc}(v,T^{i}(I_{\alpha}^{(k)}))\) for \(1\leq i<Q_{\alpha}(k)\) in Lemma 6.4. **Lemma 6.3**.: _Let \(\varphi\in C^{0+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\). Then for every \(\alpha\in\mathcal{A}\) and any Borel set \(J\subset I_{\alpha}\),_ \[\int_{J}|\varphi(x)|dx\leq\left\{\begin{array}{cl}\frac{\|\varphi\|_{L^{1}(I )}|J|}{|I|}+\frac{2^{a+3}p_{a}(\varphi)|J|^{1-a}}{a(1-a)}&\text{if }0<a<1,\\ \frac{\|\varphi\|_{L^{1}(I)}|J|}{|I|}+4p_{a}(\varphi)|J|(1+\log\frac{|I|}{|J|} )&\text{if }a=0.\end{array}\right. \tag{6.5}\] Proof.: By Remark 2.1 in [12], for any \(x\in\operatorname{Int}I_{\alpha}\), \[|\varphi(x)|\leq\frac{\|\varphi\|_{L^{1}}}{|I|}+p_{a}(\varphi) \Big{(}\frac{1}{a\min\{x-l_{\alpha},r_{\alpha}-x\}^{a}}+\frac{2^{a+2}}{a(1-a) |I_{\alpha}|^{a}}\Big{)}\text{ if }0<a<1,\] \[|\varphi(x)|\leq\frac{\|\varphi\|_{L^{1}}}{|I|}+p_{a}(\varphi) \Big{(}\log\frac{|I_{\alpha}|}{2\min\{x-l_{\alpha},r_{\alpha}-x\}}+2\Big{)} \text{ if }a=0.\] It follows that if \(0<a<1\) then \[\int_{J}|\varphi(x)|dx\leq\frac{\|\varphi\|_{L^{1}(I)}\,|J|}{|I|}+\frac{2^{a+2 }p_{a}(\varphi)|J|}{a(1-a)|I|^{a}}+\frac{2p_{a}(\varphi)}{a}\int_{0}^{|J|}x^{- a}dx\] and if \(a=0\) then \[\int_{J}|\varphi(x)|dx\leq\frac{\|\varphi\|_{L^{1}(I)}\,|J|}{|I|}+2p_{a}( \varphi)|J|-2p_{a}(\varphi)\int_{0}^{|J|}\log(x/|I|)dx.\] This gives (6.5). **Lemma 6.4**.: _Suppose that \(\varphi\in C^{1+\mathrm{P_{a}G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) is such that for any \(\tau>0\) we have \(\|S(k)\varphi\|_{\sup}=O(e^{(-\lambda_{1}(1-a)+\tau)k})c_{1}(\varphi)\) and \(\frac{\|S(k)D\varphi\|_{L^{1}(I^{(k)})}}{|I^{(k)}|}=O(e^{(\lambda_{1}a+\tau)k })c_{0}(D\varphi)\). Then for any \(k\geq 0\), \(\alpha\in\mathcal{A}\) and \(0\leq N<Q_{\alpha}(k)\),_ \[\operatorname{osc}(v,T^{N}(I_{\alpha}^{(k)}))=\operatorname{osc}(v,I_{\alpha} ^{(k)})+O(e^{(-\lambda_{1}(1-a)+\tau)k})(c_{0}(D\varphi)+p_{a}(D\varphi)). \tag{6.6}\] Proof.: Since \(\varphi=v\circ T-v\), by telescoping, for any \(x_{1},x_{2}\in I_{\alpha}^{(k)}\) \[v(T^{N}x_{2})-v(T^{N}x_{1})-(v(x_{2})-v(x_{1}))=\varphi^{(N)}(x_{2})-\varphi^{ (N)}(x_{1})=\int_{x_{1}}^{x_{2}}\sum_{i=0}^{N-1}D\varphi(T^{i}x)\,dx.\] Hence \[\operatorname{osc}(v,T^{N}(I_{\alpha}^{(k)}))\leq\operatorname{osc}(v,I_{ \alpha}^{(k)})+\int_{I_{\alpha}^{(k)}}\Big{|}\sum_{i=0}^{N-1}D\varphi(T^{i}x) \Big{|}\,dx. \tag{6.7}\] In view of (6.1), for every \(x\in I_{\alpha}^{(k)}\) we have \[\sum_{i=0}^{N-1}D\varphi(T^{i}x)=\sum_{l=0}^{k}\sum_{0\leq i<q(l)}S(l)D\varphi ((T^{(l)})^{i}x(l)) \tag{6.8}\] with \(0\leq q(l)\leq\|Z(l+1)\|\) and \(I_{\alpha}^{(k)}\ni x\mapsto x(l)\in J_{l}\subset I^{(l)}\) is a translation and \(J_{l}\) is the image of \(I_{\alpha}^{(k)}\) by this translation. It follows that \[\int_{I_{\alpha}^{(k)}}\Big{|}\sum_{i=0}^{N-1}D\varphi(T^{i}x)\Big{|}\,dx\leq \sum_{l=0}^{k}\sum_{0\leq i<q(l)}\int_{(T^{(l)})^{i}J_{l}}|S(l)D\varphi(x)|dx. \tag{6.9}\] Assume that \(0<a<1\). 
As \(|(T^{(l)})^{i}J_{l}|=|J_{l}|=|I_{\alpha}^{(k)}|\), in view of (6.5), \[\int_{(T^{(l)})^{i}J_{l}}|S(l)D\varphi(x)|dx\leq\frac{\|S(l)D\varphi\|_{L^{1}( I^{(l)})}\,|I_{\alpha}^{(k)}|}{|I^{(l)}|}+\frac{2^{a+3}p_{a}(S(l)D\varphi)|I_{ \alpha}^{(k)}|^{1-a}}{a(1-a)}.\] By (4.3), there exists \(C>0\) such that \[\begin{split}& p_{a}(S(l)D\varphi)\leq Cp_{a}(D\varphi)\text{ if }0<a<1,\\ & p_{a}(S(l)D\varphi)\leq C(1+\log\|Q(l)\|)p_{a}(D\varphi)\text{ if }a=0.\end{split} \tag{6.10}\] As \(\frac{\|S(l)D\varphi\|_{L^{1}(I^{(l)})}}{|I^{(l)}|}=O(e^{(\lambda_{1}a+\tau)l} )c_{0}(D\varphi)\) and \(|I^{(k)}|=O(e^{-\lambda_{1}k})\), it follows that \[\int_{(T^{(l)})^{i}J_{l}}|S(l)D\varphi(x)|dx=O(e^{(-\lambda_{1}(1-a)+\tau)k}) (c_{0}+p_{a})(D\varphi). \tag{6.11}\] If \(a=0\) then, by (6.5), \[\int_{(T^{(l)})^{i}J_{l}}|S(l)D\varphi(x)|dx\leq\frac{\|S(l)D\varphi\|_{L^{1}( I^{(l)})}|I_{\alpha}^{(k)}|}{|I^{(l)}|}+4p_{a}(S(l)D\varphi)|I_{\alpha}^{(k)}|(1 +\log\frac{|I^{(l)}|}{|I_{\alpha}^{(k)}|}).\] In view of (3.14), \(\log|I^{(l)}|/|I_{\alpha}^{(k)}|\leq\log|I|/|I_{\alpha}^{(k)}|=\log O(e^{( \lambda_{1}+\tau)k})=O(e^{\tau k})\) and \(\log\|Q(l)\|=O(e^{\tau k})\) for \(l\leq k\), and by (6.10) we also get (6.11) when \(a=0\). By (6.9), this gives \[\int_{I_{\alpha}^{(k)}}\Big{|}\sum_{i=0}^{N-1}D\varphi(T^{i}x) \Big{|}\,dx =k\|Z(k+1)\|O(e^{(-\lambda_{1}(1-a)+\tau)k})(c_{0}+p_{a})(D\varphi)\] \[=O(e^{(-\lambda_{1}(1-a)+3\tau)k})(c_{0}+p_{a})(D\varphi).\] In view of (6.7), this gives (6.6). By combining previous lemmas, under a decaying condition on \(S(k)\varphi\) and some bound on the growth of \(S(k)D\varphi\), a Holder solution of the cohomological equation is obtained. **Theorem 6.5**.: _Suppose that \(\varphi\in C^{1+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in A}I_{ \alpha})\) is such that for any \(\tau>0\) we have \(\|S(k)\varphi\|_{\mathrm{sup}}=O(e^{(-\lambda_{1}(1-a)+\tau)k})c_{1}(\varphi)\) and \(\frac{\|S(k)D\varphi\|_{L^{1}(I^{(k)})}}{|I^{(k)}|}=O(e^{(\lambda_{1}a+\tau)k} )c_{0}(D\varphi)\). There exists a continuous solution \(v:I\to\mathbb{R}\) of the cohomological equation \(\varphi=v\circ T-v\) such that \(v(0)=0\) and for any \(0<\tau<1-a\) we have \(v\in C^{(1-a)-\tau}(I)\). Moreover, there exists \(C_{\tau}>0\) such that \(\|v\|_{C^{(1-a)-\tau}}\leq C_{\tau}(c_{1}(\varphi)+c_{0}(D\varphi)+p_{a}(D \varphi))\)._ Proof.: For any pair \(x<y\) of points in \(I\) we use the space decomposition of the interval \((x,y)\) introduced in the beginning of the section. Then \[|v(y)-v(x)|\leq\sum_{q=1}^{q(k)}\mathrm{osc}(v,J^{(k)}(q))+\sum_{l>k}\sum_{ \epsilon=\pm}\sum_{q=1}^{q_{\epsilon}(l)}\mathrm{osc}(v,J_{\epsilon}^{(l)}(q))\] with \(q(k)\leq\|Z(k)\|\) and \(q_{\pm}(l)\leq\|Z(l)\|\). 
As each \(J^{(k)}(q)\) is of the form \(T^{n}I_{\alpha}^{(k)}\) for some \(0\leq n<Q_{\alpha}(k)\) and each \(J_{\pm}^{(l)}(q)\) is of the form \(T^{n}I_{\alpha}^{(l)}\) for some \(0\leq n<Q_{\alpha}(l)\), in view of Corollary 6.2 and Lemma 6.4, for any \(\tau>0\), \[\operatorname{osc}(v,J^{(k)}(q)) \leq O(e^{(-\lambda_{1}(1-a)+\tau)k})(c_{1}(\varphi)+c_{0}(D \varphi)+p_{a}(D\varphi)),\] \[\operatorname{osc}(v,J_{\pm}^{(l)}(q)) \leq O(e^{(-\lambda_{1}(1-a)+\tau)l})(c_{1}(\varphi)+c_{0}(D \varphi)+p_{a}(D\varphi)).\] It follows that \[|v(y)-v(x)|\leq O\Big{(}\sum_{l\geq k}\|Z(l)\|\,e^{(-\lambda_{1}(1-a)+\tau)l} \Big{)}(c_{1}(\varphi)+c_{0}(D\varphi)+p_{a}(D\varphi)).\] As \(\|Z(l)\|=O(e^{\tau l})\), we obtain \[|v(y)-v(x)|\leq O(e^{(-\lambda_{1}(1-a)+2\tau)k})(c_{1}(\varphi)+c_{0}(D \varphi)+p_{a}(D\varphi)).\] By the choice of \(k\), \(|y-x|\geq\min_{\alpha\in\mathcal{A}}|I_{\alpha}^{(k)}|\geq c_{\tau}e^{-( \lambda_{1}+\tau)k}\) for some \(c_{\tau}>0\). It follows that \[|v(y)-v(x)|\leq O(1)(c_{1}(\varphi)+c_{0}(D\varphi)+p_{a}(D\varphi))|y-x|^{ \frac{\lambda_{1}(1-a)-2\tau}{\lambda_{1}+\tau}}.\] As \(v(0)=0\), this completes the proof. ### Higher regularity Higher regularity of solutions is obtained by applying Theorem 6.5 as the initial step of induction. **Theorem 6.6**.: _Let \(n\geq 1\) and \(0\leq a<1\). Assume that \(T\) satisfies the FFDC. Let \(\varphi\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A }}I_{\alpha})\) be a map such that for any \(\tau>0\) we have_ \[\|S(k)D^{l}\varphi\|_{\mathrm{sup}}=O(e^{(-\lambda_{1}(n-l-a)+\tau)k})\|D^{l} \varphi\|_{C^{n-l+\mathrm{P}_{\mathrm{a}}}}\text{ for }0\leq l<n \tag{6.12}\] _and_ \[\frac{1}{|I^{(k)}|}\|S(k)D^{n}\varphi\|_{L^{1}(I^{(k)})}=O(e^{(\lambda_{1}a+ \tau)k})\|D^{n}\varphi\|_{C^{0+\mathrm{P}_{\mathrm{a}}}}. \tag{6.13}\] _Then there exists a \(C^{n-1}\)-solution \(v:I\to\mathbb{R}\) of the cohomological equation \(\varphi=v\circ T-v\) such that \(v(0)=0\) and for any \(0<\tau<1-a\) we have \(v\in C^{n-a-\tau}(I)\). Moreover, there exists \(C_{\tau,n}>0\) such that \(\|v\|_{C^{n-a-\tau}}\leq C_{\tau,n}\|\varphi\|_{C^{n+\mathrm{P}_{\mathrm{a}}}}\)._ Proof.: The proof is by induction on \(n\). For \(n=1\), our claim follows from Theorem 6.5 applied to \(c_{1}(\varphi)=\|\varphi\|_{C^{1+\mathrm{P}_{\mathrm{a}}}}\) and \(c_{0}(D\varphi)=\|D\varphi\|_{C^{0+\mathrm{P}_{\mathrm{a}}}}\). Suppose that for some \(n\geq 1\) if \(\varphi\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A }}I_{\alpha})\) satisfies (6.12) and (6.13) then there exists a \(C^{n-1}\)-solution \(v\) of the cohomological equation such that for any \(\tau>0\) we have \(v\in C^{n-a-\tau}(I)\) and \(\|v\|_{C^{n-a-\tau}}\leq C_{\tau,n}\|\varphi\|_{C^{n+\mathrm{P}_{\mathrm{a}}}}\). Let \(\varphi\in C^{n+1+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{ A}}I_{\alpha})\) be such that \[\|S(k)D^{l}\varphi\|_{\mathrm{sup}} =O(e^{(-\lambda_{1}(n+1-l-a)+\tau)k})\|D^{l}\varphi\|_{C^{n+1+l +\mathrm{P}_{\mathrm{a}}}}\text{ for }0\leq l\leq n\text{ and }\] \[\frac{1}{|I^{(k)}|}\|S(k)D^{n+1}\varphi\|_{L^{1}(I^{(k)})}=O(e^{ (\lambda_{1}a+\tau)k})\|D^{n+1}\varphi\|_{C^{0+\mathrm{P}_{\mathrm{a}}}}.\] It follows that \(D\varphi\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{ A}}I_{\alpha})\) satisfies (6.12) and (6.13). 
By induction hypothesis, there exists \(v_{0}\in C^{n-1}(I)\) such that \(D\varphi=v_{0}\circ T-v_{0}\), \(v_{0}(0)=0\) and for any \(\tau>0\) we have \(v_{0}\in C^{n-a-\tau}(I)\) with \(\|v_{0}\|_{C^{n-a-\tau}}\leq C_{\tau,n}\|D\varphi\|_{C^{n+\mathrm{P}_{\mathrm{a}}}}\). By integrating, there exists \(\chi\in\Gamma\) that satisfies \(\varphi=\widetilde{v}_{0}\circ T-\widetilde{v}_{0}+\chi\) (recall that \(\widetilde{v}_{0}(x)=\int_{0}^{x}v_{0}(s)ds\)). Note that for any \(k\geq 1\), \[S(k)\varphi=S(k)(\widetilde{v}_{0}\circ T-\widetilde{v}_{0})+Q(k)\chi.\] By assumption, \[\begin{split}\|S(k)\varphi\|_{\sup}&=O(e^{(-\lambda_{1}(n +1-a)+\tau)k})\|\varphi\|_{C^{n+1+\mathrm{P_{a}}}}\\ &\leq O(e^{-\lambda_{1}k}e^{(-\lambda_{1}(1-a)+\tau)k})\|\varphi \|_{C^{n+1+\mathrm{P_{a}}}}=O(e^{-\lambda_{1}k})\|\varphi\|_{C^{n+1+\mathrm{P_{ a}}}}.\end{split} \tag{6.14}\] On the other hand, for any \(x\in I_{\alpha}^{(k)}\), \[|S(k)(\widetilde{v}_{0}\circ T-\widetilde{v}_{0})(x)|=|\widetilde{v}_{0}(T^{Q _{\alpha}(k)}x)-\widetilde{v}_{0}(x)|\leq\|v_{0}\|_{\sup}|x-T^{Q_{\alpha}(k)}x|. \tag{6.15}\] It follows that \[\|S(k)(\widetilde{v}_{0}\circ T-\widetilde{v}_{0})\|_{\sup}\leq\|v_{0}\|_{ \sup}|I^{(k)}|=O(e^{-\lambda_{1}k})\|D\varphi\|_{C^{n+\mathrm{P_{a}}}}.\] Therefore, \(\|Q(k)\chi\|=O(e^{-\lambda_{1}k})\|\varphi\|_{C^{n+1+\mathrm{P_{a}}}}\). In view of (3.2), \(\chi\in E_{-1}(\pi,\lambda)\). As \(E_{-1}(\pi,\lambda)\) is one-dimensional, by Remark 3.4, \(\chi=c(\bar{\xi}-\bar{\xi}\circ T)\) for some \(c=c(\varphi)\in\mathbb{R}\) (recall that \(\bar{\xi}(x)=x\)). Note that \(|c(\varphi)|\leq\|v_{0}\|_{\sup}\). Indeed, by (6.15), \[\left\|S(k)(\widetilde{v}_{0}\circ T-\widetilde{v}_{0})\right\|_{\sup}\leq\|v _{0}\|_{\sup}\left\|S(k)(\bar{\xi}-\bar{\xi}\circ T)\right\|_{\sup}.\] As \(\frac{1}{k}\log\left\|S(k)(\bar{\xi}-\bar{\xi}\circ T)\right\|_{\sup}\to- \lambda_{1}\), in view of (6.14), we obtain \(\|S(k)\varphi\|_{\sup}=o(\left\|S(k)(\bar{\xi}-\bar{\xi}\circ T)\right\|_{\sup})\). It follows that \[\begin{split}|c(\varphi)|\left\|S(k)(\bar{\xi}-\bar{\xi}\circ T )\right\|_{\sup}&=\|S(k)\chi\|_{\sup}\leq\|S(k)\varphi\|_{\sup}+ \|S(k)(\widetilde{v}_{0}\circ T-\widetilde{v}_{0})\|_{\sup}\\ &\leq(\|v_{0}\|_{\sup}+o(1))\left\|S(k)(\bar{\xi}-\bar{\xi}\circ T )\right\|_{\sup}.\end{split}\] Hence \(|c(\varphi)|\leq\|v_{0}\|_{\sup}\). Let \(v:I\to\mathbb{R}\), \(v=\widetilde{v}_{0}-c(\varphi)\bar{\xi}\). Then \(\varphi=v\circ T-v\) and \(v\in C^{n+1-a-\tau}(I)\) with \[\|Dv\|_{C^{n-a-\tau}}=\|v_{0}\|_{C^{n-a-\tau}}+|c(\varphi)|\leq\|v_{0}\|_{C^{ n-a-\tau}}+\|v_{0}\|_{\sup}\leq 2C_{\tau,n}\|D\varphi\|_{C^{n+\mathrm{P_{a}}}}.\] As \(v(0)=0\), this gives \[\begin{split}\|v\|_{C^{n+1-a-\tau}}&=\|v\|_{\sup}+ \|Dv\|_{C^{n-a-\tau}}\leq|I|\|Dv\|_{\sup}+\|Dv\|_{C^{n-a-\tau}}\\ &\leq(|I|+1)\|Dv\|_{C^{n-a-\tau}}\leq 2(|I|+1)C_{\tau,n}\|D \varphi\|_{C^{n+\mathrm{P_{a}}}}.\end{split}\] This completes the proof. 
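For completeness, the step "by integrating, there exists \(\chi\in\Gamma\)" in the proof above can be spelled out as follows (this elaboration is ours): since \(T\) restricted to each \(I_{\alpha}\) is a translation, on \(\mathrm{Int}\,I_{\alpha}\) we have \[D\big{(}\varphi-(\widetilde{v}_{0}\circ T-\widetilde{v}_{0})\big{)}=D\varphi-(v_{0}\circ T-v_{0})=0,\] so \(\chi:=\varphi-(\widetilde{v}_{0}\circ T-\widetilde{v}_{0})\) is constant on each \(I_{\alpha}\), that is \(\chi\in\Gamma\).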
**Corollary 6.7**.: _For every \(n\geq 0\) there exists a polynomial \(v_{n}\in\mathbb{R}_{n+1}[x]\) such that \(h_{-1,n}=v_{n}\circ T-v_{n}\) and \(v_{n}(0)=0\)._

_For every \(\bar{t}\in\mathscr{T}\mathscr{F}\) if \(\mathfrak{o}(\bar{t})>r>0\), then there exists \(v_{\bar{t}}\in C^{r}(I)\) such that \(v_{\bar{t}}(0)=0\) and \(h_{\bar{t}}=v_{\bar{t}}\circ T-v_{\bar{t}}\)._

Proof.: In view of (5.3) and (5.8), for every \(0\leq l\leq n\) we have \(D^{l}h_{-1,n}=h_{-1,n-l}\), and \[\lim_{k\to\infty}\frac{1}{k}\log\|S(k)D^{l}h_{-1,n}\|_{\sup}=-\lambda_{1}(n-l+1)\text{ and }D^{n+1}h_{-1,n}=0.\] Therefore \(h_{-1,n}\in C^{n+1+\mathrm{P_{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) satisfies (6.12) and (6.13) for \(a=0\). Then, by Theorem 6.6, there exists \(v_{n}\in C^{n}(I)\) such that \(h_{-1,n}=v_{n}\circ T-v_{n}\) and \(v_{n}(0)=0\). As \(h_{-1}=D^{n}h_{-1,n}=D^{n}v_{n}\circ T-D^{n}v_{n}\), by Remark 3.4 and the ergodicity of \(T\), we have \(D^{n}v_{n}(x)=\bar{\xi}(x)+c=x+c\). It follows that \(v_{n}\in\mathbb{R}_{n+1}[x]\).

Suppose that \(\bar{t}\in\mathscr{T}\mathscr{F}\) and \(\mathfrak{o}(\bar{t})>r>0\). Let \(n:=\lceil\mathfrak{o}(\bar{t})\rceil\), \(a:=n-\mathfrak{o}(\bar{t})\) and choose \(\tau>0\) so that \(r<n-a-\tau<n-a=\mathfrak{o}(\bar{t})\). In view of (5.3) and (5.8), for every \(0\leq l\leq n\), \[\lim_{k\to\infty}\frac{1}{k}\log\|S(k)D^{l}h_{\bar{t}}\|_{\sup}=-\lambda_{1}(\mathfrak{o}(\bar{t})-l)=-\lambda_{1}(n-l-a).\] As \(h_{\bar{t}}\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\), by Theorem 6.6, there exists \(v_{\bar{t}}\in C^{n-a-\tau}(I)\) such that \(v_{\bar{t}}(0)=0\) and \(h_{\bar{t}}=v_{\bar{t}}\circ T-v_{\bar{t}}\). As \(r<n-a-\tau\), this gives our claim.

We finish the section by summarizing the complete conditions for the existence of smooth solutions of the cohomological equation for a.e. IET.

**Theorem 6.8**.: _Let \(n\geq 1\), \(0\leq a<1\) and \(0<r<n-a\) such that \(r\notin\{\mathfrak{o}(\bar{t}):\bar{t}\in\mathscr{T}_{a,n}\}\). Assume that \(T\) satisfies the FFDC. Let \(\varphi\in C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) be a map such that \(\mathfrak{f}_{\bar{t}}(\varphi)=0\) for all \(\bar{t}\in\mathscr{T}_{a,n}\) with \(\mathfrak{o}(\bar{t})<r\). Then there exists a solution \(v\in C^{r}(I)\) of the cohomological equation \(\varphi=v\circ T-v\) such that \(v(0)=0\). The operator_ \[\bigcap_{\bar{t}\in\mathscr{T}_{a,n},\ \mathfrak{o}(\bar{t})<r}\ker(\mathfrak{f}_{\bar{t}})\ni\varphi\mapsto v\in C^{r}(I) \tag{6.16}\] _is linear and bounded._

_Moreover, there exist bounded operators \(\Gamma_{n}:C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to\Gamma_{n}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) and \(V_{n}:C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to C^{n-1}(I)\) such that_ \[\varphi=V_{n}(\varphi)\circ T-V_{n}(\varphi)+\Gamma_{n}(\varphi).\] _More precisely, for every \(0<\tau<1-a\) the operator \(V_{n}\) takes values in \(C^{n-a-\tau}(I)\) and \(V_{n}:C^{n+\mathrm{P}_{\mathrm{a}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\to C^{n-a-\tau}(I)\) is also bounded._

Proof.: Assume that \(\mathfrak{f}_{\bar{t}}(\varphi)=0\) for every \(\bar{t}\in\mathscr{T}_{a,n}\) with \(\mathfrak{o}(\bar{t})<r\).
Then \[\varphi=\mathfrak{r}_{a,n}(\varphi)+\sum_{\bar{t}\in\mathscr{T}_{a,n},\ \mathfrak{o}(\bar{t})>r}\mathfrak{f}_{\bar{t}}(\varphi)h_{\bar{t}}+\sum_{\bar{t }\in\mathscr{T}_{a,n}^{*}\setminus\mathscr{T}_{a,n}}\mathfrak{f}_{\bar{t}}( \varphi)h_{\bar{t}}.\] Choose \(\tau>0\) such that \(r<n-a-\tau\). In view of Theorem 5.6 and 6.6, there exists \(\bar{v}\in C^{n-a-\tau}(I)\) such that \(\mathfrak{r}_{a,n}(\varphi)=\bar{v}\circ T-\bar{v}\) and \(\bar{v}(0)=0\). There exists also \(C_{\tau,n}>0\) such that \(\|\bar{v}\|_{C^{n-a-\tau}}\leq C_{\tau,n}\|\mathfrak{r}_{a,n}(\varphi)\|_{C^{n +\mathrm{P}_{\mathrm{a}}}}\). By Corollary 6.7, for every \(\bar{t}\in\mathscr{T}_{a,n}^{*}\setminus\mathscr{T}_{a,n}\) there exists a polynomial \(v_{\bar{t}}\) such that \(h_{\bar{t}}=v_{\bar{t}}\circ T-v_{\bar{t}}\) and \(v_{\bar{t}}(0)=0\). Moreover, if \(\bar{t}\in\mathscr{T}_{a,n}\) and \(\mathfrak{o}(\bar{t})>r>0\) then, again by Corollary 6.7, there exists \(v_{\bar{t}}\in C^{r}(I)\) such that \(h_{\bar{t}}=v_{\bar{t}}\circ T-v_{\bar{t}}\) and \(v_{\bar{t}}(0)=0\). It follows that \[\varphi=\bar{v}\circ T-\bar{v}+\sum_{\bar{t}\in\mathscr{T}_{a,n},\ \mathfrak{o}(\bar{t})>r}\mathfrak{f}_{\bar{t}}(\varphi)(v_{\bar{t}}\circ T-v_{ \bar{t}})+\sum_{\bar{t}\in\mathscr{T}_{a,n}^{*}\setminus\mathscr{T}_{a,n}} \mathfrak{f}_{\bar{t}}(\varphi)(v_{\bar{t}}\circ T-v_{\bar{t}})\] and \[v=\bar{v}+\sum_{\bar{t}\in\mathscr{T}_{a,n},\ \mathfrak{o}(\bar{t})>r}\mathfrak{f}_{\bar{t}}( \varphi)v_{\bar{t}}+\sum_{\bar{t}\in\mathscr{T}_{a,n}^{*}\setminus\mathscr{T}_{a,n}}\mathfrak{f}_{\bar{t}}(\varphi)v_{\bar{t}}\in C^{r}(I)\] satisfies \(\varphi=v\circ T-v\) and \(v(0)=0\). Moreover, \[\|v\|_{C^{r}} \leq C_{\tau,n}\|\mathfrak{r}_{a,n}(\varphi)\|_{C^{n+\mathrm{P}_{ \mathrm{a}}}}+\sum_{\bar{t}\in\mathscr{T}_{a,n}^{*}}|\mathfrak{f}_{\bar{t}}( \varphi)|\|v_{\bar{t}}\|_{C^{r}}\] \[\leq C_{\tau,n}\|\varphi\|_{C^{n+\mathrm{P}_{\mathrm{a}}}}+\sum_{ \bar{t}\in\mathscr{T}_{a,n}^{*}}(C_{\tau,n}\|h_{\bar{t}}\|_{C^{n+\mathrm{P}_{ \mathrm{a}}}}+\|v_{\bar{t}}\|_{C^{r}})|\mathfrak{f}_{\bar{t}}(\varphi)|.\] As all functionals \(\mathfrak{f}_{\bar{t}}:C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{\alpha\in\mathcal{ A}}I_{\alpha})\to\mathbb{C}\) are bounded, the operator (6.16) is bounded as well. The second part of the theorem follows directly from Theorem 5.6 and 6.6 with \(\Gamma_{n}(\varphi)=\sum_{\bar{t}\in\mathscr{T}_{a,n}^{*}}\mathfrak{f}_{\bar{t}}( \varphi)h_{\bar{t}}\) and \(V_{n}(\varphi)\) being the solution of the cohomological equation \(\mathfrak{r}_{a,n}(\varphi)=V_{n}(\varphi)\circ T-V_{n}(\varphi)\) _Remark 6.9_.: In view of the second part of Theorem 5.6, the regularity of the solution \(v\) for the equation \(\varphi=v\circ T-v\) with \(\varphi\in C^{n+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) proved in Theorem 6.8 is optimal. Indeed, let \(r_{0}=\mathfrak{o}(\bar{t}_{0})>0\) for some \(\bar{t}_{0}\in\mathscr{T}_{a,n}\). Let \(\varphi\in C^{n+\mathrm{P_{a}}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) such that \(\mathfrak{f}_{\bar{t}_{0}}(\varphi)\neq 0\) and \(\mathfrak{f}_{\bar{t}}(\varphi)=0\) for all \(\bar{t}\in\mathscr{T}_{a,n}\) with \(\mathfrak{o}(\bar{t})<r_{0}\). By Theorem 6.8, the solution \(v\) of the cohomological equation belongs to \(C^{r}(I)\) for any \(r<r_{0}\). On the other hand, by Theorem 5.6, \(v\notin C^{r}(I)\) for any \(r>r_{0}\). Hence, the exponent \(r_{0}=\mathfrak{o}(\bar{t}_{0})\) is a threshold for the regularity of the solution. 
Similarly, if \(r_{0}=n-a\), \(C^{a,\pm}_{\alpha.n}(\varphi)\neq 0\) for some \(\alpha\in\mathcal{A}\) and \(\mathfrak{f}_{\bar{t}}(\varphi)=0\) for all \(\bar{t}\in\mathscr{T}_{a,n}\) with \(\mathfrak{o}(\bar{t})<r_{0}\) (in fact, by (5.19), for any \(\bar{t}\in\mathscr{T}_{a,n}\)) then \(v\in C^{r}(I)\) for every \(r<r_{0}\) and \(v\notin C^{r}(I)\) for every \(r>r_{0}\). ## 7. Proofs of the main theorems In this last section, we construct generalized Forni's invariant distributions \(\mathfrak{F}_{\bar{t}}\) on function spaces on a compact surface \(M\). Roughly speaking, \(\mathfrak{F}_{\bar{t}}\) is achieved by composing the operator \(f\mapsto\varphi_{f}\) with the functional \(\mathfrak{f}_{\bar{t}}\). Since the invariant distributions \(\mathfrak{f}_{\bar{t}}\) are on \(C^{n+\mathrm{P_{a}}}\), we need to perform in Section 7.1 an additional correction of \(\varphi_{f}\) so that the resulting function belongs to \(C^{n+\mathrm{P_{a}}}\). Finally, in Section 7.2, we apply the tools developed in [12] to make a transition from cohomological equations over IETs to equations for locally Hamiltonian flows on any minimal component \(M^{\prime}\subset M\). Then by combining them with the cohomological results over IETs in Section 6, optimal regularity of solutions to cohomological equations \(Xu=f\) is obtained. The regularity is determined by the order (or the hat-order) of three different types of invariant distributions \(\mathfrak{C}^{k}_{\sigma,l}\), \(\mathfrak{d}^{k}_{\sigma,j}\) and \(\mathfrak{F}_{\bar{t}}\). ### Counterparts of Forni's invariant distributions Let \(M\) be a compact connected orientable \(C^{\infty}\)-surface. Let \(\psi_{\mathbb{R}}\) be a locally Hamiltonian \(C^{\infty}\)-flow on \(M\) with isolated fixed points and such that all its saddles are perfect and all saddle connections are loops. Let \(M^{\prime}\subset M\) be a minimal component of the flow and let \(I\subset M^{\prime}\) be a transversal curve. The corresponding IET \(T:I\to I\) exchanges the intervals \(\{I_{\alpha}:\alpha\in\mathcal{A}\}\). Let \(\tau:I\to\mathbb{R}_{>0}\) be the first return time map. Let us consider the operator \(f\mapsto\varphi_{f}\) defined for every integrable map \(f:M\to\mathbb{R}\) as follows: \[\varphi_{f}(x)=\int_{0}^{\tau(x)}f(\psi_{t}x)dt\text{ for every }x\in I.\] If \(f\) is a smooth function on \(M\) then \(\varphi_{f}\) is also smooth on every \(\mathrm{Int}\,I_{\alpha}\), \(\alpha\in\mathcal{A}\). The function \(\varphi_{f}\) may be discontinuous at the ends of the intervals or may have singularities. A detailed description of the behavior around the ends of the exchanged intervals is described in details in [12]. Suppose that the equation \(\varphi_{f}=v\circ T-v\) has a smooth solution \(v:I\to\mathbb{R}\). This is a necessary condition for the existence of a smooth solution to the equation \(Xu=f\). In a sense, this is also a sufficient condition for the existence of a smooth solution to the equation \(Xu=f\). We can define \(u_{v,f}:M^{\prime}\setminus(\mathrm{Sd}(\psi_{\mathbb{R}})\cup\mathrm{SL}( \psi_{\mathbb{R}}))\to\mathbb{R}\) as follows: if \(\psi_{t}x\in I\) for some \(t\in\mathbb{R}\) then \[u_{v,f}(x):=v(\psi_{t}x)-\int_{0}^{t}f(\psi_{s}x)\,ds.\] The map \(u_{v,f}\) is a smooth solution of \(Xu=f\), but only on \(M^{\prime}\setminus(\mathrm{Sd}(\psi_{\mathbb{R}})\cup\mathrm{SL}(\psi_{ \mathbb{R}}))\) that is an open subset of \(M^{\prime}\). 
Usually \(u_{v,f}\) cannot be smoothly extended to \(M^{\prime}\) or even to the end compactification \(M^{\prime}_{e}\) defined in [12]. As proven in [12, Theorem 1.2], the vanishing of some invariant distributions \(\mathfrak{d}^{k}_{\sigma,j}(f)\) and \(\mathfrak{C}^{k}_{\sigma,l}(f)\) is a necessary and sufficient condition for the existence of a smooth solution (an extension of \(u_{v,f}\)) to \(Xu=f\) on \(M^{\prime}_{e}\).

Following [12], for any \([(\sigma,k,l)]\in\mathscr{T}\!\mathscr{C}/\sim\) we define a map \(\widehat{\xi}_{[(\sigma,k,l)]}:I\to\mathbb{R}\). For any closed interval \(J\subset I_{\alpha}\) denote by \(J^{\tau}\subset M\) the closure of the set of orbit segments starting from \(\operatorname{Int}J\) and running until the first return to \(I\). For any \([(\sigma,k,l)]\in\mathscr{T}\!\mathscr{C}/\sim\) there exists \(\alpha\in\mathcal{A}\) and an interval \(J\) of the form \([l_{\alpha},l_{\alpha}+\varepsilon]\) or \([r_{\alpha}-\varepsilon,r_{\alpha}]\) such that \(l_{\alpha}\) or \(r_{\alpha}\) is the first backward meeting point of a separatrix incoming to \(\sigma\in\operatorname{Sd}(\psi_{\mathbb{R}})\) and \(J^{\tau}\) contains all angular sectors \(U_{\sigma,l^{\prime}}\) for which \((\sigma,k,l^{\prime})\sim(\sigma,k,l)\). Let \(\widehat{\xi}_{[(\sigma,k,l)]}:I\to\mathbb{R}\) be a map such that

* \(\widehat{\xi}_{[(\sigma,k,l)]}\) is zero on any interval \(I_{\beta}\) with \(\beta\neq\alpha\);
* if \(J=[l_{\alpha},l_{\alpha}+\varepsilon]\) then for any \(s\in I_{\alpha}\), \[\widehat{\xi}_{[(\sigma,k,l)]}(s)=\frac{(s-l_{\alpha})^{\frac{k-(m_{\sigma}-2)}{m_{\sigma}}}}{m_{\sigma}^{2}k!}\text{ if }k\neq m_{\sigma}-2\ \operatorname{mod}m_{\sigma},\] \[\widehat{\xi}_{[(\sigma,k,l)]}(s)=-\frac{(s-l_{\alpha})^{\frac{k-(m_{\sigma}-2)}{m_{\sigma}}}\log(s-l_{\alpha})}{m_{\sigma}^{2}k!}\text{ if }k=m_{\sigma}-2\ \operatorname{mod}m_{\sigma};\]
* if \(J=[r_{\alpha}-\varepsilon,r_{\alpha}]\) then for any \(s\in I_{\alpha}\), \[\widehat{\xi}_{[(\sigma,k,l)]}(s)=\frac{(r_{\alpha}-s)^{\frac{k-(m_{\sigma}-2)}{m_{\sigma}}}}{m_{\sigma}^{2}k!}\text{ if }k\neq m_{\sigma}-2\ \operatorname{mod}m_{\sigma},\] \[\widehat{\xi}_{[(\sigma,k,l)]}(s)=-\frac{(r_{\alpha}-s)^{\frac{k-(m_{\sigma}-2)}{m_{\sigma}}}\log(r_{\alpha}-s)}{m_{\sigma}^{2}k!}\text{ if }k=m_{\sigma}-2\ \operatorname{mod}m_{\sigma}.\]

As \(\frac{k-(m_{\sigma}-2)}{m_{\sigma}}=\mathfrak{o}(\sigma,k)\), we have \(\widehat{\xi}_{[(\sigma,k,l)]}\in C^{n_{\sigma,k}+\mathrm{P}_{a_{\sigma,k}}\mathrm{G}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) with \(n_{\sigma,k}:=\lceil\mathfrak{o}(\sigma,k)\rceil\) and \(a_{\sigma,k}:=n_{\sigma,k}-\mathfrak{o}(\sigma,k)\), and exactly one of \(C^{+}_{\alpha}(D^{n_{\sigma,k}}\widehat{\xi}_{[(\sigma,k,l)]})\), \(C^{-}_{\alpha}(D^{n_{\sigma,k}}\widehat{\xi}_{[(\sigma,k,l)]})\) is non-zero.
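For instance, at a saddle with \(m_{\sigma}=3\) the first few exponents given by this formula are \[\mathfrak{o}(\sigma,0)=-\tfrac{1}{3},\qquad\mathfrak{o}(\sigma,1)=0,\qquad\mathfrak{o}(\sigma,2)=\tfrac{1}{3},\qquad\mathfrak{o}(\sigma,3)=\tfrac{2}{3},\] and the logarithmic correction occurs precisely for \(k=1,4,7,\ldots\), i.e. when \(k=m_{\sigma}-2\ \operatorname{mod}m_{\sigma}\).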
Let us consider \(\xi_{[(\sigma,k,l)]}\in C^{n_{\sigma,k}+\mathrm{P}_{a_{\sigma,k}}\mathrm{G}}( \sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) given by \[\xi_{[(\sigma,k,l)]} :=\mathfrak{v}_{a_{\sigma,k},n_{\sigma,k}}(\widehat{\xi}_{[( \sigma,k,l)]})=\widehat{\xi}_{[(\sigma,k,l)]}-\sum_{\tilde{t}\in\mathscr{T}^{* }_{a_{\sigma,k},n_{\sigma,k}}}\mathfrak{f}_{\tilde{t}}(\widehat{\xi}_{[(\sigma, k,l)]})h_{\tilde{t}}\] \[=\widehat{\xi}_{[(\sigma,k,l)]}-\sum_{\tilde{t}\in\mathscr{T}^{* },\mathfrak{o}(\tilde{t})<\mathfrak{o}(\sigma,k)}\mathfrak{f}_{\tilde{t}}( \widehat{\xi}_{[(\sigma,k,l)]})h_{\tilde{t}}.\] In view of Lemma 5.7, \[\mathfrak{f}_{\tilde{t}}(\xi_{[(\sigma,k,l)]})=0\text{ if }\mathfrak{o}( \tilde{t})<\mathfrak{o}(\sigma,k). \tag{7.1}\] Since \(C^{\pm}_{\alpha}(D^{n_{\sigma,k}}\xi_{[(\sigma,k,l)]})=C^{\pm}_{\alpha}(D^{n_{ \sigma,k}}\widehat{\xi}_{[(\sigma,k,l)]})\neq 0\), by Theorem 5.6, \[\lim_{j\to\infty}\frac{1}{j}\log\big{(}\|S(j)(\xi_{[(\sigma,k,l)] })\|_{L^{1}(I^{j})}/|I^{(j)}|\big{)}=-\lambda_{1}(n_{\sigma,k}-a_{\sigma,k})=- \lambda_{1}\mathfrak{o}(\sigma,k)\] \[\lim_{j\to\infty}\frac{1}{j}\log\|S(j)(\xi_{[(\sigma,k,l)]})\|_{ \sup}=-\lambda_{1}\mathfrak{o}(\sigma,k)\text{ if }\mathfrak{o}(\sigma,k)>0. \tag{7.2}\] **Lemma 7.1**.: _For any \(r\geq-\frac{m-2}{m}\) let \(n=\lceil r\rceil\) and \(a=n-r\). Then for any \(f\in C^{k_{r}}(M)\) we have_ \[\mathfrak{s}_{r}(f)=\varphi_{f}-\sum_{[(\sigma,k,l)]\in\mathscr{T}\mathscr{C}/ \sim\atop\mathfrak{o}(\sigma,k)<r}\mathfrak{C}_{[(\sigma,k,l)]}(f)\xi_{[( \sigma,k,l)]}\in C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{\alpha\in\mathcal{A}}I _{\alpha}) \tag{7.3}\] _and the operator \(\mathfrak{s}_{r}:C^{k_{r}}(M)\to C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{\alpha \in\mathcal{A}}I_{\alpha})\) is bounded._ Proof.: By Theorem 5.6 in [12], \[\widehat{\mathfrak{s}}_{r}(f):=\varphi_{f}-\sum_{[(\sigma,k,l)]\in\mathscr{T} \mathscr{C}/\sim\atop\mathfrak{o}(\sigma,k)<r}\mathfrak{C}_{[(\sigma,k,l)]}(f )\widehat{\xi}_{[(\sigma,k,l)]}\in C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{ \alpha\in\mathcal{A}}I_{\alpha})\] and the operator \(\widehat{\mathfrak{s}}_{r}:C^{k_{r}}(M)\to C^{n+\mathrm{P}_{\mathrm{a}}}( \sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) is bounded. Moreover. \[\mathfrak{s}_{r}(f)=\widehat{\mathfrak{s}}_{r}(f)+\sum_{[(\sigma,k,l)]\in \mathscr{T}\mathscr{C}/\sim\atop\mathfrak{o}(\sigma,k)<r}\mathfrak{C}_{[( \sigma,k,l)]}(f)\big{(}\widehat{\xi}_{[(\sigma,k,l)]}-\xi_{[(\sigma,k,l)]} \big{)}.\] Since \(\widehat{\xi}_{[(\sigma,k,l)]}-\xi_{[(\sigma,k,l)]}\) is a polynomial over any exchanged interval, this gives our claim. _Definition 8_.: Let any \(r\geq-\frac{m-2}{m}\). For any \(\bar{t}\in\mathscr{T}\mathscr{F}^{*}\) with \(\mathfrak{o}(\bar{t})<r\) denote by \(\mathfrak{F}_{\bar{t}}:C^{k_{r}}(M)\to\mathbb{C}\) the operator given by \(\mathfrak{F}_{\bar{t}}:=\mathfrak{f}_{\bar{t}}\circ\mathfrak{s}_{r}\). As \(\mathfrak{s}_{r}:C^{k_{r}}(M)\to C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{\alpha \in\mathcal{A}}I_{\alpha})\) with \(n=\lceil r\rceil\), \(a=\lceil r\rceil-r\) and \(\bar{t}\in\mathscr{T}_{a,n}^{*}\) by (5.19), the operator is well-defined and bounded. _Remark 7.2_.: Note that the definition of \(\mathfrak{F}_{\bar{t}}\) does not depend on the choice of \(r\). Indeed, suppose that \(\mathfrak{o}(\bar{t})<r_{1}<r_{2}\). 
Then for every \(f\in C^{k_{r_{2}}}(M)\), \[\mathfrak{s}_{r_{1}}(f)-\mathfrak{s}_{r_{2}}(f)=\sum_{[(\sigma,k,l)]\in \mathscr{T}\mathscr{C}/\sim\atop\tau_{1}\leq\mathfrak{o}(\sigma,k)<r_{2}} \mathfrak{C}_{[(\sigma,k,l)]}(f)\xi_{[(\sigma,k,l)]}.\] In view of (7.1), it follows that \(\mathfrak{f}_{\bar{t}}(\mathfrak{s}_{r_{1}}(f))=\mathfrak{f}_{\bar{t}}( \mathfrak{s}_{r_{2}}(f))\), which yields our claim. _Remark 7.3_.: For any \(\bar{t}\in\mathscr{T}\mathscr{F}^{*}\) take \(\mathfrak{o}(\bar{t})<r<\mathfrak{o}(\bar{t})+\frac{1}{m}\). By definition, \(k_{r}\leq k_{\mathfrak{o}(\bar{t})}+1\). It follows that the functional \(\mathfrak{F}_{\bar{t}}\) is defined on \(C^{k_{\mathfrak{o}(\bar{t})+1}}(M)\). If \(\mathfrak{o}(\bar{t})\notin\mathbb{Z}/m\) then the domain of \(\mathfrak{F}_{\bar{t}}\) is enlarged to \(C^{k_{\mathfrak{o}(\bar{t})}}(M)\). ### Proofs of the main results Proof of Theorem 1.1.: Choose \(r_{0}\in\mathbb{R}_{>0}\) which is the smallest element of \(\{\mathfrak{o}(\sigma,k):k\geq 0,\sigma\in\mathrm{Sd}(\psi_{\mathbb{R}})\cap M^{ \prime}\}\cup\{\mathfrak{o}(\bar{t}):\bar{t}\in\mathscr{T}\mathscr{F}\}\) larger than \(r\). By assumption, \(T\) satisfies the FFDC, \(f\in C^{k_{r}}(M)=C^{k_{r_{0}}}(M)\) and * \(\mathfrak{d}_{\sigma,j}^{k}(f)=0\) for all \((\sigma,k,j)\in\mathscr{T}\mathscr{D}\) with \(\widehat{\mathfrak{o}}(\mathfrak{d}_{\sigma,j}^{k})<r_{0}\); * \(\mathfrak{C}_{\sigma,l}^{k}(f)=0\) for all \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) with \(\mathfrak{o}(\mathfrak{C}_{\sigma,l}^{k})<r_{0}\); * \(\mathfrak{F}_{\bar{t}}(f)=0\) for all \(\bar{t}\in\mathscr{T}\mathscr{F}\) with \(\mathfrak{o}(\mathfrak{F}_{\bar{t}})<r_{0}\). By Theorem 1.1 in [12], \(\varphi_{f}\in C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{\alpha\in\mathcal{A}}I _{\alpha})\) with \(n=\lceil r_{0}\rceil\) and \(a=\lceil r_{0}\rceil-r_{0}\). Moreover, there exists \(C_{r}>0\) such that \(\|\varphi_{f}\|_{C^{n+\mathrm{P}_{\mathrm{a}}}(\sqcup_{\alpha\in\mathcal{A}} I_{\alpha})}\leq C_{r}\|f\|_{C^{k_{r}}(M)}\) for all \(f\in C^{k_{r}}(M)\cap\ker(\mathfrak{C}_{\sigma,l}^{k})\) for \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) with \(\mathfrak{o}(\mathfrak{C}_{\sigma,l}^{k})<r\). By assumption, in view of (7.3), \(\varphi_{f}=\mathfrak{s}_{r_{0}}(f)\). It follows that \(\mathfrak{f}_{\bar{t}}(\varphi_{f})=\mathfrak{f}_{\bar{t}}(\mathfrak{s}_{r_{0 }}(f))=\mathfrak{F}_{\bar{t}}(f)=0\) for all \(\bar{t}\in\mathscr{T}\mathscr{F}\) with \(\mathfrak{o}(\bar{t})<r\). As \(r<r_{0}=n-a\), in view of Theorem 6.8, there exists a solution \(v\in C^{r}(I)\) of the cohomological equation \(\varphi=v\circ T-v\) such that \(v(0)=0\). Moreover, there exists \(C^{\prime}_{r}>0\) such that \(\|v\|_{C^{r}(I)}\leq C^{\prime}_{r}\|\varphi_{f}\|_{C^{n+\mathrm{P}_{\mathrm{a}}}( \sqcup_{\alpha\in\mathcal{A}}I_{\alpha})}\). By Theorem 1.2 in [12], there exists \(u_{v,f}\in C^{r}(M^{\prime}_{e})\) satisfying \(Xu_{v,f}=f\) on \(M^{\prime}_{e}\). Moreover, there exists a constant \(C^{\prime\prime}_{r}>0\) such that \(\|u_{v,f}\|_{C^{r}(M^{\prime}_{e})}\leq C^{\prime\prime}_{r}(\|v\|_{C^{r}(I)}+ \|f\|_{C^{k_{r}}(M)})\). It follows that \[\|u_{v,f}\|_{C^{r}(M^{\prime}_{e})}\leq C^{\prime\prime}_{r}(1+C_{r}C^{\prime} _{r})\|f\|_{C^{k_{r}}(M)},\] which completes the proof. 
Proof of Theorem 1.2.: If \(f\in C^{k_{r}}(M)\) and there exists \(u\in C^{r}(M^{\prime}_{e})\) such that \(Xu=f\) on \(M^{\prime}_{e}\), then by Theorem 1.3 in [12], \(\mathfrak{d}^{k}_{\sigma,j}(f)=0\) for all \((\sigma,k,j)\in\mathscr{T}\mathscr{D}\) with \(\widehat{\mathfrak{o}}(\mathfrak{d}^{k}_{\sigma,j})<r\) and \(\mathfrak{C}^{k}_{\sigma,l}(f)=0\) for all \((\sigma,k,l)\in\mathscr{T}\mathscr{C}\) with \(\mathfrak{o}(\mathfrak{C}^{k}_{\sigma,l})<r\). In view of Theorem 1.1 in [12], \(\varphi_{f}\in C^{n+\mathrm{P}_{a}}(\sqcup_{\alpha\in\mathcal{A}}I_{\alpha})\) with \(n=\lceil r\rceil\) and \(a=\lceil r\rceil-r\). By (7.3), it follows that \(\varphi_{f}=\mathfrak{s}_{r}(f)\). Hence \(\mathfrak{F}_{\bar{t}}(f)=\mathfrak{f}_{\bar{t}}(\mathfrak{s}_{r}(f))=\mathfrak{f}_{\bar{t}}(\varphi_{f})\) for all \(\bar{t}\in\mathscr{T}\mathscr{F}\) with \(\mathfrak{o}(\bar{t})<r\). On the other hand \(\varphi_{f}=v\circ T-v\), where \(v\in C^{r}(I)\) is the restriction of \(u\) to \(I\). By Theorem 5.6, this gives \(\mathfrak{f}_{\bar{t}}(\varphi_{f})=0\) for all \(\bar{t}\in\mathscr{T}\mathscr{F}\) with \(\mathfrak{o}(\bar{t})<r\). Therefore, \(\mathfrak{F}_{\bar{t}}(f)=0\) for all \(\bar{t}\in\mathscr{T}\mathscr{F}\) with \(\mathfrak{o}(\bar{t})<r\).

Proof of Theorem 1.3.: In view of Lemma 7.1 and Theorem 5.6, for any \(f\in C^{k_{r}}(M)\), \[\varphi_{f} =\mathfrak{s}_{r}(f)+\sum_{\begin{subarray}{c}[(\sigma,k,l)]\in\mathscr{T}\mathscr{C}/\sim\\ \mathfrak{o}(\sigma,k)<r\end{subarray}}\mathfrak{C}_{[(\sigma,k,l)]}(f)\xi_{[(\sigma,k,l)]}\] \[=\sum_{\begin{subarray}{c}\bar{t}\in\mathscr{T}\mathscr{F}^{*}\\ \mathfrak{o}(\bar{t})<r\end{subarray}}\mathfrak{f}_{\bar{t}}(\mathfrak{s}_{r}(f))h_{\bar{t}}+\mathfrak{r}_{a,n}(\mathfrak{s}_{r}(f))+\sum_{\begin{subarray}{c}[(\sigma,k,l)]\in\mathscr{T}\mathscr{C}/\sim\\ \mathfrak{o}(\sigma,k)<r\end{subarray}}\mathfrak{C}_{[(\sigma,k,l)]}(f)\xi_{[(\sigma,k,l)]}\] \[=\sum_{\begin{subarray}{c}\bar{t}\in\mathscr{T}\mathscr{F}^{*}\\ \mathfrak{o}(\bar{t})<r\end{subarray}}\mathfrak{F}_{\bar{t}}(f)h_{\bar{t}}+\sum_{\begin{subarray}{c}[(\sigma,k,l)]\in\mathscr{T}\mathscr{C}/\sim\\ \mathfrak{o}(\sigma,k)<r\end{subarray}}\mathfrak{C}_{[(\sigma,k,l)]}(f)\xi_{[(\sigma,k,l)]}+\mathfrak{r}_{r}(f)\] with \(\mathfrak{r}_{r}:=\mathfrak{r}_{a,n}\circ\mathfrak{s}_{r}\), where \(n=\lceil r\rceil\) and \(a=\lceil r\rceil-r\). Note that (1.3), (1.4) and (1.5) follow directly from (5.8) and (7.2). Moreover, (1.6) follows from (5.21) and (5.22).

## Acknowledgements

The authors would like to thank Giovanni Forni for his help in completing the list of references. The authors acknowledge the Center of Excellence "Dynamics, mathematical analysis and artificial intelligence" at the Nicolaus Copernicus University in Torun and Centro di Ricerca Matematica Ennio De Giorgi - Scuola Normale Superiore, Pisa for hospitality during their visits. Research was partially supported by the Narodowe Centrum Nauki Grant 2022/45/B/ST1/00179.
2305.14072
When the Music Stops: Tip-of-the-Tongue Retrieval for Music
We present a study of Tip-of-the-tongue (ToT) retrieval for music, where a searcher is trying to find an existing music entity, but is unable to succeed as they cannot accurately recall important identifying information. ToT information needs are characterized by complexity, verbosity, uncertainty, and possible false memories. We make four contributions. (1) We collect a dataset - $ToT_{Music}$ - of 2,278 information needs and ground truth answers. (2) We introduce a schema for these information needs and show that they often involve multiple modalities encompassing several Music IR subtasks such as lyric search, audio-based search, audio fingerprinting, and text search. (3) We underscore the difficulty of this task by benchmarking a standard text retrieval approach on this dataset. (4) We investigate the efficacy of query reformulations generated by a large language model (LLM), and show that they are not as effective as simply employing the entire information need as a query - leaving several open questions for future research.
Samarth Bhargav, Anne Schuth, Claudia Hauff
2023-05-23T13:50:06Z
http://arxiv.org/abs/2305.14072v1
# When the Music Stops: Tip-of-the-Tongue Retrieval for Music

###### Abstract.

We present a study of _Tip-of-the-tongue (ToT)_ retrieval for _music_, where a searcher is trying to find an existing music entity, but is unable to succeed as they cannot accurately recall important identifying information. ToT information needs are characterized by complexity, verbosity, uncertainty, and possible false memories. We make four contributions. (1) We collect a dataset--_ToTMusic_--of 2,278 information needs and ground truth answers. (2) We introduce a schema for these information needs and show that they often involve multiple modalities encompassing several Music IR sub-tasks such as lyric search, audio-based search, audio fingerprinting, and text search. (3) We underscore the difficulty of this task by benchmarking a standard text retrieval approach on this dataset. (4) We investigate the efficacy of query reformulations generated by a large language model (LLM), and show that they are not as effective as simply employing the entire information need as a query--leaving several open questions for future research.

Music Retrieval; Tip-of-the-Tongue Retrieval; Cross Modal Retrieval
Our study design is closest to prior work on ToT known-item retrieval (Arguello et al. (2017)), with key differences in (1) the domain--music, (2) the corpus size--millions of items instead of thousands, and (3) reformulation experiments utilizing an LLM.

Music-ToT relates to several research areas in Music IR (MIR). **Lyric- and text-based retrieval** involves retrieving a song using lyrics or text (Krause et al., 2017; Krause et al., 2018; Krause et al., 2019). Techniques to handle _misheard_ lyrics are common (Krause et al., 2019; Krause et al., 2019; Krause et al., 2019), including modeling speech sounds (Krause et al., 2019), which may be insufficient, since ToT queries can contain _descriptions_ of lyrics, requiring semantic methods (Krause et al., 2019), or utilizing the audio itself (Krause et al., 2019).
Apart from lyrics, Music-ToT queries are frequently free-form natural language queries (cf. §4), requiring methods that can retrieve audio using text, as well as tags, genre or human-generated descriptions (Krause et al., 2017; Krause et al., 2019; Krause et al., 2019; Krause et al., 2019). **Content-based audio retrieval** (Krause et al., 2018) includes query-by-example (QBE) (Krause et al., 2019), where the audio is being queried as-is, e.g. audio fingerprinting (Krause et al., 2019). Alternatively, users can _imitate_ the wanted audio by vocalizing it, termed query-by-vocal-imitation (QBV) (Krause et al., 2019; Krause et al., 2019), which includes query-by-humming (QBH) (Krause et al., 2019). ToT queries frequently contain references to user-created audio clips as well as existing media like audio contained in videos (cf. §4). **Other modalities** like videos may need to be handled as well, necessitating multi-modal or cross-modal (retrieving one modality using another) methods (Krause et al., 2019), e.g. retrieving audio using video (Krause et al., 2019; Krause et al., 2019). Approaches to solve Music-ToT have to account for multiple modalities and free-form natural language including noise, e.g., uncertainty (Bhargav et al., 2017) and/or false memories (Bhargav et al., 2017; Krause et al., 2019).

## 3. Methodology

### Data Collection

**Gathering \(\mathit{ToT}_{All}\).** We gathered posts made across 2017-2021 in the r/TipOfMyTongue community, yielding 503,770 posts (after filtering out posts not marked _Solved_ or _Open_), each containing two fields: _title_ and _description_. We extracted text categories from the title, e.g. _SONG_ from _"[SONG] Slow dance song about the moon?"_. We manually identified a set of 11 overarching music-focused categories (e.g. _Music Video_, _Band_, _Rap Music_). We discarded the remaining non-music posts, resulting in \(\mathit{ToT}_{All}\): 94,363 (60,870 solved and 33,493 unsolved) Music-ToT posts. These posts form a large proportion--18.73%--of the 503K posts we started out with.

**Extracting \(\mathit{ToT}_{Music}\).** We extracted answers from _Solved_ posts following Bhargav et al. (2017), retaining _Solved_ posts which have a URL as an answer. If the URL pointed to a _track_ on _Spotify_, obtaining the answer was trivial. Otherwise, the title portion of the markdown inline URLs, formatted as [title](url) (with title often formatted as 'Artist-Song'), was used as a query to the _Spotify_ search API. Since the API returns multiple results, we created a classifier3 with 31 features based on the scores of the retriever, the edit distances between title and artist name, song title, etc. We used the classifier to predict if a title matches the track and artist, scoring 100% on precision on a held-out set of 100 samples. Low-confidence candidates were filtered out. This left us with a set of 4,342 posts with _Spotify_ tracks as answers. Lastly, we only retained those posts where the \(\mathrm{ISRC}^{4}\) of the answer track is also present in the Wasabi Corpus (Bhargav et al., 2017): a total of 2,278 posts. We call this collection \(\mathit{ToT}_{Music}\).

Footnote 3: Random Forest classifier; the number of estimators and the maximum depth were selected via grid search, with min/max-scaled features.
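The answer-linking step above can be illustrated with a short sketch. The snippet below is only a simplified illustration: the regular expression, the `search_fn` callback and its result fields (`artist`, `name`, `isrc`) are assumptions standing in for the actual _Spotify_ search API and the 31-feature Random Forest matcher described in the text.

```python
import re

# Markdown inline links of the form [title](url); answers in solved posts are
# often formatted this way, with the title resembling "Artist - Song".
MD_LINK = re.compile(r"\[([^\]]+)\]\((https?://[^)]+)\)")

def extract_answer_links(answer_text: str):
    """Return (title, url) pairs found in an answer comment."""
    return MD_LINK.findall(answer_text)

def candidate_query(title: str) -> str:
    """Turn an 'Artist - Song' style link title into a free-text search query."""
    return " ".join(part.strip() for part in title.split("-"))

def link_answer(answer_text: str, search_fn):
    """Link a solved post's answer to a catalog track.

    `search_fn(query)` is an assumed callback returning a ranked list of dicts
    with 'artist', 'name' and 'isrc' keys; the naive string-containment test
    below merely stands in for the feature-based matching classifier.
    """
    for title, url in extract_answer_links(answer_text):
        for hit in search_fn(candidate_query(title))[:10]:
            if hit["name"].lower() in title.lower() or hit["artist"].lower() in title.lower():
                return hit
    return None
```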
**Gathering reformulations.** We gathered reformulations for all posts in \(\mathit{ToT}_{Music}\) by prompting GPT-3 (Bhargav et al., 2017)5 with the respective post description and a word count limit: <description> _Summarize the query above to <N> words, focusing on musical elements_. We used \(N=\{10,25,50\}\).6 We also employed a prompt without a specific word limit: <post description> _Shorten the query above, focusing on musical elements_.

### Music-ToT Schema

Our annotation process involved three steps. We first developed and then refined a schema to describe Music-ToT information needs; in the final step, we annotated 100 samples from \(\mathit{ToT}_{Music}\).

**Developing the schema in 2 steps.** A preliminary study conducted with one author (self-rated music expertise 7 out of 10) and two volunteers (music expertise 8/10 and 7/10 respectively) involved assigning one or more labels to 78 sentences from 25 randomly sampled posts from \(\mathit{ToT}_{Music}\). We focused on developing new labels specific to Music-ToT, while also re-using labels from Arguello et al. (2017): specifically the _Context_ labels, pertaining to the context an item was encountered in (_Temporal Context_, _Physical Medium_, _Cross Media_, _Contextual Witness_, _Physical Location_, _Concurrent Events_), and _Other_ annotations (_Previous Search_, _Social_, _Opinion_, _Emotion_, _Relative Comparison_). The latter are generally applicable across ToT information needs. This preliminary study revealed 25 new music labels, in addition to 11 labels from prior work (\(6\times\mathit{Context}\) and \(5\times\mathit{Other}\)). In the second step, the three authors (self-rated musical expertise 7, 6 and 5 respectively) of this paper labeled 110 sentences (20 posts from \(\mathit{ToT}_{Music}\)) to validate the schema. Based on our results and discussions, we combined a few finer-grained categories with low support into more general categories, e.g. specific musical elements like _Rhythm / Repetition_, _Melody_, _Tempo_, etc., were combined into _Composition_, resulting in **28 labels in total**.

**Annotating.** Lastly, in step 3, two authors employed the final schema to annotate 536 sentences corresponding to 100 posts. The resulting labels, their frequency, category, inter-rater agreement (Cohen's \(\kappa\) (Bhargav et al., 2017; Krause et al., 2017)), along with their description and an example, are presented in Table 1.

## 4. Data Analysis

We now first discuss Table 1, followed by a brief discussion about the modalities present in the whole collection, \(\mathit{ToT}_{All}\).

**Annotation results.** Among the music-focused annotations, _Genre_ and _Composition_, a description of musical elements and how they fit together, are the two most frequent labels. This is followed by _Music Video Description_, and either direct quotes (_Lyric Quote_) or a description of the lyrics (_Story/Lyric Description_), further highlighting the different information needs that need to be addressed, i.e., lyric search, text search and multi-modal search. However, a simple extraction of _Genre_ and metadata such as _Time Period/Recency_, _Instrument_, etc., may not be useful without considering the most frequent label, _Uncertainty_. Search systems therefore would have to handle these elements, as well as consider potential false memories.
Furthermore, annotations like _Social_ and _Opinion_ are also fairly common occurrences in our data, which may have limited utility for retrieval [1], motivating reformulations (cf. §3.1). Searchers also express their queries in terms of other music entities in a _Relative Comparison_, and describe _Previous Search_ attempts, explicitly ruling out certain candidates. References to other modalities like user-created clips (_Recording_) or existing media (_Embedded Music_) also pose a challenge. We now explore this challenge with a brief study of references to external content in the entire collection, _ToTAll_.

**Cross-modal references.** Music-ToT, like other ToT domains, contains cross-modal and media references [1], where a searcher refers to external content. We here show that Music-ToT posts in particular contain such references frequently. To this end, we gathered frequent websites that appear in _ToTAll_. One author manually labeled these as one of: (1) _User Created_: a clip uploaded by a user, e.g., Vocaroo, Clyp.it, Google Drive, Dropbox, Instaudio, musiclab, Onlinesequencer, Streamable, Speakpipe. (2) _Extant Media_: a clip unlikely to be uploaded by a user, e.g. an existing clip, corresponding to content/social media websites like Spotify, Twitch, TikTok, or YouTube. (3) _Other URL:_ Not belonging to the previous two categories. We find that _Extant Media_ forms a larger proportion of queries (19K, 20.9%) compared to _User Created_ queries (14K, 15.3%), with a small number of posts containing references to both types (1.1%). Therefore, Music-ToT information needs are inherently multi-modal. We characterize the remaining 57.7% of queries as _descriptive_ queries, which include references to lyrics, or story descriptions (cf. §3.2). In summary, Music-ToT information needs are characterized by uncertainty and multi-modality, requiring methods like text-based audio retrieval, content-based audio retrieval/fingerprinting and multi- or cross-modal retrieval.

## 5. Benchmarks

### Experimental Setup

**Corpora.** We run experiments on two corpora. The first is the Wasabi 2.0 Corpus (Sohn et al., 2017; Sohn et al., 2017). It consists of 2M commercial songs from 77K artists and 200K albums. Crucially, (1) songs have the ISRC linked, enabling linking to data in _Spotify_; (2) it is an open dataset, consisting of rich information that includes lyrics, extensive metadata, and music snippets. We index the _Song Name_, _Artist Name_ and _Lyrics_7 of all songs using Elasticsearch (BM25 with default parameters). The second corpus corresponds to the _Spotify_ US catalog, consisting of hundreds of millions of tracks. The _Spotify_ search system (Krishtok et al., 2017) utilizes multiple retrieval stages (including lexical and semantic search) and incorporates historic log data for retrieval purposes.

Footnote 7: We also experimented with other fields like _Album Title_, but saw no improvement in retrieval effectiveness.

**Queries.** We conducted experiments on the 1,256 posts (849 train, 191 validation, and 216 test) from _ToTMusic_ that contain no URLs in the post title or post text; we make this choice because, in the most extreme case, the entire post may consist of just a URL, which would require audio-based search, whereas we focus on text-based methods.
From each post, we create different _queries_ and label them as follows: (1) Title: using the post title only; (2) Text: post text; (3) Title+Text: title & text concatenated; and finally, (4) Keywords: extracting up to ten keywords from the post text8 with Yake (Yake, 2017); (5) Reform\({}_{N}\): reformulations with \(N=\{10,25,50,\infty\}\).

Footnote 8: Keywords were deduplicated with threshold = 0.2 and algorithm = seqm.

**Evaluation.** We report Recall@K, which is equivalent to Success@K since each query has exactly one correct answer, for \(K=\{10,100,1000\}\) on Wasabi. All reported results are on the test set. For _Spotify_ search we describe the observed trends (due to the proprietary nature of the system).

### Results

Table 2 provides an overview of our Wasabi results.

**Post parts as query.** The low success across queries and \(K\) underscores the difficulty of the task. On Wasabi, Title queries are more effective than Text queries--increased verbosity leads to retrieval failure. However, the text may indeed contain data useful in retrieval, with comparable or higher effectiveness scores for Title+Text over Title at \(K=\{100,1000\}\), motivating keyword extraction: crucial details might be present in the text, but including the entire need as a query might harm effectiveness. Our keyword selection method, though, fails to outperform the other queries, except for Text on S@10. On _Spotify_ search we observe a different trend: Title+Text is the most effective query, followed by Title.

**LLM reformulations as query.** Examining Table 2, reformulations have limited success compared to Title queries. Reform\({}_{25}\) and Reform\({}_{50}\) perform as well as Title on S@1000, with Reform\({}_{\infty}\) outperforming it. While Keywords beat all but Reform\({}_{25}\) on S@10, they are outperformed by reformulations on S@100 and S@1000. On _Spotify_ search, we find that reformulations fare worse than Title queries for S@10, but see limited success on S@100, with Reform\({}_{25}\) and Reform\({}_{50}\) achieving higher effectiveness. Most importantly, there is no ideal \(N\) on either index, with varying success across metrics. We thus conclude that in our study, reformulations generated using state-of-the-art LLMs have only mixed success.

## 6. Conclusions

We explored Tip-of-the-Tongue retrieval for music. Of the 94K posts corresponding to Music-ToT information needs from an online community for ToT requests, we linked 2,278 posts to the corresponding answers in the Wasabi corpus, resulting in _ToTMusic_, thus enabling further research for this challenging task. We iteratively developed and refined a Music-ToT schema that contains 28 fine-grained labels, as shown in Table 1. Labeling 100 posts using this schema, we showed that users express uncertainty frequently, and almost as often refer to other modalities. We benchmarked a subset of 1.2K _descriptive_ queries from _ToTMusic_, and highlighted the difficulty of the task. Future work should leverage cross- and multi-modal retrieval as well as better approaches for reformulations.

###### Acknowledgements.

The authors would like to thank Gulfaraz Rahman and Ruben van Heusden for helping with the preliminary annotation work. The authors also thank Daniel Lazarovski and Humberto Corona Pampin for their input. Part of this research was supported by the NWO Innovational Research Incentives Scheme Vidi (016.Vidi.189.039). All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.
| **Query** | **S@10** | **S@100** | **S@1000** |
| --- | --- | --- | --- |
| Title | 0.0370 | 0.0833 | 0.1389 |
| Keywords | 0.0231 | 0.0463 | 0.0787 |
| Text | 0.0139 | 0.0648 | 0.0926 |
| Title+Text | 0.0324 | 0.0833 | 0.1713 |
| Reform\({}_{10}\) | 0.0139 | 0.0509 | 0.1204 |
| Reform\({}_{25}\) | 0.0278 | 0.0602 | 0.1389 |
| Reform\({}_{50}\) | 0.0185 | 0.0741 | 0.1389 |
| Reform\({}_{\infty}\) | 0.0139 | 0.0741 | 0.1574 |

Table 2. Overview of retrieval experiments on Wasabi, using Elasticsearch (BM25).
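As a minimal sketch of the evaluation protocol in §5 (not the authors' code), the snippet below computes Success@K from a run of ranked track ids and the single ground-truth answer per query; the dictionary format and the toy ids are assumptions made purely for illustration.

```python
from typing import Dict, List

def success_at_k(run: Dict[str, List[str]], answers: Dict[str, str], k: int) -> float:
    """Success@K: fraction of queries whose single correct answer is in the top-K.

    `run` maps a query id to a ranked list of retrieved track ids (e.g., from a
    BM25 query over song name, artist name and lyrics); `answers` maps a query
    id to its single ground-truth track id.
    """
    hits = sum(1 for qid, ranking in run.items() if answers.get(qid) in ranking[:k])
    return hits / len(run) if run else 0.0

# Toy example (ids are made up):
run = {"q1": ["t9", "t3", "t7"], "q2": ["t1", "t2"]}
answers = {"q1": "t3", "q2": "t5"}
print(success_at_k(run, answers, 10))  # 0.5
```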
2310.19938
Lyapunov-Based Dropout Deep Neural Network (Lb-DDNN) Controller
Deep neural network (DNN)-based adaptive controllers can be used to compensate for unstructured uncertainties in nonlinear dynamic systems. However, DNNs are also very susceptible to overfitting and co-adaptation. Dropout regularization is an approach where nodes are randomly dropped during training to alleviate issues such as overfitting and co-adaptation. In this paper, a dropout DNN-based adaptive controller is developed. The developed dropout technique allows the deactivation of weights that are stochastically selected for each individual layer within the DNN. Simultaneously, a Lyapunov-based real-time weight adaptation law is introduced to update the weights of all layers of the DNN for online unsupervised learning. A non-smooth Lyapunov-based stability analysis is performed to ensure asymptotic convergence of the tracking error. Simulation results of the developed dropout DNN-based adaptive controller indicate a 38.32% improvement in the tracking error, a 53.67% improvement in the function approximation error, and 50.44% lower control effort when compared to a baseline adaptive DNN-based controller without dropout regularization.
Saiedeh Akbari, Emily J. Griffis, Omkar Sudhir Patil, Warren E. Dixon
2023-10-30T18:54:08Z
http://arxiv.org/abs/2310.19938v1
# Lyapunov-Based Dropout Deep Neural Network (Lb-DDNN) Controller ###### Abstract Deep neural network (DNN)-based adaptive controllers can be used to compensate for unstructured uncertainties in nonlinear dynamic systems. However, DNNs are also very susceptible to overfitting and co-adaptation. Dropout regularization is an approach where nodes are randomly dropped during training to alleviate issues such as overfitting and co-adaptation. In this paper, a dropout DNN-based adaptive controller is developed. The developed dropout technique allows the deactivation of weights that are stochastically selected for each individual layer within the DNN. Simultaneously, a Lyapunov-based real-time weight adaptation law is introduced to update the weights of all layers of the DNN for online unsupervised learning. A non-smooth Lyapunov-based stability analysis is performed to ensure asymptotic convergence of the tracking error. Simulation results of the developed dropout DNN-based adaptive controller indicate a 38.32% improvement in the tracking error, a 53.67% improvement in the function approximation error, and 50.44% lower control effort when compared to a baseline adaptive DNN-based controller without dropout regularization. Deep neural network, dropout, adaptive control, Lyapunov methods, nonlinear control systems. ## I Introduction Empirical evidence indicates that deep neural networks (DNNs) can provide better function approximation than single layer neural networks [1]. Traditionally, DNN-based controllers are trained using offline training methods based on prior collected datasets. [2, Section 6.6]. Recent developments in [3, 4, 5, 6, 7] use Lyapunov-based methods to develop unsupervised online learning for all weights of a deep neural network (i.e., Lb-DNNs). Unfortunately, both offline and Lb-DNNs can exhibit significantly degraded performance due to data overfitting. Another challenge that decreases generalization and performance of the trained DNN is co-adaptation, where multiple neurons, or even entire layers, become overly reliant on each other during the training process [8]. One effective approach to address these issues is through dropout regularization. Dropout was originally introduced by G. Hinton in [9] to prevent co-adaptation of feature detectors and improve the generalization performance of DNNs. Dropout regularization involves stochastically dropping out neurons during training, which helps prevent over-fitting, enhances the overall function approximation performance, and efficiently allocates computational resources while updating the network's weights [10, 11]. By setting the activation of certain individual weights to zero, dropout induces sparse representation in the network which reduces co-dependency in neurons. Moreover, dropouts can be viewed as training an ensemble of multiple DNNs with smaller width that are trained independently. Independence in the training has a regularizing effect and provides better generalization to new information. This intuitive reasoning is also applicable for using dropout in DNN-based adaptive control, since dropouts mitigate co-adaptation by reducing the number of weights influencing an adaptation law. Although dropout regularization has been used for offline training of DNNs in results such as [12, 13], its application has been limited in real-time adaptive control settings. 
In [10], the dropout method is employed on a DNN to improve the training performance of inner layers in pseudo real-time, and through simulations, the study demonstrates the improved performance of a DNN-based adaptive controller with dropout. However, the pseudo real-time adaptation laws in [10] are not stability-driven but are rather based on a modular design where the stability analysis is primarily facilitated using robust control techniques. This paper introduces a novel dropout technique aimed at enhancing the function approximation performance of a DNN-based adaptive controller that updates the weights of all layers of the DNN using the Lyapunov-based update law in [3] (i.e., a Lyapunov-based Dropout Deep Neural Network (Lb-DDNN)). The proposed technique involves the selective inactivation, or dropout, of weights associated with randomly selected neurons within each DNN layer. To incorporate dropout regularization, a new recursive DNN representation and stability-driven weight adaptation laws are constructed by considering the effect of randomization matrices on the closed-loop error system. Through a non-smooth Lyapunov-based stability analysis, the designed controller is guaranteed to stabilize the system in the sense that the tracking error asymptotically converges to zero. Simulation experiments are performed to compare the Lb-DDNN adaptive controller with the baseline adaptive DNN controller developed in [3]. The simulation results show a \(35.56\%\) improvement in the tracking error, a \(49.94\%\) improvement in the function approximation error, and \(48.56\%\) lower control effort in the proposed controller when compared to the baseline controller.

## II Problem Formulation

### _Notation_

The space of essentially bounded Lebesgue measurable functions is denoted by \(\mathcal{L}_{\infty}\). Given two functions \(f:A\to B\) and \(g:B\to C\), where \(A\), \(B\), and \(C\) are sets, the composition of \(f\) and \(g\), denoted as \(g\circ f\), is a new function \(h:A\to C\) defined as \(h\triangleq g\circ f=g\left(f\left(x\right)\right)\), for all \(x\in A\). Let \(\mathbf{0}_{m\times n}\) denote a zero matrix of dimension \(m\times n\), and let \(I_{n\times n}\) denote the \(n\times n\) identity matrix. For matrices \(A\in\mathbb{R}^{m\times n}\) and \(B\in\mathbb{R}^{p\times q}\), the Kronecker product is denoted as \(A\otimes B\). Given a matrix \(A\triangleq\left[a_{i,j}\right]\in\mathbb{R}^{n\times m}\), where \(a_{i,j}\) denotes the element in the \(i^{th}\) row and \(j^{th}\) column of \(A\), the vectorization operator is defined as \(\text{vec}\left(A\right)\triangleq\left[a_{1,1},\ldots,a_{1,m},\ldots,a_{n,1},\ldots,a_{n,m}\right]^{\top}\in\mathbb{R}^{nm}\). From [14, Proposition 7.1.9] and given matrices \(A\in\mathbb{R}^{p\times a}\), \(B\in\mathbb{R}^{a\times r}\), and \(C\in\mathbb{R}^{r\times s}\), the vectorization operator satisfies the property \(\text{vec}\left(ABC\right)=\left(C^{\top}\otimes A\right)\text{vec}\left(B\right).\) Differentiating \(\text{vec}\left(ABC\right)\) on both sides with respect to \(\text{vec}\left(B\right)\) yields the property \(\frac{\partial}{\partial\text{vec}\left(B\right)}\text{vec}\left(ABC\right)=C^{\top}\otimes A\). The right-to-left matrix product operator is represented by \(\overset{\curvearrow}{\prod}\), i.e., \(\overset{\curvearrow}{\prod}_{p=1}^{m}A_{p}=A_{m}\ldots A_{2}A_{1}\), and \(\overset{\curvearrow}{\prod}_{p=a}^{m}A_{p}=1\) if \(a>m\).
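As a quick numerical sanity check of the vectorization identity above (not part of the original development), the following NumPy snippet verifies \(\text{vec}\left(ABC\right)=\left(C^{\top}\otimes A\right)\text{vec}\left(B\right)\); it uses the standard column-stacking convention for \(\text{vec}\left(\cdot\right)\), under which the cited identity from [14] holds.

```python
import numpy as np

# Numerical check of vec(ABC) = (C^T kron A) vec(B).
# vec() is taken here as column-stacking (NumPy order="F"), the convention for
# which the Kronecker identity holds; shapes follow A (p x a), B (a x r), C (r x s).
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))   # p x a
B = rng.normal(size=(3, 4))   # a x r
C = rng.normal(size=(4, 5))   # r x s

vec = lambda M: M.flatten(order="F")
lhs = vec(A @ B @ C)                # vec(ABC), length p*s = 10
rhs = np.kron(C.T, A) @ vec(B)      # (C^T kron A) vec(B)
print(np.allclose(lhs, rhs))        # expected: True
```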
The Filippov set-valued map defined in [15, Equation 2b] is denoted by \(\text{K}\left[\cdot\right]\). The notation \(\overset{\text{a.a.t.}}{\left(\cdot\right)}\) denotes that the relation \(\left(\cdot\right)\) holds for almost all time (a.a.t.). Consider a Lebesgue measurable and locally essentially bounded function \(h:\mathbb{R}^{n}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}\). Then, the function \(y:\mathcal{I}\rightarrow\mathbb{R}^{n}\) is called a Filippov solution of \(\dot{y}=h\left(y,t\right)\) on the interval \(\mathcal{I}\subseteq\mathbb{R}_{\geq 0}\) if \(y\) is absolutely continuous on \(\mathcal{I}\) and \(\dot{y}\overset{\text{a.a.t.}}{\in}\text{K}\left[h\right]\left(y,t\right)\). Given some functions \(f\) and \(g\), the notation \(f\left(y\right)=\mathcal{O}^{m}\left(g\left(y\right)\right)\) means that there exist constants \(M\in\mathbb{R}_{>0}\) and \(y_{0}\in\mathbb{R}\) such that \(\left\|f(y)\right\|\leq M\left\|g(y)\right\|^{m}\) for all \(y\geq y_{0}\). The operator \(\text{proj}\left(\cdot\right)\) denotes the projection operator defined in [16, Appendix E, Eq. E.4].

### _Dynamic Model and Control Objective_

Consider a control-affine nonlinear system modeled as \[\dot{x}\left(t\right)=f\left(x\left(t\right)\right)+u\left(t\right), \tag{1}\] where \(t\in\mathbb{R}_{\geq 0}\), \(x:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}\), \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), and \(u:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}\) denote continuous time, the state, the unknown differentiable drift vector field, and the control input, respectively. The control objective is to design a controller \(u\left(t\right)\) such that the state tracks the desired trajectory \(x_{d}\). To achieve the control objective, an adaptive Lb-DNN architecture and a controller are designed to learn the unknown drift vector field and to achieve asymptotic convergence of the tracking error, respectively. To quantify the control objective, the tracking error \(e:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}\) is defined as \[e\left(t\right)\triangleq x\left(t\right)-x_{d}\left(t\right), \tag{2}\] where \(x_{d}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}\) denotes a continuously differentiable desired trajectory.

**Assumption 1**.: The desired trajectory is designed such that for all \(t\in\mathbb{R}_{\geq 0}\), \(x_{d}\left(t\right)\in\Omega\), and \(\dot{x}_{d}\in\mathcal{L}_{\infty}\), where \(\Omega\subset\mathbb{R}^{n}\) is a known compact set. Hence, the desired trajectory can be bounded as \(\left\|x_{d}\right\|\leq\overline{x_{d}}\), where \(\overline{x_{d}}\in\mathbb{R}_{>0}\) is a known constant.

## III Control Design

### _Deep Neural Network Architecture_

To estimate the unknown nonlinear drift vector field \(f\left(x\right)\), a Lb-DNN architecture is developed using dropout. Dropout randomly omits neurons while training, which helps mitigate over-fitting and co-adaptation, thus improving the overall performance and function approximation capabilities of the DNN [10, 11]. Leveraging the Lyapunov stability-driven weight adaptation laws developed in [3], the dropout DNN is designed such that randomization matrices are used to incorporate dropout techniques into the online, stability-driven weight adaptation. Through the randomization matrices, weights associated with a batch of randomly selected neurons are inactivated, i.e., dropped out, to reduce the interdependency and excessive reliance on specific weights and neurons.
As shown in Figure 1, let the dropout DNN architecture, \(\Phi:\mathbb{R}^{n}\times\{0,1\}^{\sum_{m=0}^{k}L_{m}\times\sum_{m=0}^{k}L_{m}}\times\mathbb{R}^{\sum_{j=0}^{k}L_{j}L_{j+1}}\rightarrow\mathbb{R}^{L_{k+1}}\), be defined as \[\Phi\left(x,R_{i},\theta\right)=\left(R_{i,k}V_{k}\right)^{\top}\phi_{k}\circ\cdots\circ\left(R_{i,1}V_{1}\right)^{\top}\phi_{1}\circ\left(R_{i,0}V_{0}\right)^{\top}x, \tag{3}\] where \(k\in\mathbb{Z}_{>0}\) denotes the number of layers in \(\Phi\left(x,R_{i},\theta\right)\), and \(\phi_{j}:\mathbb{R}^{L_{j}}\rightarrow\mathbb{R}^{L_{j}}\) denotes the vector of smooth activation functions in the \(j^{\text{th}}\) layer, for all \(j\in\{1,\cdots,k\}\).1 For \(j\in\{0,\cdots,k\}\), \(V_{j}\in\mathbb{R}^{L_{j}\times L_{j+1}}\) and \(L_{j}\in\mathbb{Z}_{>0}\) represent the weight matrix and the number of nodes in the \(j^{\text{th}}\) layer of \(\Phi\), respectively. For notational simplicity, the weights can be represented in a vector \(\theta\in\mathbb{R}^{\sum_{j=0}^{k}L_{j}L_{j+1}}\) as \(\theta\triangleq\left[\text{vec}\left(V_{0}\right)^{\top},\text{vec}\left(V_{1}\right)^{\top},\cdots,\text{vec}\left(V_{k}\right)^{\top}\right]^{\top}\). Let \(R_{i}\in\{0,1\}^{\sum_{m=0}^{k}L_{m}\times\sum_{m=0}^{k}L_{m}}\) denote the \(i^{\text{th}}\) instance of the randomization matrix, for all \(i\in\mathcal{I}\triangleq\{1,\cdots,J\}\), where \(\mathcal{I}\) denotes the set of all possible switching instances.

Footnote 1: Although \(\phi_{j}\) is defined as a smooth function, the subsequent analysis allows the inclusion of non-smooth activation functions by using the switched systems analysis in [3].

Figure 1: The structure of a DNN with three hidden layers, where the dashed and solid lines respectively represent the randomly dropped out and selected weights.

After every user-selected constant time period of \(\delta t\in\mathbb{R}_{>0}\) seconds, the randomization matrix \(R_{i}\) switches to \(R_{i+1}\), where \(R_{i+1}\) is randomly selected from all possible permutations of randomization matrices. Each permutation is defined as \[R_{i}\triangleq\left[\begin{array}{cccc}R_{i,0}&\mathbf{0}_{L_{0}\times L_{1}}&\cdots&\mathbf{0}_{L_{0}\times L_{k}}\\ \mathbf{0}_{L_{1}\times L_{0}}&R_{i,1}&\cdots&\mathbf{0}_{L_{1}\times L_{k}}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{0}_{L_{k}\times L_{0}}&\mathbf{0}_{L_{k}\times L_{1}}&\cdots&R_{i,k}\end{array}\right],\ \ \forall i\in\mathcal{I}.\] For \(i\in\left\{1,\cdots,J\right\}\) and \(j\in\left\{1,\cdots,k\right\}\), \(R_{i,j}\in\left\{0,1\right\}^{L_{j}\times L_{j}}\) is designed to be a diagonal matrix, and the matrix \(R_{i,0}\) is an identity matrix. The number of ones on the diagonal of each \(R_{i,j}\) is equal to a user-selected constant number \(n_{j}\), and the placement of non-zero elements on the diagonal of the matrix \(R_{i,j}\) randomly changes after every \(\delta t\) seconds.2 To illustrate the design of the randomization matrix \(R_{i,j}\) and the effect of dropout on the DNN architecture, the following example is provided.

Footnote 2: Once the system reaches the steady state, the randomization can be stopped in the sense that \(R_{i}\) is replaced with identity matrices. This can be considered as the final permutation of \(R_{i}\) for all \(i\in\mathcal{I}\).

**Example 1**.: Consider \(R_{i,2}\in\left\{0,1\right\}^{3\times 3}\), \(V_{2}\in\mathbb{R}^{3\times 2}\), and let \(n_{2}=1\).
Therefore, every \(\delta t\) seconds, \(n_{2}=1\) of the elements on the diagonal of \(R_{i,2}\) are randomly set to 1 and the others are zeroed. The considered permutations of \(R_{i,2}\) for \(i\in\left\{1,2,3\right\}\) are \[R_{1,2}=\left[\begin{array}{ccc}1&0&0\\ 0&0&0\\ 0&0&0\end{array}\right],\ R_{2,2}=\left[\begin{array}{ccc}0&0&0\\ 0&1&0\\ 0&0&0\end{array}\right],\ R_{3,2}=\left[\begin{array}{ccc}0&0&0\\ 0&0&0\\ 0&0&1\end{array}\right].\] Let \(v_{p,q}\in\mathbb{R}\) denote each individual weight of the weight matrix \(V_{2}\), for rows \(p\in\left\{1,2,3\right\}\) and columns \(q\in\left\{1,2\right\}\). For \(p\in\left\{1,2,3\right\}\), let \(\phi_{2,p}:\mathbb{R}\rightarrow\mathbb{R}\) denote the activation functions of the second layer such that the activation vector is \(\phi_{2}=\left[\phi_{2,1},\,\phi_{2,2},\,\phi_{2,3}\right]^{\top}\). Therefore, in the presence and absence of dropout, the terms \(\left(R_{3,2}V_{2}\right)^{\top}\phi_{2}\) and \(V_{2}^{\top}\phi_{2}\) are respectively obtained as \[\left(R_{3,2}V_{2}\right)^{\top}\phi_{2} =\left[\begin{array}{cc}v_{1,1}&v_{1,2}\\ v_{2,1}&v_{2,2}\\ v_{3,1}&v_{3,2}\end{array}\right]^{\top}\left[\begin{array}{ccc}0&0&0\\ 0&0&0\\ 0&0&1\end{array}\right]^{\top}\left[\begin{array}{c}\phi_{2,1}\\ \phi_{2,2}\\ \phi_{2,3}\end{array}\right]\] \[=\left[\begin{array}{c}v_{3,1}\phi_{2,3}\\ v_{3,2}\phi_{2,3}\end{array}\right]. \tag{4}\] \[V_{2}^{\top}\phi_{2} =\left[\begin{array}{cc}v_{1,1}&v_{1,2}\\ v_{2,1}&v_{2,2}\\ v_{3,1}&v_{3,2}\end{array}\right]^{\top}\left[\begin{array}{c}\phi_{2,1}\\ \phi_{2,2}\\ \phi_{2,3}\end{array}\right]\] \[=\left[\begin{array}{c}v_{1,1}\phi_{2,1}+v_{2,1}\phi_{2,2}+v_{3,1}\phi_{2,3}\\ v_{1,2}\phi_{2,1}+v_{2,2}\phi_{2,2}+v_{3,2}\phi_{2,3}\end{array}\right]. \tag{5}\] Comparing (4) and (5) suggests how the dropout method deactivates the activation functions associated with the zeros on the diagonal of the randomization matrix. Since the dropout matrix is randomly generated, a new batch of weights is selected every \(\delta t\) seconds. The universal function approximation property states that the function space of (3) is dense in \(\mathcal{C}\left(\mathcal{X}\right)\), where \(\mathcal{C}\left(\mathcal{X}\right)\) denotes the space of continuous functions over the compact set \(\mathcal{X}\subseteq\mathbb{R}^{n}\) with \(x\in\mathcal{X}\) [17, Theorem 1.1]. Therefore, there exists a corresponding vector of ideal weights \(\theta^{*}\in\mathbb{R}^{\sum_{j=0}^{k}L_{j}L_{j+1}}\) such that \(\sup_{x\in\mathcal{X}}\left\|f\left(x\right)-\Phi\left(x,R_{i},\theta^{*}\right)\right\|\leq\overline{\varepsilon}\). Thus, the drift vector field can be modeled as \[f\left(x\right) = \Phi\left(x,R_{i},\theta^{*}\right)+\varepsilon\left(x\right), \tag{6}\] where \(\varepsilon:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) denotes an unknown function reconstruction error that can be bounded as \(\sup\left\|\varepsilon\left(x\right)\right\|\leq\overline{\varepsilon}\). **Assumption 2**.: The vector of ideal weights can be bounded by a known constant \(\overline{\theta}\in\mathbb{R}_{>0}\) as \(\left\|\theta^{*}\right\|\leq\overline{\theta}\) [18, Assumption 1]. ### _Adaptation Law_ To fulfill the tracking objective, the DNN model in (6) is used to estimate the unknown drift dynamics in (1). 
Since the ideal weights of the modeled DNN are unknown, adaptive estimates of the weight matrices are developed to learn the unknown drift dynamics \(f\left(x\right)\). Let \(\hat{\theta}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{\sum_{j=0}^{k}L_{j}L_{j+1}}\) be defined as \(\hat{\theta}\left(t\right)\triangleq\left[\text{vec}\left(\widehat{V}_{0}\right)^{\top},\text{vec}\left(\widehat{V}_{1}\right)^{\top},\cdots,\text{vec}\left(\widehat{V}_{k}\right)^{\top}\right]^{\top}\), where \(\widehat{V}_{j}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{L_{j}\times L_{j+1}}\), for all \(j\in\left\{0,\cdots,k\right\}\), denote the weight estimates. The corresponding weight estimation error \(\tilde{\theta}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{\sum_{j=0}^{k}L_{j}L_{j+1}}\) is defined as \(\tilde{\theta}\left(t\right)\triangleq\theta^{*}-\hat{\theta}\left(t\right)\). Using the weight estimates \(\hat{\theta}\), the adaptive estimate of \(f\left(x\right)\) can be represented as \(\Phi\left(x,R_{i},\hat{\theta}\right)\triangleq\left(R_{i,k}\widehat{V}_{k}\right)^{\top}\phi_{k}\circ\cdots\circ\left(R_{i,1}\widehat{V}_{1}\right)^{\top}\phi_{1}\circ\left(R_{i,0}\widehat{V}_{0}\right)^{\top}x\). The estimated DNN architecture can be written in a recursive relation as \[\widehat{\Phi}_{j}=\begin{cases}\left(R_{i,j}\widehat{V}_{j}\right)^{\top}\phi_{j}\left(\widehat{\Phi}_{j-1}\right),&j\in\left\{1,...,k\right\},\\ \left(R_{i,0}\widehat{V}_{0}\right)^{\top}x,&j=0,\end{cases} \tag{7}\] where \(\widehat{\Phi}_{j}\) is the shorthand notation for \(\widehat{\Phi}_{j}\triangleq\Phi_{j}\left(x,R_{i},\hat{\theta}\right)\), and \(\widehat{\Phi}=\widehat{\Phi}_{k}\). Based on the subsequent stability analysis, the adaptation law for the DNN weight estimates is designed as \[\dot{\hat{\theta}}\triangleq\text{proj}\left(\Gamma_{\theta}\widehat{\Phi}^{\prime\top}e\right), \tag{8}\] where \(\Gamma_{\theta}\in\mathbb{R}^{\sum_{j=0}^{k}L_{j}L_{j+1}\times\sum_{j=0}^{k}L_{j}L_{j+1}}\) denotes a positive-definite adaptation gain matrix, and \(\widehat{\Phi}^{\prime}\) is a shorthand notation for the Jacobian \(\widehat{\Phi}^{\prime}\triangleq\frac{\partial\Phi\left(x,R_{i},\hat{\theta}\right)}{\partial\hat{\theta}}\). The Jacobian \(\widehat{\Phi}^{\prime}\) can be represented as \(\widehat{\Phi}^{\prime}\triangleq\left[\widehat{\Phi}^{\prime}_{0},...,\widehat{\Phi}^{\prime}_{k}\right],\) where the shorthand notation \(\widehat{\Phi}^{\prime}_{j}\) is defined as \(\widehat{\Phi}^{\prime}_{j}\triangleq\frac{\partial\Phi_{j}\left(x,R_{i},\hat{\theta}\right)}{\partial\hat{\theta}}\), for all \(j\in\left\{0,...,k\right\}\). The projection operator is incorporated in the update law to ensure that \(\hat{\theta}\left(t\right)\) remains bounded as \(\left\|\hat{\theta}\left(t\right)\right\|\leq\overline{\theta}\) for all \(t\in\mathbb{R}_{\geq 0}\). The terms of \(\widehat{\Phi}^{\prime}\) follow from applying the chain rule to the recursion in (7), where \(\hat{\phi}_{j}\) and the Jacobian \(\hat{\phi}_{j}^{\prime}\) are the short-hand notations for \(\hat{\phi}_{j}\triangleq\phi_{j}\left(\widehat{\Phi}_{j-1}\right)\) and \(\hat{\phi}_{j}^{\prime}\triangleq\phi_{j}^{\prime}\left(\widehat{\Phi}_{j-1}\right)=\frac{\partial\phi_{j}\left(\widehat{\Phi}_{j-1}\right)}{\partial\widehat{\Phi}_{j-1}}\), respectively. _Remark 1_.: The presence of the matrices \(R_{i}\), \(i\in\mathcal{I}\), in \(\widehat{\Phi}\) and in the Jacobian \(\widehat{\Phi}^{\prime}\) used in (8) mitigates co-adaptation by reducing the interdependency of weights in the adaptation law. ### _Closed-Loop Error System_ The designed DNN estimate is used in the developed controller to approximate the unknown drift vector field in (1). 
By incorporating the developed adaptive DNN estimate, the controller in (10) is designed such that the state \(x\) tracks the desired trajectory \(x_{d}\) despite inactivation of weights associated with a randomly selected batch of neurons. Based on the subsequent stability analysis, the control input is designed as \[u\left(t\right)\triangleq\dot{x}_{d}-\widehat{\Phi}-k_{e}e-k_{s}\text{sgn}\left(e\right), \tag{10}\] where \(k_{e},k_{s}\in\mathbb{R}_{>0}\) are constant control gains. Taking the time-derivative of (2), substituting (1), (6), and the designed controller in (10), and canceling cross-terms yields the closed-loop error system \[\dot{e}\left(t\right)=\Phi\left(x,R_{i},\theta^{*}\right)-\widehat{\Phi}+\varepsilon\left(x\right)-k_{s}\text{sgn}\left(e\right)-k_{e}e. \tag{11}\] To address the technical challenges in deriving the adaptation law for the DNN weights, many results use Taylor series approximation-based techniques [3, 5, 7, 18, Eq. 22]. Applying a first-order Taylor series approximation-based error model to \(\Phi\left(x,R_{i},\theta^{*}\right)-\widehat{\Phi}\) yields \[\Phi\left(x,R_{i},\theta^{*}\right)-\widehat{\Phi}=\widehat{\Phi}^{\prime}\tilde{\theta}+\mathcal{O}^{2}\left(\left\|\tilde{\theta}\right\|\right), \tag{12}\] where \(\mathcal{O}^{2}\left(\left\|\tilde{\theta}\right\|\right)\) denotes the higher-order terms that can be bounded as \(\left\|\mathcal{O}^{2}\left(\left\|\tilde{\theta}\right\|\right)\right\|\leq\Delta\), where \(\Delta\in\mathbb{R}_{>0}\) denotes a known constant [5, Eq. 18]. Substituting (12) into (11) yields \[\dot{e}=\widehat{\Phi}^{\prime}\tilde{\theta}+\mathcal{O}^{2}\left(\left\|\tilde{\theta}\right\|\right)+\varepsilon\left(x\right)-k_{s}\text{sgn}\left(e\right)-k_{e}e. \tag{13}\] To facilitate the subsequent stability analysis, let \(z:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{\psi}\) denote the concatenated error system defined as \[z\left(t\right)\triangleq\left[e^{\top}\left(t\right),\,\tilde{\theta}^{\top}\left(t\right)\right]^{\top}, \tag{14}\] where \(\psi\triangleq n+\sum\limits_{j=0}^{k}L_{j}L_{j+1}\). Additionally, let \(\dot{z}=h\left(z,t\right)\), where \(h:\mathbb{R}^{\psi}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{\psi}\) is defined as \[h\left(z,t\right)\triangleq\left[\begin{array}{c}\widehat{\Phi}^{\prime}\tilde{\theta}+\mathcal{O}^{2}\left(\left\|\tilde{\theta}\right\|\right)+\varepsilon\left(x\right)-k_{s}\text{sgn}\left(e\right)-k_{e}e\\ -\text{proj}\left(\Gamma_{\theta}\widehat{\Phi}^{\prime\top}e\right)\end{array}\right]. \tag{15}\] ## IV Stability Analysis Let \(V_{L}:\mathbb{R}^{\psi}\rightarrow\mathbb{R}_{\geq 0}\) denote the Lyapunov function candidate defined as \[V_{L}\left(z\right)\triangleq\frac{1}{2}e^{\top}e+\frac{1}{2}\tilde{\theta}^{\top}\Gamma_{\theta}^{-1}\tilde{\theta}. \tag{16}\] Given the known constants \(\underline{\alpha},\overline{\alpha}\in\mathbb{R}_{>0}\), the Lyapunov function candidate satisfies the following inequality: \[\underline{\alpha}\left\|z\right\|^{2}\leq V_{L}\left(z\right)\leq\overline{\alpha}\left\|z\right\|^{2}. \tag{17}\] Let the open and connected sets \(\mathcal{B}\subset\mathbb{R}^{\psi}\) and \(\Upsilon\subseteq\mathcal{X}\) be defined as \(\mathcal{B}\triangleq\left\{\varsigma\in\mathbb{R}^{\psi}:\left\|\varsigma\right\|\leq\omega\sqrt{\underline{\alpha}/\overline{\alpha}}\right\}\) and \(\Upsilon\triangleq\left\{\varsigma\in\mathcal{X}:\left\|\varsigma\right\|<\overline{x_{d}}+\omega\right\}\). 
Theorem 1 uses the non-smooth analysis technique in [19] to establish the invariance properties of Filippov solutions to \(\dot{z}\) and to guarantee asymptotic convergence of the tracking error \(e\). **Theorem 1**.: _The controller designed in (10) and the DNN update law developed in (8) guarantee asymptotic tracking error convergence for the dynamical system in (1) in the sense that \(\lim\limits_{t\rightarrow\infty}\left\|e\left(t\right)\right\|=0\), given \(z\left(t_{0}\right)\in\mathcal{B}\) and that the gain condition \(k_{s}>\overline{\varepsilon}+\Delta\) is satisfied._ Proof.: Let \(\partial V_{L}\) denote the Clarke gradient of \(V_{L}\) defined in [20, p. 39]. Since the Lyapunov function candidate is continuously differentiable, \(\partial V_{L}(z)=\{\nabla V_{L}(z)\}\), where \(\nabla\) denotes the standard gradient operator. From (15), it can be concluded that for all \(i\in\mathcal{I}\), \(V_{L}\) satisfies the following differential inclusion \[\dot{V}_{L} \overset{\text{a.a.t.}}{\in}\bigcap\limits_{\sigma\in\partial V_{L}(z)}\sigma^{\top}\text{K}\left[h\right]\left(z,t\right)\] \[=\nabla V_{L}^{\top}\left(z\right)\text{K}\left[h\right]\left(z,t\right)\] \[=e^{\top}\bigg{(}\widehat{\Phi}^{\prime}\tilde{\theta}+\mathcal{O}^{2}\left(\left\|\tilde{\theta}\right\|\right)+\varepsilon\left(x\right)-k_{e}e\] \[\quad-k_{s}\text{K}\left[\text{sgn}\right]\left(e\right)\bigg{)}-\tilde{\theta}^{\top}\Gamma_{\theta}^{-1}\text{K}\left[\text{proj}\right]\left(\Gamma_{\theta}\widehat{\Phi}^{\prime\top}e\right). \tag{18}\] Using [16, Lemma E.1.IV], the bounds on \(\mathcal{O}^{2}\left(\left\|\tilde{\theta}\right\|\right)\) and \(\varepsilon\left(x\right)\), the fact that \(\text{K}\left[\text{proj}\right]\left(\cdot\right)\) is the set of convex combinations of \(\text{proj}\left(\cdot\right)\) and \(\left(\cdot\right)\), and therefore, \(-\tilde{\theta}^{\top}\Gamma_{\theta}^{-1}\text{K}\left[\text{proj}\right]\left(\Gamma_{\theta}\widehat{\Phi}^{\prime\top}e\right)\leq-\tilde{\theta}^{\top}\widehat{\Phi}^{\prime\top}e\), and canceling cross terms, (18) can be upper-bounded as \[\dot{V}_{L}\overset{\text{a.a.t.}}{\leq}-k_{e}\left\|e\right\|^{2}-k_{s}\left\|e\right\|+\left\|e\right\|\left(\Delta+\overline{\varepsilon}\right),\quad\forall i\in\mathcal{I}.\] Selecting the gain \(k_{s}\) according to the gain condition in Theorem 1 yields \[\dot{V}_{L}\overset{\text{a.a.t.}}{\leq}-k_{e}\left\|e\right\|^{2}. \tag{19}\] From the inequality obtained in (19), [19, Corollary 1] can be invoked to conclude that \(z\in\mathcal{L}_{\infty}\) and \(\lim\limits_{t\rightarrow\infty}\left\|e\left(t\right)\right\|=0\). Due to the facts that \(\widehat{\Phi}\) is smooth for all \(i\in\mathcal{I}\), \(x\in\Omega\), and \(\left\|\hat{\theta}\right\|\leq\overline{\theta}\), \(\widehat{\Phi}\in\mathcal{L}_{\infty}\). Since \(\dot{x}_{d}\in\mathcal{L}_{\infty}\), \(e\in\mathcal{L}_{\infty}\), and \(\widehat{\Phi}\in\mathcal{L}_{\infty}\), \(u\in\mathcal{L}_{\infty}\). To show that \(x\in\mathcal{X}\), and therefore the universal function approximation property holds, let \(z\left(t_{0}\right)\in\mathcal{B}\). Since \(\left\|z\left(t_{0}\right)\right\|\leq\omega\sqrt{\underline{\alpha}/\overline{\alpha}}\), using (17), \(\left\|e\left(t\right)\right\|\leq\omega\). Hence, using (2), \(\left\|x\right\|\) can be bounded as \(\left\|x\right\|\leq\overline{x_{d}}+\omega\). Therefore, if \(z\left(t_{0}\right)\in\mathcal{B}\), then \(x\in\Upsilon\subseteq\mathcal{X}\). 
## V Simulation To demonstrate the efficacy of the Lb-DDNN adaptive controller, simulations are performed on a three-dimensional nonlinear system, and \(f\) in (1) is modeled as \[f=\left[\begin{array}{c}x_{1}x_{2}^{2}\tanh\left(x_{2}\right)+\sin\left(x_{1}\right)\\ \vdots\end{array}\right],\] 
## V Simulation To demonstrate the efficacy of the Lb-DDNN adaptive controller, simulations are performed on a three-dimensional nonlinear system, and \(f\) in (1) is modeled as \[f=\left[\begin{array}{c}x_{1}x_{2}^{2}\tanh\left(x_{2}\right)+\sin\left(x_{1} where \(x\triangleq\left[x_{1},x_{2},x_{3}\right]^{\top}:\mathbb{R}_{\geq 0}\rightarrow \mathbb{R}^{3}\) denotes the system state. Three simulation experiments are performed for \(10\,\)sec with initial condition \(x\left(0\right)=\left[5,1,-5\right]^{\top}\). The desired trajectory is selected as \(x_{d}\left(t\right)=\left[\sin\left(2t\right),-\cos\left(t\right),\sin\left(3t \right)+\cos\left(-2t\right)\right]^{\top}\). The DNN used in the simulations has \(k=7\) inner layers with \(L=10\) neurons in each hidden layer and contained hyperbolic tangent activation functions. The first set of simulations are performed to compare the baseline DNN-based adaptive controller in [3] and the Lb-DDNN adaptive controller in (8) and (10). The second set of simulations are performed to examine the effect of \(\delta t\) on the performance of the propose method. The third set of simulations are performed to compare the performance in the absence and presence of dropout deactivation after the transient period. In all simulations, the DNN weight estimates are initialized randomly from the normal distribution \(\mathcal{N}\left(0,10\right)\). The control gains in (10) are selected as \(k_{e}=10.5\) and \(k_{s}=1.5\). The learning gain for the baseline DNN is selected as \(\Gamma_{\theta}=100I_{670\times 670}\). In the first two sets of simulations, the randomization is activated for the first \(2\,\)sec where the system is in the transient stage. After \(2\,\)sec, all the randomization matrices change to identity matrices. For the first \(2\,\)sec, the learning gain of the dropout DNN update law in (8) is selected as \(\Gamma_{\theta}=100I_{670\times 670}\), and after \(2\,\)sec, the learning gain changes to \(\Gamma_{\theta}=40I_{670\times 670}\). In the transient stage of the first and third set of simulations, the matrices \(R_{i}\) change to \(R_{i+1}\) every \(\delta t=0.1\,\)sec. The performance results of the simulations are presented in Table I. As shown in the first subplot of Figure 2, the tracking error for the dropout DNN converges significantly faster than the baseline DNN. Specifically, the dropout DNN results in convergence to the final error after approximately \(0.5\,\)sec, roughly four times faster than that of the baseline controller. Despite the jump in the tracking error after the deactivation of the dropout of the DNN weights, the dropout DNN still yields the norm of the root mean square tracking error of \(0.81\), which shows a \(38.32\%\) improvement when compared to the baseline DNN adaptive controller. Moreover, the baseline DNN controller presents more oscillatory behavior within the transient period than the dropout DNN controller. The oscillatory behavior in the baseline DNN is due to interdependency of weights in the adaptation. However, as stated in Remark 1, the dropout DNN mitigates co-adaptation in the adaptation law, thus yielding less oscillatory behavior. As shown in the second subplot of Figure 2, the function approximation error for the dropout DNN controller rapidly converges after less than \(0.2\,\)sec but takes approximately \(2\,\)sec to converge with the baseline DNN controller. 
Although there is a jump in the function approximation error after the dropout is deactivated, the dropout DNN controller demonstrated a \(53.67\%\) improvement in function approximation with the norm of the root mean square function approximation error of \(19.44\). Thus, the developed dropout adaptive DNN architecture resulted in better transient behavior and improved tracking and function approximation performance with a \(50.44\%\) lower control effort when compared to the baseline adaptive DNN controller developed in [3]. To examine the effect of selecting different \(\delta t\), simulations are performed with \(\delta t=0.2\,\)sec and \(\delta t=0.05\,\)sec using the Lb-DDNN controller. As shown in Figure 3, although the differences between the tracking and function approximation errors are not significant, reducing \(\delta t\) is found to cause more spikes in both responses. The third set of simulations examines the performance of the developed dropout DNN controller under two cases: dropping out the neurons for the entire duration of the simulation, and deactivating dropout after \(1\,\)sec. As shown in Figure 4, for both cases, the difference between the tracking and function approximation performances is insignificant during the first second, as expected. Once the dropout is deactivated after \(1\,\)sec, there is an overshoot in both tracking and function approximation errors which does not occur when dropout is maintained throughout the simulation duration. However, not deactivating the dropout after the transient period leads to more spikes in both tracking and function approximation error in the steady-state stage. Despite the increase in the tracking error, deactivation of the dropout in the steady state leads to lower control input and function approximation error as shown in Table I. ## VI Conclusion A dropout DNN-based adaptive controller is developed for general continuous nonlinear systems. Leveraging the stability-derived DNN update law in [3] and inspired by the dropout technique, the developed dropout DNN controller improves function approximation performance and yields faster learning when compared to DNN controllers without dropout. A Lyapunov-based stability analysis is performed to guarantee stability in the sense that the tracking error asymptotically converges to zero. Simulation results show \(38.32\%\) and \(53.67\%\) improvement in the tracking error and function approximation error, respectively, with a \(50.44\%\) reduced control effort when compared to the baseline adaptive DNN controller. Additional simulations showed the effect of dropout during both transient and steady-state periods and how modifying dropout parameters, such as \(\delta t\), can affect system performance. Using the established Lb-DDNN framework, future work can explore implementation questions related to dropout regularization such as changes in \(\delta t\), the number of neurons that are randomly selected, and dropout deactivation strategies.
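For readers who want to reproduce the qualitative behavior reported above, the following is a minimal simulation sketch in the spirit of Section V. It is an illustration under stated assumptions, not the paper's implementation: the DNN is reduced to a single hidden layer, the projection operator of [16, Appendix E] is replaced by simple norm clipping, and the drift stand-in, adaptation gain, projection radius, and number of active neurons are assumed values; the control gains, switching period, initial condition, and desired trajectory follow Section V.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_true(x):  # stand-in drift, unknown to the controller (illustrative only)
    return np.array([np.sin(x[0]) - x[1],
                     np.tanh(x[1]) * x[2],
                     -x[2] + 0.5 * x[0] * x[1]])

def x_des(t):   # desired trajectory from Section V and its time derivative
    xd = np.array([np.sin(2*t), -np.cos(t), np.sin(3*t) + np.cos(2*t)])
    xd_dot = np.array([2*np.cos(2*t), np.sin(t), 3*np.cos(3*t) - 2*np.sin(2*t)])
    return xd, xd_dot

n, L = 3, 10                   # state dimension, hidden neurons (single hidden layer)
V = 0.1 * rng.standard_normal((n, L))    # inner-layer weight estimates
W = 0.1 * rng.standard_normal((L, n))    # output-layer weight estimates
ke, ks = 10.5, 1.5             # control gains as in Section V
gamma, theta_bar = 10.0, 50.0  # assumed adaptation gain and projection radius
dt, dT, n_keep = 1e-3, 0.1, 5  # Euler step, dropout switching period, active neurons

x = np.array([5.0, 1.0, -5.0])
mask = np.zeros(L)
mask[rng.choice(L, size=n_keep, replace=False)] = 1.0
t, next_switch = 0.0, dT

for _ in range(int(10.0 / dt)):
    if t >= next_switch:       # resample the dropout mask every dT seconds
        mask[:] = 0.0
        mask[rng.choice(L, size=n_keep, replace=False)] = 1.0
        next_switch += dT
    xd, xd_dot = x_des(t)
    e = x - xd
    phi = np.tanh(V.T @ x)
    dphi = 1.0 - phi**2
    f_hat = W.T @ (mask * phi)                       # masked single-layer analogue of (3)
    u = xd_dot - f_hat - ke * e - ks * np.sign(e)    # controller structure of (10)
    # Gradient-based weight adaptation in the spirit of (8); proj(.) replaced by clipping.
    W += dt * gamma * np.outer(mask * phi, e)
    V += dt * gamma * np.outer(x, mask * dphi * (W @ e))
    norm = np.sqrt(np.sum(W**2) + np.sum(V**2))
    if norm > theta_bar:
        W *= theta_bar / norm
        V *= theta_bar / norm
    x = x + dt * (f_true(x) + u)                     # Euler step of (1)
    t += dt

print("final tracking error norm:", np.linalg.norm(x - x_des(t)[0]))
```

Switching `mask` every `dT` seconds plays the role of the randomization matrices \(R_{i}\); setting `mask` to all ones after the transient corresponds to the dropout-deactivation case studied in the third set of simulations.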
2304.03627
Classification of width 1 lattice tetrahedra by their multi-width
We introduce the multi-width of a lattice polytope and use this to classify and count all lattice tetrahedra with multi-width $(1,w_2,w_3)$. The approach used in this classification can be extended into a computer algorithm to classify lattice tetrahedra of any given multi-width. We use this to classify tetrahedra with multi-width $(2,w_2,w_3)$ for small $w_2$ and $w_3$ and make conjectures about the function counting lattice tetrahedra of any multi-width.
Girtrude Hamm
2023-04-07T13:01:25Z
http://arxiv.org/abs/2304.03627v2
# Classification of width 1 lattice tetrahedra by their multi-width ###### Abstract. We introduce the multi-width of a lattice polytope and use this to classify and count all lattice tetrahedra with multi-width \((1,w_{2},w_{3})\). The approach used in this classification can be extended into a computer algorithm to classify lattice tetrahedra of any given multi-width. We use this to classify tetrahedra with multi-width \((2,w_{2},w_{3})\) for small \(w_{2}\) and \(w_{3}\) and make conjectures about the function counting lattice tetrahedra of any multi-width. ## 1. Introduction A _lattice polytope_\(P\subseteq\mathbb{R}^{d}\) is the convex hull of finitely many lattice points, that is, points in \(\mathbb{Z}^{d}\). We consider lattice polytopes as being defined only up to affine unimodular equivalence. Two lattice polytopes are said to be _affine equivalent_ if one can be mapped to the other by a change of basis of \(\mathbb{Z}^{d}\) followed by an integral translation. A _lattice simplex_ is the convex hull of affinely independent lattice points, for example triangles and tetrahedra. Lattice simplices are recurring objects of study with multiple applications. Via toric geometry they are relevant to algebraic geometry and are closely related to toric \(\mathbb{Q}\)-factorial singularities. The toric Fano three-folds with at most terminal singularities were classified by finding all the three-dimensional lattice polytopes whose only lattice points were the origin and their vertices [10]. A key step towards this was classifying all such tetrahedra. Simplices whose only lattice points are their vertices can give terminal quotient singularities by placing one vertex at the origin and considering the cone they generate. These were classified in dimension 4 in [11]. There are also applications of lattice simplices in mixed-integer and integer optimisation, see for example [1] and [1]. An important affine invariant of a polytope is its width. Recall that for a lattice polytope \(P\subseteq\mathbb{R}^{d}\) and a primitive dual vector \(u\in(\mathbb{Z}^{d})^{*}\) the width of \(P\) with respect to \(u\), written \(\operatorname{width}_{u}(P)\), is the length of the interval obtained by projecting \(P\) along the hyperplane with normal vector \(u\); that is, \(\operatorname{width}_{u}(P)\coloneqq\max_{x\in P}\{u\cdot x\}-\min_{x\in P}\{u \cdot x\}\). Then the _(first) width_ of \(P\), written \(\operatorname{width}^{1}(P)\), is the minimum width along all non-zero dual vectors \(u\), i.e. \(\operatorname{width}^{1}(P)\coloneqq\min_{u\in(\mathbb{Z}^{d})^{*}\setminus\{0 \}}\{\operatorname{width}_{u}(P)\}\). Width plays a role in the proofs of both [11] and [1] mentioned above which motivates seeking an understanding of the simplices of a given width. However, in dimension at least 2, there are infinitely many simplices of a given width. We would like to record enough information about the widths of a polytope so that there are only finitely many polytopes satisfying these conditions. To do this we consider the width of a lattice polytope in multiple directions. For a linearly independent collection of dual vectors \(u_{1},\dots,u_{d}\in(\mathbb{Z}^{d})^{*}\) we can consider the tuple whose entries are \(\operatorname{width}_{u_{i}}(P)\). By applying lexicographical order to \(\mathbb{Z}^{d}_{\geq 0}\) we find the minimum such tuple. 
We call this the multi-width of \(P\) written \(\operatorname{width}(P)\) and the \(i\)-th entry of this tuple is the \(i\)-th width of \(P\) written \(\operatorname{width}^{i}(P)\). Since \(\operatorname{width}^{i}(P)\) is always greater than or equal to \(\operatorname{width}^{i-1}(P)\) from now on, unless otherwise stated, let \(w_{1}\), \(w_{2}\) and \(w_{3}\) be integers satisfying \(0\leq w_{1}\leq w_{2}\leq w_{3}\). This author completely classified lattice triangles by their multi-widths in [1]. The result is surprisingly simple and it produces a normal form for triangles from which both their width and automorphism groups can be easily read. Additionally, it shows that the sequence counting lattice triangles with second width at most \(w_{2}\) has generating function equal to the Hilbert series of a degree 8 hypersurface in \(\mathbb{P}(1,1,1,2,2,2)\). The question we investigate here is how much of this can be extended to the three-dimensional case. Ideally, we would describe the finite set \[\mathcal{T}_{w_{1},w_{2},w_{3}}\coloneqq\{T=\operatorname{conv}(v_{1},v_{2},v_{3},v_{4}):\operatorname{mwidth}(T)=(w_{1},w_{2},w_{3})\}/\sim\] of lattice tetrahedra up to equivalence with multi-width \((w_{1},w_{2},w_{3})\). Theorem 1.1 achieves this in the case \(w_{1}=1\) by establishing a bijection between \(\mathcal{T}_{1,w_{2},w_{3}}\) and a set of tetrahedra \(\mathcal{S}_{1,w_{2},w_{3}}\) described in Definition 4.2. **Theorem 1.1**.: _There is a bijection from \(\mathcal{S}_{1,w_{2},w_{3}}\) to \(\mathcal{T}_{1,w_{2},w_{3}}\) given by the map taking a tetrahedron to its affine equivalence class. In particular, the cardinality of \(\mathcal{T}_{1,w_{2},w_{3}}\) is_ \[|\mathcal{T}_{1,w_{2},w_{3}}|=\begin{cases}2w_{2}^{2}+4&\text{if $w_{2}$ and $w_{3}$ even}\\ 2w_{2}^{2}+3&\text{if $w_{2}$ even and $w_{3}$ odd}\\ 2w_{2}^{2}+2&\text{if $w_{2}$ odd},\end{cases}\] _when \(w_{3}>w_{2}>1\)_ \[|\mathcal{T}_{1,w_{2},w_{2}}|=\begin{cases}w_{2}^{2}+w_{2}+2&\text{if $w_{2}$ even}\\ w_{2}^{2}+w_{2}+1&\text{if $w_{2}$ odd}\end{cases},\] _when \(w_{3}=w_{2}>1\), \(|\mathcal{T}_{1,1,w_{3}}|=3\) when \(w_{3}>w_{2}=1\), and \(|\mathcal{T}_{1,1,1}|=2\)._ The core idea of the proof is classify the possible images of the vertices of \(T\) under projection to lower rank lattices then lift these images back up to higher dimensions. To this end we need to consider multi-sets of lattice points. In an abuse of notation we will write \(\{v_{1},\ldots,v_{n}\}\) for the \(n\)-point multi-set containing lattice points \(v_{i}\in\mathbb{Z}^{d}\) even when the \(v_{i}\) are not distinct. We extend the notion of widths to these sets by saying the width of a set is the width of its convex hull. We can completely classify the four-point sets in \(\mathbb{Z}\) with width \(w_{1}\). These can represent the possible \(x\)-coordinates of all four-point sets in \(\mathbb{Z}^{2}\) with multi-width \((w_{1},w_{2})\). The second width gives bounds on their possible \(y\)-coordinates and in this way we can partially classify the four-point sets in the plane. Similarly, these can represent the possible first two coordinates of the vertices of tetrahedra of multi-width \((w_{1},w_{2},w_{3})\). By considering the possible \(z\)-coordinates we can assign to each point we obtain the classification above. When \(w_{1}>1\) this method is still valid, in fact it could theoretically be extended to classify simplices in any dimension, but the number of cases which needs to be checked increases dramatically. 
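Since the counting function in Theorem 1.1 is piecewise polynomial, it is easy to tabulate; the following short Python helper (the function name is illustrative) evaluates it and can be checked against Table 1 below.

```python
def count_width1_tetrahedra(w2, w3):
    """Number of lattice tetrahedra of multi-width (1, w2, w3), by Theorem 1.1."""
    assert 1 <= w2 <= w3
    if w2 == 1:
        return 2 if w3 == 1 else 3
    if w2 == w3:
        return w2**2 + w2 + (2 if w2 % 2 == 0 else 1)
    if w2 % 2 == 0:
        return 2 * w2**2 + (4 if w3 % 2 == 0 else 3)
    return 2 * w2**2 + 2

# Sample values agreeing with Table 1: (1,2,3) -> 11, (1,3,3) -> 13, (1,4,6) -> 36.
print([count_width1_tetrahedra(w2, w3) for (w2, w3) in [(2, 3), (3, 3), (4, 6)]])
```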
The method can also be written as a computer algorithm, allowing us to investigate \begin{table} \begin{tabular}{c|c c c c c c c c c c c c} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline 1 & 2 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 \\ 2 & 0 & 8 & 11 & 12 & 11 & 12 & 11 & 12 & 11 & 12 & 11 & 12 \\ 3 & 0 & 0 & 13 & 20 & 20 & 20 & 20 & 20 & 20 & 20 & 20 & 20 \\ 4 & 0 & 0 & 0 & 22 & 35 & 36 & 35 & 36 & 35 & 36 & 35 & 36 \\ 5 & 0 & 0 & 0 & 0 & 31 & 52 & 52 & 52 & 52 & 52 & 52 & 52 \\ 6 & 0 & 0 & 0 & 0 & 44 & 75 & 76 & 75 & 76 & 75 & 76 \\ \end{tabular} \end{table} Table 1. The number of lattice tetrahedra with multi-width \((1,w_{2},w_{3})\) up to affine equivalence. \begin{table} \begin{tabular}{c|c c c c c c c c c c c} & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline 2 & 17 & 45 & 47 & 45 & 47 & 45 & 47 & 45 & 47 & 45 & 47 \\ 3 & 0 & 87 & 178 & 175 & 178 & 175 & 178 & 175 & 178 & 175 & 178 \\ 4 & 0 & 0 & 161 & 320 & 325 & 320 & 325 & 320 & 325 & 320 & \\ 5 & 0 & 0 & 0 & 244 & & & & & & & \\ \end{tabular} \end{table} Table 2. The number of lattice tetrahedra with multi-width \((2,w_{2},w_{3})\) up to affine equivalence. the width \(2\) case without fully classifying it. The results of this can be found in Table 2. The extent to which we can extend the results of [1] to the three dimensional case remains open but the similarities in the results we have found so far seem hopeful. In Section 2 we formally define the multi-width and prove some of its properties. In Section 3 we classify the four-point sets in the plane with multi-width \((w_{1},w_{2})\) which have \(x\)-coordinates \(\{0,0,0,w_{1}\}\) or \(\{0,0,w_{1},w_{1}\}\). In Section 4 we define \(\mathcal{S}_{1,w_{2},w_{3}}\) and prove Theorem 1.1. In Section 5 we discuss the computational extension of this classification and suggest some conjectures based on the findings. ### Acknowledgement I would like to thank my supervisors, Alexander Kasprzyk and Johannes Hofscheier, for generously sharing their expertise and supporting me throughout this project. Also, thank you to anyone who asked if I could generalise the triangle classification; you inspired me to return to this project. This research was supported in-part by the Heilbronn Institute for Mathematical Research. ## 2. Width and parallelepipeds Let \(N\cong\mathbb{Z}^{d}\) be a lattice, \(N^{*}\coloneqq\operatorname{Hom}(N,\mathbb{Z})\cong\mathbb{Z}^{d}\) its dual lattice and \(N_{\mathbb{R}}\coloneqq\mathbb{R}\otimes_{\mathbb{Z}}N\cong\mathbb{R}^{d}\) the real vector space containing \(N\). For two tuples of integers \(w=(w_{1},\ldots,w_{d})\) and \(w^{\prime}=(w^{\prime}_{1},\ldots,w^{\prime}_{d})\) we say that \(w<_{lex}w^{\prime}\) when there is some \(1\leq i\leq d\) such that \(w_{i}<w^{\prime}_{i}\) and \(w_{j}=w^{\prime}_{j}\) for all \(j<i\). This defines the _lexicographic order_ on \(\mathbb{Z}^{d}\). For a lattice polytope \(P\subseteq N_{\mathbb{R}}\) and a dual vector \(u\in N^{*}\) we define the _width of \(P\) with respect to \(u\)_ to be \[\operatorname{width}_{u}(P)\coloneqq\max_{x\in P}\{u\cdot x\}-\min_{x\in P}\{ u\cdot x\}.\] Since \(\operatorname{width}_{u}(P)\geq 0\) we can choose linearly independent vectors \(u_{1},\ldots,u_{d}\in N^{*}\) such that \[(\operatorname{width}_{u_{1}}(P),\ldots,\operatorname{width}_{u_{d}}(P))\] is minimal with respect to lexicographic order. 
Then we call this tuple the _multi-width_ of \(P\) written \(\operatorname{width}(P)\) and call \(\operatorname{width}_{u_{i}}(P)\) the _\(i\)-th width_ of \(P\) written \(\operatorname{width}^{i}(P)\). The widths of \(P\) are encoded in the polytope \(\mathcal{W}_{P}\coloneqq(P-P)^{*}\) which is the dual of the Minkowski sum of \(P\) and \(-P\). It can be shown that \(\operatorname{width}_{u}(P)\leq w\) if and only if \(u\in w\mathcal{W}_{P}\) and furthermore that \(\operatorname{width}_{u}(P)=w\) if and only if \(u\in\partial w\mathcal{W}_{P}\). This allows us to prove the following result. **Proposition 2.1**.: _Let \(d\geq 2\) and \(P\subset N_{\mathbb{R}}\) be a lattice polytope. If the first two widths of \(P\) are \(w_{1}\) and \(w_{2}\) then \(P\) is equivalent to a subset of \([0,w_{1}]\times[0,w_{2}]\times\mathbb{R}^{d-2}\)._ Proof.: Pick linearly independent, primitive dual vectors \(u_{1}\) and \(u_{2}\) which realise the first two widths of \(P\). We know that \(u_{1},u_{2}\in w_{2}\mathcal{W}_{P}\). As real vectors, \(u_{1}\) and \(u_{2}\) generate a two-dimensional vector space containing a sublattice of \(N\). The triangle \(\operatorname{conv}(0,u_{1},u_{2})\subseteq w_{2}\mathcal{W}_{P}\) contains a lattice point \(u_{2}^{\prime}\) such that \(\{u_{1},u_{2}^{\prime}\}\) is a basis for this sublattice. Since \(w_{2}\) is the second width of \(P\) and \(\operatorname{width}_{u_{2}^{\prime}}(P)\leq w_{2}\), we know that \(\operatorname{width}_{u_{2}^{\prime}}(P)=w_{2}\). After a change of basis, we may assume that \(u_{1}\) and \(u_{2}^{\prime}\) are the first two standard basis vectors. This change of basis and a translation are sufficient to map \(P\) to a subset of \([0,w_{1}]\times[0,w_{2}]\times\mathbb{R}^{d-2}\). **Proposition 2.2**.: _We define the parallelepiped_ \[P\coloneqq\{x\in\mathbb{R}^{3}:(1,0,0)\cdot x\in[0,a],(0,1,0)\cdot x\in[0,b], u\cdot x\in[r,r+c]\}\] _where \(u\) is a dual vector linearly independent to \(\{(1,0,0),(0,1,0)\}\), \(a\), \(b\) and \(c\) are integers satisfying \(0<a\leq b\leq c\) and \(r\) is rational. Then any lattice polytope \(Q\subset P\) is affine equivalent to a subset of \([0,a]\times[0,b]\times[0,a+c-1]\)._ Proof.: Say \(u=(u_{1},u_{2},u_{3})\) then \(u_{3}\neq 0\), in fact we may assume \(u_{3}>0\) otherwise replace \(u\) with \(-u\) and adjust \(r\) so this does not change \(P\). Therefore, we may pick integers \(k_{1}\) and \(k_{2}\) such that \(0\leq k_{i}u_{3}-u_{i}<u_{3}\). Now let \(\varphi\) be the shear described by \[\begin{pmatrix}x\\ y\\ z\end{pmatrix}\mapsto\begin{pmatrix}1&0&0\\ 0&1&0\\ k_{1}&k_{2}&1\end{pmatrix}\begin{pmatrix}x\\ y\\ z\end{pmatrix}.\] By inspecting the \(z\)-coordinates of the vertices of \(\varphi(P)\) we can show that \[\operatorname{width}_{(0,0,1)}(\varphi(Q))\leq \frac{a(k_{1}u_{3}-u_{1})+b(k_{2}u_{3}-u_{2})+c}{u_{3}}\] \[\leq a\left(1-\frac{1}{u_{3}}\right)+b\left(1-\frac{1}{u_{3}}\right)+c \frac{1}{u_{3}}\] \[< a+c.\] After a translation this shows that \(Q\) is equivalent to a subset of \([0,a]\times[0,b]\times[a+c-1]\). This uses the fact that \(Q\) is a lattice polytope and so has integral widths. This and Proposition 2.1 show that any \(3\)-dimensional lattice polytope with multi-width \((w_{1},w_{2},w_{3})\) is equivalent to a subset of \([0,w_{1}]\times[0,w_{2}]\times[0,w_{1}+w_{3}-1]\). We can improve this by considering the case where \(w_{3}>w_{1}+w_{2}\) separately. In the above proof if \(u_{3}=1\) then \(\operatorname{width}_{(0,0,1)}(\varphi(Q))\leq c\). 
Otherwise suppose \(u_{3}>1\) and \(c>a+b\). This means that \(\operatorname{width}_{(0,0,1)}(\varphi(Q))\leq\frac{a+b+c}{2}<c\). If instead \(u_{3}>1\) and \(c\leq a+b\) then \(\operatorname{width}_{(0,0,1)}(\varphi(Q))\leq a+b\). Replacing \(a\), \(b\) and \(c\) with \(w_{1}\), \(w_{2}\) and \(w_{3}\) we know that \(\operatorname{width}_{(0,0,1)}(\varphi(Q))\geq w_{3}\) so the above observations mean that \(Q\) is equivalent to a subset of \([0,w_{1}]\times[0,w_{2}]\times[0,\max\{w_{1}+w_{2},w_{3}\}]\). ## 3. Four-point sets in the plane The main aim of this section is to classify the four-point sets in the plane with first width \(1\). When the second width is more than \(1\) these are exactly \(\{(0,0),(0,w_{2}),(0,y_{0}),(1,0)\}\) where \(y_{0}\in[0,\frac{w_{2}}{2}]\) and \(\{(0,0),(0,w_{2}),(1,0),(1,y_{1})\}\) where \(y_{1}\in[0,w_{2}]\). This can be proven directly but here we will prove a more general result. We do this to help with the computational classification in Section 5 and because it could be useful towards a future extension of this classification. First we classify all four-point sets in \(\mathbb{Z}\) of width \(w_{1}\). **Proposition 3.1**.: _There is a bijection from the collection of lattice points in the triangle \(Q_{w_{1}}\coloneqq\operatorname{conv}((0,0),(0,w_{1}),(\frac{w_{1}}{2},\frac{ w_{1}}{2}))\) to the four-point sets in \(\mathbb{Z}\) with width \(w_{1}\) given by the map taking \((x_{1},x_{2})\) to \(\{0,x_{1},x_{2},w_{1}\}\). In particular, the number of such sets is given by_ \[\begin{cases}\frac{w_{1}^{2}}{4}+w_{1}+1&\text{if $w_{1}$ is even}\\ \frac{w_{1}^{2}}{4}+w_{1}+\frac{3}{4}&\text{if $w_{1}$ is odd}.\end{cases}\] Proof.: The map \((x_{1},x_{2})\mapsto\{0,x_{1},x_{2},w_{1}\}\) is a well-defined map taking a lattice point of \(Q_{w_{1}}\) to a four-point set of width \(w_{1}\). For surjectivity notice that the convex hull of any four-point set of width \(w_{1}\) is equivalent to \(\operatorname{conv}(0,w_{1})\). Therefore, we may assume that \(0\) and \(w_{1}\) are points in such a set and that \(x_{1},x_{2}\in[0,w_{1}]\) are the two remaining points. By relabeling of the \(x_{i}\) we may assume that \(x_{1}\leq x_{2}\). A reflection takes \(\{0,x_{1},x_{2},w_{1}\}\) to \(\{0,w_{1}-x_{2},w_{1}-x_{1},w_{1}\}\) so we may assume that \(x_{1}\leq w_{1}-x_{2}\). This shows that \((x_{1},x_{2})\in Q_{w_{1}}\). It remains to show injectivity. Let \((x_{1},x_{2})\) and \((x_{1}^{\prime},x_{2}^{\prime})\) be lattice points in \(Q_{w_{1}}\) such that \(\{0,x_{1},x_{2},w_{1}\}\sim\{0,x_{1}^{\prime},x_{2}^{\prime},w_{1}\}\). The only non-trivial affine automorphism of a line segment in \(\mathbb{Z}\) is the reflection about its midpoint so either \(x_{1}=x_{1}^{\prime}\) and \(x_{2}=x_{2}^{\prime}\) or \(x_{1}=w_{1}-x_{2}^{\prime}\) and \(x_{2}=w_{1}-x_{1}^{\prime}\). In the first case we are done. In the second we can see that \(x_{1}=w_{1}-x_{2}^{\prime}\geq x_{1}^{\prime}\) and \(x_{1}^{\prime}=w_{1}-x_{2}\geq x_{1}\) so \(x_{1}=x_{1}^{\prime}\). Similarly \(x_{2}=x_{2}^{\prime}\) which proves the result. We now move on to four-point sets in \(\mathbb{Z}^{2}\) with multi-width \((w_{1},w_{2})\). Written as a subset of \([0,w_{1}]\times[0,w_{2}]\) the \(x\)-coordinates of such a set are equivalent to one of the above classified sets. We restrict to the case where the corresponding point of \(Q_{w_{1}}\) is either \((0,0)\) or \((0,w_{1})\). 
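The count in Proposition 3.1 is also easy to confirm by brute force: up to translation such a set can be taken to lie in \([0,w_{1}]\) and contain both \(0\) and \(w_{1}\), and, as noted in the proof, the only remaining non-trivial affine symmetry is the reflection \(x\mapsto w_{1}-x\). The following sketch (function names are illustrative) enumerates equivalence classes this way and compares them with the closed formula.

```python
from itertools import combinations_with_replacement

def count_four_point_sets(w1):
    """Four-point multisets in Z of width w1, up to translation and reflection."""
    classes = set()
    for x1, x2 in combinations_with_replacement(range(w1 + 1), 2):
        pts = tuple(sorted((0, x1, x2, w1)))
        refl = tuple(sorted(w1 - p for p in pts))   # reflection about w1/2
        classes.add(min(pts, refl))
    return len(classes)

def formula(w1):  # the count from Proposition 3.1
    return w1**2 // 4 + w1 + 1 if w1 % 2 == 0 else (w1**2 + 4*w1 + 3) // 4

print(all(count_four_point_sets(w) == formula(w) for w in range(1, 30)))  # True
```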
**Proposition 3.2**.: _Let \(S\) be a four-point set in the plane with multi-width \((w_{1},w_{2})\) where \(0<w_{1}<w_{2}\). There is a dual vector \(u_{1}\) such that \(\operatorname{width}_{u_{1}}(S)=w_{1}\) and \(u_{1}\cdot S\sim\{0,0,0,w_{1}\}\) if and only if \(S\) is affine equivalent to exactly one of the following four point sets:_ * \(\{0,(0,y_{0}),(0,w_{2}),(w_{1},y_{1})\}\) _where_ \(0\leq y_{0}<\frac{w_{2}}{2}\) _and_ \(0\leq y_{1}<w_{1}\)_,_ * \(\{0,(0,\frac{w_{2}}{2}),(0,w_{2}),(w_{1},y_{1})\}\) _where_ \(y_{1}\leq(w_{2}-y_{1}\mod w_{1})\) _and_ \(w_{2}\) _is even._ _In particular such sets are counted by_ \[\begin{cases}\frac{w_{1}w_{2}}{2}+\lceil\frac{w_{1}+1}{2}\rceil&\text{if $w_{2}$ even}\\ \frac{w_{1}(w_{2}+1)}{2}&\text{if $w_{2}$ odd.}\end{cases}\] Proof.: First we must show that the listed four-point sets have multi-width \((w_{1},w_{2})\). It is enough to notice that in either case if \(u=(u_{1},u_{2})\) is a dual vector with \(u_{2}\neq 0\), then \[\operatorname{width}_{u}(S)\geq|u\cdot(0,w_{2})-u\cdot(0,0)|=|u_{2}w_{2}| \geq w_{2}.\] Now let \(S\) be a four-point set as described in the proposition, then by Proposition 2.1 we may assume that \(S\subset[0,w_{1}]\times[0,w_{2}]\). Since \(w_{1}<w_{2}\) the direction in which \(S\) has width \(w_{1}\) is unique so \(u_{1}=(1,0)\) and we may assume that \(S=\{(0,y_{0}),(w_{1},y_{1}),(0,y_{2}),(0,y_{3})\}\) for some integers \(y_{i}\). We must have \(0,w_{2}\in\{y_{0},y_{2},y_{3}\}\) or a shear can take \(S\) to a smaller rectangle contradicting the widths. Therefore, we assume that \(S=\{(0,0),(0,w_{2}),(0,y_{0}),(w_{1},y_{1})\}\). By a reflection we may assume that \(0\leq y_{0}\leq\frac{w_{2}}{2}\). By a shear we may assume that \(0\leq y_{1}<w_{1}\). If \(y_{0}=\frac{w_{2}}{2}\) we may further assume that \(y_{1}\leq(w_{2}-y_{1}\mod w_{1})\) or perform a reflection and shear to make this so. This proves that \(S\) is equivalent to such a set. It remains to show uniqueness. Suppose \[\{(0,0),(0,w_{2}),(0,y_{0}),(w_{1},y_{1})\}\sim\{(0,0),(0,w_{2}),(0,y^{\prime}_ {0}),(w_{1},y^{\prime}_{1})\}\] then by uniqueness of the direction \(u_{1}\) the only maps which can take one of these sets to the other are shears about the \(y\)-axis and reflection in the line \(y=\frac{w_{2}}{2}\). Since \(y_{0}\) and \(y^{\prime}_{0}\) are both at most \(\frac{w_{2}}{2}\) this means they are equal. Also either \(y_{1}=y^{\prime}_{1}\) or \(y_{1}=(w_{2}-y^{\prime}_{1}\mod w_{1})\). In the second case \(y^{\prime}_{1}\leq y_{1}\) and \(y_{1}\leq y^{\prime}_{1}\) which proves uniqueness. To count the sets listed in the proposition it is enough to note that there are \(\lfloor\frac{w_{2}+1}{2}\rfloor w_{1}\) of the first type and \(\lceil\frac{w_{1}+1}{2}\rceil\) of the second type when \(w_{2}\) is even. **Proposition 3.3**.: _Let \(S\) be a four-point set in the plane with multi-width \((w_{1},w_{2})\) where \(0<w_{1}<w_{2}\). There is a dual vector \(u_{1}\) such that \(\operatorname{width}_{u_{1}}(S)=w_{1}\) and \(u_{1}\cdot S\sim\{0,0,w_{1},w_{1}\}\) if and only if \(S\) is equivalent to exactly one of the following four point sets:_ * \(\{0,(0,w_{2}),(w_{1},y_{1}),(w_{1},y_{2})\}\) _where_ \(y_{1}\leq y_{2}\) _and_ \(y_{1}\leq(w_{2}-y_{2}\mod w_{1})\)__ * \(\{0,(0,y_{0}),(w_{1},y_{1}),(w_{1},w_{2})\}\) _were_ \(\max\{w_{2}-y_{1},w_{2}-(w_{1}-y_{1})\}\leq y_{0}<w_{2}\)__ Proof.: First we must prove that the listed four-point sets have multi-width \((w_{1},w_{2})\). The first case follows by the same proof as that in Proposition 3.2. 
In the second case, it suffices to show that for any dual vector \(u=(u_{1},u_{2})\) with \(u_{2}>0\), \(\operatorname{width}_{u}(S)\geq w_{2}\). The image of \(S\) under \(u\) is \(\{0,u_{2}y_{0},u_{1}w_{1}+u_{2}y_{1},u_{1}w_{1}+u_{2}w_{2}\}\). Suppose for contradiction that the width of \(S\) with respect to \(u\) is less than \(w_{2}\). Then \(u_{1}w_{1}+u_{2}w_{2}<w_{2}\) and so \(u_{1}<0\). Also \(u_{2}y_{0}-u_{1}w_{1}-u_{2}y_{1}<w_{2}\) which can be rearranged to show \(-1<u_{1}\) which is the desired contradiction. Let \(S\) be a four-point set as described in the proposition then by Proposition 2.1 and symmetry we are reduced to two possibilities: 1. \(S=\{(0,0),(0,w_{2}),(w_{1},y_{1}),(w_{1},y_{2})\}\) where \(y_{1}\leq y_{2}\), 2. \(S=\{(0,0),(0,y_{0}),(w_{1},y_{1}),(w_{1},w_{2})\}\) where \(y_{0}<w_{2}\) and \(y_{1}>0\) where the additional conditions are to avoid some simple repeated cases. In the first of these we may assume by a shear that \(y_{1}<w_{1}\). Consider the map given by a reflection in the line \(y=\frac{w_{2}}{2}\) followed by a shear about the \(y\)-axis which makes the \(y\)-coordinates of points in \(S\) take the smallest non-negative values possible. This is self inverse when applied to \(S\) and takes \(y_{1}\) and \(y_{2}\) to \(y^{\prime}_{2}\) and \(y^{\prime}_{1}\) where \(y^{\prime}_{1}\leq y^{\prime}_{2}\) and \(y^{\prime}_{1}=(w_{2}-y_{2}\mod w_{1})\). Therefore, either \(S\) or its image under this map is equivalent to one of the listed sets. In the second case, consider the width of \(S\) with respect to \((-1,1)\). To prevent this being too small we must have either \(y_{0}\geq 2w_{2}-w_{1}\) or \(y_{0}\geq w_{2}-w_{1}+y_{1}\). The first of these implies the second so we may assume that \(y_{0}\geq w_{2}-w_{1}+y_{1}\). Additionally, by symmetry we may assume that \(y_{0}\geq w_{2}-y_{1}\). Now we must show that each of the sets listed in the proposition are unique. The two cases are distinct since their convex hulls have different edge lengths. If \[\{0,(0,w_{2}),(w_{1},y_{1}),(w_{1},y_{2})\}\sim\{0,(0,w_{2}),(w_{1},y^{\prime}_ {1}),(w_{1},y^{\prime}_{2})\}\] then since \(y_{1},y_{1}^{\prime}\in[0,w_{1})\) the only non-trivial map between these two is the reflection in the line \(y=\frac{w_{2}}{2}\) followed by a shear described above. Therefore, either they are equal or \(y_{1}^{\prime}=(w_{2}-y_{2}\mod w_{1})\) and \(y_{1}=(w_{2}-y_{2}^{\prime}\mod w_{1})\). Using the bounds on \(y_{1}\) and \(y_{1}^{\prime}\) this shows that \(y_{1}=y_{1}^{\prime}\) and by considering the edge lengths of the convex hull, \(y_{2}=y_{2}^{\prime}\) too. If \[\{0,(0,y_{0}),(w_{1},y_{1}),(w_{1},w_{2})\}\sim\{0,(0,y_{0}^{\prime}),(w_{1}, y_{1}^{\prime}),(w_{1},w_{2})\}\] then since \(w_{1}<w_{2}\) it is enough to consider the lengths of the vertical edges of the convex hull. Since the right vertical edge is at least as long as the other this means that \(y_{0}=y_{0}^{\prime}\) and \(y_{1}=y_{1}^{\prime}\) which completes the proof. The previous two propositions are sufficient to completely classify all four point sets with multi-width \((1,w_{2})\) for integers \(w_{2}>1\) and show that they are enumerated by the function \[\begin{cases}\frac{3w_{2}}{2}+2&\text{if $w_{2}$ even}\\ \frac{3w_{2}}{2}+\frac{3}{2}&\text{if $w_{2}$ odd}.\end{cases}\] The multi-width \((1,1)\) four-point sets are just the vertices of the unit square and \(\{0,0,(1,0),(0,1)\}\). ## 4. 
Proof of Theorem 1.1 To describe the set of tetrahedra with multi-width \((1,w_{2},w_{3})\) we must recall the classification of lattice triangles by their multi-width. **Theorem 4.1** ([11]).: _Let \(\mathcal{S}_{w_{1},w_{2}}\) denote the set of lattice triangles equal to one of_ * \(\operatorname{conv}(0,(w_{1},y_{1}),(0,w_{2}))\) _where_ \(y_{1}\leq(w_{2}-y_{1}\mod w_{1})\)__ * \(\operatorname{conv}(0,(w_{1},y_{1}),(x_{2},w_{2}))\) _where_ \(0<x_{2}\leq\frac{w_{1}}{2}\)_,_ \(0\leq y_{1}\leq w_{1}-x_{2}\) _and_ \(y_{1}\geq x_{2}\) _if_ \(w_{1}=w_{2}\)__ * \(\operatorname{conv}((0,y_{0}),(w_{1},0),(x_{2},w_{2}))\) _where_ \(1<x_{2}<\frac{w_{1}}{2}\)_,_ \(0<y_{0}<x_{2}\) _and_ \(w_{1}<w_{2}\)_._ _Then there is a bijection from \(\mathcal{S}_{w_{1},w_{2}}\) to the set of lattice triangles with multi-width \((w_{1},w_{2})\) up to equivalence given by the map taking a triangle \(t\) to its equivalence class. In particular such triangles are counted by_ \[\begin{cases}\frac{w_{1}^{2}}{2}+2&\text{if $w_{1}$ and $w_{2}$ even and $w_{1}<w_{2}$}\\ \frac{w_{1}^{2}}{2}+1&\text{if $w_{1}$ even, $w_{2}$ odd and $w_{1}<w_{2}$}\\ \frac{w_{1}^{2}}{2}+\frac{1}{2}&\text{if $w_{1}$ odd and $w_{1}<w_{2}$}\\ \frac{w_{1}^{2}}{4}+\frac{w_{1}}{2}+1&\text{if $w_{1}=w_{2}$ even}\\ \frac{w_{1}^{2}}{4}+\frac{w_{1}}{2}+\frac{1}{4}&\text{if $w_{1}=w_{2}$ odd}.\end{cases}\] **Definition 4.2**.: When \(w_{3}\geq w_{2}>1\) we define \(\mathcal{S}_{1,w_{2},w_{3}}\) to be the set of the following tetrahedra 1. \(\operatorname{conv}(\{0\}\times t,(1,0,0))\) where \(t\in\mathcal{S}_{w_{2},w_{3}}\) 2. \(\operatorname{conv}(0,(0,w_{2},z_{1}),(1,0,0),(1,0,w_{3}))\) where \(0\leq z_{1}\leq\frac{w_{2}}{2}\) 3. \(\operatorname{conv}(0,(0,w_{2},z_{1}),(1,0,w_{3}),(1,y_{1},0))\) where \(0<y_{1}\leq w_{2}\) and \(w_{3}-w_{2}\leq z_{1}\leq w_{3}\) 4. \(\operatorname{conv}(0,(0,w_{2},w_{3}),(1,0,w_{3}),(1,y_{1},z_{1}))\) where \(0<z_{1}<y_{1}<w_{2}\) If \(w_{3}=w_{2}>1\) we include the following additional restrictions: * in case (3) \(y_{1}\leq z_{1}\) * in case (4) \(z_{1}\leq w_{2}-y_{1}\). When \(w_{3}>w_{2}=1\) let \[\mathcal{S}_{1,1,w_{3}}\coloneqq\{ \operatorname{conv}(0,(0,1,0),(0,0,w_{3}),(1,0,0)),\] \[\operatorname{conv}(0,(0,1,w_{3}-1),(1,0,w_{3}),(1,1,0)),\] \[\operatorname{conv}(0,(0,1,w_{3}),(1,0,w_{3}),(1,1,0))\}\] and finally \[\mathcal{S}_{1,1,1}\coloneqq\{\operatorname{conv}(0,(1,0,0),(0,1,0),(0,0,1)), \operatorname{conv}(0,(0,1,1),(1,0,1),(1,1,0))\}.\] **Proposition 4.3**.: _Let \(T\) be a lattice tetrahedron with multi-width \((1,w_{2},w_{3})\). Then there exists some \(T^{\prime}\in\mathcal{S}_{1,w_{2},w_{3}}\) which is equivalent to \(T\)._ Proof.: We will begin by showing that for arbitrary \(w_{2}\) and \(w_{3}\) there is a tetrahedra of one of the forms (1)-(4) equivalent to \(T\) then we will remove equivalent repeats in the special cases \(w_{2}=w_{3}>1\), \(w_{3}>w_{2}=1\) and \(w_{3}=w_{2}=1\). By Proposition 2.2 we may assume that \(T\subseteq[0,1]\times[0,w_{2}]\times[0,w_{3}]\). Let \(u_{1}=(1,0,0)\) then, possibly after a reflection, we may assume the image of the vertices of \(T\) under the action of \(u_{1}\) is either \(\{0,0,0,1\}\) or \(\{0,0,1,1\}\). If \(u_{1}\cdot T=\{0,0,0,1\}\) then \(T\) is the convex hull of a triangle embedded in the plane \(x=0\) and a point with \(x\)-coordinate \(1\). After a shear we may assume that under the projection onto the last two coordinates \(T\) is mapped to the triangle. The triangle is a subset of a \(w_{2}\times w_{3}\) rectangle. 
If it had multi-width lexicographically smaller than \((w_{2},w_{3})\) there would be dual vectors \(u_{2}^{\prime}\) and \(u_{3}^{\prime}\) linearly independent from \(u_{1}\) such that \((1,\operatorname{width}_{u_{2}^{\prime}}(T),\operatorname{width}_{u_{3}^{ \prime}}(T))<_{lex}(1,w_{2},w_{3})\) which is a contradiction. Therefore, this triangle has multi-width \((w_{2},w_{3})\). By Theorem 4.1 we can assume the triangle is in \(\mathcal{S}_{w_{2},w_{3}}\). By a shear we can move the fourth vertex to \((1,0,0)\) which proves that \(T\) is equivalent to a tetrahedron of the form (1). If \(u_{1}\cdot T=\{0,0,1,1\}\) we need to consider the four-point set we get by projecting the vertices of \(T\) onto the first two coordinates. By Proposition 3.3 these are exactly the sets \(\{(0,0),(0,w_{2}),(1,0),(1,y_{1})\}\) for integers \(y_{1}\in[0,w_{2}]\). The \(z\)-coordinates of the vertices of \(T\) must be in \([0,w_{3}]\) and at least one of them must attain each boundary to satisfy the width condition. By symmetry, this reduces us to the following four cases: * \(T=\operatorname{conv}((0,0,0),(0,w_{2},w_{3}),(1,0,z_{1}),(1,y_{1},z_{2}))\) * \(T=\operatorname{conv}((0,0,0),(0,w_{2},z_{1}),(1,0,w_{3}),(1,y_{1},z_{2}))\) * \(T=\operatorname{conv}((0,0,0),(0,w_{2},z_{1}),(1,0,z_{2}),(1,y_{1},w_{3}))\) * \(T=\operatorname{conv}((0,0,z_{1}),(0,w_{2},z_{2}),(1,0,0),(1,y_{1},w_{3}))\) In both the first and last of these cases a shear allows us to assume that \(z_{1}=0\) or \(z_{2}=0\). Such tetrahedra are equivalent to ones in the second or third case so we need only consider the second and third case. In the third case if \(y_{1}=0\), \(z_{1}=0\) or \(z_{2}=w_{3}\) then this is included in the second case so we will assume these are not true. Suppose for contradiction that \(z_{2}>0\). Then \(T\) has width less than \(w_{3}\) with respect to either \((-z_{2},0,1)\) or \((0,-1,1)\). This contradicts the multi-width of \(T\) so we may assume that \(z_{2}=0\). However, this means that with respect to \((0,-1,1)\) or \((w_{3}-y_{1},1,-1)\), \(T\) had width less than \(w_{3}\). This is again a contradiction so we may discard the third case entirely leaving just \[T=\operatorname{conv}((0,0,0),(0,w_{2},z_{1}),(1,0,w_{3}),(1,y_{1},z_{2})).\] If \(y_{1}=0\) then \(z_{2}=0\) since otherwise \(T\) has width less than \(w_{3}\) with respect \((-z_{2},0,1)\) or \((-z_{2},-1,1)\). By a shear and possibly a reflection we may assume that \(z_{1}\leq(-z_{1}\mod w_{2})\) and therefore \(z_{1}\in[0,\frac{w_{2}}{2}]\). This shows that \(T\) is of the form (2). If instead \(y_{1}>0\) then either \(z_{1}=w_{3}\) or \(z_{2}=0\), otherwise \(T\) has width less than \(w_{3}\) with respect to \((-z_{2},0,1)\). If \(z_{2}=0\) and \(z_{1}<w_{3}-w_{2}\) then \(T\) has width less than \(w_{3}\) with respect to \((-1,1,1)\). This proves that if \(z_{2}=0\) then \(T\) is of the form (3). If \(z_{1}=w_{3}\) and \(z_{2}=0\) or \(y_{1}=w_{2}\) this is equivalent to a tetrahedron in the previous case so we assume that \(z_{2}>0\) and \(y_{1}<w_{2}\). If \(z_{2}>y_{1}\) then \(T\) has width less than \(w_{3}\) with respect to \((-1,-1,1)\). If \(z_{2}=y_{1}\) this is equivalent to a tetrahedron in the previous case. This shows that \(T\) is equivalent to a tetrahedron of the form (3) or (4). Now we consider the special cases. When \(w_{3}=w_{2}\) there is no longer a unique four-point set of multi-width \((1,w_{2})\) which the vertices of \(T\) can project down to. 
We add additional restrictions to the tetrahedra of the forms (2)-(4) in \(\mathcal{S}_{1,w_{2},w_{2}}\) by requiring that \(y_{1}\) be the minimum non-negative integer such that the vertices of \(T\) project down to \(\{(0,0),(0,w_{2}),(1,0),(1,y_{1})\}\). Considering the projection onto the first and third coordinates shows that in case (3) we can require \(z_{1}\geq y_{1}\) and in case (4) we can require \(z_{1}\leq w_{2}-y_{1}\). When \(w_{2}=1\) substituting into (1)-(4) and simplifying reduces us to the following cases: * \(\operatorname{conv}((0,0,0),(0,0,w_{3}),(0,1,0),(1,0,0))\) * \(\operatorname{conv}((0,0,0),(0,1,0),(1,0,0),(1,0,w_{3}))\) * \(\operatorname{conv}((0,0,0),(0,1,z_{1}),(1,0,w_{3}),(1,1,0))\) where \(w_{3}-1\leq z_{1}\leq w_{3}\). Two of these are equivalent leaving the three tetrahedra appearing in \(\mathcal{S}_{1,1,w_{3}}\). When \(w_{3}=1\) this reduces further to the two cases in \(\mathcal{S}_{1,1,1}\). **Proposition 4.4**.: _Let \(T\in\mathcal{S}_{1,w_{2},w_{3}}\), then the multi-width of \(T\) is \((1,w_{2},w_{3})\)._ Proof.: In the previous proof we showed that the tetrahedra in \(\mathcal{S}_{1,w_{2},w_{3}}\) are always of one of the forms (1)-(4). Therefore, it suffices to show this for tetrahedra satisfying these conditions while allowing \(w_{3}\geq w_{2}\geq 1\). Since \(\operatorname{width}_{(1,0,0)}(T)=1\) the fact that these tetrahedra have non-zero volume shows that their first width is in fact \(1\). There are two ways in which the remaining two widths can fail. Either there is a dual vector \(u\) linearly independent to \((1,0,0)\) such that \(\operatorname{width}_{u}(T)<w_{2}\) or there is a dual vector \(u\) linearly independent to \(\{(1,0,0),(0,1,0)\}\) such that \(\operatorname{width}_{u}(T)<w_{3}\). To prove these do not occur it suffices to show that for all \(u=(u_{1},u_{2},0)\) with \(u_{2}\neq 0\), \(\operatorname{width}_{u}(T)\geq w_{2}\), and for all \(u=(u_{1},u_{2},u_{3})\) with \(u_{3}\neq 0\), \(\operatorname{width}_{u}(T)\geq w_{3}\). The first of these is a consequence of the classification of four-point sets of multi-width \((1,w_{2})\) since \[\operatorname{width}_{(u_{1},u_{2},0)}(T)=\operatorname{width}_{(u_{1},u_{2})}(\pi(T))\] where \(\pi\) is the projection onto the first two coordinates. Now suppose for contradiction \(u=(u_{1},u_{2},u_{3})\) with \(u_{3}>0\) is such that \(\operatorname{width}_{u}(T)<w_{3}\). Then by the proof of Proposition 2.2 a map of the form \(\left(\begin{smallmatrix}1&0&0\\ 0&1&0\\ k_{1}&k_{2}&1\end{smallmatrix}\right)\) takes \(T\) to a subset of a \(1\times w_{2}\times\operatorname{width}_{u}(T)\) rectangle. Under such shears (1)-(4) become: 1. \(\operatorname{conv}(\{0\}\times\begin{pmatrix}1&0\\ k_{2}&1\end{pmatrix}t,(1,0,k_{1}))\) where \(t\in\mathcal{S}_{w_{2},w_{3}}\) 2. \(\operatorname{conv}((0,0,0),(0,w_{2},z_{1}+k_{2}w_{2}),(1,0,k_{1}),(1,0,w_{3}+k_{1}))\) where \(0\leq z_{1}\leq\frac{w_{2}}{2}\) 3. \(\operatorname{conv}((0,0,0),(0,w_{2},z_{1}+k_{2}w_{2}),(1,0,w_{3}+k_{1}),(1,y_{1},k_{1}+k_{2}y_{1}))\) where \(0<y_{1}\leq w_{2}\) and \(w_{3}-w_{2}\leq z_{1}\leq w_{3}\), 4. \(\operatorname{conv}((0,0,0),(0,w_{2},w_{3}+k_{2}w_{2}),(1,0,w_{3}+k_{1}),(1,y_{1},z_{1}+k_{1}+k_{2}y_{1}))\) where \(0<y_{1}<w_{2}\) and \(0<z_{1}<y_{1}\) for integers \(k_{1},k_{2}\). We will show that in each of these cases it is impossible for \(\operatorname{width}_{(0,0,1)}(T)<w_{3}\). In case (1) this is thanks to the widths of \(t\). In case (2) the tetrahedron always has a vertical edge of lattice length \(w_{3}\). 
In case (3) if the width with respect to \((0,0,1)\) was less than \(w_{3}\) we would need to have \(w_{3}+k_{1}-k_{1}-k_{2}y_{1}<w_{3}\) and \(z_{1}+k_{2}w_{2}<w_{3}\) which combine to give a contradiction. Similarly in case (4) we would need \(w_{3}+k_{2}w_{2}<w_{3}\) and \(w_{3}+k_{1}-z_{1}-k_{1}-k_{2}y_{1}<w_{3}\) which combine to give a contradiction. **Proposition 4.5**.: _The tetrahedra in \(\mathcal{S}_{1,w_{2},w_{3}}\) are all distinct under affine unimodular maps._ Proof.: The two tetrahedra in \(\mathcal{S}_{1,1,1}\) have normalised volume \(1\) and \(2\). Since volume is an affine invariant they must be distinct. For \(w_{3}>1\) the three tetrahedra in \(\mathcal{S}_{1,1,w_{3}}\) have volumes \(w_{3}\), \(2w_{3}-1\) and \(2w_{3}\) so must also be distinct. In the remaining cases \(w_{2}>1\) so up to sign \(u=(1,0,0)\) is the unique vector such that \(\operatorname{width}_{u}(T)=1\). This means the set of \(x\)-coordinates of two equivalent tetrahedra in \(\mathcal{S}_{1,w_{2},w_{3}}\) will also be equivalent. Therefore, case (1) is always distinct from the others. Furthermore, if \(T\) and \(T^{\prime}\) are equivalent tetrahedra, both of the form (1) then they each have a unique facet with normal \(u\) so these facets must be equivalent too. These facets were triangles in \(\mathcal{S}_{w_{2},w_{3}}\) so by Theorem 4.1 this means \(T=T^{\prime}\). Now we will show that if \(T\) and \(T^{\prime}\) are equivalent and each satisfy one of the conditions (2)-(4) then the \(y\)-coordinates of their vertices are equal. We do so by showing that \(y_{1}\) is the smallest non-negative integer such that there is a surjective lattice homomorphism \(\pi\) with \(\pi(T)\sim\operatorname{conv}((0,0),(0,w_{2}),(1,0),(1,y_{1}))\). In case (2) this is immediate since \(y_{1}=0\). Let \(\pi:\mathbb{Z}^{3}\to\mathbb{Z}^{2}\) be a surjective lattice homomorphism defined by multiplication with the \(2\times 3\) matrix \(P=(p_{ij})\) such that \(\pi(T)\sim\operatorname{conv}((0,0),(0,w_{2}),(1,0),(1,y_{1}^{\prime}))\) for some integer \(y_{1}^{\prime}\in[0,w_{2}]\). By Proposition 2.1 there are dual vectors \(u_{1}\) and \(u_{2}\in(\mathbb{Z}^{2})^{*}\) which form a basis of \((\mathbb{Z}^{2})^{*}\) and which realise the first two widths of \(\pi(T)\). This means that \(\operatorname{width}_{u_{i}}(\pi(T))=\operatorname{width}_{u_{i}\circ\pi}(T)=w_ {i}\) so, possibly after changing the sign of \(u_{1}\), we have \(u_{1}P=(1,0,0)\). Let \(U\in\operatorname{GL}_{2}(\mathbb{Z})\) be the matrix with rows \(u_{1}\) and \(u_{2}\) then we replace \(P\) with \(UP\) and change \(\pi\) accordingly. In this way we can assume that the first row of \(P\) is \((1,0,0)\). This does not alter our previous assumptions about \(\pi\) since this operation is an affine unimodular map in \(\mathbb{Z}^{2}\). If \(w_{2}<w_{3}\) this also allows us to assume that \(p_{23}=0\). In this case computing the vertices of \(\pi(T)\) and considering the possibilities for \(p_{21}\) and \(p_{22}\) shows that \(y_{1}^{\prime}\geq y_{1}\) and so \(y_{1}\) is minimal. It remains to consider the case \(w_{3}=w_{2}>1\). By replacing \(P\) with \(\left(\begin{smallmatrix}1&0\\ -p_{21}&1\end{smallmatrix}\right)P\) we may assume \(p_{21}=0\). 
This leaves us with the following two cases corresponding to (3) and (4) \[\pi(T)=\operatorname{conv}((0,0),(0,p_{22}w_{2}+p_{23}z_{1}),(1,p_{23}w_{2}),(1,p_{22}y_{1}))\quad\text{or}\] \[\pi(T)=\operatorname{conv}((0,0),(0,p_{22}w_{2}+p_{23}w_{2}),(1,p_{23}w_{2}),(1,p_{22}y_{1}+p_{23}z_{1})).\] The dual vector \((1,0)\) is the unique dual vector realising the first width of both \(\pi(T)\) and \(\operatorname{conv}((0,0),(0,w_{2}),(1,0),(1,y_{1}^{\prime}))\). These each have two facets with normal vector \((1,0)\) so these must be equivalent. Therefore, one of the vertical edges of \(\pi(T)\) must have length \(w_{2}\) and the other must have length in \([0,w_{2}]\). If \(|p_{22}w_{2}+p_{23}z_{1}|=w_{2}\) we may assume, after a possible change of sign of \(P\), that \(p_{23}=\frac{w_{2}(1-p_{22})}{z_{1}}\). Then \(|p_{23}w_{2}-p_{22}y_{1}|=|\frac{w_{2}^{2}}{z_{1}}-p_{22}(\frac{w_{2}^{2}}{z_{1}}+y_{1})|\) which is at least \(y_{1}\) for all \(p_{22}\). Similarly if \(|p_{23}w_{2}-p_{22}y_{1}|\), \(|p_{22}w_{2}+p_{23}w_{2}|\) or \(|p_{23}(w_{2}-z_{1})-p_{22}y_{1}|\) equals \(w_{2}\) then \(|p_{22}w_{2}+p_{23}z_{1}|\), \(|p_{23}(w_{2}-z_{1})-p_{22}y_{1}|\) and \(|p_{22}w_{2}+p_{23}w_{2}|\) respectively are at least \(y_{1}\). This shows that \(y_{1}\) is minimal. We now know that if \(T\sim T^{\prime}\) are each of one of the forms (2)-(4) then they have identical \(x\) and \(y\)-coordinates. If both are of the form (3) then their volume is \(w_{2}w_{3}+z_{1}y_{1}=w_{2}w_{3}+z_{1}^{\prime}y_{1}^{\prime}\), therefore \(z_{1}=z_{1}^{\prime}\) and \(T=T^{\prime}\). Similarly if both are of the form (4) then \(T=T^{\prime}\). If both are of the form (2) then unless \(w_{2}=w_{3}\) and \(z_{1}=z_{1}^{\prime}=0\) the only facet of each of these equivalent to \(\operatorname{conv}((0,0,0),(1,0,0),(1,0,w_{3}))\) is this facet itself. This can be determined by looking at the area and edge lengths of each facet. Therefore, either \(T=T^{\prime}\) or this facet is preserved by the affine map taking \(T\) to \(T^{\prime}\). This restricts us to maps of the form \[U_{1}=\begin{pmatrix}1&k_{1}&0\\ 0&\pm 1&0\\ 0&k_{2}&1\end{pmatrix}\quad\text{or}\quad U_{2}=\begin{pmatrix}1&k_{1}&0\\ 0&\pm 1&0\\ w_{3}&k_{2}&-1\end{pmatrix}\] for integers \(k_{1}\) and \(k_{2}\). If \(U_{1}T=T^{\prime}\) then \(z_{1}^{\prime}=z_{1}+k_{2}w_{2}\) so since \(z_{1},z_{1}^{\prime}\in[0,w_{2})\) we have \(z_{1}^{\prime}=z_{1}\). If \(U_{2}T=T^{\prime}\) then \(z_{1}^{\prime}=k_{2}w_{2}-z_{1}\) so since \(z_{1},z_{1}^{\prime}\in[0,\frac{w_{2}}{2}]\) either \(z_{1}=z_{1}^{\prime}=\frac{w_{2}}{2}\) or \(k_{2}=0\) and \(z_{1}^{\prime}=-z_{1}=0\). In either case this shows that \(T=T^{\prime}\). Finally say \(T\) and \(T^{\prime}\) are equivalent tetrahedra of the forms (3) and (4) respectively. The areas of the facets of \(T\) are * \(\sqrt{w_{2}^{2}w_{3}^{2}+z_{1}^{2}+w_{2}^{2}}\) * \(\sqrt{z_{1}^{2}y_{1}^{2}+z_{1}^{2}+w_{2}^{2}}\) * \(\sqrt{w_{3}^{2}y_{1}^{2}+w_{3}^{2}+y_{1}^{2}}\) * \(\sqrt{(w_{3}w_{2}-w_{3}y_{1}+z_{1}y_{1})^{2}+w_{3}^{2}+y_{1}^{2}}\) and the areas of the facets of \(T^{\prime}\) are * \(\sqrt{w_{2}^{2}w_{3}^{2}+w_{3}^{2}+w_{2}^{2}}\) * \(\sqrt{(w_{2}z_{1}^{\prime}-w_{3}y_{1})^{2}+w_{3}^{2}+w_{2}^{2}}\) * \(\sqrt{w_{3}^{2}y_{1}^{2}+(w_{3}-z_{1}^{\prime})^{2}+y_{1}^{2}}\) * \(\sqrt{w_{2}^{2}(w_{3}-z_{1}^{\prime})^{2}+(w_{3}-z_{1}^{\prime})^{2}+y_{1}^{2}}\). 
The facet of \(T^{\prime}\) with area \(\sqrt{w_{2}^{2}w_{3}^{2}+w_{3}^{2}+w_{2}^{2}}\) has area strictly larger than the other facets of \(T^{\prime}\), which can be directly seen from the restrictions on \(w_{2},w_{3},y_{1}\) and \(z_{1}^{\prime}\). Similarly \(\sqrt{w_{2}^{2}w_{3}^{2}+z_{1}^{2}+w_{2}^{2}}>\sqrt{z_{1}^{2}y_{1}^{2}+z_{1}^{2}+w_{2}^{2}}\) and \(\sqrt{w_{2}^{2}w_{3}^{2}+z_{1}^{2}+w_{2}^{2}}>\sqrt{w_{3}^{2}y_{1}^{2}+w_{3}^{2}+y_{1}^{2}}\) follow from \(y_{1}\leq w_{2}-1\) and \(z_{1}\leq w_{3}\). To show that the facet of \(T\) with area \(\sqrt{w_{2}^{2}w_{3}^{2}+z_{1}^{2}+w_{2}^{2}}\) is strictly the largest, it remains to prove that \[z_{1}^{2}+w_{2}^{2}-w_{3}^{2}y_{1}^{2}-z_{1}^{2}y_{1}^{2}+2w_{3}^{2}w_{2}y_{1}-2w_{3}w_{2}z_{1}y_{1}+2w_{3}y_{1}^{2}z_{1}-w_{3}^{2}-y_{1}^{2}>0.\] To see this consider the derivative of this polynomial with respect to \(z_{1}\). This shows that either \(y_{1}=1\) or the above polynomial is smallest when \(z_{1}=w_{3}\). Substituting \(y_{1}=1\) or \(z_{1}=w_{3}\) in and simplifying results in \[w_{2}^{2}-1+2w_{3}(w_{2}-1)(w_{3}-z_{1})\quad\text{or}\quad w_{2}^{2}-y_{1}^{2}\] both of which are positive. Therefore, if \(T\sim T^{\prime}\) then \(\sqrt{w_{2}^{2}w_{3}^{2}+z_{1}^{2}+w_{2}^{2}}=\sqrt{w_{2}^{2}w_{3}^{2}+w_{3}^{2}+w_{2}^{2}}\) and so \(z_{1}=w_{3}\). The normalised volumes of \(T\) and \(T^{\prime}\) are \(w_{2}w_{3}+w_{3}y_{1}\) and \(w_{2}w_{3}-w_{2}z_{1}^{\prime}+w_{3}y_{1}\). If \(T\sim T^{\prime}\) these are equal which forces \(z_{1}^{\prime}=0\). This is a contradiction and so completes the proof. Proof of Theorem 1.1.: Proposition 4.4 shows that the map taking a tetrahedron to its equivalence class is a well-defined map from \(\mathcal{S}_{1,w_{2},w_{3}}\) to \(\mathcal{T}_{1,w_{2},w_{3}}\). Propositions 4.3 and 4.5 show that it is bijective. It remains to find the cardinality of \(\mathcal{S}_{1,w_{2},w_{3}}\). When \(w_{2}=1\) this is immediate. When \(w_{3}>w_{2}>1\) we combine the classification of triangles with the new tetrahedra to get \[|\mathcal{T}_{1,w_{2},w_{3}}|=\begin{cases}2w_{2}^{2}+4&\text{if $w_{2}$ and $w_{3}$ even}\\ 2w_{2}^{2}+3&\text{if $w_{2}$ even and $w_{3}$ odd}\\ 2w_{2}^{2}+2&\text{if $w_{2}$ odd}\end{cases}\] and when \(w_{3}=w_{2}>1\) we get \[|\mathcal{T}_{1,w_{2},w_{2}}|=\begin{cases}w_{2}^{2}+w_{2}+2&\text{if $w_{2}$ even}\\ w_{2}^{2}+w_{2}+1&\text{if $w_{2}$ odd}.\end{cases}\] The generating function of the sequence counting lattice triangles with second width \(w_{2}\) was the Hilbert series of a hypersurface in a weighted projective space so we investigate the generating function of \(|\mathcal{T}_{1,w_{2},w_{3}}|\). This can be computed using Theorem 1.1 as follows. 
\[\sum_{w_{3}=1}^{\infty}s^{w_{3}}|\mathcal{T}_{1,1,w_{3}}|=\frac{s(s+2)}{1-s}\] and for \(w_{2}>1\) \[\sum_{w_{3}=w_{2}}^{\infty}s^{w_{3}}|\mathcal{T}_{1,w_{2},w_{3}}|=s^{w_{2}}\frac{w_{2}^{2}(s+1)^{2}+w_{2}(-s^{2}+1)+(\frac{3}{2}s^{2}+\frac{5}{2}s+\frac{3}{2})+\frac{1}{2}(-1)^{w_{2}}(s^{2}+s+1)}{1-s^{2}}.\] Using both of these we can show that \[\sum_{w_{2}=1}^{\infty}\sum_{w_{3}=w_{2}}^{\infty}t^{w_{2}}s^{w_{3}}|\mathcal{T}_{1,w_{2},w_{3}}|=\frac{f(s,t)}{2(1-s^{2})(1-ts)^{3}(1+ts)}\] where \(f(s,t)\) is the polynomial \[t^{5}s^{7}+5t^{5}s^{6}+4t^{5}s^{5}-2t^{4}s^{7}-5t^{4}s^{6}-9t^{4}s^{5}-4t^{4}s^{4}+4t^{3}s^{6}+13t^{3}s^{5}+7t^{3}s^{4}\] \[-6t^{3}s^{3}-5t^{2}s^{4}-3t^{2}s^{3}+4t^{2}s^{2}-4ts^{4}-10ts^{3}-4ts^{2}+2ts+2s^{3}+14s^{2}+20s+8.\] To instead count lattice tetrahedra with first width \(1\) and third width \(w_{3}\) we let \(t=1\) in the above generating function resulting in \[\frac{-s^{7}+4s^{6}+8s^{5}-6s^{4}-3s^{3}+22s+8}{2(1-s)^{4}(1+s)^{2}}.\] Neither of these generating functions shares any of the properties of the one from triangles. However, this does not mean that a function counting lattice tetrahedra of a given multi-width in general will not share these properties. ## 5. Computational and conjectural results The above method can be extended into a computer algorithm which classifies four-point sets and tetrahedra of a given multi-width. We implement all algorithms using Magma V2.27. Algorithm 1 classifies four-point sets of multi-width \((w_{1},w_{2})\). It takes the list of four-point sets in the line of width \(w_{1}\) and assigns a \(y\)-coordinate in the range \([0,w_{2}]\) to each point of each set in every possible way. The resulting sets in the plane include all four-point sets of multi-width \((w_{1},w_{2})\). We eliminate any which do not have the correct widths. Let \(P\) be the convex hull of such a set; then we use the polytope \(\mathcal{W}_{P}\) to check its multi-width. We call the convex hull of the lattice points in a polytope its _integral part_. The \(i\)-th width of \(P\) is \(w_{i}\) if and only if the dimension of the integral part of \((w_{i}-1)\mathcal{W}_{P}\) is less than \(i\) and the dimension of the integral part of \(w_{i}\mathcal{W}_{P}\) is at least \(i\). This allows us to check if the multi-width of a polytope is equal to \((w_{1},w_{2})\) without necessarily calculating its multi-width. We also discard repeated sets using an affine unimodular normal form. Kreuzer and Skarke introduced a unimodular normal form for lattice polytopes in their Palp software [10]. This can be extended to an affine normal form by translating each vertex of a polytope to the origin in turn and finding the minimum unimodular normal form among these possibilities. If the convex hull of a four-point set is a quadrilateral we can use this normal form without adjustment. If the convex hull is a triangle we find the normal form of this triangle then consider the possible places the fourth point can be mapped to in this normal form. We choose the minimum such point and call the set of vertices of the triangle and this point the normal form of the four-point set denoted \(\mathrm{NF}(S)\). Note that we keep the normal forms of each set as well as the set itself as we need the four-point sets written as a subset of \([0,w_{1}]\times[0,w_{2}]\) for the tetrahedra classification. ``` Data: The set \(\mathcal{P}\) of all lattice points in \(Q_{w_{1}}=\mathrm{conv}((0,0),(0,w_{1}),(\frac{w_{1}}{2},\frac{w_{1}}{2}))\). 
Result: The set \(\mathcal{A}\) containing all four-point sets in the plane with multi-width \((w_{1},w_{2})\) written as a subset of \([0,w_{1}]\times[0,w_{2}]\). \(\mathcal{A}\longleftarrow\emptyset\) NormalForms \(\longleftarrow\emptyset\) for \((x_{1},x_{2})\in\mathcal{P}\) do for \(h_{1},h_{2},h_{3},h_{4}\in[0,w_{2}]\cap\mathbb{Z}\) such that \(h_{i}=0\) and \(h_{j}=w_{2}\) for some \(i<j\) do \(S\longleftarrow\{(0,h_{1}),(x_{1},h_{2}),(x_{2},h_{3}),(w_{1},h_{4})\}\) if \(\mathrm{mwidth}(S)=(w_{1},w_{2})\) and \(\mathrm{NF}(S)\notin\mathrm{NormalForms}\) then \(\mathcal{A}\longleftarrow\mathcal{A}\cup\{S\}\) NormalForms \(\longleftarrow\mathrm{NormalForms}\cup\{\mathrm{NF}(S)\}\) ``` **Algorithm 1**Classifying the four-point sets in the plane with multi-width \((w_{1},w_{2})\). Running Algorithm 1 for small widths produces Table 3. We use this data to give estimates for a function counting the width \((w_{1},w_{2})\) four-point sets. To do this we will use the following. **Proposition 5.1**.: _There are at most \((\frac{w_{1}^{2}}{4}+w_{1}+c)(6w_{2}^{2}+1)\) four-point sets in the plane with multi-width \((w_{1},w_{2})\) where \(c=1\) if \(w_{1}\) is even and \(c=\frac{3}{4}\) if \(w_{1}\) is odd._ Proof.: By Proposition 3.1 we know that there are \[\begin{cases}\frac{w_{1}^{2}}{4}+w_{1}+1&\text{if $w_{1}$ even}\\ \frac{w_{1}^{2}}{4}+w_{1}+\frac{3}{4}&\text{if $w_{1}$ odd}\end{cases}\] four-point sets in the line with width \(w_{1}\). Let \((y_{1},\ldots,y_{4})\in[0,w_{2}]^{4}\) be a lattice point representing the \(y\)-coordinates we give to each point. We know that there exist indices \(i_{0}\) and \(i_{1}\) such that \(y_{i_{0}}=0\) and \(y_{i_{1}}=w_{2}\). By a reflection we may assume that \(i_{0}<i_{1}\). We also assume these are as small as possible. Counting the possibilities in each of the six cases shows that there are at most \(6w_{2}^{2}+1\) ways to assign \(y\)-coordinates to a four-point set in the line. A _quasi-polynomial_ is a polynomial whose coefficients are periodic functions with integral period. The functions counting lattice triangles and width \(1\) lattice tetrahedra are both piecewise quasi-polynomials whose coefficients have period \(2\) so we may expect a function counting four-point sets in the plane to be similar. By Proposition 5.1, if there is a quasi-polynomial counting four-point sets of multi-width \((w_{1},w_{2})\) we expect it to be at most quadratic in \(w_{2}\). We expect the case when \(w_{1}=w_{2}\) to be distinct due to the increased symmetry. Also we expect the cases when \(w_{2}\) is odd and even to be distinct so consider them separately. By fitting a quadratic to the results for \((w_{1},w_{1}+1),\ldots,(w_{1},w_{1}+5)\) and \((w_{1},w_{1}+2),\ldots,(w_{1},w_{1}+6)\) we obtain the following conjecture which agrees with the entries of Table 3. 
**Conjecture 5.2**.: The number of four-point sets of multi-width \((w_{1},w_{2})\) with \(w_{2}>w_{1}\) is given by \[\begin{cases}9w_{2}+6&\text{if $w_{2}$ even}\\ 9w_{2}+4&\text{if $w_{2}$ odd}\end{cases}\] when \(w_{1}=2\), \[\begin{cases}\frac{47}{2}w_{2}+7&\text{if $w_{2}$ even}\\ \frac{47}{2}w_{2}+\frac{11}{2}&\text{if $w_{2}$ odd}\end{cases}\] when \(w_{1}=3\), \[\begin{cases}56w_{2}+6&\text{if $w_{2}$ even}\\ 56w_{2}+2&\text{if $w_{2}$ odd}\end{cases}\] when \(w_{1}=4\), \[\begin{cases}\frac{211}{2}w_{2}-9&\text{if $w_{2}$ even}\\ \frac{211}{2}w_{2}-\frac{23}{2}&\text{if $w_{2}$ odd}\end{cases}\] when \(w_{1}=5\) and \[\begin{cases}183w_{2}-36&\text{if $w_{2}$ even}\\ 183w_{2}-42&\text{if $w_{2}$ odd}\end{cases}\] when \(w_{1}=6\). It is tempting to fit quadratics in \(w_{1}\) to the coefficients of these polynomials to get a quasi-polynomial counting four-point sets in general however the resulting function is \[\begin{cases}(10w_{1}^{2}-\frac{73}{2}w_{1}+42)w_{2}-\frac{21}{4}w_{1}^{2}+\frac{63}{2}w_{1}-36&\text{if $w_{1},w_{2}$ even}\\ (10w_{1}^{2}-\frac{73}{2}w_{1}+42)w_{2}-\frac{21}{4}w_{1}^{2}+\frac{61}{2}w_{1}-36&\text{if $w_{1}$ even and $w_{2}$ odd}\\ (\frac{15}{2}w_{1}^{2}-19w_{1}+13)w_{2}-\frac{21}{8}w_{1}^{2}+13w_{1}-\frac{67}{8}&\text{if $w_{1}$ odd and $w_{2}$ even}\\ (\frac{15}{2}w_{1}^{2}-19w_{1}+13)w_{2}-\frac{21}{8}w_{1}^{2}+\frac{25}{2}w_{1}-\frac{67}{8}&\text{if $w_{1},w_{2}$ odd}\end{cases}\] which takes value \(1934\) not \(2206\) when \((w_{1},w_{2})=(7,8)\) and so does not accurately predict the number of four-point sets. This suggests either there is no such quasi-polynomial or that for small values of \(w_{1}\) we have a special case and so cannot predict it from this data. \begin{table} \begin{tabular}{c|c c c c c c c c c c c c} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline 1 & 2 & 5 & 6 & 8 & 9 & 11 & 12 & 14 & 15 & 17 & 18 & 20 \\ 2 & 0 & 13 & 31 & 42 & 49 & 60 & 67 & 78 & 85 & 96 & 103 & 114 \\ 3 & 0 & 0 & 39 & 101 & 123 & 148 & 170 & 195 & 217 & 242 & 264 & 289 \\ 4 & 0 & 0 & 0 & 114 & 282 & 342 & 394 & 454 & 506 & 566 & 618 & 678 \\ 5 & 0 & 0 & 0 & 0 & 254 & 624 & 727 & 835 & 938 & 1046 & 1149 & 1257 \\ 6 & 0 & 0 & 0 & 0 & 0 & 520 & 1239 & 1428 & 1605 & 1794 & 1971 & 2160 \\ 7 & 0 & 0 & 0 & 0 & 0 & 0 & 2206 & & & & \\ \end{tabular} \end{table} Table 3. The number of four-point sets with multi-width \((w_{1},w_{2})\) up to affine equivalence. Using the classification of four-point sets, we move on to classify tetrahedra. This uses a similar algorithm to the four-point set case (see Algorithm 2) with two main differences. We may no longer assume that all the tetrahedra we want to classify are contained in a \(w_{1}\times w_{2}\times w_{3}\) cuboid so must allow more \(z\)-coordinates to be assigned to each point. Also, since we are not extending this classification to a higher dimension, we need only store the normal form of each tetrahedron in order to count them. ``` Data: The set \(\mathcal{A}\) containing all four-point sets in the plane with multi-width \((w_{1},w_{2})\) written as a subset of \([0,w_{1}]\times[0,w_{2}]\). Result: The set \(\mathcal{T}\) containing all tetrahedra with multi-width \((w_{1},w_{2},w_{3})\). 
\(\mathcal{T}\longleftarrow\emptyset\) for \(\{v_{1},v_{2},v_{3},v_{4}\}\in\mathcal{A}\) do for \(h_{1},h_{2},h_{3},h_{4}\in[0,\max\{w_{1}+w_{2},w_{3}\}]\cap\mathbb{Z}\) such that \(h_{i}=0\) and \(h_{j}\geq w_{3}\) for some \(i<j\) do \(T\longleftarrow\operatorname{conv}(v_{i}\times\{h_{i}\}:i=1,2,3,4)\) if \(\operatorname{mwidth}(T)=(w_{1},w_{2},w_{3})\) then \(\mathcal{T}\longleftarrow\mathcal{T}\cup\{\operatorname{NF}(T)\}\) ``` **Algorithm 2**Classifying the tetrahedra with multi-width \((w_{1},w_{2},w_{3})\). Classifying tetrahedra in this way is slow. Table 2 shows the number of tetrahedra with first width 2 and small second and third width. The full list of these tetrahedra, along with their multi-widths, can be found at [1]. Even for these few, familiar patterns begin to appear. It would seem that, as in the width 1 case, when the first width is 2 the third width has little impact on \(|\mathcal{T}_{w_{1},w_{2},w_{3}}|\). It is worth noting that in all of the cases displayed in Table 2 all tetrahedra could be written as a subset of a cuboid with dimensions given by their multi-width. It would be interesting to see if this is a feature of small width or can be extended to general three-dimensional tetrahedra. Based on Table 2 and the previous classifications we make the following conjecture. **Conjecture 5.3**.: There is a piecewise quasi-polynomial with 4 components counting lattice tetrahedra of multi-width \((w_{1},w_{2},w_{3})\). There is a component for each combination of equalities in \(w_{3}\geq w_{2}\geq w_{1}>0\). The leading coefficient in the case \(w_{3}>w_{2}>w_{1}>0\) is double the leading coefficient in the case \(w_{3}=w_{2}>w_{1}>0\). For fixed \(w_{1}\) and \(w_{2}\) there are at most three values which \(|\mathcal{T}_{w_{1},w_{2},w_{3}}|\) can take depending on whether \(w_{3}\) is odd, even or equal to \(w_{2}\).
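For quick reference when checking entries of this kind, the closed-form counts of Theorem 1.1 are easy to tabulate. The helper below is a minimal Python sketch (the paper's own implementation is in Magma) that evaluates \(|\mathcal{T}_{1,w_{2},w_{3}}|\) directly from the case distinctions stated above.

```python
# Minimal sketch, not part of the paper's Magma code: evaluate the counts
# |T_{1,w2,w3}| of width 1 lattice tetrahedra given by Theorem 1.1.
def count_width_one_tetrahedra(w2: int, w3: int) -> int:
    """Number of lattice tetrahedra of multi-width (1, w2, w3) up to
    affine unimodular equivalence, following Theorem 1.1."""
    assert 1 <= w2 <= w3, "multi-widths are ordered: w2 <= w3"
    if w2 == 1:
        return 2 if w3 == 1 else 3
    if w3 > w2:
        if w2 % 2 == 0:
            return 2 * w2 ** 2 + (4 if w3 % 2 == 0 else 3)
        return 2 * w2 ** 2 + 2
    # remaining case: w3 == w2 > 1
    return w2 ** 2 + w2 + (2 if w2 % 2 == 0 else 1)


if __name__ == "__main__":
    for w2, w3 in [(1, 1), (1, 4), (2, 2), (2, 3), (3, 3), (3, 5)]:
        print((1, w2, w3), count_width_one_tetrahedra(w2, w3))
```

For example, it returns \(8\) for multi-width \((1,2,2)\) and \(11\) for \((1,2,3)\), matching the coefficients of the generating functions above.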
2301.01836
A Quantum-Inspired Binary Optimization Algorithm for Representative Selection
Advancements in quantum computing are fuelling emerging applications across disciplines, including finance, where quantum and quantum-inspired algorithms can now make market predictions, detect fraud, and optimize portfolios. Expanding this toolbox, we propose the selector algorithm: a method for selecting the most representative subset of data from a larger dataset. The selected subset includes data points that simultaneously meet the two requirements of being maximally close to neighboring data points and maximally far from more distant data points where the precise notion of distance is given by any kernel or generalized similarity function. The cost function encoding the above requirements naturally presents itself as a Quadratic Unconstrained Binary Optimization (QUBO) problem, which is well-suited for quantum optimization algorithms - including quantum annealing. While the selector algorithm has applications in multiple areas, it is particularly useful in finance, where it can be used to build a diversified portfolio from a more extensive selection of assets. After experimenting with synthetic datasets, we show two use cases for the selector algorithm with real data: (1) approximately reconstructing the NASDAQ 100 index using a subset of stocks, and (2) diversifying a portfolio of cryptocurrencies. In our analysis of use case (2), we compare the performance of two quantum annealers provided by D-Wave Systems.
Anna G. Hughes, Jack S. Baker, Santosh Kumar Radha
2023-01-04T22:07:22Z
http://arxiv.org/abs/2301.01836v1
# A Quantum-Inspired Binary Optimization Algorithm for Representative Selection ###### Abstract Advancements in quantum computing are fuelling emerging applications across disciplines, including finance, where quantum and quantum-inspired algorithms can now make market predictions, detect fraud, and optimize portfolios. Expanding this toolbox, we propose the selector algorithm: a method for selecting the most representative subset of data from a larger dataset. The selected subset includes data points that simultaneously meet the two requirements of being maximally close to neighboring data points and maximally far from more distant data points where the precise notion of distance is given by any kernel or generalized similarity function. The cost function encoding the above requirements naturally presents itself as a Quadratic Unconstrained Binary Optimization (QUBO) problem, which is well-suited for quantum optimization algorithms - including quantum annealing. While the selector algorithm has applications in multiple areas, it is particularly useful in finance, where it can be used to build a diversified portfolio from a more extensive selection of assets. After experimenting with synthetic datasets, we show two use cases for the selector algorithm with real data: (1) approximately reconstructing the NASDAQ 100 index using a subset of stocks, and (2) diversifying a portfolio of cryptocurrencies. In our analysis of use case (2), we compare the performance of two quantum annealers provided by D-Wave Systems. ## I Introduction The task of choosing representative samples from a larger collection of data, often referred to as representative selection, has various advantages over studying the dataset as a whole (e.g., [1, 2, 3, 4]). Representative selection can reduce the size and complexity of the data, simplifying data analysis and processing and reducing the memory cost of storing data. The computational efficiency of data modeling, such as classifier training and model application, can be significantly improved. Representative selection has been implemented in various subjects ranging from computer vision [5] and language processing [6] to protein analysis [7]. A variety of representative selection procedures have been offered to reduce the volume of training data for some specific supervised learning classifiers [8, 9]. In addition to the methods that require additional knowledge for representative selection, there has been a growing interest in unsupervised approaches to finding representative samples [10]. In this work, we implement an unsupervised representative selection algorithm. An unlabeled dataset may contain some unknown number of classes, or the data in the set could be unclustered or only very loosely clustered into different categories according to some notion of similarity. The algorithm aims to find a subset of representative data points from a larger dataset. The selected subset should include only data points dissimilar to one another and similar to unselected neighboring data points. More concretely, our selector algorithm finds the \(k\) least similar points in a sample of \(n\) total clustered data points. This is done by both _maximizing_ the distance between the chosen data and all other data in the dataset while also _minimizing_ the distance between selected data and the other similarly clustered data. The returned \(k\) points are then both representative of the data clusters and maximally distant from the other groups. 
This method can be applied across disciplines, but in this paper, we explore an application in finance: diversification. Diversification is a crucial strategy when building a robust portfolio. To mitigate the risk of interdependent components performing poorly in a market downturn, it is critical to invest in assets that are not strongly correlated to one another. There are many portfolio diversification strategies, from a naive \(1/N\) rule [11] to more complex methods such as portfolio dimensionality or a Bayesian approach [12, 13]. Many diversification methods are deliberately designed to weight assets so as to minimize the portfolio's overall variance. The recent popularity of machine learning in quantitative finance has enabled researchers to define new diversification methods. In this paper, we use our selector algorithm as a method for the diversification of assets. In this framework, a portfolio is well-diversified when each asset is maximally dissimilar to one another and representative of their respective sectors or other similarly performing assets. The core framework of our selector algorithm emerges as a quadratic unconstrained binary optimization (QUBO) problem, which can be tackled with a range of metaheuristics [14, 15, 16, 17] utilizing classical and quantum computation. Because of the large number of variables considered in diversification (and other selective representation problems), the resulting QUBO objective has many binary variables which, in the quantum setting, translates to the requirement of a large number of qubits. Although gate-model quantum computers are continually scaling up qubit numbers, presently, _quantum inspired_ hardware like quantum annealers already meet this requirement (although the qubits are non-universal). Accordingly, we regard our selector algorithm as quantum inspired. ## II The selector algorithm The selector algorithm is designed to pick out unique and representative data points from a larger dataset by finding low-cost solutions to a QUBO objective function. This function is constructed such that low-cost solutions maximize some notion of distance between selected data points and all other data, ensuring that chosen data points are unique while simultaneously minimizing the distance between selected and similarly clustered data. This ensures that each of the selected \(k\) data points from a dataset containing \(n>k\) points represents data that are similarly clustered with nearby points, whilst the \(k\) selected points are _not_ similar to each other. The input data are provided as a matrix \(\mathbf{Y}\), where each row is a vector representative of the \(i^{\text{th}}\) data point \[\mathbf{Y}=\begin{bmatrix}\vec{y}_{1}\\ \vec{y}_{2}\\ \vdots\\ \vec{y}_{i}\\ \vdots\\ \vec{y}_{n}\end{bmatrix}=\begin{bmatrix}y_{1}^{(1)}&y_{1}^{(2)}&y_{1}^{(3)}&\ldots\\ y_{2}^{(1)}&y_{2}^{(2)}&y_{2}^{(3)}&\ldots\\ \vdots\\ y_{i}^{(1)}&y_{i}^{(2)}&y_{i}^{(3)}&\ldots\\ \vdots\\ y_{n}^{(1)}&y_{n}^{(2)}&y_{n}^{(3)}&\ldots\end{bmatrix}. \tag{1}\] where \(n\) is the size of the dataset. Although any metric can be used to evaluate the distance between each data point, throughout this paper, the choice of distance metric is the Euclidean distance unless stated otherwise, \[d_{ij}(\vec{y}_{i},\vec{y}_{j})=\sqrt{\sum_{m}\left(y_{i}^{(m)}-y_{j}^{(m)}\right)^{2}}. \tag{2}\] Alternative distance metrics include standardized Euclidean, cosine, Minkowski, etc. 
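Since everything downstream only needs the pairwise distances between rows of \(\mathbf{Y}\), the matrix \(\mathbf{d}\) can be formed in a few lines; the snippet below is a minimal NumPy sketch (not the authors' implementation) of Equation 2.

```python
# Minimal sketch (not the authors' code): pairwise Euclidean distance matrix
# d[i, j] = ||y_i - y_j|| between the rows of the data matrix Y (Equation 2).
import numpy as np

def distance_matrix(Y: np.ndarray) -> np.ndarray:
    diff = Y[:, None, :] - Y[None, :, :]      # shape (n, n, m)
    return np.sqrt((diff ** 2).sum(axis=-1))  # shape (n, n)
```

The same matrix, including the alternative metrics mentioned above, can also be obtained with `scipy.spatial.distance.cdist`.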
The distance \(d\) is formally a kernel function, which in principle could also be computed and/or learned using quantum/classical neural networks (e.g., [18]). It is therefore possible to create a cross-paradigm variation of the selector algorithm where \(d\) is evaluated on a gate model device as proposed in [18], and the QUBO objective is approximately solved using a quantum annealer. This however is beyond the scope of this work. The QUBO objective is defined as \[\mathcal{C}(\vec{x})=\frac{1}{2k}\vec{x}\mathbf{d}\vec{x}^{T}-\frac{1}{n}\vec{x}\mathbf{d}\mathbf{1}^{T}+A\left(\sum_{i=1}^{n}x_{i}-k\right)^{2}, \tag{3}\] where \(\mathbf{1}\) is the \(n\)-length vector of ones, \(\mathbf{d}\) is the \(n\times n\) distance matrix with elements \(d_{ij}\) as given in Equation 2 (or another user-defined distance measure), \(\vec{x}\) is the \(\{0,1\}^{n}\) vector of binary variables and \(A\) is a penalty scaling factor used to enforce the equality constraint \(\sum x_{i}=k\). The first term on the RHS of Equation 3 represents the distance between the selected points and other points in the cluster, ensuring that selected points are representative of their cluster, the second term represents the distance between selected points and all others in the data set, and the last term enforces the equality constraint (i.e., a penalty is applied to the cost function if more than \(k\) data points are chosen). This optimization problem takes on a QUBO form; such problems are, in general, NP-hard. A more detailed description of QUBO models is given in Appendix A and a formal extension to weighted selection (i.e., where \(\vec{x}\) can take on a number of discrete values not limited to \(\vec{x}\in\{0,1\}^{n}\)) is given in Appendix C. Now in this form, the problem becomes approachable with approximate metaheuristic algorithms, including quantum annealing, used in Section IV.2, which we discuss qualitatively in Appendix B. ## III Experiments with synthetic data To demonstrate the basic functionality of the selector algorithm, we use it to select points from two synthetic datasets. One dataset contains simple and obviously clustered data points (in terms of the Euclidean distance), while the second dataset contains time series data - data points organized in a chronologically ordered sequence. We show that in both cases, the selector algorithm makes reasonable, representative choices. Figure 1: An example of the selector algorithm's performance on clustered data points. The clusters are generated by randomly choosing data points from a Gaussian distribution on a two-dimensional plane centered on two points. Each blob contains 90 data points, with a standard deviation of 2. The selector algorithm was tasked with choosing two representative points from the complete dataset using the _qbsolv_ solver. The algorithm-selected points are highlighted with blue squares, indicating that the algorithm successfully chose visually representative points from each cluster. In the first application, we use the selector algorithm to choose 2 data points from a clustered array of points. We generate two clusters, or blobs, by randomly choosing data points from a Gaussian distribution on a two-dimensional plane centered on two points. Each blob contains 90 data points, with a standard deviation of 2. 
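In practice the objective is handed to such solvers as an explicit QUBO matrix. The snippet below is a minimal sketch (not the authors' code) of this bookkeeping for Equation 3, together with a numerical check against the direct formula; the small random point set is purely illustrative.

```python
# Sketch (not the authors' code): encode Equation 3 as an explicit QUBO matrix Q
# with constant offset A*k**2, and check the encoding against direct evaluation.
import numpy as np

def selector_qubo(d: np.ndarray, k: int, A: float):
    """Return (Q, offset) such that C(x) = x @ Q @ x + offset for binary x."""
    n = d.shape[0]
    Q = d / (2 * k) + A * np.ones((n, n))                     # quadratic part
    Q[np.diag_indices(n)] += -d.sum(axis=1) / n - 2 * A * k   # linear part (x_i**2 = x_i)
    return Q, A * k ** 2

def selector_cost(x, d, k, A):
    """Direct evaluation of Equation 3."""
    n = len(x)
    return x @ d @ x / (2 * k) - x @ d @ np.ones(n) / n + A * (x.sum() - k) ** 2

rng = np.random.default_rng(0)
pts = rng.normal(size=(10, 2))                                # illustrative data only
d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
Q, offset = selector_qubo(d, k=3, A=2.0)
x = rng.integers(0, 2, size=10)                               # a random binary selection
assert np.isclose(x @ Q @ x + offset, selector_cost(x, d, 3, 2.0))
```

In this matrix form the problem can be handed directly to qbsolv, simulated annealing, or the quantum annealers used in Section IV.2.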
The coordinates of all data points are used as input for the selector algorithm, which then chooses \(k\) representative points by minimizing the cost function in Equation 3 using the D-Wave decomposing solver _qbsolv_[19, 20] which uses a modified tabu algorithm [21] to minimize the objective function. In this example, we have generated two clusters of data points and tasked the selector algorithm with choosing \(k=2\) representative points. The blobs and selected points are shown in Figure 1; as expected, the selector algorithm successfully chose one point from each cluster. Since the choice of distance is euclidean, the chosen point closely resembles the approximate center of the cluster. This visual representation might not always be the case for all metrics; for example, metrics like Jensen-Shannon divergence might have no visually discernible centers. The selector algorithm can similarly choose data points of arbitrarily high dimensions. We demonstrate this in two examples: choosing points from an array of trigonometric curves and choosing from an array of stochastic differential equation (SDE) time series generated randomly. In each case, the data are generated synthetically but clustered into two different classes: sine or cosine in the trigonometric case and time series generated from coupled Brownian process in the SDE case. In both cases, the selector algorithm is tasked with choosing two representative data points; in the trigonometric case, a solution is considered successful if the selector algorithm chooses one sine and one cosine curve, and in the SDE case, a solution is successful if the selector algorithm chooses representatives from the distinct processes. The trigonometric time series are generated using, \[\begin{split} y_{i}^{\text{sine}}&=sin(t_{i})+n_{ i}\\ y_{i}^{\text{cosine}}&=cos(t_{i})+n_{i},\end{split} \tag{4}\] where \(t_{i}\) ranges from 0 to \(2\pi\) and \(n_{i}\) is artificial random noise pulled from a Gaussian distribution with mean \(\mu=0\) and standard deviation \(\sigma\) ranging from \(\sigma=0.1\) to \(\sigma=5.5\). As \(\sigma\) increases, there is little difference between the sine and cosine curves because the introduced noise dominates the amplitude of the curves. We generate 50 sine and 50 cosine time series with 10 different \(\sigma\) values and task the selector algorithm with choosing \(k=2\) representative data points in each case. We expect the selector algorithm to select one sine and one cosine data point when \(\sigma\) is low, but as \(\sigma\) is increased, it should be harder for the selector algorithm to distinguish the two clusters. The results are shown in Figure 2. In the first two panels (a,b), clusters of sine (green) and cosine (orange) curves are plotted against time, where \(t\) spans from 0 to \(2\pi\). The curves in the top left panel have \(\sigma=0.3\), while the curves in the top right panel have \(\sigma=3.0\). In each plot, the 2 representative curves chosen by the selector algorithm are plotted in black. As expected, the \(\sigma=0.3\) time series curves are easily distinguishable, with the algorithm choosing one of each. In (b), where the standard deviation is increased to \(\sigma=3\), the sine and cosine curves are no longer visually distinguishable. In (c), density plots of the elements of the correlation distance matrix are shown. Each horizontal line extending upward represents increasing noise standard deviation, from \(\sigma=0.1\) to \(\sigma=1.0\). 
As expected, the two clusters of curves are clearly separable into two distinct peaks at low \(\sigma\), but start to merge as \(\sigma\) increases. This measures the distinguishability (or lack thereof) of the clusters in the dataset as a function of \(\sigma\). In (d), the accuracy of chosen solutions is plotted as a function of \(\sigma\). Accuracy here is measured as the percentage of algorithm solutions that contained one sine and one cosine curve out of 200 total trials. As expected, at low \(\sigma\), the selector algorithm chooses solutions with 100% accuracy. While that accuracy decreases at higher \(\sigma\), even at \(\sigma>3\) where the two sets of density curves (figure (c)) are indistinguishable, the selector algorithm can pick out curves from each cluster with better-than-random accuracy. This demonstrates the robustness of the selector algorithm in choosing representative data points even as the separation between clusters vanishes. Finally, we test the performance of the selector algorithm at choosing representative curves from the solutions of a two-dimensional SDE given by \[dx_{i}(t)=\mu_{i}x_{i}(t)dt+\sum_{j=1}^{2}\sigma_{ij}x_{i}(t)dW_{j}(t), \tag{5}\] where \(\mu_{i}\) and \(\sigma_{ij}\) are the drift vector and volatility matrix respectively and \(W_{j}(t)\) is the standard Brownian motion. For the case of simulation, we choose \(\mu_{i}\) and \(\sigma\) randomly from a uniform distribution \([-1,1]\) and numerically generate multiple two-dimensional time series. While the trigonometric time series showed us the ability of the selector algorithm to choose representative data points from loosely clustered or unclustered data, in this example, we show the ability of the selector algorithm to evaluate simulated financial data. As before, the selector algorithm was given an array containing all time series and tasked with choosing (a priori known) \(k=2\) representative curves. The two sets of correlated SDE time series are plotted in Figure 3, with the selector algorithm's choices plotted in black. The selector algorithm successfully chose one curve from each time series cluster, demonstrating its ability to evaluate financial time series data such as daily returns. ## IV Use case: building a diversified portfolio Diversifying a portfolio by investing in uncorrelated assets is an approach to mitigate risks associated with downturns in specific markets or unexpected crises. Because the selector algorithm is designed to pick out data points that are both distinct and representative, it is particularly useful for building a diverse portfolio from a more extensive list of assets. Quantum annealers are particularly well-suited to address QUBO problems at this scale. That is, it is possible for the large number of variables to be mapped to the large number (\(\sim 1000\)) of non-universal qubits available on present generation quantum annealers. In section IV.1, we will start by using the selector algorithm as a portfolio diversifier aiming to approximate the behaviour of the NASDAQ 100 index using a smaller subset of assets. For this task we use a classical large scale QUBO solver which is presently practical to use. We then proceed to section IV.2, where we perform experiments with the algorithm using D-Wave's quantum annealers. That is, we benchmark and compare the performance of two quantum devices for the task of diversifying a cryptocurrency portfolio. 
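In the first use case the Euclidean metric is replaced by the correlation distance between return series (defined formally in Equation 7 below); with the returns stored row-wise, the whole distance matrix is essentially one line of NumPy. A minimal sketch, not the authors' code:

```python
# Sketch (not the authors' code): correlation-distance matrix between row-wise
# return series, d_ab = 1 - Pearson correlation of rows a and b (Equation 7).
import numpy as np

def correlation_distance_matrix(Y: np.ndarray) -> np.ndarray:
    return 1.0 - np.corrcoef(Y)
```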
Figure 3: Two clusters, each containing 50 synthetic financial time series, generated as given in Equation 5, are plotted in orange (positive) and teal (negative). Each cluster is created with a random correlation chosen from a Gaussian distribution of \(\mu=0\) and \(\sigma=4\). For each set of curves, the volatility and return rates are randomly chosen from a Gaussian distribution with \(\mu=-1\) and \(\sigma=1\). The selector algorithm was tasked with choosing 2 points from the full dataset. Plotted in black are the two selected curves, showing the selector algorithm's ability to choose representative time series from synthetic financial data reliably. Figure 2: Two clusters of 50 sine and cosine curves generated from Equation 4 are used as input for the selector algorithm, with varying values of the standard deviation of the Gaussian noise. _(a)_: 50 sine and cosine time series curves with introduced Gaussian noise of \(\sigma=0.3\) are plotted in green and orange, respectively. The algorithm-selected curves are plotted in black, showing that the algorithm chose one curve from each cluster. _(b)_: 50 sine and cosine time series curves are plotted with introduced Gaussian noise increased to \(\sigma=3.0\). As before, the algorithm-selected curves are plotted in black, but the clusters are no longer distinguishable because the amplitude of the noise is too great. _(c)_: the density of curves are shown as a function of distance, where each horizontal line extending upward represents increasing noise standard deviation. As expected, the two clusters of curves are clearly separable into two distinct peaks at low \(\sigma\), but start to merge as \(\sigma\) increases. _(d)_: the accuracy of chosen solutions is plotted as a function of \(\sigma\) in the bottom right panel. Accuracy here is measured as the percentage of algorithm solutions that contained one sine and one cosine curve out of 200 total trials. At low \(\sigma\), the selector algorithm chooses solutions with 100% accuracy. While that accuracy decreases at higher \(\sigma\), even at \(\sigma>3\) where the two sets of curves are indistinguishable, the selector algorithm can pick out curves from each cluster with better-than-random accuracy. All optimizations in this example were performed using the _qbsolv_ solver. ### Reconstructing the NASDAQ 100 with a classical QUBO solver The selector algorithm can build a diverse portfolio by selecting a representative subset of stocks from a market index. In this first application on real financial data, we use the selector algorithm to approximately reconstruct the NASDAQ 100 by choosing a subset (\(S_{k}\)) of \(k\) stocks from all 102 stocks in the market index. We perform the optimization with the D-Wave decomposing solver _qbsolv_[19; 20]. To choose stocks from the NASDAQ 100 index, we treat the daily returns from each stock as individual data points in our array. Vectors \(\vec{y}_{i}\) are composed of the daily returns from each stock in the index over one trading year, or 253 days, starting on 2021-02-01 (YYYY-MM-DD) and ending on 2022-02-01 (YYYY-MM-DD). 
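As a concrete illustration of this data preparation step, the returns matrix can be assembled with pandas from a table of daily closing prices; the file name and layout below are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch: build the returns matrix Y from a CSV of daily closing
# prices (rows = trading days, columns = tickers). File name and layout are
# assumptions, not from the paper.
import pandas as pd

prices = pd.read_csv("nasdaq100_prices.csv", index_col=0, parse_dates=True)
returns = prices.pct_change().dropna()  # daily returns, one column per stock
Y = returns.T.to_numpy()                # one row per stock, as in Equation 6
print(Y.shape)                          # e.g. (102, 253) for one trading year
```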
This results in a \(102\times 253\) matrix \(\mathbf{Y}\), \[\mathbf{Y}=\begin{bmatrix}\vec{y}_{1}\\ \vec{y}_{2}\\ \vec{y}_{3}\\ \vdots\\ \vec{y}_{n}\end{bmatrix}=\begin{bmatrix}\overrightarrow{\mathrm{AAPL}}\\ \overrightarrow{\mathrm{ABNB}}\\ \overrightarrow{\mathrm{ADBE}}\\ \vdots\\ \overrightarrow{\mathrm{ZS}}\end{bmatrix} \tag{6}\] In this familiar form, the data are ready for evaluation in the cost function (Equation 3), where \(n=102\) is the total number of vectors in matrix \(\mathbf{Y}\), \(k\) is the number of desired stocks selected by the algorithm, and the elements of \(\mathbf{d}\) are given by the correlation distance, \[d_{ab}=1-\frac{\sum_{i}(a_{i}-\bar{a})(b_{i}-\bar{b})}{\sqrt{\sum_{i}(a_{i}-\bar{a})^{2}\sum_{i}(b_{i}-\bar{b})^{2}}}, \tag{7}\] where \(\bar{a}\) and \(\bar{b}\) are the mean values of vectors \(\vec{a}\) and \(\vec{b}\). The chosen stocks can come arbitrarily close to a complete reproduction of the market index; as the number of selected stocks \(k\) increases, the combined data from the chosen stocks comes closer and closer to an exact reproduction of the index itself. However, the goal is only to approximately reproduce the index with a smaller number of stocks, so choosing a large number is not necessarily advantageous and is a hyperparameter for the end-user. In all following analyses, we use only the binary case, where the weighted subspace is restricted to only binary values, \(\vec{w}\rightarrow\vec{x}\in\{0,1\}^{n}\) (see discussion in Appendix C for the general discrete weighted case). Like in the previous examples, this means that specific data points are either chosen or not chosen, with no in-between. This is an NP-hard problem [22]. To evaluate the performance of the binary selector algorithm in reproducing the NASDAQ 100, we create a proxy NASDAQ 100 index by linearly combining all stocks and averaging the resulting vector. Note that while the NASDAQ 100 index is market value-weighted, for simplicity, the stocks in our NASDAQ 100 index are equally weighted. We initially use the selector algorithm to choose two stocks from our input matrix \(\mathbf{Y}\) of returns. The algorithm-chosen stocks are Fox Corporation ('FOX') and Synopsys, Inc. ('SNPS'). We use these stocks to create \(S_{k=2}:=S_{2}\), a data point composed of the averaged linear combination of our chosen stocks' daily returns. A histogram showing the average percentage daily returns for both \(S_{2}\) and the proxy NASDAQ 100 index is shown on the left panel of Figure 4. One can see that the stock return profile of the selected \(S_{2}\) stocks is a good surrogate for the NASDAQ 100 already at \(k=2\). To further assess the quality of the algorithm's selection, we calculate the value of the cost function and the correlation distance between the proxy NASDAQ 100 index and some data point \(S_{2}\) for all possible combinations of 2 stocks. Histograms showing the range of cost function and correlation distance values are shown in the middle and right panels of Figure 4, respectively. The \(S_{2}\) index had a lower cost function value than 74% of all cost function values and a correlation distance less than 89% of all values. In Figure 5, we show the average (left) and cumulative (right) returns from the proxy NASDAQ 100 index plotted with the \(S_{k}\) time series, with each row showing incremental increases in \(k\). 
As expected, when the number of chosen stocks (\(k\)) increases, the \(S_{k}\) vector comes closer to the proxy NASDAQ 100 index. Finally, we compare the algorithm-selected indices \(S_{k}\) to the NASDAQ 100 index by computing the mean squared error (MSE) between the two. The number of stocks selected \(k\) increases steadily until all 102 stocks in the index are included. The MSE as a function of \(k\) is plotted in Figure 6, where one can see that the NASDAQ 100 can effectively be reproduced by roughly 40 stocks. ### Diversifying cryptocurrency portfolios with quantum annealers Selecting a diverse portfolio from a limited index can be challenging, especially when the assets cannot be easily divided into traditional sectors, as in the case of cryptocurrencies (crypto). Unexpected correlations between coins can leave a crypto portfolio vulnerable to the volatility of one asset. The selector algorithm is thus particularly useful for building a diverse portfolio of cryptocurrencies. In this use case, we use the selector algorithm to build a diverse portfolio from the Crescent Crypto Market Index (CCMIX) using quantum annealers. We compare the performance of each device, investigating how well each device satisfied the problem constraints and the quality of solutions. The two quantum annealing devices used in the following experiments are the D-Wave 2000Q and the D-Wave Advantage. The D-Wave 2000Q, first introduced in 2017, is a 2048-qubit quantum annealing device. The 2000+ qubits are arranged in a Chimera topology with 6016 couplers. The 2000Q system has been used to solve problems ranging from e-commerce listing order to cryptography [23; 24]. The D-Wave Advantage quantum processing unit (QPU) was released in September 2020 as an updated, more advanced system. It contains 5000+ qubits with 35,000+ couplers - an increase from 6 to 15 couplers per qubit. This increase in qubits and couplers enables the D-Wave Advantage QPU not only to solve larger problems than the D-Wave 2000Q device but also allows problems with more challenging connectivity to be more easily mapped to the device qubits when compared to the 2000Q device. The D-Wave Advantage QPU has been used to solve problems such as railway dispatching and molecular unfolding [25; 26]. The input data to the selector algorithm consists of the daily returns from each coin in the CCMIX over seven months, beginning on 2021-04-01 (YYYY-MM-DD) and ending on 2021-11-11 (YYYY-MM-DD). Depending on the experiment, the selector algorithm chooses some desired number of coins in the array, using the specified quantum annealer to minimize the cost function. #### IV.2.1 Constraint Satisfaction with Default Parameters In the first round of experiments, we investigate the ability of each device to choose solutions that satisfy the equality constraint imposed by the penalty term in Equation 3. While satisfying this constraint may be trivial for some classical methods, quantum annealers are more limited, especially when the constraints are imposed as penalties [27]. For experiments in this section, we use the default parameters of the annealers, notably leading to an annealing time of \(t=20\,\mu s\). We also use a penalty scaling factor of \(A=2\). We run 300 trials on each device, where the selector algorithm was tasked with choosing \(k=3\) cryptocurrencies from the 18 coins included in the CCMIX. In all experiments, the number of shots per run is fixed at 1000. The results are shown in Figure 7. 
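For reference, a single run of this kind can be submitted through D-Wave's Ocean SDK roughly as sketched below. This is a hedged illustration rather than the authors' code: it assumes Leap API access, uses a random stand-in for the CCMIX correlation-distance matrix, and leaves the choice between the 2000Q and Advantage QPUs to the configured default solver.

```python
# Hedged sketch (not the authors' code): submit the Equation 3 QUBO for an
# 18-asset instance to a D-Wave QPU via the Ocean SDK. Requires D-Wave Leap
# access; the random d below is a stand-in for the real distance matrix.
import numpy as np
import dimod
from dwave.system import DWaveSampler, EmbeddingComposite

n, k, A = 18, 3, 2.0
rng = np.random.default_rng(1)
d = rng.random((n, n))
d = (d + d.T) / 2.0          # symmetrize the stand-in distance matrix
np.fill_diagonal(d, 0.0)

Q = d / (2 * k) + A * np.ones((n, n))                    # quadratic part of Equation 3
Q[np.diag_indices(n)] += -d.sum(axis=1) / n - 2 * A * k  # linear part on the diagonal

qubo = {(i, i): Q[i, i] for i in range(n)}
qubo.update({(i, j): 2 * Q[i, j] for i in range(n) for j in range(i + 1, n)})
bqm = dimod.BinaryQuadraticModel.from_qubo(qubo)

sampler = EmbeddingComposite(DWaveSampler())  # a specific QPU can be requested via solver=...
result = sampler.sample(bqm, num_reads=1000, annealing_time=20)
chosen = sorted(i for i, v in result.first.sample.items() if v == 1)
print("selected coins:", chosen, "energy:", result.first.energy)
```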
There is a significant difference in how well each device satisfies the \(k=3\) constraint. The D-Wave 2000Q device only chose three cryptocurrencies in 49 of the 300 trials - a success rate of just 16%. Conversely, the D-Wave Advantage QPU chose three coins in 256 of the 300 runs - a success rate of over 85%. There is clearly a marked improvement in performance for the newer D-Wave Advantage QPU even using the default parameters. #### IV.2.2 Consistency of Solutions with Changing Annealing Time In subsequent experiments, we investigated the impact of the value of the penalty scaling factor \(A\) (see Equation 3) and the annealing time on constraint satisfaction. We first investigated tuning the penalty scaling factor in 11 runs with 50 repeats for each value of \(A\), where the value ranged from \(A=0\) to \(A=10\) in fixed increments while the annealing time was fixed at \(20\,\mu s\). In these experiments, we saw very little impact on either the quality of the solution (i.e., how low the cost is) or satisfaction of the problem constraint. For the rest of this section, we therefore continue with the default of \(A=2\). Finally, we performed 19 trials on each device using annealing times ranging from the default 20 \(\mu s\) to 990 \(\mu s\), with 50 repeats at each annealing time. As can be seen in Figure 8, the value of the annealing time has a negligible effect on the percentage of solutions matching the \(k=3\) constraint for the Advantage QPU, which tends towards solutions that satisfy this constraint. Conversely, higher annealing times enable the 2000Q QPU to find more solutions that satisfy the soft constraint. Even at high annealing times, most of the solutions chosen by the 2000Q device do not satisfy the problem constraint, while the Advantage QPU satisfies the constraint \(>85\%\) of the time at every annealing time. The possible impact of the annealing time on the overall value of the cost function (\(k=3\)) is more difficult to discern. At the intermediate annealing time of 600 \(\mu s\) and at the higher annealing time of 900 \(\mu s\), both the value of the cost function and the standard deviation between trials hit a low value for the 2000Q QPU. Very little such variation is discernible for the Advantage device. Figure 4: _(a)_ Histograms and corresponding kernel density estimate (KDE) plots of the daily returns from the proxy NASDAQ 100 index and the combined index from \(k=2\) stocks (\(S_{2}\)) selected by the algorithm using the _qbsolv_ solver. _(b)_ Histogram and corresponding KDE plot showing the value of the cost function for all combinations of 2 stocks in the NASDAQ 100 index, where the dashed vertical line shows the value of the cost function for the algorithm-selected \(k=2\) stocks. _(c)_ Histogram and corresponding KDE plot showing the correlation distance for all combinations of 2 stocks in the NASDAQ 100 index, where the dashed vertical line shows the distance for the algorithm-selected \(k=2\) stocks. #### IV.2.3 Comparing solution quality We examine the quality of solutions selected by each device given the constraint that \(k=3\). To understand how the QPU-chosen solutions compare with all possible solutions, we first compute the value of the cost function for every combination of coins. The CCMIX index contains 18 cryptocurrencies, leading to \(2^{18}=262144\) total possible combinations, ranging from choosing 0 coins to the possibility that all 18 coins are chosen. 
Figure 5: _(a)-(e)_: All daily returns from the proxy NASDAQ 100 index (violet) and the index generated from the selector algorithm’s choice of stocks only. The proxy NASDAQ 100 index is created by equally weighting all stocks in the NASDAQ 100 over 2021-02-01 to 2022-02-01 (YYYY-MM-DD), while the selector algorithm index is created by equally weighting only the stocks chosen by the algorithm. The number of selected stocks \(k\) varies, starting from \(k=2\) in the first row and increasing to \(k=20\) in the final row. As \(k\) increases, the selector-created index comes closer to the proxy NASDAQ 100 index, as expected. _(f)-(j)_: The cumulative returns from both the proxy NASDAQ 100 index and the selector algorithm-generated index, where each row represents an increasing number of selected stocks \(k\). As the number of selected stocks is increased, the selector algorithm-generated index more closely replicates the features of the proxy NASDAQ 100 index. The cost function values are calculated assuming the \(k=3\) constraint, so all combinations containing more or less than three coins are penalized in the cost function. \(A=2\) in these experiments. The mean value of the cost function, considering all combinations, is 81. To gauge the performance of quantum devices relative to a randomized solution, we calculate the average value of the cost function for solutions chosen by each device for all trials at each device's optimal annealing time (550 \(\mu s\) for 2000Q and 900\(\mu s\) for Advantage). For the 2000Q device, the average value of the cost function was 4.02, while the average cost function value for the Advantage QPU was 0.32. The results are shown in Figure 9, where a histogram showing the density of cost function values is plotted, with vertical lines indicating the total average cost value and the cost values of the 2000Q and Advantage QPUs. The average cost function value of both devices was significantly lower than the average value considering all possible solutions. The average value of the cost function for the 2000Q device is in the lowest 4% of possible values, while the average value of the cost function for the Advantage QPU is in the lowest 0.03% of all solutions. Overall, both devices distinctly demonstrate the ability to choose solutions with low values of the cost function. However, comparing devices, the Advantage QPU shows a marked improvement in the ability to choose solutions that match the problem constraints and solutions with lower cost function values. Figure 7: The selector algorithm was tasked with choosing three cryptocurrencies from the CCMIX, which contains 18 total options. Optimization was performed using the D-Wave 2000Q device (left) and the D-Wave Advantage device (right), with 300 trials on each QPU. The solutions chosen by the D-Wave 2000Q device satisfied the problem constraint of \(k=3\) in only 16% of its trials, while the D-Wave Advantage device chose solutions that satisfied the problem constraint in \(>85\%\) of its trials. Figure 6: The mean-squared error (MSE) between the proxy NASDAQ 100 index and the selector algorithm-generated index as a function of the number of selected stocks k. The proxy NASDAQ 100 index is created by equally weighting all stocks in the NASDAQ 100 over 2021-02-01 to 2022-02-01 (YYYY-MM-DD). In contrast, the selector algorithm index is created by equally weighting only the stocks chosen by the algorithm. 
As \(k\) increases from 1 representative stock to all 102 stocks in the index, the MSE sharply decreases to zero. Figure 8: We investigate the ability of each device to choose solutions matching problem constraints as a function of annealing time, where the annealing time is increased from \(20\mu s\) to \(990\mu s\). The percentage of solutions matching problem constraints (out of 50 trials at each annealing time) is plotted for the D-Wave 2000Q (beige) and the D-Wave Advantage (violet). The change in annealing time has a negligible effect on the percentage of constraint-satisfying solutions for the Advantage QPU, while higher annealing times seem to enable the 2000Q QPU to satisfy the problem constraint better. However, even at high annealing times, most solutions chosen by the 2000Q device do not satisfy the problem constraint, while \(>85\%\) of solutions chosen by the Advantage QPU satisfy the constraint at every annealing time. ## V Conclusion In this work, we devised a selector algorithm for the representative selection of data which relies on solving a QUBO problem. Our selector algorithm chooses data points by maximizing the distance between selected points and all other data points in the dataset and minimizing the distance between selected points and similarly clustered data points. Given the potentially large number of variables in the resulting QUBO problem, the optimization stage of the selector algorithm is particularly suited to large-scale QUBO solvers like quantum annealers. After experimentation with synthetic datasets, in our first practical use case, we used the selector algorithm to approximate the NASDAQ 100 with a subset of stocks in the index. We created a proxy NASDAQ 100 index by linearly combining and averaging all 102 stocks. The selector algorithm chose \(k\) stocks from the index, whose performances were combined, averaged, and compared to the proxy NASDAQ 100 index. As the number of selected stocks \(k\) increased, the algorithm-constructed index more closely resembled the proxy NASDAQ 100 index. We compared the \(k=2\) point, \(S_{2}\), to all possible combinations of 2 stocks. We found that the algorithm-selected point \(S_{2}\) had a cost function value in the bottom 26% of all \(k=2\) solutions. In our second application, we used the selector algorithm to build a diversified portfolio of cryptocurrency assets from the Crescent Crypto Market Index. The optimization was performed using 2 quantum annealers: the D-Wave 2000Q and the D-Wave Advantage. We compared the \(k=3\) selection of both devices by evaluating the cost function and comparing that to the value of the cost function of all possible coin combinations in the index. We found that the average cost function value of solutions chosen by the 2000Q device was in the lowest 4% of all cost function values, while the D-Wave Advantage device's average cost function value was in the lowest 0.03%. The D-Wave Advantage device also more consistently chose solutions matching the problem constraint - with \(>85\%\) of solutions matching the constraint, while only 16% of 2000Q solutions matched the constraint. Overall, we saw a clear improvement of the newer Advantage QPU over the earlier 2000Q QPU in providing meaningful solutions to the combinatorial optimization problem.
2310.17915
Lifting the Veil: Unlocking the Power of Depth in Q-learning
With the help of massive data and rich computational resources, deep Q-learning has been widely used in operations research and management science and has contributed to great success in numerous applications, including recommender systems, supply chains, games, and robotic manipulation. However, the success of deep Q-learning lacks solid theoretical verification and interpretability. The aim of this paper is to theoretically verify the power of depth in deep Q-learning. Within the framework of statistical learning theory, we rigorously prove that deep Q-learning outperforms its traditional version by demonstrating its good generalization error bound. Our results reveal that the main reason for the success of deep Q-learning is the excellent performance of deep neural networks (deep nets) in capturing the special properties of rewards namely, spatial sparseness and piecewise constancy, rather than their large capacities. In this paper, we make fundamental contributions to the field of reinforcement learning by answering to the following three questions: Why does deep Q-learning perform so well? When does deep Q-learning perform better than traditional Q-learning? How many samples are required to achieve a specific prediction accuracy for deep Q-learning? Our theoretical assertions are verified by applying deep Q-learning in the well-known beer game in supply chain management and a simulated recommender system.
Shao-Bo Lin, Tao Li, Shaojie Tang, Yao Wang, Ding-Xuan Zhou
2023-10-27T06:15:33Z
http://arxiv.org/abs/2310.17915v1
# Lifting the Veil: Unlocking the Power of Depth in Q-learning ###### Abstract With the help of massive data and rich computational resources, deep Q-learning has been widely used in operations research and management science and has contributed to great success in numerous applications, including recommender systems, supply chains, games, and robotic manipulation. However, the success of deep Q-learning lacks solid theoretical verification and interpretability. The aim of this paper is to theoretically verify the power of depth in deep Q-learning. Within the framework of statistical learning theory, we rigorously prove that deep Q-learning outperforms its traditional version by demonstrating its good generalization error bound. Our results reveal that the main reason for the success of deep Q-learning is the excellent performance of deep neural networks (deep nets) in capturing the special properties of rewards namely, spatial sparseness and piecewise constancy, rather than their large capacities. In this paper, we make fundamental contributions to the field of reinforcement learning by answering to the following three questions: Why does deep Q-learning perform so well? When does deep Q-learning perform better than traditional Q-learning? How many samples are required to achieve a specific prediction accuracy for deep Q-learning? Our theoretical assertions are verified by applying deep Q-learning in the well-known beer game in supply chain management and a simulated recommender system. deep Q-learning, deep nets, reinforcement learning, generalization error, beer game, recommender system. ## 1 Introduction Intelligent system operations [1], supply chain management [2], production recommendation [3], human resource management [4], games [5], robotic manipulation [6], pricing [4], and many other applications, often deal with data consisting of states, actions and rewards. Developing suitable policies based on these data to improve the quality of decision-making is an important research topic in operations research and management science. For example, in product recommendations [3], the state refers to a user's current preference, the action is the recommendation of a product to the user, and the reward concerns his/her feedback on the recommendation. The goal is to find a policy that efficiently recommends products that can maximize the cumulative rewards and thus increase revenue. To address these sequential decision-making problems, traditional studies [7] favour _model-driven_ approaches; many models are proposed based on human ingenuity and prior knowledge and data are utilized to select a suitable model from these candidates. Such approaches benefit from theoretical analysis and interpretability [8, 9], which are crucial for convincing the decision-makers and persuading consumers. However, these approaches have weak predictive accuracy and are usually labor intensive, frequently requiring people to change the model, as long as some outliers are discovered. Meanwhile, the rapid development of data mining in recent years has led to an explosion in the volume and variety of data. These massive data certainly bring opportunities to discover subtle population patterns, but significantly impact labor-intensive model-driven approaches [10]. _Data-driven_ approaches, on the other hand, focus on utilizing machine learning methods to explore the patterns suggested by the data. They have attracted growing interest recently in operations research and management science [11, 12, 13, 14]. 
The basic idea of a data-driven approach is to first adopt a large hypothesis space, with the intuition that the space is rich enough to contain the optimal model, and then apply a specific machine learning algorithm to the massive data to tune a high-quality model from the hypothesis space. These approaches greatly reduce the importance of human ingenuity and improve the prediction accuracy. Figure 1 differentiates the two aforementioned approaches.

Fig. 1: Philosophy behind _model-driven_ and _data-driven_ approaches

However, although the data-driven approach has been widely applied and achieved great success in numerous applications [3, 5, 14, 15], solid theoretical verification and interpretability are needed to support its excellent performance, without which decision makers' enthusiasm may be dampened and they may turn to the model-driven approach, neglecting the outstanding performance of the data-driven approach in practice. Under these circumstances, a corresponding theory that explains the running mechanisms and verifies the feasibility of the data-driven approach should be developed urgently, which is the purpose of this paper.

### _Problem formulation_

Reinforcement learning (RL) [16] is a promising data-driven approach for tackling sequential decision-making problems with data consisting of states, actions, and rewards. As shown in Figure 2, RL aims to develop a sequence of actions to maximize the total cumulative reward. It has been successfully used in human resource operations [17], recommendations [3], games [5], and supply chains [2], among others. The review [18] provides additional details on RL. The Q-learning [19] algorithm is widely used to produce a sequence of high-quality actions in RL and is regarded as one of the most important breakthroughs in RL [16, Sec. 6.5]. The core of Q-learning is to learn a sequence of Q-functions that are approximations of the optimal values. Q-learning is, by definition, a temporal-difference algorithm that incorporates four components: hypothesis space selection, optimization strategy designation, Q-function training, and policy search. The hypothesis space component is devoted to selecting a suitable class of functions that encodes some a-priori knowledge of the Q-functions. The optimization component entails the mathematical formulation of a series of optimization problems concerning Q-functions based on the given data and the selected hypothesis space. The Q-function component aims to successively determine Q-functions at different times by solving the formulated optimization problems. The policy component determines the policy by maximizing the obtained Q-functions. As the starting point of Q-learning, the hypothesis space plays a crucial role because it not only regulates the format and properties of the Q-function to be learned but also determines the difficulty of solving the corresponding optimization problems. Since optimal Q-functions are unknown, the selection of hypothesis spaces is an eternal problem in Q-learning [20]. In particular, large hypothesis spaces are beneficial for representing any Q-functions but inevitably require large computations and are frequently unstable to noise. Conversely, excessively small hypothesis spaces lack expressive power and usually exclude the optimal Q-functions, causing Q-learning to underperform, even in fitting the given data.
Due to limitations on computation and memory, linear spaces spanned by certain basis functions are often selected as hypothesis spaces in traditional Q-learning [16], resulting in several bottlenecks in practice. For example, in robotic grasping [21], Q-learning with linear hypothesis spaces is only applicable to training individual skills, such as hitting a ball, throwing, or opening a door. With the development of rich computational resources, such as the computational power of modern graphics processing units (GPUs), adopting large hypothesis spaces in Q-learning has become practically feasible. Deep Q-learning, which adopts deep neural networks (or _deep nets_) to build hypothesis spaces, has been used in numerous applications. For instance, AlphaGo [5] beat a professional human "go" player in a complete game of Go without handicap; deep robotic grasping [6] achieved a 96% grasp success rate on unseen objects; and certain bottlenecks in traditional Q-learning have been overcome [21]. Despite its great success in practice, deep Q-learning lacks the theoretical foundations to provide the guarantees required by many applications. Consequently, the ability of deep Q-learning to outperform existing schemes is unclear, and users hesitate to adopt deep Q-learning in safety-critical learning tasks, such as clinical diagnosis and financial investment. The following three crucial questions should be answered to increase the confidence of decision-makers:

\(\diamond\) **Question 1.** Why does deep Q-learning perform so well?

\(\diamond\) **Question 2.** When does deep Q-learning perform better than traditional Q-learning?

\(\diamond\) **Question 3.** How many samples are required to achieve a specific prediction accuracy for deep Q-learning?

Fig. 2: States, actions and rewards in RL

### _Our Contributions_

An intuitive explanation for the success of deep Q-learning is the large capacity of deep nets [12, 15], which improves the expressive power of traditional linear hypothesis spaces. In this paper, we demonstrate that this large capacity is not the determining factor, since such a feature makes the learning scheme sensitive to noise and thus leads to large variances [22]. We rigorously prove that the success of deep nets in Q-learning is due to their excellent performance in capturing the locality, spatial sparseness, and piecewise smoothness of rewards, which is beyond the capability of shallow nets and linear models [23]. In particular, after analyzing the relationship between optimal Q-functions and rewards, we find that optimal Q-functions are generally spatially sparse and piecewise smooth. With similar capacities, deep nets outperform shallow nets and linear models by providing a considerably better approximation error. Our main contributions are as follows.

\(\bullet\)_Oracle inequality for Q-learning:_ An oracle inequality is a bound on the risk of an estimator showing that the performance of the estimator is almost as good as it would be if the decision-maker had access to an oracle that knows what the best model should be. It is an important tool for determining whether the estimator at hand is optimal. Our first contribution is the establishment of an oracle inequality for Q-learning, which shows the crucial role of hypothesis spaces. We adopt two conflicting quantities, namely the approximation error and covering numbers, to illustrate a dilemma in selecting hypothesis spaces. The optimal performance is achieved by balancing these two quantities.
\(\bullet\)_Expressivity for deep nets without enlarging the capacity:_ Generally speaking, it is difficult to select hypothesis spaces with both small approximation errors and covering numbers. However, in Q-learning, optimal Q-functions depend heavily on rewards, and the adopted rewards are usually spatially sparse and piecewise smooth [16]. Our results rigorously demonstrate the advantage of deep nets in approximating spatially sparse and piecewise-smooth functions. The approximation capability of deep nets is substantially better than that of linear models and shallow nets that have almost the same capacities. This finding addresses Question 1 and shows that the reason why deep Q-learning outperforms traditional Q-learning is due to its innovation of selecting hypothesis spaces that possess both small approximation errors and covering numbers, provided the rewards are spatially sparse and piecewise-smooth functions. With this, we provide a basis for answering Question 2, that is, understanding when deep Q-learning outperforms traditional Q-learning. \(\bullet\)_Generalization error for deep Q-learning:_ Combining the approximation error estimate and the established oracle inequality, we succeed in deriving a tight generalization error bound for deep Q-learning and answering Question 3. Since deep nets can capture the spatial sparseness and piecewise smoothness of rewards, the derived generalization error is smaller than that of traditional Q-learning with linear hypothesis spaces, which shows the power of depth in deep Q-learning. \(\bullet\)_Numerical verifications:_ Motivated by [14] and [24], we apply deep Q-learning to a classical supply chain management problem: the beer game, and a simulated recommender system. We numerically show that if the reward is incorrectly specified, then deep Q-learning cannot essentially improve the performance of the classical (shallow) Q-learning approach. The effectiveness and efficiency of deepening the network are based on the appropriate reward. A similar conclusion holds for the role of data size in deep Q-learning in terms that massive data can embody the advantage of deep Q-learning, whereas small data cannot do it. These interesting phenomena numerically verify our theoretical assertions that the properties of rewards, depth of neural networks, and size of data are three crucial factors that guarantee the excellent performance of deep Q-learning. ### _Organization_ The rest of this paper is organized as follows. In Section 3, we explain RL and Q-learning and present a novel oracle inequality for Q-learning. In Section 4, we introduce several important properties of optimal Q-functions and show the performance of deep nets in approximating these Q-functions. In Section 5, we pursue the power of depth by exhibiting a tight generalization error of deep Q-learning. In Section 6, we conduct experiments by applying deep Q-learning to the beer game and recommender system to verify our theoretical assertions. We draw a simple conclusion in the last section. The proofs of our results and the background of the beer game and recommender system application are presented in the supplementary material. ## 2 Related work Over the past decade, RL has been shown to be a powerful tool for addressing sequential decision-making problems. An important reason for this is its integration with deep nets [25]. 
Although the power of depth in deep Q-learning remains undetermined, there are numerous studies on the generalization capability of traditional Q-learning, the power of depth in supervised learning, and the feasibility of deep Q-learning, which are related to our work. Unlike classical works [26, 27, 28] describing the convergence of Q-learning for finite-horizon RL problems, the present paper focuses on the generalization capability, which quantifies the relationship between the prediction accuracy and number of samples. The generalization capability of Q-learning, measured by the generalization error (or finite-sample bounds), has proven a stumbling block for an enormous number of research activities over the past twenty years [29, 30, 31, 32, 33, 34]. Among these works, [29, 31] are the most related to our work, where the generalization error of batch Q-learning is deduced for finite-horizon sequential decision-making problems. The novelty of our work is that we highlight the important role of hypothesis spaces in Q-learning rather than assuming the hypothesis spaces to be linear models. Furthermore, under the same conditions, our derived generalization error bound is much smaller than those of [29, 31], since several new techniques have been developed, such as a novel error decomposition and a new concentration inequality for Q-learning. The approximation capability of neural networks is a classical research topic that back to the 1980s, when the universal approximation property of shallow nets was established by [35]. A common consensus is that the power of depth depends heavily on the properties of target functions. If the target function is smooth, then it was proved by [36] that deep nets perform similarly to shallow nets, showing that there is no essential improvement when the approximation tools change from shallow nets to deep nets. For complicated functions, deep nets have been shown to be much better than shallow nets and linear models. In particular, with a comparable number of free parameters, Deep nets have been proven to outperform shallow nets and linear models in embodying the manifold structures of the input space [37], realizing the sparseness in the frequency domain [38] and spatial domain [39], reflecting the rotation-invariance of the data [40], grasping the hierarchical features of the data [41], and approximating non-smooth functions [42]. All these demonstrate the power of depth in supervised learning. In this paper, we aim to derive the advantage of deep nets in approximating optimal Q-functions, which are piecewise smooth and spatially sparse in numerous applications [16]. We rigorously prove that deep nets outperform shallow nets and linear models in approximating such Q-functions and show the reasonableness and efficiency of building hypothesis spaces using deep nets. To the best of our knowledge, [43] is the first work to show the feasibility of deep Q-learning in terms of generalization. Under some composition and smoothness assumptions for optimal Q-functions and a concentration coefficient assumption regarding marginal probability, [43] derived a generalization error estimate of deep Q-learning and showed that deep Q-learning beats the classical version, which is, of course, a beneficial development that lays a stepping-stone toward understanding deep Q-learning. There are three main differences between our work and that of [43]. 
The first difference is the setting; we are concerned with finite-stage sequential decision-making problems, whereas [43] considered infinite-stage problems involving strict restrictions on the discount factor to guarantee the concentration coefficient assumption. The second difference is the adopted algorithms, although both are variants of batch Q-learning. To be detailed, since infinite-stage sequential decision-making problems are considered in [43], the policy length was a main parameter and depended heavily on the concentration coefficient assumption, which is difficult to verify in practice; this is not the case in our analysis. The last difference is the assumptions of the optimal Q-functions; our assumptions are induced from numerous deep Q-learning applications, which is beyond the scope of [43]. Another recent paper [14] studied the performance of deep Q-learning in inventory optimization problems. The basic idea was to adopt a shaped variant of rewards, with which they showed that deep Q-learning can essentially improve the performance of classical (shallow) Q-learning. Our numerical studies are motivated by [14]. However, unlike [14], which showed the outperformance of shaped deep Q-learning, we numerically elucidate the roles of depth, rewards and data size. We apply a similar numerical experiment in a simulated recommender system [44]. Our numerical results aim to reveal the veil of the success of deep Q-learning. More importantly, we derive a solid theoretical analysis to show why and when deep Q-learning outperforms classical (shallow) Q-learning. Our theoretical results can provide a theoretical guarantee for [14] and [24]. ## 3 Oracle Inequality for Q-learning We present an oracle inequality to show the important role of hypothesis spaces in Q-learning. Throughout the paper, we use upper-case letters to denote random variables and lower-case letters to denote instances of random variables. ### _RL and Q-learning_ We consider \(T\)-stage sequential decision-making problems. For \(t=1,\ldots,T\), let \(\tilde{\mathcal{S}}_{t}\subset\mathbb{R}^{d_{s,t}}\) and \(\tilde{\mathcal{A}}_{t}\subset\mathbb{R}^{d_{a,t}}\) be families of states and actions, respectively, in stage \(t\), where \(d_{s,t},d_{a,t}\in\mathbb{N}\) denote the dimensions of the state and action spaces at stage \(t\). The data in RL are formed as \(\mathcal{T}_{T}=(\mathbf{s}_{T+1},\mathbf{a}_{T},\mathbf{R}_{T})\), where \(\mathbf{s}_{t}=\{s_{1},\ldots,s_{t}\}\) with \(s_{t}\in\tilde{\mathcal{S}}_{t}\) is the sequence of \(t\) states, \(\mathbf{a}_{t}=\{a_{1},\ldots,a_{t}\}\) with \(a_{t}\in\tilde{\mathcal{A}}_{t}\) is the sequence of \(t\) actions, and \(\mathbf{R}_{t}=\{R_{1},\ldots,R_{t}\}\) with \(R_{t}:=R_{t}(\mathbf{s}_{t+1},\mathbf{a}_{t})\) is the sequence of \(t\) rewards. Denote by \(D:=\{\mathcal{T}_{T,i}\}_{i=1}^{m}\) the training set of size \(m\). RL aims to derive \(T\) maps as \[\pi_{t}:\tilde{\mathcal{S}}_{1}\times\cdots\times\tilde{\mathcal{S}}_{t}\times \tilde{\mathcal{A}}_{1}\times\cdots\times\tilde{\mathcal{A}}_{t-1}\to\tilde{ \mathcal{A}}_{t},t=1,2,\ldots,T\] to maximize \(\sum_{t=1}^{T}R_{t}(\mathbf{s}_{t+1},\pi_{t}(\mathbf{s}_{t},\mathbf{a}_{t-1}))\) based on \(D\). Under the standard statistical framework of RL [20, 29, 31], samples in \(\{\mathcal{T}_{T,i}\}_{i=1}^{m}\) are assumed to be drawn independently and identically (i.i.d.) 
according to a definite but unknown distribution \[\begin{split}& P=\rho_{1}(s_{1})p_{1}(a_{1}|s_{1})\\ &\prod_{t=2}^{T}\rho_{t}(s_{t}|\mathbf{s}_{t-1},\mathbf{a}_{t-1}) p_{t}(a_{t}|\mathbf{s}_{t},\mathbf{a}_{t-1})\rho_{T+1}(s_{T+1}|\mathbf{s}_{T}, \mathbf{a}_{T}),\end{split} \tag{1}\] where \(\rho_{t}(s_{t}|\mathbf{s}_{t-1},\mathbf{a}_{t-1})\) is the conditional density of \(s_{t}\) conditioned on \(\mathbf{s}_{t-1},\mathbf{a}_{t-1}\) and \(p_{t}(a_{t}|\mathbf{s}_{t},\mathbf{a}_{t-1})\) denotes the probability that action \(a_{t}\) is taken given the history \(\{\mathbf{s}_{t},\mathbf{a}_{t-1}\}\). A policy formed by a sequence of decision rules is written as \(\pi=\{\pi_{1},\ldots,\pi_{T}\}\). We further denote \[\begin{split}& P_{\pi}=\rho_{1}(s_{1})1_{a_{1}=\pi_{1}(s_{1})}\\ &\prod_{t=2}^{T}\rho_{t}(s_{t}|\mathbf{s}_{t-1},\mathbf{a}_{t-1}) 1_{a_{t}=\pi(\mathbf{s}_{t},\mathbf{a}_{t-1})}\rho_{T+1}(s_{T+1}|\mathbf{s}_{ T},\mathbf{a}_{T}),\end{split} \tag{2}\] where for a predicate \(W\), \(1_{W}\) is 1 if \(W\) is true and 0 otherwise. Define the \(t\) value function (value function at time \(t\)) of \(\pi\) by \[\begin{split} V_{\pi,t}(\mathbf{s}_{t},&\mathbf{a}_{t -1})=\\ & E_{\pi}\left[\sum_{j=t}^{T}R_{j}(\mathbf{S}_{j+1},\mathbf{A}_{j}) \big{|}\mathbf{S}_{t}=\mathbf{s}_{t},\mathbf{A}_{t-1}=\mathbf{a}_{t-1}\right],\end{split}\] where \(E_{\pi}\) is the expectation with respect to the distribution \(P_{\pi}\). If \(t=1\), then this can be written as \(V_{\pi}(s_{1})=V_{\pi,1}(s_{1})\) for brevity. We further denote the optimal \(t\) value function as \[V_{t}^{*}(\mathbf{s}_{t},\mathbf{a}_{t-1})=\max_{\pi\in\Pi}V_{\pi,t}(\mathbf{s }_{t},\mathbf{a}_{t-1}),\] where \(\Pi\) denotes the collection of all policies. The Bellman equation [45] characterizes the optimal policy \(\pi^{*}\) as \[\begin{split}\pi_{t}^{*}(\mathbf{s}_{t},\mathbf{a}_{t-1})& =\operatorname*{arg\,max}_{a_{t}}E[R_{t}(\mathbf{S}_{t+1},\mathbf{A}_{t})\\ &+V_{t+1}^{*}(\mathbf{S}_{t+1},\mathbf{A}_{t})|\mathbf{S}_{t}= \mathbf{s}_{t},\mathbf{A}_{t}=\mathbf{a}_{t}],\end{split} \tag{3}\] where \(E\) is the expectation with respect to \(P\). RL then aims to find a policy \(\hat{\pi}\) to minimize \(V^{*}(s_{1})-V_{\hat{\pi}}(s_{1})\), where \(V^{*}(s_{1}):=V_{1}^{*}(s_{1})\). Batch Q-learning [29, 16], a widely used variant of Q-learning, divides a \(T\)-stage sequential decision-making problem into \(T\) least squares problems. The optimal time-dependent \(Q\)-function is defined by \[\begin{split} Q_{t}^{*}(\mathbf{s}_{t},\mathbf{a}_{t})& =E[R_{t}(\mathbf{S}_{t+1},\mathbf{A}_{t})\\ &+V_{t+1}^{*}(\mathbf{S}_{t+1},\mathbf{A}_{t})|\mathbf{S}_{t}= \mathbf{s}_{t},\mathbf{A}_{t}=\mathbf{a}_{t}].\end{split} \tag{4}\] Since \[V_{t}^{*}(\mathbf{s}_{t},\mathbf{a}_{t-1}) \tag{5}\] \[= V_{\pi^{*},t}(\mathbf{s}_{t},\mathbf{a}_{t-1})\] \[= E_{\pi^{*}}\left[\sum_{j=t}^{T}R_{j}(\mathbf{S}_{j+1},\mathbf{A} _{j})|\mathbf{S}_{t}=\mathbf{s}_{t},\mathbf{A}_{t-1}=\mathbf{a}_{t-1}\right],\] then \[V_{t}^{*}(\mathbf{s}_{t},\mathbf{a}_{t-1})=\max_{a_{t}}Q_{t}^{*}(\mathbf{s}_{t},\mathbf{a}_{t}). \tag{6}\] We call \(V_{t}^{*}(\mathbf{s}_{t},\mathbf{a}_{t-1})-Q_{t}^{*}(\mathbf{s}_{t},\mathbf{a }_{t})\) the optimal advantage temporal-difference at time \(t\). Furthermore, according to [29, Lemma 1] (see also [46, Chap. 5]), \[V^{*}(s_{1})- V_{\pi}(s_{1})= \tag{7}\] \[E_{\pi}\left[\sum_{t=1}^{T}V_{t}^{*}(\mathbf{S}_{t},\mathbf{A}_{ t-1})-Q_{t}^{*}(\mathbf{S}_{t},\mathbf{A}_{t})\big{|}S_{1}=s_{1}\right]\] holds for an arbitrary \(\pi\). 
This equality shows that the quality of a policy \(\pi\) depends on the temporal difference, and a good estimate of optimal Q-function helps reduce the generalization error of RL. Under these circumstances, the estimation of \(Q_{t}^{*}\) for \(t=1,\ldots,T\) lies at the core of batch Q-learning. With \(Q_{T+1}^{*}=0\), it follows from (4) and (5) that for \(t=T,T-1,\ldots,1\), \[Q_{t}^{*}(\mathbf{s}_{t},\mathbf{a}_{t})=E[R_{t}(\mathbf{S}_{t+1 },\mathbf{A}_{t}) \tag{8}\] \[\qquad\quad+\max_{a_{t+1}}Q_{t+1}^{*}(\mathbf{S}_{t+1},\mathbf{A }_{t},a_{t+1})\big{|}\mathbf{S}_{t}=\mathbf{s}_{t},\mathbf{A}_{t}=\mathbf{a}_ {t}\big{|}\] This implies the following proposition. **Proposition 1**: _Let \(L_{t}^{2}\) be the space of square-integrable functions with respect to the distribution_ \[P_{t}=\rho_{1}(s_{1})p_{1}(a_{1}|s_{1})\prod_{j=2}^{t}\rho_{j}(s_{j}|\mathbf{ s}_{j-1},\mathbf{a}_{j-1})p_{j}(a_{j}|\mathbf{s}_{j},\mathbf{a}_{j-1}). \tag{9}\] _Then, for \(t=T,T-1,\ldots,1\),we have_ \[Q_{t}^{*}(\mathbf{s}_{t},\mathbf{a}_{t})=\arg\min_{Q_{t}\in L_{ t}^{2}}E[(R_{t}(\mathbf{S}_{t+1},\mathbf{A}_{t}) \tag{10}\] \[\qquad+\max_{a_{t+1}}Q_{t+1}^{*}(\mathbf{S}_{t+1},\mathbf{A}_{t},a_{t+1})-Q_{t}(\mathbf{S}_{t},\mathbf{A}_{t}))^{2}].\] According to Proposition 1, optimal \(Q\)-functions can be obtained by solving \(T\) least squares problems via the backward recursion, that is, \(t=T,T-1,\ldots,1\). Empirically, by setting \(\hat{Q}_{T+1}=0\), we can compute \(Q\)-functions (\(\hat{Q}_{t}\) for \(t=T,T-1,\ldots,1\)) by solving the following \(T\) least squares problems \[\hat{Q}_{t}(\mathbf{s}_{t},\mathbf{a}_{t})=\arg\min_{Q_{t}\in \tilde{\mathcal{Q}}_{t}}\mathbb{E}_{m}[(R_{t}(\mathbf{S}_{t+1},\mathbf{A}_{t}) \tag{11}\] \[\qquad+\max_{a_{t+1}}\hat{Q}_{t+1}(\mathbf{S}_{t+1},\mathbf{A}_{t },a_{t+1})-Q_{t}(\mathbf{S}_{t},\mathbf{A}_{t}))^{2}],\] where \(\mathbb{E}_{m}\) is the empirical expectation and \(\tilde{\mathcal{Q}}_{t}\) is a parameterized hypothesis space. With this, we obtain a sequence of estimators \(\{\hat{Q}_{T},\ldots,\hat{Q}_{1}\}\) and define the corresponding policy by \[\hat{\pi}_{t}(\mathbf{s}_{t},\mathbf{a}_{t-1})=\arg\max_{a_{t}\in\tilde{ \mathcal{A}}_{t}}\hat{Q}_{t}(\mathbf{s}_{t},\mathbf{a}_{t-1},a_{t}),t=1, \ldots,T. \tag{12}\] ### _Oracle inequality for Q-learning_ In this subsection, we aim to derive our novel oracle inequality for Q-learning. We first introduce two mild assumptions. **Assumption 1**: _Let \(\mu\geq 1\) be a constant. Then,_ \[p_{t}(a|\mathbf{s}_{t},\mathbf{a}_{t-1})\geq\mu^{-1},\qquad\forall a\in \tilde{\mathcal{A}}_{t}. \tag{13}\] Assumption 1 is a standard assumption made in [29, 31] and it states that every action in \(\tilde{\mathcal{A}}_{t}\) has a positive conditional probability of being chosen at each time \(t\). It contains at least two widely used settings. One is that \(\tilde{\mathcal{A}}_{t}\) only contains finitely many actions, which is the case in go games [5], blackjack [16], robotic grasping [6] and beer game [14]. The other is that \(\tilde{\mathcal{A}}_{t}\) is an infinite set, but only finite actions in \(\tilde{\mathcal{A}}_{t}\) are active when in the case of \(\{\mathbf{s}_{t},\mathbf{a}_{t-1}\}\). For example, in a recommender system, if the feedback for a client's age is around three, then the following candidate action is to recommend children's products. Hence, Assumption 1 is a mild condition that can be easily satisfied for numerous applications. 
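For concreteness, the backward least-squares recursion (11) and the greedy rule (12) can be sketched in a few lines of code. The sketch below is only an illustration: it assumes the Markov case in which each \(Q_{t}\) depends on the current state and action only, uses a finite grid of candidate actions in the spirit of Assumption 1, and substitutes a generic scikit-learn regressor for the hypothesis space \(\tilde{\mathcal{Q}}_{t}\); it is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor  # stand-in for the hypothesis space

def batch_q_learning(states, actions, rewards, action_grid, T):
    """Backward fitted-Q recursion in the spirit of (11)-(12) (Markov simplification).

    states:      array (m, T+1, d_s) - observed states s_1, ..., s_{T+1} per trajectory
    actions:     array (m, T, d_a)   - taken actions a_1, ..., a_T
    rewards:     array (m, T)        - rewards R_1, ..., R_T
    action_grid: array (K, d_a)      - finite set of candidate actions
    Returns [Qhat_1, ..., Qhat_T] as fitted regressors."""
    m = states.shape[0]
    q_hat = [None] * (T + 2)            # 1-based; q_hat[T+1] stays None, i.e. identically 0
    for t in range(T, 0, -1):           # t = T, T-1, ..., 1
        # Regression targets: R_t + max_{a'} Qhat_{t+1}(s_{t+1}, a')
        if q_hat[t + 1] is None:
            targets = rewards[:, t - 1]
        else:
            nxt = np.stack([
                q_hat[t + 1].predict(np.hstack([states[:, t, :], np.tile(a, (m, 1))]))
                for a in action_grid])              # shape (K, m)
            targets = rewards[:, t - 1] + nxt.max(axis=0)
        X = np.hstack([states[:, t - 1, :], actions[:, t - 1, :]])
        q_hat[t] = RandomForestRegressor(n_estimators=50).fit(X, targets)
    return q_hat[1:T + 1]

def greedy_action(q_t, state, action_grid):
    """Policy (12): pick the candidate action maximizing the fitted Q-function."""
    scores = [q_t.predict(np.hstack([state, a]).reshape(1, -1))[0] for a in action_grid]
    return action_grid[int(np.argmax(scores))]
```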
Since rewards are given before the learning process and are always assumed to be finite, we present the following mild assumption. **Assumption 2**: _There exists a \(U>0\) such that \(\|R_{t}\|_{L^{\infty}}\leq U\) for any \(t=1,\ldots,T\)._ According to (8) and \(Q_{T+1}^{*}=0\), Assumption 2 implies that \(\|Q_{t}^{*}\|_{L^{\infty}}\leq 2U\) for all \(t=1,2,\ldots,T\). Therefore, it is natural to search for estimators uniformly bounded by \(2U\). To describe the role of the hypothesis space, we also introduce the empirical covering number [9, 47] to quantify its capacity. For a set of functions \(\mathcal{G}\) defined on \(\mathcal{X}\subseteq\mathbb{R}^{d}\) with \(d\in\mathbb{N}\), denote by \(\mathcal{N}_{1}(\epsilon,\mathcal{G},x_{1}^{m})\) with \(x_{1}^{m}=(x_{1},\ldots,x_{m})\in\mathcal{X}^{m}\) the \(\ell^{1}\) empirical covering number [47, Def. 9.3] of \(\mathcal{G}\), which is the number of elements in a minimal \(\varepsilon\)-net of \(\mathcal{G}\) with respect to \(\|\cdot\|_{\ell^{1}}\). Furthermore, let \(\mathcal{N}_{1}(\epsilon,\mathcal{G}):=\max_{x_{1}^{m}\in\mathcal{X}^{m}}\mathcal{N}_{1}(\epsilon,\mathcal{G},x_{1}^{m})\). We then obtain the following oracle inequality for batch Q-learning. **Theorem 1**: _Let \(\beta_{t}>0\) and \(\tilde{\mathcal{Q}}_{t}\), \(t=1,\ldots,T\), be sets of functions uniformly bounded by \(2U\), and let \(\tilde{\mathcal{Q}}_{T+1}=\{0\}\). If Assumptions 1 and 2 hold and \(\hat{Q}_{t}\) is defined by (11), then_ \[E[V^{*}(S_{1})-V_{\hat{\pi}}(S_{1})]\leq \tag{14}\] \[C\sum_{t=1}^{T}\sum_{j=t}^{T}(3\mu)^{j-t}(\min_{h_{j}\in\tilde{\mathcal{Q}}_{j}}E[(h_{j}-Q_{j}^{*})^{2}]+\beta_{j}+\frac{1}{m}\] \[+\frac{1}{m}\exp{(-C^{\prime}\beta_{j}m)}\left(\mathcal{N}_{1}(C^{\prime}\beta_{j},\tilde{\mathcal{Q}}_{j})+\mathcal{N}_{1}(C^{\prime}\beta_{j},\tilde{\mathcal{Q}}_{j+1})\right))^{\frac{1}{2}},\] _where \(\hat{\pi}\) is the policy defined by (12) and \(C\) and \(C^{\prime}\) are constants depending only on \(U\)._ Theorem 1 presents a bias-variance trade-off in selecting the hypothesis space \(\tilde{\mathcal{Q}}_{t}\). If \(\tilde{\mathcal{Q}}_{t}\) is large, then the approximation error \(\min_{h_{j}\in\tilde{\mathcal{Q}}_{j}}E[(h_{j}-Q_{j}^{*})^{2}]\) is small but the capacity \(\mathcal{N}_{1}(C^{\prime}\beta_{j},\tilde{\mathcal{Q}}_{j})\) will be large, leading to bad generalization. Conversely, if \(\tilde{\mathcal{Q}}_{t}\) is small, then \(\mathcal{N}_{1}(C^{\prime}\beta_{j},\tilde{\mathcal{Q}}_{j})\) is small but its approximation performance is not so good, which will also result in bad generalization. A suitable hypothesis space should be selected to balance the bias and variance in each stage, thereby achieving the best learning performance. The bias-variance balance is determined by the a-priori information on \(Q_{t}^{*}\), without which it is impossible to derive a satisfactory generalization error [47, Chap. 3]. Estimation of the capacity of a hypothesis space is a classical topic in statistical learning theory [47, 48, 22]. In particular, it is discussed in [47, Chap. 9] for linear models of dimension \(k\), \(\mathcal{G}_{k}^{*}\), and [47, Chap.
16] for shallow nets \(G_{k}^{*}\) with \(k\) tunable parameters and some specified activation function, for which \[\log\mathcal{N}_{1}(\varepsilon,\mathcal{G}_{k,M})\leq C_{1}k\log\frac{M}{\varepsilon},\qquad\forall\varepsilon>0, \tag{14}\] where \(\mathcal{G}_{k,M}=\{f\in\mathcal{G}_{k}:\|f\|_{\infty}\leq M\}\), \(\mathcal{G}_{k}\) is either \(\mathcal{G}_{k}^{*}\) or \(G_{k}^{*}\), \(M>0\), and \(C_{1}\) is a constant depending only on \(M\). The following corollary then follows from Theorem 1 with \(\beta_{t}=\frac{2C_{1}\max\{k_{t},k_{t+1}\}\log(2C^{\prime}Um)}{C^{\prime}m}\). **Corollary 1**: _Let \(k_{t}\in\mathbb{N}\), \(k_{T+1}=0\), \(\tilde{\mathcal{Q}}_{T+1}=\{0\}\) and \(\tilde{\mathcal{Q}}_{t}\), \(t=1,\ldots,T\), be sets of functions satisfying (14) with \(k=k_{t}\) and \(M=2U\). If Assumptions 1 and 2 hold and \(\hat{Q}_{t}\) is defined by (11), then_ \[E[V^{*}(S_{1})-V_{\hat{\pi}}(S_{1})]\leq C^{{}^{\prime\prime}}\sum_{t=1}^{T}\sum_{j=t}^{T}(3\mu)^{j-t}\] \[\left(\min_{h_{j}\in\tilde{\mathcal{Q}}_{j}}E[(h_{j}-Q_{j}^{*})^{2}]+\frac{\max\{k_{j},k_{j+1}\}\log(2m)}{m}\right)^{\frac{1}{2}},\] _where \(C^{{}^{\prime\prime}}\) is a constant depending only on \(U\)._ The oracle inequality for batch Q-learning was initially deduced by [29] under the same setting as ours. However, there are three crucial differences between our results and those of [29, Theorem 1]. First, we do not assume that \(Q_{t}^{*}\in\tilde{\mathcal{Q}}_{t}\) and instead utilize the approximation error to measure the expressive power of \(\tilde{\mathcal{Q}}_{t}\). Since optimal Q-functions are rarely known in practice, the assumption \(Q_{t}^{*}\in\tilde{\mathcal{Q}}_{t}\) requires an extremely large hypothesis space, which leads to a large generalization error. Second, the derived generalization error bound in Theorem 1 or Corollary 1 is essentially better than that of [29, Theorem 1] under the same conditions. In particular, if \(\tilde{\mathcal{Q}}_{t}\) is a linear space and \(Q_{t}^{*}\in\tilde{\mathcal{Q}}_{t}\), our derived generalization error in Corollary 1 is of order \(\mathcal{O}(m^{-1/2})\), whereas that in [29, Theorem 1] is of order \(\mathcal{O}(m^{-1/4})\). Finally, we take the covering number to measure the generalization error without imposing any restrictions on it, which is totally different from [29, Theorem 1], where the analysis was conducted under capacity restrictions like (14). This makes our analysis applicable to numerous hypothesis spaces, such as reproducing kernel Hilbert spaces [49], shallow nets [47, Chap. 16], and deep nets [36].

## 4 Deep Nets in Approximating Optimal Q-Functions

In this section, we first demonstrate some good properties of optimal Q-functions in many real-world applications and then analyze the power of depth in approximating Q-functions via deep nets.

### _A priori information for optimal Q-functions_

We describe several successful applications of deep Q-learning and present the a-priori information for optimal Q-functions. Q-learning has been shown to yield considerably profitable (long-term) rewards [50] in recommender systems, where the state is defined as the browsing history of a user, the action is to recommend (one or more) items to the user, and the reward is the user's feedback, including skipping, clicking, or ordering these items. This implies that the reward function is piecewise constant and spatially sparse with respect to the state and action.
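As a toy illustration of this structure (with hypothetical feedback values, not taken from the paper or from [50]), such a reward takes one of a few constant values and is nonzero only on the small set of state-action pairs where the user actually reacts:

```python
# Hypothetical feedback-to-reward mapping for a recommender system: the reward
# is piecewise constant (a few plateau values) and spatially sparse (zero for
# the vast majority of state-action pairs, where the user simply skips).
def toy_recommender_reward(feedback: str) -> float:
    rewards = {"click": 1.0, "order": 5.0}  # illustrative values only
    return rewards.get(feedback, 0.0)       # "skip" and anything else earn 0
```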
Traditional Q-learning adopts linear models to formulate the hypothesis space, which cannot capture piecewise constancy or spatial sparseness [39]. This makes it difficult to design an efficient policy to recommend multiple products simultaneously [3]. Deep Q-learning succeeds in overcoming this bottleneck of traditional Q-learning [3] by recommending numerous high-quality products simultaneously. Q-learning also provides a promising avenue for manipulating robotic interaction. Traditional Q-learning is only applicable to individual skills, such as hitting a ball, throwing, or opening a door [21]. Consequently, [6] developed a deep Q-learning approach to robotic grasping that achieved a 96% grasp success rate on unseen objects. In their approach, the state includes the robot's current camera observation, and an RGB image with a certain resolution (e.g., 472\(\times\)472). The action consists of a vector indicating the desired change in the gripper position, such as the open and close commands. The reward is 1 at the end of the episode if the gripper contains an object and is above a certain height, and 0 otherwise, showing the piecewise constant and spatially sparse property of the reward functions. Q-learning has numerous other applications, where the reward functions are set to be piecewise smooth (or piecewise constant) and spatially sparse. We refer the readers to a detailed beer game example in Section 4 of the supplementary material or some interesting examples in [16] shown in Table IV. All these shows that piecewise smoothness (or piecewise constant) and spatial sparseness are two vital properties of reward functions. Under this circumstance, it follows from (7) and \(Q_{T+1}^{*}=0\) that optimal Q-functions are also piecewise smooth (or piecewise constant) and spatially sparse, as shown in Table IV. In the following, we provide the mathematical definition of spatially sparse and piecewise smooth (or piecewise constant) functions. For \(d,N\in\mathbb{N}\) and \(\mathbb{I}^{d}:=[0,1]^{d}\), we partition \(\mathbb{I}^{d}\) by \(N^{d}\) sub-cubes \(\{A_{j}\}_{j=1}^{N^{d}}\) of side length \(N^{-1}\) and with centers \(\{\zeta_{j}\}_{j=1}^{N^{d}}\). For \(s\in\mathbb{N}\) with \(s\leq N^{d}\) and a function \(f\) defined on \(\mathbb{I}^{d}\), if the support of \(f\) is contained in \(\cup_{j\in\Lambda_{s}}A_{j}\) for some subset \(\Lambda_{s}\) of \(\{1,\ldots,N^{d}\}\) of cardinality \(s\), we then say that \(f\) is \(s\) spatially sparse in \(N^{d}\) cubic partitions. Let \(c_{0}>0\), \(r=u+v\) with \(u\in\mathbb{N}_{0}:=\{0\}\cup\mathbb{N}\), \(0<v\leq 1\), and \(\mathbb{A}\subseteq\mathbb{I}^{d}\). We say that a function \(f:\mathbb{A}\rightarrow\mathbb{R}\) is \((r,c_{0})\)-smooth, if \(f\) is \(u\)-times differentiable and for any \(\alpha=(\alpha_{1},\cdots,\alpha_{d})\in\mathbb{N}_{0}^{d}\) with \(\alpha_{1}+\cdots+\alpha_{d}=u\) and \(x,x^{\prime}\in\mathbb{A}\), its partial derivative satisfies the Lipschitz condition \[\left|\frac{\partial^{u}f}{\partial x_{1}^{\alpha_{1}}\ldots\partial x_{d}^{ \alpha_{d}}}(x)-\frac{\partial^{u}f}{\partial x_{1}^{\alpha_{1}}\ldots\partial x _{d}^{\alpha_{d}}}(x^{\prime})\right|\leq c_{0}\|x-x^{\prime}\|^{v}, \tag{15}\] where \(\|x\|\) denotes the Euclidean norm of \(x\). \(Lip_{h}^{(r,c_{0})}\) is then written as the set of functions satisfying (15). 
If there exists \(g_{j}\in Lip_{A_{j}}^{(r,c_{0})}\) for \(j=1,\ldots,N^{d}\) such that \[f(x)=\sum_{j\in\Lambda_{s}}g_{j}(x)\mathcal{I}_{A_{j}}(x), \tag{16}\] we then say that \(f\) is a spatially sparse in \(N^{d}\) cubic partitions and \((r,c_{0})\)-smooth, where \(\mathcal{I}_{A_{j}}\) is the indicator function of \(A_{j}\), i.e., \(\mathcal{I}_{A_{j}}(x)=1\) if \(x\in A_{j}\) and \(\mathcal{I}_{A_{j}}(x)=0\) if \(x\notin A_{j}\). Denote by \(Lip^{(r,c_{0},s,N^{d})}\) the set of all such functions. A special case of \(f\in Lip^{(r,c_{0},s,N^{d})}\) is \(g_{j}(x)=c_{j}\) for some \(|c_{j}|\leq C_{0}\) and \(C_{0}>0\). In this case, we say that \(f\) is \(s\) spatially sparse in \(N^{d}\) cubic partitions and piecewise constant. We further denote by \(\mathcal{C}^{(C_{0},s,N^{d})}\) the set of all these functions. Figure 3 shows a piecewise constant and spatially sparse function in \(\mathcal{C}^{(C_{0},4,16)}\). ### _Capacity of deep nets_ Let \(L\in\mathbb{N}\) and \(d_{0},d_{1},\ldots,d_{L}\in\mathbb{N}\) with \(d_{0}=d\). For \(\widetilde{h}_{k}=(h^{(1)},\ldots,h^{(d_{k})})^{T}\in\mathbb{R}^{d_{k}}\), define \(\vec{\sigma}(\widetilde{h})=(\sigma(h^{(1)}),\ldots,\sigma(h^{(d_{k})}))^{T}\), where \(\sigma(t)=\max\{t,0\}\) is the rectified linear unit (ReLU). Deep nets with depth \(L\) and width \(d_{j}\) in the \(j\)th hidden layer can be mathematically represented as \[h_{\{d_{0},\ldots,d_{L}\}}(x)=\vec{a}\cdot\vec{h}_{L}(x), \tag{17}\] where \[\vec{h}_{k}(x)=\vec{\sigma}(W_{k}\cdot\vec{h}_{k-1}(x)+\vec{b}^{(k)}),\qquad k =1,2,\ldots,L, \tag{18}\] \(\vec{a}\in\mathbb{R}^{d_{L}}\), \(\vec{b}^{(k)}\in\mathbb{R}^{d_{k}}\), \(\vec{h}_{0}(x)=x\), and \(W_{k}=(w_{i,j}^{(k)})_{1,1}^{d_{k},d_{k-1}}\) is a \(d_{k}\times d_{k-1}\) matrix. The structure of deep nets is reflected by the parameter matrices \((W_{k},\vec{b}^{k})\). A typical deep net is a deep fully connected neural network (DFCN), which corresponds to weight matrices without any constraints. We denote by \(\mathcal{H}_{\{d_{0},\ldots,d_{L}\}}\) the set of all deep nets formed as (17). Then, there are totally \[n_{L}:=\sum_{j=0}^{L-1}(d_{j}d_{j+1}+d_{j+1})+d_{L} \tag{19}\] tunable parameters for \(h\in\mathcal{H}_{\{d_{0},\ldots,d_{L}\}}\). If \(L\) is large, there are too many parameters to be tuned, leading to extremely large capacity. It is known that deep sparsely connected nets (DSCN), such as deep convolution neural networks [51, 52], deep nets with tree structures [40] or other sparse structures [42], can significantly reduce the capacity of DFCN without sacrificing its approximation capability very much. The hypothesis spaces in this paper are DSCNs with \(n\ll n_{L}\) tunable parameters paved on \(L\) hidden layers. Denote by \(\mathcal{H}_{n,L}\) the set of all these deep nets with a specific structure. Thus, we define \[\mathcal{H}_{n,L,M}:=\{h\in\mathcal{H}_{n,L}:\|h\|_{L^{\infty}}\leq M\}. \tag{20}\] The following lemma presents a covering number estimate for \(\mathcal{H}_{n,L,M}\). **Lemma 1**: _There is a constant \(C_{1}^{*}\) depending only on \(d\) such that_ \[\log\mathcal{N}_{1}(\epsilon,\mathcal{H}_{n,L,M})\leq C_{1}^{*}Ln\log D_{\max }\log\frac{M}{\epsilon}, \tag{21}\] _where \(D_{\max}:=\max_{0\leq j\leq L}d_{j}\)._ Since \(\|Q_{t}^{*}\|_{L^{\infty}}\leq 2U\), it is natural to take \(\mathcal{H}_{n,L,2U}\) as the hypothesis space. 
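To make the notation in (17)-(18) concrete, the forward pass of such a ReLU network is a short recursion. The sketch below is a generic fully connected (DFCN) instance with random weights, written only to illustrate \(\vec{h}_{k}(x)=\vec{\sigma}(W_{k}\vec{h}_{k-1}(x)+\vec{b}^{(k)})\) and \(h(x)=\vec{a}\cdot\vec{h}_{L}(x)\); it is not one of the sparse (DSCN) structures constructed in the proofs.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def deep_net_forward(x, weights, biases, a):
    """Evaluate h(x) = a . h_L(x) with h_k = relu(W_k h_{k-1} + b_k) and h_0 = x."""
    h = x
    for W, b in zip(weights, biases):   # k = 1, ..., L
        h = relu(W @ h + b)
    return float(a @ h)

# Toy instance: input dimension d_0 = 3, two hidden layers of widths 5 and 4 (L = 2).
rng = np.random.default_rng(0)
dims = [3, 5, 4]
weights = [rng.standard_normal((dims[k + 1], dims[k])) for k in range(len(dims) - 1)]
biases = [rng.standard_normal(dims[k + 1]) for k in range(len(dims) - 1)]
a = rng.standard_normal(dims[-1])
print(deep_net_forward(np.array([0.2, 0.5, 0.9]), weights, biases, a))
```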
It should be mentioned that, except for the boundedness of the neural networks, Lemma 1 does not impose any additional restrictions on the boundedness of the weights, which is different from [36] and [53, Chap. 14]. If \(L\) is not excessively large, then it follows from (21) and (14) that the covering number of deep nets is comparable with that of shallow nets with \(n\) parameters and \(n\)-dimensional linear models.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Games & Rewards & \(Q^{*}\) & References \\ \hline Pick-and-Place Robot & Piecewise constant & Piecewise constant & Example 3.2 \\ Recycling Robot & Piecewise constant & Piecewise constant & Example 3.3 \\ Pole-Balancing & Piecewise constant & Piecewise constant & Example 3.4 \\ Gridworld & Piecewise constant & Piecewise constant & Example 3.5 \\ Golf & Piecewise constant & Piecewise constant & Example 3.6 \\ Gambler Problem & Piecewise constant & Piecewise constant & Example 4.3 \\ Blackjack & Piecewise constant & Piecewise constant & Example 5.1 \\ Driving Home & Piecewise smooth & Piecewise smooth & Example 6.1 \\ Cliff Walking & Piecewise constant & Piecewise constant & Example 6.6 \\ Dyna Maze & Piecewise constant & Piecewise constant & Example 8.1 \\ Racetrack & Piecewise constant & Piecewise constant & Example 8.6 \\ Mountain Car & Piecewise constant & Piecewise constant & Example 10.1 \\ Access-control Queuing & Piecewise constant & Piecewise constant & Example 10.2 \\ AlphaGo & Piecewise constant & Piecewise constant & Sec. 16.6.1 \\ \hline \end{tabular} \end{table} TABLE I: Properties of rewards and optimal \(Q\)-functions for games [16]

Fig. 3: Piecewise-constant and spatially sparse functions in \(\mathcal{C}^{(C_{0},4,16)}\)

### _Approximation capability of deep nets_

A common consensus on deep nets' approximation [23, 36, 37, 53, 54, 55, 56] is that the power of depth depends on the properties of the target functions. If the target function is assumed to be in \(Lip_{\mathbb{I}^{d}}^{(r,c_{0})}\), then [36] verified that deep nets perform similarly to shallow nets, showing that there are no essential improvements when the approximation tools are changed from shallow nets or linear models to deep nets. However, if some additional a-priori knowledge is given, deep nets are much more effective than shallow nets and linear models, as Table II shows. As discussed in Sec. 3.1, optimal Q-functions are frequently piecewise constant (or piecewise smooth) and spatially sparse. Studying the advantage of deep nets in approximating such functions is our main purpose, as addressed in the following theorem. **Theorem 2**: _Let \(d\geq 2\), \(N\in\mathbb{N}\), \(C_{0}>0\), \(s\leq N^{d}\), \(1\leq p<\infty\) and \(\tau>0\). There exists a deep net structure with \(L=2\), \(\mathcal{O}(N^{d})\) free parameters and \(D_{\max}=\mathcal{O}(N^{d})\) such that for any \(Q^{*}\in\mathcal{C}^{(C_{0},s,N^{d})}\), there exists a deep net \(\mathcal{N}_{N,s,\tau,Q^{*}}\) with the aforementioned structure satisfying_ \[\|Q^{*}-\mathcal{N}_{N,s,\tau,Q^{*}}\|_{p}\leq 2dC_{0}s\tau N^{1-d}, \tag{22}\] _where \(\|\cdot\|_{p}\) is the norm of the space \(L^{p}(\mathbb{I}^{d})\) of \(p\)-times Lebesgue integrable functions._ The detailed structure of the deep nets in Theorem 2 is given in the proof. Since functions in \(\mathcal{C}^{(C_{0},s,N^{d})}\) are discontinuous in general, linear models [59] suffer from the Gibbs phenomenon in the sense that the linear estimators overshoot at a jump discontinuity, and this overshoot persists as the dimension increases.
Shallow nets were utilized in [60] to avoid the Gibbs phenomenon when \(d=1\). For \(d\geq 2\), it can be found in [23, 42] that shallow nets are not the optimal approximation tools in the sense that they cannot achieve the optimal approximation rates. In Theorem 2, we rigorously prove that deep nets succeed in overcoming certain drawbacks of linear models and shallow nets in terms of providing a perfect approximation error. In fact, we can set \(\tau\) to be extremely small such that \(\|Q^{*}-\mathcal{N}_{N,s,\tau,Q^{*}}\|_{p}\leq\nu\) for arbitrarily small \(\nu>0\). In a word, by adding only one hidden layer to shallow nets, we can use \(\mathcal{O}(N^{d})\) free parameters to yield an approximant within an arbitrary accuracy, provided the target function is in \(\mathcal{C}^{(C_{0},s,N^{d})}\). As stated in Lemma 1, the \(\varepsilon\)-covering number of deep nets is of the order \(\mathcal{O}\left(N^{d}\log\frac{1}{\varepsilon}\right)\), which is the same as that of shallow nets with \(N^{d}\) free parameters and linear models of \(N^{d}\)-dimension. In Figure 4, we provide a numerical example to show the performance of deep nets in approximating functions in \(\mathcal{C}^{(1,4,36)}\) with \(\tau=0.01\). In the following theorem, we pursue the power of depth in approximating piecewise smooth and spatially sparse functions. **Theorem 3**: _Let \(1\leq p<\infty\), \(C_{0}>0\), \(N,d,s\in\mathbb{N}\), and \(r=u+v\) with \(u\in\mathbb{N}\) and \(0<v\leq 1\). There exists a deep net structure with_ \[2(d+u)\left\lceil\frac{r+2d}{2d}\right\rceil+8(d+u)+3+\left\lceil\frac{rp+d+p+1}{2d}\right\rceil \tag{23}\] _layers, \(nN^{d}\) free parameters and \(D_{\max}=poly(N^{d}n)\) such that for any \(Q^{*}\in Lip^{(r,c_{0},s,N^{d})}\), there is a \(\mathcal{N}_{n,s,N,Q^{*}}\) with the aforementioned structure satisfying_ \[\|Q^{*}-\mathcal{N}_{n,s,N,Q^{*}}\|_{p}\leq C_{2}^{*}n^{-r/d}sN^{-d/p},\] _where \(C_{2}^{*}\) is a constant independent of \(n,s,N\) and \(poly(n)\) is a polynomial with respect to \(n\)._ Shallow nets were verified in [54] to be at least as good as linear models in the sense that the approximation error of shallow nets is never larger than that of linear models. The problem is, as mentioned in [40], that the capacity of the shallow nets in [54] is usually much larger than that of linear models. This leads to instability of the estimator and a large variance. Furthermore, as discussed above, both linear models and shallow nets have difficulty approximating discontinuous functions. Theorem 3 with \(s=N^{d}\) presents an approximation rate of deep nets when the target function is piecewise smooth, a special type of discontinuous function. It is shown that deep nets can achieve an order of \(\mathcal{O}(n^{-r/d})\) when \(p=1\), which is an optimal approximation rate [42] if there are \(N^{d}\) pieces and \(N^{d}n\) parameters. Besides their discontinuity, optimal Q-functions are spatially sparse, which was not considered in [42]. Theorem 3 is devoted to approximating discontinuous and spatially sparse functions and demonstrates that deep nets outperform shallow nets by showing an additional reducing factor \(sN^{-d/p}\), reflecting the sparsity, in the approximation error estimate.

\begin{table} \begin{tabular}{|l|l|l|l|} \hline Reference & Features of target functions & Parameters & Depth \\ \hline [37] & Locality & \(4d+1\) & \(2\) \\ \hline [37] & \(k\)-spatial sparse & \(k(4d+1)\) & \(2\) \\ \hline [42] & Piecewise \((r,c_{0})\)-smooth & \(\varepsilon^{-d/r}\) & \(\mathcal{O}(d,r)\) \\ \hline [58] & \(\ell_{2}\) radial + \((r,c_{0})\)-smooth & \(\varepsilon^{-1/r}\) & \(\mathcal{O}(d,r)\) \\ \hline [38] & \(k\)-sparse (frequency) & \(k\log(\varepsilon^{-1})\) & \(\log(\varepsilon^{-1})\) \\ \hline [37] & \(d^{\prime}\) dimensional manifold + smooth & \(\varepsilon^{-d^{\prime}/r}\) & \(4\) \\ \hline \end{tabular} \end{table} TABLE II: Power of depth in approximating special target functions (within accuracy \(\varepsilon\))

Fig. 4: Performance of deep nets constructed in Theorem 2 with \(\tau=0.01\)

It should be highlighted that our result is essentially different from [57], in which the target function is continuous. In the proofs of Theorems 2 and 3, we shall provide concrete structures of deep nets to approximate spatially sparse and piecewise smooth (or piecewise constant) functions. It should be mentioned that the structure is not unique. In fact, we can derive numerous depth-width pairs of deep nets that achieve the same approximation performance by using the approach in [58]. Furthermore, all these structures can be realized by deep convolutional neural networks using the technique in [51, 52]. However, since the purpose of our study is to demonstrate the power of depth, we consider only one structure for brevity.

## 5 Power of Depth in Deep Q-Learning

The aim of this section is to show the power of depth in deep Q-learning.

### _Learning schemes and assumptions_

The main difference between deep Q-learning and the traditional version is their hypothesis spaces. The latter uses linear models, which benefits computation, whereas the former adopts deep nets to enhance prediction performance. To simplify our analysis, we present a Markov assumption for the distribution \(P\) defined by (1). **Assumption 3**: _Let \(Q_{t}^{*}\) be defined by (4). We have_ \[Q_{t}^{*}(\mathbf{s}_{t},\mathbf{a}_{t})=Q_{t}^{*}(s_{t},a_{t}).\] It should be mentioned that Assumption 3 is not necessary in our analysis, since our result, as shown in Theorem 1, holds for an arbitrary \(P\). Without the Markov assumption, the optimal Q-functions \(Q_{t}^{*}\), \(t=1,\ldots,T\), are functions of \(\tilde{d}_{t}\) variables with \(\tilde{d}_{t}:=\sum_{j=1}^{t}(d_{a,j}+d_{s,j})\). The fact that \(\tilde{d}_{t_{1}}\leq\tilde{d}_{t_{2}}\) for \(t_{1}\leq t_{2}\) then implies that the hypothesis spaces of deep Q-learning vary with \(t\). Under Assumption 3, if \(d_{a,t}\) and \(d_{s,t}\) do not vary with \(t\), then \(Q_{t}^{*}\), \(t=1,\ldots,T\), are functions with the same number of variables, which leads to the same hypothesis space at all times. We also present a compactness assumption for the action and state spaces. **Assumption 4**: _Assume \(\tilde{\mathcal{A}}_{t}=[0,1]^{d_{a,t}}\) and \(\tilde{\mathcal{S}}_{t}=[0,1]^{d_{s,t}}\)._ Assumption 4 can be satisfied by using a standard scaling technique directly, provided that the action spaces and state spaces are compact. This is a mild assumption since collected data are always bounded. Recall that \(L_{t}^{2}\) is the space of square-integrable functions with respect to \(P_{t}\), as defined in (9). The following distortion assumption describes the difference between \(P_{t}\) and the Lebesgue measure.
**Assumption 5**: _For \(p\geq 1\), denote by \(J_{p,t}\) the identity mapping \(J_{p,t}:L^{p}([0,1]^{\tilde{d}_{t}})\to L_{t}^{2}\) and let \(\mathcal{J}_{p,T}=\max_{t=1,\ldots,T}\|J_{p,t}\|\), where \(\|J\|\) is the spectral norm of the operator \(J\). We assume \(\mathcal{J}_{p,T}<\infty\)._ It is obvious that \(\mathcal{J}_{p,t}\) measures the extent to which \(P_{t}\) distorts the Lebesgue measure. Since \(Q_{t}^{*}\) is frequently spatially sparse, if the support of \(P_{t}\) lies outside the support of \(Q_{t}^{*}\), then all samples are useless in the learning process, as shown in Figure 5. Therefore, Assumption 5 is necessary and reasonable. It holds for all \(p\geq 2\) when \(P_{t}\) is the uniform distribution. We then present the assumption of spatial sparseness and piecewise constancy on the optimal Q-functions as follows. **Assumption 6**: _For any \(t=1,\ldots,T\), there exist \(s_{t},N_{t}\in\mathbb{N}\) such that \(Q_{t}^{*}\in\mathcal{C}^{(2U,s_{t},N_{t}^{\tilde{d}_{t}})}\)._ As discussed in Sec. 3.1, Assumption 6 is standard for numerous applications. As shown in Theorem 2, each \(Q_{t}^{*}\) corresponds to a deep net with two hidden layers and \(\mathcal{O}(N_{t}^{\tilde{d}_{t}})\) free parameters. Denote by \(\mathcal{H}_{N_{t},\tau,t}\) the set of deep nets structured as in Theorem 2 for \(t=1,\ldots,T\). Given the dataset \(D=\{\mathcal{T}_{T,i}\}_{i=1}^{m}=\{(\mathbf{s}_{T+1,i},\mathbf{a}_{T,i},\mathbf{R}_{T,i})\}_{i=1}^{m}\), we can deduce Q-functions via \(\hat{Q}_{T+1,N_{T+1},\tau}=0\) and \[\hat{Q}_{t,N_{t},\tau}(\mathbf{s}_{t},\mathbf{a}_{t})=\arg\min_{Q_{t}\in\mathcal{H}_{N_{t},\tau,t}}\] \[\mathbb{E}_{m}\left[\left(R_{t}+\max_{a_{t+1}}\hat{Q}_{t+1}(\mathbf{S}_{t+1},\mathbf{A}_{t},a_{t+1})-Q_{t}(\mathbf{S}_{t},\mathbf{A}_{t})\right)^{2}\right].\] Then, the policy derived from deep Q-learning is defined by \[\hat{\pi}_{N,\tau}=\{\hat{\pi}_{1,N_{1},\tau},\ldots,\hat{\pi}_{T,N_{T},\tau}\}, \tag{24}\] where \[\hat{\pi}_{t,N_{t},\tau}(\mathbf{s}_{t},\mathbf{a}_{t-1})=\arg\max_{a_{t}\in\tilde{\mathcal{A}}_{t}}\hat{Q}_{t,N_{t},\tau}(\mathbf{s}_{t},\mathbf{a}_{t-1},a_{t}), \tag{25}\] \[t=1,\ldots,T.\] Since \(N_{t}\) and \(\tilde{d}_{t}\) vary with \(t\), the network structures at different times vary. Further noting that \(L=2\) for all \(t\), we can parameterize the width in the learning process. Our final assumption concerns piecewise smoothness and spatial sparseness. **Assumption 7**: _For any \(t=1,\ldots,T\), there exist \(r_{t}>0\), \(c_{0}>0\), and \(s_{t},N_{t}\in\mathbb{N}\) such that \(Q_{t}^{*}\in Lip^{(r_{t},c_{0},s_{t},N_{t}^{\tilde{d}_{t}})}\)._ The non-differentiability of ReLU results in the necessity of depth in approximating functions in \(Lip_{\mathbb{I}^{d}}^{(r,c_{0})}\). Let \(\mathcal{H}_{t,n_{t},N_{t},L_{t}}\) be the set of deep nets structured as in Theorem 3 with respect to \(Q_{t}^{*}\). We build the hypothesis spaces for Q-learning as \(\tilde{\mathcal{Q}}_{t}=\mathcal{H}_{t,n_{t},N_{t},L_{t},2U}:=\{f\in\mathcal{H}_{t,n_{t},N_{t},L_{t}}:\|f\|_{L^{\infty}}\leq 2U\}\).
Then, Q-functions can be defined by \(\hat{Q}_{T+1,n_{T+1},N_{T+1},L_{T+1}}=0\) and, for \(1\leq t\leq T\), \[\hat{Q}_{t,n_{t},N_{t},L_{t}}(\mathbf{s}_{t},\mathbf{a}_{t})=\arg\min_{Q_{t}\in\mathcal{H}_{t,n_{t},N_{t},L_{t},2U}}\] \[\mathbb{E}_{m}\left[\left(R_{t}+\max_{a_{t+1}}\hat{Q}_{t+1}(\mathbf{S}_{t+1},\mathbf{A}_{t},a_{t+1})-Q_{t}(\mathbf{S}_{t},\mathbf{A}_{t})\right)^{2}\right].\] The policy derived from Q-learning is defined by \[\hat{\pi}_{n,N,L}=\{\hat{\pi}_{1,n_{1},N_{1},L_{1}},\ldots,\hat{\pi}_{T,n_{T},N_{T},L_{T}}\}, \tag{26}\] where \[\hat{\pi}_{t,n_{t},N_{t},L_{t}}(\mathbf{s}_{t},\mathbf{a}_{t-1})=\arg\max_{a_{t}\in\tilde{\mathcal{A}}_{t}}\hat{Q}_{t,n_{t},N_{t},L_{t}}(\mathbf{s}_{t},\mathbf{a}_{t-1},a_{t}), \tag{27}\] \[t=1,\ldots,T.\]

### _Power of depth in deep Q-learning_

In this subsection, we derive the generalization error for deep Q-learning under the aforementioned assumptions. Our first result shows the power of depth in deep Q-learning when optimal Q-functions are spatially sparse and piecewise constant. **Theorem 4**: _Under Assumptions 1, 2, 4, 5, and 6, if \(\hat{\pi}_{N,\tau}\) is defined by (24) with \(\tau=\mathcal{O}(N^{\tilde{d}_{t}-1}s^{-1}m^{-1})\), then_ \[E[V^{*}(S_{1})-V_{\hat{\pi}_{N,\tau}}(S_{1})]\leq \tag{28}\] \[\hat{C}_{1}\mathcal{J}_{p,T}\left(\frac{\log m}{m}\right)^{1/2}\] \[\cdot\sum_{t=1}^{T}\sum_{j=t}^{T}(3\mu)^{j-t}N_{j}^{\max\{\tilde{d}_{j},\tilde{d}_{j+1}\}/2}(\log N_{j})^{1/2},\] _where \(\hat{C}_{1}\) is a constant depending only on \(r,p\), and \(U\)._ The use of deep nets to learn discontinuous functions in the framework of supervised learning was first studied in [61]. In Theorem 4, we extend their result from supervised learning to RL by using the oracle inequality in Theorem 1. Noting that shallow nets [23] and linear models [60] have difficulties in realizing either the spatial sparseness or the piecewise-constant property of optimal Q-functions, the corresponding generalization error is worse than the established results in Theorem 4. As a consequence, traditional Q-learning requires many more samples than deep Q-learning to finish a specific learning task. This demonstrates the power of depth and explains why deep Q-learning performs so well in numerous applications. The following corollary shows the generalization error of deep Q-learning when the Markov assumption is imposed. **Corollary 2**: _Under Assumptions 1-6, if \(d_{a,1}=\cdots=d_{a,T}=d_{a}\), \(d_{s,1}=\cdots=d_{s,T}=d_{s}\), \(N_{1}=\cdots=N_{T}=N\), and \(\hat{\pi}_{N,\tau}\) is defined by (24) with \(\tau=\mathcal{O}(N^{(d_{a}+d_{s})-1}s^{-1}m^{-1})\), then_ \[E[V^{*}(S_{1})-V_{\hat{\pi}_{N,\tau}}(S_{1})]\leq \tag{29}\] \[\hat{C}_{1}\mathcal{J}_{p,T}\left(\frac{N^{d_{a}+d_{s}}\log N\log m}{m}\right)^{1/2}\] \[\cdot\left(\frac{T}{1-3\mu}-\frac{3\mu(1-(3\mu)^{T})}{1-3\mu}\right).\] Our next theorem shows the generalization error of deep Q-learning in learning spatially sparse and smooth optimal Q-functions.
**Theorem 5**: _Under Assumptions 1, 2, 4, 5 and 7, if \(\hat{\pi}_{n,N,L}\) is defined by (26) with_ \[n_{t}=\left(\frac{ms_{t}^{2}}{N_{t}^{\max\{\tilde{d}_{t},\tilde{d}_{t+1}\}+2\tilde{d}_{t}/p}}\right)^{\frac{\tilde{d}_{t}}{2r+\tilde{d}_{t}}},\quad t=1,\ldots,T,\] _then_ \[E[V^{*}(S_{1})-V_{\hat{\pi}_{n,N,L}}(S_{1})]\leq\hat{C}_{2}\mathcal{J}_{p,T}\sum_{t=1}^{T}\sum_{j=t}^{T}(3\mu)^{j-t}m^{-\frac{r}{2r+\tilde{d}_{j}}}s_{j}^{\frac{\tilde{d}_{j}}{2r+\tilde{d}_{j}}}N_{j}^{\frac{pr\max\{\tilde{d}_{j},\tilde{d}_{j+1}\}-\tilde{d}_{j}^{2}}{(2r+\tilde{d}_{j})p}}(\max\{\tilde{d}_{j},\tilde{d}_{j+1}\})^{\frac{3}{2}}\log(mN_{j}),\] _where \(\hat{C}_{2}\) is a constant depending only on \(r,p\), and \(U\)._ Since we consider a general setting for deep Q-learning, the generalization error established in Theorem 5 appears somewhat involved, as it depends on the smoothness \(r_{t}\), the sparsity \(s_{t}\), the number of partitions \(N_{t}\), the dimension \(\tilde{d}_{t}\), the distortion \(\mathcal{J}_{p,T}\), the probability parameter \(\mu\), and the sample size \(m\). We shall discuss the impact of each factor and simplify our result in the rest of this section. As shown in Theorem 3, obtaining an approximation accuracy of order \(\mathcal{O}(n^{-r/d}sN^{-d/p})\) requires at least \(\mathcal{O}(nN^{d})\) free parameters. This reveals a bias-variance tradeoff, since the capacity of deep nets depends heavily on the number of free parameters. Under these conditions, \(n_{t}\) in Theorem 5 is selected to balance the bias and variance. Since the reward functions in Q-learning are practically extremely sparse, the sparsity \(s_{t}\) is often extremely small compared with the number of partitions \(N_{t}^{d_{t}}\), which together with \(pr\leq\min_{1\leq j\leq T}\tilde{d}_{j}\) yields very good generalization error estimates for deep Q-learning. In the following, we present a corollary of Theorem 5 under additional assumptions to exhibit the generalization error explicitly. **Corollary 3**: _Under Assumptions 1-5 with \(p=2\) and Assumption 7, if \(d_{a,1}=\cdots=d_{a,T}=d_{a}\), \(d_{s,1}=\cdots=d_{s,T}=d_{s}\), \(s_{1}=\cdots=s_{T}=s\), \(r_{1}=\cdots=r_{T}=r\), \(N_{1}=\cdots=N_{T}=N\), and_ \[n=\left(\frac{ms^{2}}{N^{2d_{a}+2d_{s}}}\right)^{\frac{d_{a}+d_{s}}{2r+d_{a}+d_{s}}},\] _then_ \[E[V^{*}(S_{1})-V_{\hat{\pi}_{n,N,L}}(S_{1})]\leq\hat{C}_{3}\,m^{-\frac{r}{2r+d_{a}+d_{s}}}\log(mN)\,s^{\frac{d_{a}+d_{s}}{2r+d_{a}+d_{s}}}N^{\frac{(2r-d_{a}-d_{s})(d_{a}+d_{s})}{4r+2d_{a}+2d_{s}}}\left(\frac{T}{1-3\mu}-\frac{3\mu(1-(3\mu)^{T})}{(1-3\mu)^{2}}\right),\] _where \(\hat{C}_{3}\) is a constant independent of \(N,s,m\), or \(T\)._ From Corollary 3, a generalization error bound of order \(\mathcal{O}\left(m^{-\frac{r}{2r+d_{a}+d_{s}}}N^{\frac{(2r-d_{a}-d_{s})(d_{a}+d_{s})}{4r+2d_{a}+2d_{s}}}s^{\frac{d_{a}+d_{s}}{2r+d_{a}+d_{s}}}\right)\) is derived. If \(r\) is large, then the dominant term is \(m^{-\frac{r}{2r+d_{a}+d_{s}}}\). Under this circumstance, numerous data are required to produce a high-quality policy, just as AlphaGo did in [5]. If \(r\) is relatively small, then \(N^{\frac{(2r-d_{a}-d_{s})(d_{a}+d_{s})}{4r+2d_{a}+2d_{s}}}s^{\frac{d_{a}+d_{s}}{2r+d_{a}+d_{s}}}\) is the dominant term, implying that only a few candidates are available to make a decision, which is also common in practice [3].
Furthermore, given that linear models and shallow nets cannot approximate spatially sparse and piecewise smooth functions well [23, 42], it is difficult to derive results similar to Theorem 4 and Corollary 3 for traditional Q-learning, which demonstrates the power of depth in deep Q-learning. To end this section, we differentiate our results from those of [43], where a theoretical verification of deep Q-learning is also conducted. As discussed in Sec. 1.3, the setting in [43] applies to RL with infinite horizons and requires strong assumptions on the optimal Q-functions and the likelihood \(P\) (defined by (1)), which are difficult to check in practice. As shown in Sec. 3.1, Assumptions 2, 6, and 7 on \(Q_{t}^{*}\) are easily satisfied in numerous real-world applications (e.g., Table IV.1). Furthermore, we impose only two extra assumptions on the likelihood \(P\), namely Assumptions 1 and 5, which are essentially looser than the concentration coefficient assumption in [43] and easily satisfied in practice. The weakness of our assumptions and their verifiability in practice are the main reasons why our derived generalization error bounds behave exponentially badly in the time horizon. It would be interesting to find more suitable assumptions from real-world applications to reduce this negative effect. ## 6 Experimental Results In this section, we apply deep Q-learning to the beer game, a well-known supply chain management problem, and to a recommender system application, to illustrate the roles of the depth of neural networks, rewards, and data size in RL. ### _Beer Game Experiment_ The first experiment is conducted in the context of an inventory management problem known as the beer game. The beer game is a multi-agent supply chain management problem with four participating agents, which are, from upstream to downstream, the manufacturer, the distributor, the warehouse, and the retailer. In each period of the game, each agent observes its current inventory state and decides on an order to send to its predecessor (supplier). We examine how DQN can help agents decide on the right orders to minimize the total inventory on hand over a long horizon. A detailed introduction to the beer game, as well as its RL formulation, can be found in Section 4 of the supplementary material. We report our experimental designs and numerical results in the following subsections. Each subsection contains experiments based on simulations and on three real-world datasets. The experimental settings for the simulated data and for the real-world datasets are given in Section 5 and Section 6 of the supplementary material, respectively. #### 6.1.1 Power of the depth Our first simulation focuses on the power of depth in deep Q-learning. According to Theorems 4 and 5, the effect of depth depends heavily on the reward's properties. Therefore, we use the shaped reward of [14] in this simulation and denote our method as shaped-reward deep Q-network (SRDQN). The base-stock policy (bs for short) is regarded as a baseline. As there are four agents in the beer game, we apply deep Q-learning only to the first agent while applying bs to the three remaining agents. Accordingly, we denote our approach as shaped-reward deep Q-network with base-stock policy (SRDQN-BS). For further details, we refer the readers to Section 4 and Section 5 of the supplementary material. As shown in Section 4 of the supplementary material, the reward, as a function of actions, possesses the spatial sparseness and piecewise constancy properties.
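To make the setup concrete, the following minimal sketch (our own illustration with assumed cost coefficients and a simplified single-agent dynamic, not the environment from the supplementary material) shows the base-stock ordering rule used for the three non-DQN agents together with one period of inventory dynamics and its per-period cost.

```python
# Minimal sketch (illustrative assumptions, not the paper's environment) of the
# base-stock (bs) baseline and of a single agent's per-period inventory bookkeeping.
def base_stock_order(inventory_position: float, base_stock_level: float) -> float:
    """Order-up-to rule: order exactly the shortfall to the base-stock level."""
    return max(0.0, base_stock_level - inventory_position)

def inventory_step(on_hand: float, backlog: float, arriving: float, demand: float):
    """One period of a single agent: receive a shipment, then serve demand and backlog."""
    available = on_hand + arriving
    served = min(available, backlog + demand)
    new_backlog = backlog + demand - served
    new_on_hand = available - served
    return new_on_hand, new_backlog

def period_cost(on_hand: float, backlog: float,
                holding_cost: float = 2.0, shortage_cost: float = 2.0) -> float:
    """Per-period cost (the negative of the classical reward); coefficients assumed."""
    return holding_cost * on_hand + shortage_cost * backlog

# Example: an agent with 4 units on hand and no backlog faces a demand of 6 units.
on_hand, backlog = inventory_step(on_hand=4.0, backlog=0.0, arriving=0.0, demand=6.0)
order = base_stock_order(inventory_position=on_hand - backlog, base_stock_level=8.0)
print(on_hand, backlog, period_cost(on_hand, backlog), order)  # 0.0 2.0 4.0 10.0
```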
Our theoretical assertions suggest that deep Q-learning outperforms the classical (shallow) approach in such an RL problem. To verify this, we test the SRDQN-BS policy in five cases, with one, two, three, four, and five hidden layers in the SRDQN. Results for the one- and four-layer cases are shown in Figure 6. From Figure 6, we make three interesting observations: (1) The test total loss of the four-layer SRDQN-BS is much smaller than that of the one-layer SRDQN-BS, showing that deepening the network is crucial for improving the performance of the classical shallow learning policy. (2) After a few iterations, the prediction of the four-layer SRDQN-BS stabilizes, showing that it can generalize well, which is beyond the capability of the one-layer SRDQN-BS. This verifies Theorem 4 in the sense that the variance of deep Q-learning is not large, since deepening the network does not essentially enlarge the capacity of the hypothesis space. (3) After 15,000 iterations, the four-layer SRDQN-BS performs almost as well as bs, showing that our adopted approach can achieve almost optimal generalization performance. All these observations show that depth plays an important role in deep Q-learning if spatially sparse and piecewise constant rewards are utilized. To show the power of depth and the stability of SRDQN-BS with different numbers of layers, we also extract the best-performing segments of SRDQN-BS over windows of 5,000, 10,000, and 20,000 consecutive iterations for the five cases, as compared in Figure 7. Two interesting phenomena are exhibited in Figure 7: (1) Below a threshold value (L=4 in this simulation), depth plays a positive role in SRDQN-BS. This shows again the power of depth in deep Q-learning. (2) Beyond the threshold value, depth is not so important for generalization, since the five-layer SRDQN-BS performs slightly worse than the four-layer policy. This does not contradict our theoretical assertions. In fact, according to our proofs in the supplementary material, the constants \(\hat{C}_{1}\) and \(\hat{C}_{2}\) depend on the number of layers, which may cause a small oscillation. Furthermore, the covering number estimate (Lemma 1) shows that the capacity of deep nets increases as they are deepened, leading to an instability phenomenon for the five-layer SRDQN-BS.
Fig. 6: Performance of SRDQN-BS policy with different-depth SRDQN
We further conduct experiments based on real-world historical demand data for three different items to examine the power of depth in deep Q-learning. The training processes of the one- and three-layer SRDQN-BS policies are shown in Figure 8, where the bs policy acts as the optimal baseline. The findings are quite similar to those of the synthetic simulations. It is clear that depth is essential for the good performance of SRDQN. In all three experiments, SRDQN with 3 layers converges to a stable point with a small total loss more quickly than SRDQN with 1 layer. In the experiment on item 34, the 1-layer SRDQN does not even converge in the end. A second noteworthy point is that the convergence point of the 3-layer SRDQN-BS policy is close to the bs policy in all three experiments, indicating that it achieves near-optimal performance. This shows the strong generalization ability of the 3-layer SRDQN-BS policy. We conduct a more comprehensive comparison by extracting the performance of SRDQN-BS at different training iterations. We implement SRDQN-BS with 1 layer, 3 layers, 5 layers, and 7 layers. The results are shown in Figure 9.
Clearly, the SRDQN-BS policy with 1 layer performs badly, while the SRDQN-BS policy with 3 layers performs quite well. However, further increasing the depth of the SRDQN, e.g., to 5 or 7 layers, does not keep improving performance but rather worsens it. This is because deeper nets lead to instability in the training process. #### 6.1.2 Influence of different rewards In this simulation, we show the important role of the reward in deep Q-learning. We use the classical rewards of the beer game (see Section 4 of the supplementary material) rather than the shaped reward proposed in [14] and thus obtain the deep Q-network with base-stock (DQN-BS) policy. DQN-BS and SRDQN-BS differ only in their rewards. Comparisons of the DQN-BS and SRDQN-BS policies with one layer and four layers are shown in Figure 10. Figure 10 exhibits three important findings: (1) From Figure 10 (a), if the hypothesis space is not appropriately selected, then rewards do not affect the performance of Q-learning substantially. This shows the necessity of considering different hypothesis spaces in Q-learning and implies the importance of Theorem 1. (2) From Figure 10 (b), when a suitable hypothesis space is used, rewards are key to improving the performance of Q-learning. The excellent performance of SRDQN-BS is based on suitable rewards and a suitable hypothesis space, which verifies Theorems 2 and 3. (3) As shown in Section 4 of the supplementary material, the rewards of DQN-BS are also spatially sparse and piecewise constant. Compared with the rewards of SRDQN-BS, the only difference is that there are many more pieces in DQN-BS. According to Theorems 4 and 5, the generalization error of DQN-BS should be worse than that of SRDQN-BS. This assertion is verified by comparing Figures 10 (a) and 10 (b). In particular, deep Q-learning improves the performance of shallow Q-learning significantly for SRDQN-BS, but only marginally for DQN-BS. We report the comparison of the SRDQN-BS and DQN-BS policies on three real-world datasets in Figure 11. We can see that even with 3 layers, the DQN-BS policy can hardly converge, and it performs much worse than the SRDQN-BS policy with 3 layers. This indicates that a reward with favorable properties is crucial for unlocking the power of deep Q-learning. #### 6.1.3 Influence of data size In our last simulation, we study the role of data size in deep Q-learning by investigating how the performance of SRDQN changes with respect to data size. Unlike the above experiments, which draw samples with replacement, we here draw samples without replacement to expose the role of data size. Specifically, we sample 16 examples from the replay memory and discard them after training. By doing so, we relate the number of iterations to the number of samples. After 500 iterations, meeting the minimum experience replay size to start training, 100 samples are used to train SRDQN per iteration. We test the SRDQN-BS policy in two cases (one and four hidden layers). The simulation results are shown in Figure 13. Based on Figure 13, we can draw the following three conclusions: (1) As shown in Theorem 1, the sample size required to guarantee the generalization performance of Q-learning grows exponentially with \(T\), which implies that SRDQN-BS needs large samples. This explains the poor performance of SRDQN-BS when the data size is smaller than 1,500,000.
(2) According to Theorem 4, deep Q-learning requires fewer samples than shallow Q-learning to achieve the same long-term rewards. Therefore, the increasing number of samples gradually offsets the negative effects of long-term Q-learning. As a result, the test total cost begins to decrease with the number of samples as the data size grows from 1,500,000 to 3,500,000. However, shallow Q-learning still oscillates because it requires many more samples. (3) After training on 3,500,000 samples, the four-layer SRDQN-BS converges to a stable state and performs almost the same as bs, showing that this policy can achieve the optimal generalization performance; moreover, 3,500,000 is almost the smallest data size needed to guarantee the optimal generalization performance of deep Q-learning in this beer game. We test the SRDQN-BS policy with 1 layer and 3 layers on three real-world datasets. The results, shown in Figure 12, indicate that the SRDQN-BS policy with 3 layers converges to a near-optimal point after approximately 3 million samples. However, the shallow nets diverge after certain periods of training and never achieve near-optimal performance.
Fig. 7: Best-performance-segment of SRDQN-BS policy with different iteration ranges
Fig. 8: Performance of SRDQN-BS policy with different-depth SRDQN
Fig. 9: Best-performance-segment of SRDQN-BS policy with different iteration ranges
Fig. 10: Comparison of DQN-BS policy and SRDQN-BS policy with different depth
Fig. 11: Comparison of DQN-BS policy and SRDQN-BS policy with different depth
### _Recommender System Experiment_ We conduct a second experiment in the context of recommender systems. We aim to provide criteria for choosing and designing Q-learning frameworks in recommender systems by answering the three questions we are interested in. Different RL algorithms have been applied to designing recommender systems [24, 62] due to the long-term reward maximization nature of reinforcement learning. We conduct experiments on a simulated recommender system platform called RecSim [44]. In this simulated recommender system, we use DQN to recommend a set of items to a certain user in each period over a long horizon. We aim to maximize long-term user engagement while taking user interest shifts into account. We examine DQNs with different depths, trained either with the expected reward, which possesses the spatial sparseness and piecewise constancy properties, or with the standard reward. The detailed recommender system setting we consider and the corresponding RL framework are introduced in Section 7 of the supplementary material. In the following, we describe how the experiments investigate the three aforementioned points and how the experimental results verify our theoretical results. First, we check the power of depth in DQN with the standard reward (DQN-s). The result is shown in Figure 14. Clearly, DQN-s with 1 layer performs worst. Although DQN-s with 3, 5, and 8 layers are better than DQN-s with 1 layer, they do not achieve an obvious improvement during the training process. This indicates that DQN-s cannot learn a proper recommendation policy even with a deep net. Next, we replace the standard reward with the expected reward, leading to the DQN with the expected reward (DQN-e). The result is shown in Figure 15. We can see that DQN-e with 1 layer still performs badly. However, when the net becomes deeper, i.e., DQN-e with 3 layers and 5 layers, the performance improves markedly after around 1,000 training steps.
This shows that DQN-e with a suitably deep net can perform well in this recommendation task, which demonstrates the power of depth in DQN. On the other hand, when it comes to DQN-e with 8 layers, the performance decreases. This reveals the trade-off between the capacity and stability of deep nets. In order to show the effectiveness of the learned policy, we compare DQN-e with 3 layers against two benchmarks. The first benchmark is the Random policy, which randomly samples two documents for recommendation at each time \(t\). The second benchmark is called Myopic, meaning that we train a net without considering long-term reward maximization. Specifically, when we train the DQN net, we update it in the following form: \[Q^{(t)}(s,A)\leftarrow\alpha^{(t)}\left[r+\max_{A^{\prime}}\gamma Q^{(t-1)}\left(s^{\prime},A^{\prime}\right)\right]+\left(1-\alpha^{(t)}\right)Q^{(t-1)}(s,A).\] In Myopic, we set \(\gamma=0\) to optimize only the immediate reward and use the same net as in DQN-e. The result is shown in Figure 16. We can see that the mean reward of the Random policy is approximately a horizontal line. The Myopic policy is better than the Random policy but worse than DQN-e, which shows the importance of considering state transitions and long-term reward maximization. We compare the performance of DQN-s and DQN-e with different numbers of layers in Figure 17. The power of DQN is revealed only when both a proper reward function and a proper depth are used. The managerial implication here is that DQN is not almighty without preconditions. The first requirement is a reward function with the favorable properties proposed in our theory; the second is a proper depth that uncovers the full ability of the DQN method. Finally, we examine the role of sample size in DQN-e. Here we discard the used samples in the way described in the beer game experiment. We try DQN-e with 1 layer, 3 layers, and 8 layers. The results are reported in Figure 18. It is clear that the performance of DQN-e with 3 layers improves to a stable point after the first 3,000 samples, while DQN-e with 1 layer and 8 layers cannot converge even after more than 10,000 samples.
Fig. 12: Role of data size in SRDQN-BS
Fig. 13: Role of data size in SRDQN-BS
Fig. 14: Performance of DQN with different depths and standard reward
Fig. 15: Performance of DQN with different depths and expected reward
## 7 Conclusion In this paper, we demonstrate the power of depth in deep Q-learning by showing that its generalization error bounds are better than those of the traditional version. Our main tools are a novel oracle inequality for Q-learning that shows the importance of hypothesis spaces, two novel approximation theorems that show the expressive power of deep nets, and two generalization error estimates that exhibit the power of depth in deep Q-learning. We find that the main reason for the success of deep Q-learning is the outperformance of deep nets in approximating spatially sparse and piecewise smooth (or piecewise constant) functions, rather than their large capacity. Our study provides answers to Questions 1-3 in Sec. 1.1. \(\diamondsuit\) **Answer to Question 1.** As shown in Section 3.1, the most widely used reward functions in Q-learning are spatially sparse and piecewise constant (or piecewise smooth). Deep nets succeed in capturing these properties (see Theorems 2 and 3), which is beyond the capability of shallow nets or linear models [23, 42].
Thus, deep Q-learning performs much better than shallow nets and linear models in practice. \(\diamondsuit\) **Answer to Question 2.** As discussed in Sec. 3.2, deep Q-learning does not always outperform traditional Q-learning. However, if the reward functions in Q-learning possess certain sophisticated properties such as spatial sparseness, piecewise smoothness, piecewise constancy, or the properties in Table II, then deep Q-learning performs better than shallow nets. \(\diamondsuit\) **Answer to Question 3.** The sample size required to finish a specific sequential decision-making problem depends on the properties of the reward functions and the horizon \(T\). Our results in Theorem 4, Theorem 5, Corollary 2, and Corollary 3 quantify this relationship in terms of generalization error bounds.
2304.02539
Multi-annotator Deep Learning: A Probabilistic Framework for Classification
Solving complex classification tasks using deep neural networks typically requires large amounts of annotated data. However, corresponding class labels are noisy when provided by error-prone annotators, e.g., crowdworkers. Training standard deep neural networks leads to subpar performances in such multi-annotator supervised learning settings. We address this issue by presenting a probabilistic training framework named multi-annotator deep learning (MaDL). A downstream ground truth and an annotator performance model are jointly trained in an end-to-end learning approach. The ground truth model learns to predict instances' true class labels, while the annotator performance model infers probabilistic estimates of annotators' performances. A modular network architecture enables us to make varying assumptions regarding annotators' performances, e.g., an optional class or instance dependency. Further, we learn annotator embeddings to estimate annotators' densities within a latent space as proxies of their potentially correlated annotations. Together with a weighted loss function, we improve the learning from correlated annotation patterns. In a comprehensive evaluation, we examine three research questions about multi-annotator supervised learning. Our findings show MaDL's state-of-the-art performance and robustness against many correlated, spamming annotators.
Marek Herde, Denis Huseljic, Bernhard Sick
2023-04-05T16:00:42Z
http://arxiv.org/abs/2304.02539v2
# Multi-annotator Deep Learning: ###### Abstract Solving complex classification tasks using deep neural networks typically requires large amounts of annotated data. However, corresponding class labels are noisy when provided by error-prone annotators, e.g., crowd workers. Training standard deep neural networks leads to subpar performances in such multi-annotator supervised learning settings. We address this issue by presenting a probabilistic training framework named multi-annotator deep learning (MaDL). A ground truth and an annotator performance model are jointly trained in an end-to-end learning approach. The ground truth model learns to predict instances' true class labels, while the annotator performance model infers probabilistic estimates of annotators' performances. A modular network architecture enables us to make varying assumptions regarding annotators' performances, e.g., an optional class or instance dependency. Further, we learn annotator embeddings to estimate annotators' densities within a latent space as proxies of their potentially correlated annotations. Together with a weighted loss function, we improve the learning from correlated annotation patterns. In a comprehensive evaluation, we examine three research questions about multi-annotator supervised learning. Our findings indicate MaDL's state-of-the-art performance and robustness against many correlated, spamming annotators. ## 1 Introduction Supervised _deep neural networks_ (DNNs) have recently achieved great success in many classification tasks (Pouyanfar et al., 2018). In general, these DNNs require a vast amount of annotated data for their successful employment (Algan and Ulusoy, 2021). However, acquiring top-quality class labels as annotations is time-intensive and/or financially expensive (Herde et al., 2021). Moreover, the overall annotation load may exceed a single annotator's workforce (Uma et al., 2021). For these reasons, multiple non-expert annotators, e.g., crowd workers, are often tasked with data annotation (Zhang, 2022; Gilyazev and Turdakov, 2018). Annotators' missing domain expertise can lead to erroneous annotations, known as noisy labels. Furthermore, even expert annotators cannot be assumed to be omniscient because further factors, such as missing motivation, fatigue, or an annotation task's ambiguity (Vaughan, 2018), may negatively affect their performances. A popular annotation quality assurance option is the acquisition of multiple annotations per data instance with subsequent aggregation (Zhang et al., 2016), e.g., via majority rule. The aggregated annotations are proxies of the _ground truth_ (GT) labels to build DNNs using standard supervised training techniques. Corresponding aggregation strategies operate exclusively on the basis of annotations. In contrast, model-based approaches explicitly use feature or annotator information and thus work well in low-redundancy settings, e.g., even with just one annotation per instance (Khetan et al., 2018). Through predictive machine learning models, these techniques jointly estimate instances' GT labels and _annotators' performances_ (APs) by learning and inferring interdependencies between instances, annotators, and their annotations. As a result, model-based approaches cannot only predict GT labels and APs for training instances but also for test instances, i.e., they can be applied in transductive and inductive learning settings (Vapnik, 1995). 
Despite ongoing research, several **challenges** still need to be addressed in multi-annotator supervised learning. To introduce these challenges, we exemplarily look at the task of animal classification in Fig. 1. Eight annotators have been queried to provide annotations for the image of a jaguar. Such a query is difficult because jaguars have remarkable similarities to other predatory cats, e.g., leopards. Accordingly, the obtained annotations indicate a strong disagreement between the leopard and jaguar classes. Simply taking the majority vote on these annotations would result in leopard as a false annotation. Therefore, dedicated multi-annotator supervised learning techniques leverage annotation information from other (similar) annotated images to estimate APs. However, producing accurate AP estimates is challenging because one needs to learn many annotation patterns. Otherwise, the estimated GT labels will be biased, e.g., when APs are exclusively modeled as a function of annotators. In this case, we cannot identify annotators who are only knowledgeable about specific classes or regions in the feature space. Another challenge in multi-annotator supervised learning concerns potential (latent) correlations between annotators. We illustrate this issue in our animal annotation task by visualizing three latent groups of similarly behaving annotators. Although we assume that the annotators work independently of each other, they can still share common or statistically correlated error patterns (Chu et al., 2021). This is particularly problematic if a group of laypersons strongly outvotes a much smaller group of professionals. Considering prior information about the annotators, i.e., annotator features or meta-data (Zhang et al., 2023), can help to identify these groups. Moreover, prior information enables a model to inductively learn performances for annotators who have provided few or no annotations. In our example, zoological interest could be a good indicator for this purpose. While the inductive learning of APs for annotators poses an additional challenge to the already complex task, its use may be beneficial for further applications, e.g., optimizing the annotator selection in an active learning setting (Herde et al., 2021) or training annotators to improve their own knowledge (Daniel et al., 2018). In this article, we address the above challenges by making the following **contributions**: * We propose _multi-annotator deep learning_ (MaDL) as a probabilistic and modular classification framework. In end-to-end training via a weighted maximum-likelihood approach, it learns embeddings of annotators to account for possible correlations among them. * We specify six properties concerning the estimation of APs and application scenarios for categorizing related multi-annotator supervised learning techniques. * Associated with these properties, we formulate three _research questions_ (RQs), which we experimentally investigate, including comparisons of MaDL to related techniques. Figure 1: Animal classification as an illustration of a multi-annotator supervised learning problem. The **remainder of this article** is structured as follows: In Section 2, we formally introduce the problem setting of supervised learning from multiple annotators. Subsequently, we identify central properties of multi-annotator supervised learning techniques as a basis for categorizing related works and pointing out their differences to MaDL in Section 3. Section 4 explains the details of our MaDL framework.
Experimental evaluations of MaDL and related techniques are presented regarding RQs associated with the aforementioned properties in Section 5. Finally, we conclude and give an outlook regarding future research work in Section 6. ## 2 Problem Setting In this section, we formalize the assumptions and objectives of multi-annotator supervised learning for classification tasks. **Prerequisites:** Without loss of generality, we represent a data instance as a vector \(\mathbf{x}\coloneqq(x^{(1)},...,x^{(D)})^{\mathrm{T}}\), \(D\in\mathbb{N}_{>0}\) in a \(D\)-dimensional real-valued input or feature space \(\Omega_{X}\coloneqq\mathbb{R}^{D}\). The \(N\in\mathbb{N}_{>0}\) instances jointly form a matrix \(\mathbf{X}\coloneqq(\mathbf{x}_{1},...,\mathbf{x}_{N})^{\mathrm{T}}\) and originate from an unknown probability density function \(\Pr(\mathbf{x})\). For each observed instance \(\mathbf{x}_{n}\sim\Pr(\mathbf{x})\), there is a GT class label \(y_{n}\in\Omega_{Y}\coloneqq\{1,\ldots,C\}\). Each GT label \(y_{n}\) is assumed to be drawn from an unknown conditional distribution: \(y_{n}\sim\Pr(y\mid\mathbf{x}_{n})\). We denote the GT labels as the vector \(\mathbf{y}\coloneqq(y_{1},...,y_{N})^{\mathrm{T}}\). These GT labels are unobserved since there is no omniscient annotator. Instead, we consider multiple error-prone annotators. For the sake of simplicity, we represent an annotator through individual features as a vector \(\mathbf{a}_{m}\in\Omega_{A}\coloneqq\mathbb{R}^{O},O\in\mathbb{N}_{>0}\). If no prior annotator information is available, the annotators' features are defined through one-hot-encoded vectors, i.e., \(\Omega_{A}\coloneqq\{\mathbf{e}_{1},\ldots,\mathbf{e}_{M}\}\) with \(\mathbf{a}_{m}\coloneqq\mathbf{e}_{m}\), to identify each annotator uniquely. Otherwise, annotator features may provide information specific to the general annotation task, e.g., the zoological interest when annotating animal images. Together, the \(M\in\mathbb{N}_{>0}\) annotators form a matrix \(\mathbf{A}\coloneqq(\mathbf{a}_{1},\ldots,\mathbf{a}_{M})^{\mathrm{T}}\). We denote the annotation assigned by annotator \(\mathbf{a}_{m}\) to instance \(\mathbf{x}_{n}\) through \(z_{nm}\in\Omega_{Y}\cup\{\otimes\}\), where \(z_{nm}=\otimes\) indicates that an annotation is unobserved, i.e., not provided. An observed annotation is assumed to be drawn from an unknown conditional distribution: \(z_{nm}\sim\Pr(z\mid\mathbf{x}_{n},\mathbf{a}_{m},y)\). Multiple annotations for an instance \(\mathbf{x}_{n}\) can be summarized as a vector \(\mathbf{z}_{n}\coloneqq(z_{n1},...,z_{nM})^{\mathrm{T}}\). Thereby, the set \(\mathcal{A}_{n}\coloneqq\{m\mid m\in\{1,\ldots,M\}\wedge z_{nm}\in\Omega_{Y}\}\) represents the indices of the annotators who assigned an annotation to an instance \(\mathbf{x}_{n}\). Together, the annotations of all observed instances form the matrix \(\mathbf{Z}\coloneqq(\mathbf{z}_{1},...,\mathbf{z}_{N})^{\mathrm{T}}\). We further assume that there is a subset of annotators whose annotated instances are sufficient to approximate the GT label distribution. Otherwise, supervised learning is hardly possible without explicit prior knowledge about the distributions of GT labels and/or APs. Moreover, we expect that the annotators independently decide on instances' annotations and that their APs are time-invariant. 
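As a concrete illustration of these prerequisites (our own sketch; the sizes and the encoding of missing annotations are assumptions), the following snippet builds the instance matrix \(\mathbf{X}\), one-hot annotator features \(\mathbf{A}\), and a partially observed annotation matrix \(\mathbf{Z}\), and recovers the index sets \(\mathcal{A}_{n}\) of annotators who labeled each instance.

```python
# Minimal sketch (illustrative sizes, not from the paper) of the problem-setting data
# structures: N instances with D features, M annotators with one-hot features, and an
# N x M annotation matrix Z where a sentinel value marks unobserved annotations.
import numpy as np

N, D, M, C = 6, 4, 3, 2          # instances, features, annotators, classes (assumed)
MISSING = -1                      # stands for the "annotation not provided" symbol

rng = np.random.default_rng(0)
X = rng.normal(size=(N, D))                      # instance matrix X
A = np.eye(M)                                    # one-hot annotator features a_m = e_m
Z = rng.integers(1, C + 1, size=(N, M))          # class labels in {1, ..., C}
Z[rng.random(size=(N, M)) < 0.5] = MISSING       # roughly half of the annotations missing

# A_n: indices of the annotators who annotated instance x_n (1-based, as in the text).
A_n = [np.flatnonzero(Z[n] != MISSING) + 1 for n in range(N)]
print(Z)
print(A_n)
```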
**Objectives:** Given these prerequisites, the first objective is to train a GT model, which approximates the optimal GT decision function \(y_{\mathrm{GT}}:\Omega_{X}\rightarrow\Omega_{Y}\) by minimizing the expected loss across all classes: \[y_{\mathrm{GT}}(\mathbf{x})\coloneqq\operatorname*{arg\,min}_{y^{\prime}\in \Omega_{Y}}\left(\mathbb{E}_{y\mid\mathbf{x}}\left[L_{\mathrm{GT}}(y,y^{ \prime})\right]\right). \tag{1}\] Thereby, we define the loss function \(L_{\mathrm{GT}}:\Omega_{Y}\times\Omega_{Y}\rightarrow\{0,1\}\) through the zero-one loss: \[L_{\mathrm{GT}}(y,y^{\prime})\coloneqq\delta(y\neq y^{\prime})\coloneqq \begin{cases}0,\text{ if }y=y^{\prime},\\ 1,\text{ if }y\neq y^{\prime}.\end{cases} \tag{2}\] As a result, an optimal GT model for classification tasks can accurately predict the GT labels of instances. **Proposition 1**.: _Assuming \(L_{\mathrm{GT}}\) to be the zero-one loss in Eq 2, the Bayes optimal prediction for Eq. 1 is given by:_ \[y_{\mathrm{GT}}(\mathbf{x})=\operatorname*{arg\,max}_{y^{\prime}\in\Omega_{Y}} \left(\Pr(y^{\prime}\mid\mathbf{x})\right). \tag{3}\] Proof.: Minimizing the loss in Eq. 1 results in the following Bayes optimal prediction: \[y_{\mathrm{GT}}(\mathbf{x}) =\operatorname*{arg\,min}_{y^{\prime}\in\Omega_{Y}}\left(\mathbb{E} _{y|\mathbf{x}}\left[\delta(y\neq y^{\prime})\right]\right)=\operatorname*{arg \,min}_{y^{\prime}\in\Omega_{Y}}\left(\sum_{y\in\Omega_{Y}}\Pr(y\mid\mathbf{x })\delta(y\neq y^{\prime})\right)\] \[=\operatorname*{arg\,min}_{y^{\prime}\in\Omega_{Y}}\left(\sum_{y \in\Omega_{Y}\setminus\{y^{\prime}\}}\Pr(y\mid\mathbf{x})\right)= \operatorname*{arg\,min}_{y^{\prime}\in\Omega_{Y}}\left(1-\Pr(y^{\prime}\mid \mathbf{x})\right)=\operatorname*{arg\,max}_{y^{\prime}\in\Omega_{Y}}\left( \Pr(y^{\prime}\mid\mathbf{x})\right).\] When learning from multiple annotators, the APs are further quantities of interest. Therefore, the second objective is to train an AP model, which approximates the optimal AP decision function \(y_{\mathrm{AP}}:\Omega_{X}\times\Omega_{A}\to\{0,1\}\) by minimizing the following expected loss: \[y_{\mathrm{AP}}(\mathbf{x},\mathbf{a})\coloneqq\operatorname*{arg\,min}_{y^{ \prime}\in\{0,1\}}\left(\mathbb{E}_{y|\mathbf{x}}\mathbb{E}_{z|\mathbf{x}, \mathbf{a},y}\left[L_{\mathrm{AP}}\left(y^{\prime},L_{\mathrm{GT}}\left(y,z \right)\right)\right]\right). \tag{4}\] Defining \(L_{\mathrm{AP}}\) and \(L_{\mathrm{GT}}\) as zero-one loss, an optimal AP model for classification tasks can accurately predict the zero-one loss of annotator's class labels, i.e., whether an annotator \(\mathbf{a}\) provides a false, i.e., \(y_{\mathrm{AP}}(\mathbf{x},\mathbf{a})=1\), or correct, i.e., \(y_{\mathrm{AP}}(\mathbf{x},\mathbf{a})=0\), class label for an instance \(\mathbf{x}\). **Proposition 2**.: _Assuming both \(L_{\mathrm{AP}}\) and \(L_{\mathrm{GT}}\) to be the zero-one loss, as defined in Eq. 2, the Bayes optimal prediction for Eq. 4 is given by:_ \[y_{\mathrm{AP}}(\mathbf{x},\mathbf{a})=\delta\left(\sum_{y\in\Omega_{Y}}\Pr(y \mid\mathbf{x})\Pr(y\mid\mathbf{x},\mathbf{a},y)<0.5\right). \tag{5}\] Proof.: Minimizing the loss in Eq. 
4 results in the following Bayes optimal prediction: \[y_{\mathrm{AP}}(\mathbf{x},\mathbf{a}) =\operatorname*{arg\,min}_{y^{\prime}\in\{0,1\}}\left(\mathbb{E} _{y|\mathbf{x}}\mathbb{E}_{z|\mathbf{x},\mathbf{a},y}\left[\delta\left(y^{ \prime}\neq\delta\left(y\neq z\right)\right)\right]\right)\] \[=\operatorname*{arg\,min}_{y^{\prime}\in\{0,1\}}\left(\sum_{y\in \Omega_{Y}}\Pr(y\mid\mathbf{x})\sum_{z\in\Omega_{Y}}\Pr(z\mid\mathbf{x}, \mathbf{a},y)\delta(y^{\prime}\neq\delta(y\neq z))\right)\] \[=\operatorname*{arg\,min}_{y^{\prime}\in\{0,1\}}\left(\sum_{y\in \Omega_{Y}}\Pr(y\mid\mathbf{x})\left(\sum_{z\in\Omega_{Y}\setminus\{y\}}\Pr( z\mid\mathbf{x},\mathbf{a},y)\delta(y^{\prime}\neq 1)+\Pr(y\mid\mathbf{x},\mathbf{a},y) \delta(y^{\prime}\neq 0)\right)\right)\] \[=\operatorname*{arg\,min}_{y^{\prime}\in\{0,1\}}\left(\sum_{y\in \Omega_{Y}}\Pr(y\mid\mathbf{x})\Big{(}(1-\Pr(y\mid\mathbf{x},\mathbf{a},y)) \delta(y^{\prime}\neq 1)+\Pr(y\mid\mathbf{x},\mathbf{a},y)\delta(y^{\prime}\neq 0) \Big{)}\right)\] \[=\delta\left(\sum_{y\in\Omega_{Y}}\Pr(y\mid\mathbf{x})\Pr(y\mid \mathbf{x},\mathbf{a},y)<\sum_{y\in\Omega_{Y}}\Pr(y\mid\mathbf{x})\left(1-\Pr( y\mid\mathbf{x},\mathbf{a},y)\right)\right)\] \[=\delta\left(\sum_{y\in\Omega_{Y}}\Pr(y\mid\mathbf{x})\Pr(y\mid \mathbf{x},\mathbf{a},y)<1-\sum_{y\in\Omega_{Y}}\Pr(y\mid\mathbf{x})\Pr(y \mid\mathbf{x},\mathbf{a},y)\right)\] \[=\delta\left(\sum_{y\in\Omega_{Y}}\Pr(y\mid\mathbf{x})\Pr(y\mid \mathbf{x},\mathbf{a},y)<0.5\right).\] Related Work This section discusses existing multi-annotator supervised learning techniques targeting our problem setting of Section 2. Since we focus on the AP next to the GT estimation, we restrict our discussion to techniques capable of estimating both target types. In this context, we analyze related research regarding three aspects, i.e., GT models, AP models, and algorithms for training these models. **Ground truth model:** The first multi-annotator supervised learning techniques employed logistic regression models (Raykar et al., 2010; Kajino et al., 2012; Rodrigues et al., 2013; Yan et al., 2014) for classification. Later, different kernel-based variants of GT models, e.g., Gaussian processes, were developed (Rodrigues et al., 2014; Long et al., 2016; Gil-Gonzalez et al., 2021). Rodrigues et al. (2017) focused on documents and extended topic models to the multi-annotator setting. More recently, several techniques were proposed to train DNNs for large-scale and especially image classification tasks with noisy annotations (Albarqouni et al., 2016; Guan et al., 2018; Khetan et al., 2018; Rodrigues & Pereira, 2018; Yang et al., 2018; Tanno et al., 2019; Cao et al., 2019; Platanios et al., 2020; Zhang et al., 2020; Gil-Gonzalez et al., 2021; Ruhling Cachay et al., 2021; Chu et al., 2021; Li et al., 2022; Wei et al., 2022; Gao et al., 2022). MaDL follows this line of work and also employs a (D)NN as the GT model. **Annotator performance model:** An AP model is typically seen as an auxiliary part of the GT model since it provides AP estimates for increasing the GT model's performance. In this article, we reframe an AP model's use in a more general context because accurately assessing APs can be crucial in improving several applications, e.g., human-in-the-loop processes (Herde et al., 2021) or knowledge tracing (Piech et al., 2015). For this reason, we analyze existing AP models regarding six properties, which we identified as relevant while reviewing literature about multi-annotator supervised learning. 
* **Class-dependent annotator performance:** The simplest AP representation is an overall accuracy value per annotator. On the one hand, AP models estimating such accuracy values have low complexity and thus do not overfit (Rodrigues et al., 2013; Long et al., 2016). On the other hand, they may be overly general and cannot assess APs on more granular levels. Therefore, many other AP models assume a dependency between APs and instances' GT labels. Class-dependent AP models typically estimate confusion matrices (Raykar et al., 2010; Rodrigues et al., 2014; 2017; Khetan et al., 2018; Tanno et al., 2019; Platanios et al., 2020; Gao et al., 2022; Li et al., 2022), which indicate annotator-specific probabilities of mistaking one class for another, e.g., recognizing a jaguar as a leopard. Alternatively, weights of annotation aggregation functions (Cao et al., 2019; Ruhling Cachay et al., 2021) or noise-adaption layers (Rodrigues & Pereira, 2018; Chu et al., 2021; Wei et al., 2022) can be interpreted as non-probabilistic versions of confusion matrices. MaDL estimates probabilistic confusion matrices or less complex approximations, e.g., the elements on their diagonals. * **Instance-dependent annotator performance:** In many real-world applications, APs are additionally instance-dependent (Yan et al., 2014) because instances of the same class can strongly differ in their feature values. For example, recognizing animals in blurry images is more difficult than in high-resolution images. Hence, several AP models estimate the probability of obtaining a correct annotation as a function of instances and annotators (Kajino et al., 2012; Yan et al., 2014; Guan et al., 2018; Yang et al., 2018; Gil-Gonzalez et al., 2021; Gil-Gonzalez et al., 2021). Combining instance- and class-dependent APs results in the most complex AP models, which estimate a confusion matrix per instance-annotator pair (Platanios et al., 2020; Zhang et al., 2020; Ruhling Cachay et al., 2021; Chu et al., 2021; Gao et al., 2022; Li et al., 2022). MaDL also employs an AP model of this type. However, it optionally allows dropping the instance and class dependency, which can benefit classification tasks where each annotator provides only a few annotations. * **Annotator correlations:** Although most approaches assume that annotators do not collaborate, they can still have correlations regarding their annotation patterns, e.g., by sharing statistically correlated error patterns (Chu et al., 2021). Gil-Gonzalez et al. (2021) proposed a kernel-based approach where a real-valued matrix quantifies such correlations for all pairs of annotators. Inspired by weak supervision, Cao et al. (2019) and Ruhling Cachay et al. (2021) employ an aggregation function taking all annotations per instance as input to model annotator correlations. Gil-Gonzalez et al. (2021) introduce a regularized chained DNN whose weights encode correlations. Wei et al. (2022) jointly model the annotations of all annotators as outputs and thus take account of potential correlated mistakes. Chu et al. (2021) consider common annotation noise through a noise adaptation layer shared across annotators. Moreover, similar to our MaDL framework, they learn embeddings of annotators. Going beyond, MaDL exploits these embeddings to determine annotator correlations. 
**(P4) Robustness to spamming annotators:** Especially on crowdsourcing platforms, there have been several reports of workers spamming annotations (Vuurens et al., 2011), e.g., by randomly guessing or permanently providing the same annotation. Such spamming annotators can strongly harm the learning process. As a result, multi-annotator supervised learning techniques are ideally robust against these types of annotation noise. Cao et al. (2019) employ an information-theoretic approach to separate expert annotators from possibly correlated spamming annotators. Ruhling Cachay et al. (2021) empirically demonstrated that their weak-supervised learning technique is robust to large numbers of randomly guessing annotators. MaDL ensures this robustness by training via a weighted likelihood function, assigning high weights to independent annotators whose annotation patterns have no or only slight statistical correlations to the patterns of other annotators. **(P5) Annotator prior information:** On crowdsourcing platforms, requesters may acquire prior annotator information (Daniel et al., 2018), e.g., through surveys, annotation quality tests, or publicly available profiles. Several existing AP models leverage such information to improve learning. Thereby, conjugate prior probability distributions, e.g., Dirichlet distributions, represent a straightforward way of including prior estimates of class-dependent accuracies (Raykar et al., 2010; Albarqouni et al., 2016; Rodrigues et al., 2017). Other approaches (Platanios et al., 2020; Chu et al., 2021), including our MaDL framework, do not directly expect prior accuracy estimates but work with all types of prior information represented as vectors of annotator features. **(P6) Inductive learning of annotator performance:** Accurate AP estimates can be beneficial in various applications, e.g., guiding an active learning strategy to select accurate annotators (Yang et al., 2018). For this purpose, it is necessary that a multi-annotator supervised learning technique can inductively infer APs for non-annotated instances. Moreover, an annotation process is often a dynamic system where annotators leave and enter. Hence, it is highly interesting to inductively estimate the performances of newly entered annotators, e.g., through annotator features as used by Platanios et al. 2020 and MaDL. **Training:** Several multi-annotator supervised learning techniques employ the _expectation-maximization_ (EM) algorithm for training (Raykar et al., 2010; Rodrigues et al., 2013; Yan et al., 2014; Long et al., 2016; Albarqouni et al., 2016; Guan et al., 2018; Khetan et al., 2018; Yang et al., 2018; Platanios et al., 2020). GT labels are modeled as latent variables and estimated during the E step, while the GT and AP models' parameters are optimized during the M step. The exact optimization in the M step depends on the underlying models. Typically, a variant of _gradient descent_ (GD), e.g., quasi-Newton methods, is employed, or a closed-form solution exists, e.g., for AP models with instance-independent AP estimates. Other approaches take a Bayesian view of the models' parameters and therefore resort to _expectation propagation_ (EP) (Rodrigues et al., 2014; Long et al., 2016) or _variational inference_ (VI) (Rodrigues et al., 2017). As approximate inference methods are computationally expensive and may lead to suboptimal results, several end-to-end training algorithms have been proposed. Gil-Gonzalez et al. 
(2021) introduced a localized kernel alignment-based relevance analysis that optimizes via GD. Through a regularization term, penalizing differences between GT and AP model parameters, Kajino et al. formulated a convex loss function for logistic regression models. Rodrigues & Pereira (2018), Gil-Gonzalez et al. (2021), and Wei et al. (2022) jointly train the GT and AP models by combining them into a single DNN with noise adaption layers. Chu et al. (2021) follow a similar approach with two types of noise adaption layers: one shared across annotators and one individual for each annotator. Gil-Gonzalez et al. (2021) employ a regularized chained DNN to estimate GT labels and AP performances jointly. In favor of probabilistic AP estimates, Tanno et al. (2019), Zhang et al. (2020), Li et al. (2022), and MaDL avoid noise adaption layers but employ loss functions suited for end-to-end learning. Cao et al. (2019) and Ruhling Cachay et al. (2021) jointly learn an aggregation function in combination with the AP and GT models. Table 1 summarizes and completes the aforementioned discussion by categorizing multi-annotator supervised learning techniques according to their GT model, AP model, and training algorithm. Thereby, the AP model is characterized by the six previously discussed properties (P1-P6). We assign \(\blackcheck\) if a property is supported, \(\blackcheck\) if not supported, and \(\blackblackdiamond\) if partially supported. More precisely, \(\black an instance's features, and the latent GT label. A function \(\mathbf{P}:\Omega_{X}\times\Omega_{A}\rightarrow[0,1]^{C\times C}\) outputting a row-wise normalized confusion matrix per instance-annotator pair can capture these dependencies. The probability that an annotator \(\mathbf{a}\) annotates an instance \(\mathbf{x}\) of class \(y=c\) with the annotation \(z=k\) can then be modeled through a categorical distribution: \[\Pr(z=k\mid\mathbf{x},\mathbf{a},y=c)\coloneqq\mathrm{Cat}\left(z=k\,\middle| \,\mathbf{P}^{(c,\cdot)}(\mathbf{x},\mathbf{a})\right)\coloneqq\prod_{l=1}^{C }\left(P^{(c,l)}(\mathbf{x},\mathbf{a})\right)^{\delta(l=k)}=P^{(c,k)}(\mathbf{ x},\mathbf{a}), \tag{7}\] where the column vector \(\mathbf{P}^{(c,\cdot)}(\mathbf{x},\mathbf{a})\in\Delta\) corresponds to the \(c\)-th row of the confusion matrix \(\mathbf{P}(\mathbf{x},\mathbf{a})\). ### Model Architectures Now, we introduce how MaDL's GT and AP models are designed to approximate the functions of true class-membership probabilities \(\mathbf{p}\) and true confusion matrices \(\mathbf{P}\) for the respective instances and annotators. Fig. 3 illustrates the architecture of the GT (purple) and AP (green) models within our MaDL framework. Solid arrows indicate mandatory components, while dashed arrows express optional ones. The GT model with parameters \(\mathbf{\theta}\) is a (D)NN (cf. 1 in Fig. 3), which takes an instance \(\mathbf{x}\) as input to approximate its true class-membership probabilities \(\mathbf{p}(\mathbf{x})\) via \(\mathbf{\hat{p}_{\theta}}(\mathbf{x})\). We define its decision function in analogy to the Bayes optimal prediction in Eq. 3 through \[\hat{y}_{\mathbf{\theta}}(\mathbf{x})\coloneqq\operatorname*{arg\,max}_{y\in \Omega_{Y}}\left(\hat{p}_{\mathbf{\theta}}^{(y)}(\mathbf{x})\right). \tag{8}\] Figure 3: Architectures of MaDL’s GT and AP models. Figure 2: Probabilistic graphical model of MaDL. The architecture of the AP model with parameters \(\mathbf{\omega}\) comprises mandatory and optional components. 
We start by describing its most general form, which consists of three (D)NNs and estimates annotator-, class-, and instance-dependent APs. Annotator features \(\mathbf{a}\) are propagated through a first (D)NN (cf. \(\mathbf{\updownarrow}\) in Fig. 3) to learn an annotator embedding \(\widetilde{\mathbf{a}}\in\mathbb{R}^{R},R\in\mathbb{N}_{\geq 1}\). During training, we will use such embeddings for quantifying correlations between annotators. Analogously, we propagate raw instance features \(\mathbf{x}\) or a representation learned by the GT model's hidden layers through a second (D)NN (cf. \(\mathbf{\updownarrow}\) in Fig. 3) for learning an instance embedding \(\widetilde{\mathbf{x}}\in\mathbb{R}^{Q},Q\in\mathbb{N}_{\geq 1}\). Subsequently, instance and annotator embeddings \(\widetilde{\mathbf{x}}\) and \(\widetilde{\mathbf{a}}\) are combined through a third and final (D)NN (cf. \(\mathbf{\updownarrow}\) in Fig. 3) for approximating the true confusion matrix \(\mathbf{P}(\mathbf{x},\mathbf{a})\) via \(\hat{\mathbf{P}}_{\mathbf{\omega}}(\mathbf{x},\mathbf{a})\). Various architectures for combining embeddings have already been proposed in the literature (Fiedler, 2021). We adopt a solution from recommender systems where often latent factors of users and items are combined (Zhang et al., 2019). Concretely, in DNN \(\mathbf{\updownarrow}\), we use an outer product-based layer outputting \(\widetilde{\mathbf{o}}\in\mathbb{R}^{F},F\in\mathbb{N}_{\geq 1}\) to model the interactions between instance and annotator embeddings (Qu et al., 2016). The concatenation of \(\widetilde{\mathbf{a}},\widetilde{\mathbf{x}}\), and \(\widetilde{\mathbf{o}}\) is propagated through a residual block (He et al., 2016), whose architecture is visualized in Fig. 4. There, we add only the annotator embedding \(\widetilde{\mathbf{a}}\) to the learned mapping \(\mathbf{h}(\widetilde{\mathbf{a}},\widetilde{\mathbf{x}},\widetilde{\mathbf{ o}})\in\mathbb{R}^{R}\). The motivation behind this modification is that the annotator embeddings, informing about an annotator's individuality, are likely to be the most influential inputs for estimating confusion matrices as APs. Empirical investigations showed that \(R=Q=F=16\) as the embedding size is a robust default. Finally, we define the AP model's decision function in analogy to the Bayes optimal prediction in Eq. 5 through \[\hat{y}_{\mathbf{\theta},\mathbf{\omega}}(\mathbf{x},\mathbf{a})\coloneqq\delta \left(\sum_{c=1}^{C}\hat{p}_{\mathbf{\theta}}^{(c)}(\mathbf{x})\cdot\hat{P}_{\mathbf{ \omega}}^{(c,c)}(\mathbf{x},\mathbf{a})<0.5\right)\coloneqq\delta\left( \underbrace{\hat{p}_{\mathbf{\theta},\mathbf{\omega}}(\mathbf{x},\mathbf{a})}_{\text {correctness probability}}<0.5\right). \tag{9}\] An AP model estimating a confusion matrix per instance-annotator pair can be overly complex if there are only a few annotations per annotator or the number of classes is high (Rodrigues et al., 2013). In such settings, ignoring the instance features as input of the AP model may be beneficial. Alternatively, we can constrain a confusion matrix's degrees of freedom by reducing the number of output neurons of the AP model. For example, we might estimate only the diagonal elements of the confusion matrix and assume that the remaining probability mass per row is uniformly distributed. Further, we can either estimate each diagonal element individually (corresponding to \(C\) output neurons) or approximate them via a single scalar (corresponding to one output neuron). 
### End-to-end Training Given the probabilistic model and accompanying architectures of the GT and AP models, we propose an algorithm for jointly learning their parameters. A widespread method for training probabilistic models is to maximize the likelihood of the observed data with respect to the model parameters. Assuming that the joint distributions of annotations \(\mathbf{Z}\) are conditionally independent for given instances \(\mathbf{X}\), we can specify the likelihood function as follows: \[\Pr(\mathbf{Z}\mid\mathbf{X},\mathbf{A};\mathbf{\theta},\mathbf{\omega})=\prod_{n=1 }^{N}\Pr(\mathbf{z}_{n}\mid\mathbf{x}_{n},\mathbf{A};\mathbf{\theta},\mathbf{\omega}). \tag{10}\] We further expect that the distributions of annotations \(\mathbf{z}_{n}\) for a given instance \(\mathbf{x}_{n}\) are conditionally independent. Thus, we can simplify the likelihood function: \[\Pr(\mathbf{Z}\mid\mathbf{X},\mathbf{A};\mathbf{\theta},\mathbf{\omega})=\prod_{n=1 }^{N}\prod_{m\in\mathcal{A}_{n}}\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m}; \mathbf{\theta},\mathbf{\omega}). \tag{11}\] Figure 4: MaDL’s residual block combining annotator and instance embedding. Leveraging our probabilistic model in Fig. 2, we can express the probability of obtaining a certain annotation as an expectation with respect to an instance's (unknown) GT class label: \[\Pr(\mathbf{Z}\mid\mathbf{X},\mathbf{A};\mathbf{\theta},\mathbf{\omega}) =\prod_{n=1}^{N}\prod_{m\in\mathcal{A}_{n}}\mathbb{E}_{y_{n}|_{ \mathbf{x}_{n}};\mathbf{\theta}}\left[\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m},y _{n};\mathbf{\omega})\right] \tag{12}\] \[=\prod_{n=1}^{N}\prod_{m\in\mathcal{A}_{n}}\sum_{y_{n}=1}^{C}\Pr (y_{n}\mid\mathbf{x}_{n};\mathbf{\theta})\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{ m},y_{n};\mathbf{\omega})\] (13) \[=\prod_{n=1}^{N}\prod_{m\in\mathcal{A}_{n}}\mathbf{e}_{z_{nm}}^{ \mathrm{T}}\underbrace{\hat{\mathbf{P}}_{\mathbf{\omega}}^{\mathrm{T}}(\mathbf{a} _{m},\mathbf{x}_{n})\hat{\mathbf{p}}_{\mathbf{\theta}}(\mathbf{x}_{n})}_{\text{ annotation probabilities}}. \tag{14}\] Taking the logarithm of this likelihood function and converting the maximization into a minimization problem, we get \[L_{\mathbf{X},\mathbf{A},\mathbf{Z}}(\mathbf{\theta},\mathbf{\omega}):=-\sum_{n=1}^{N }\sum_{m\in\mathcal{A}_{m}}\ln\left(\mathbf{e}_{z_{nm}}^{\mathrm{T}}\hat{ \mathbf{P}}_{\mathbf{\omega}}^{\mathrm{T}}(\mathbf{a}_{m},\mathbf{x}_{n})\hat{ \mathbf{p}}_{\mathbf{\theta}}(\mathbf{x}_{n})\right), \tag{15}\] as cross-entropy loss function for learning annotation probabilities by combining the outputs of the GT and AP models (cf. blue components in Fig 3). Yet, directly employing this loss function for learning may result in poor results for two reasons. **Initialization:** Reason number one has been noted by Tanno et al. (2019), who showed that such a loss function cannot ensure the separation of the AP and GT label distributions. 
This is because infinitely many combinations of class-membership probabilities and confusion matrices perfectly comply with the true annotation probabilities, e.g., by swapping the rows of the confusion matrix as the following example shows: \[\underbrace{\mathbf{P}^{\mathrm{T}}(\mathbf{a}_{m},\mathbf{x}_{n})\mathbf{p}(\mathbf{x}_{n})}_{\text{true probabilities}}=\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\begin{bmatrix}1\\ 0\end{bmatrix}=\begin{bmatrix}1\\ 0\end{bmatrix}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}\begin{bmatrix}0\\ 1\end{bmatrix}=\underbrace{\hat{\mathbf{P}}_{\mathbf{\omega}}^{\mathrm{T}}(\mathbf{a}_{m},\mathbf{x}_{n})\hat{\mathbf{p}}_{\mathbf{\theta}}(\mathbf{x}_{n})}_{\text{predicted probabilities}}. \tag{16}\] Possible approaches aim at resolving this issue by favoring certain combinations, e.g., diagonally dominant confusion matrices. Typically, one can achieve this via regularization (Tanno et al., 2019; Zhang et al., 2020; Li et al., 2022) and/or suitable initialization of the AP model's parameters (Rodrigues and Pereira, 2018; Wei et al., 2022). We rely on the latter approach because it permits encoding prior knowledge about APs. Concretely, we approximate an initial confusion matrix for any instance-annotator pair \((\mathbf{x}_{n},\mathbf{a}_{m})\) through \[\hat{\mathbf{P}}_{\mathbf{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\coloneqq\begin{bmatrix}\texttt{softmax}((\mathbf{v}^{\mathrm{T}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{W}+\mathbf{B})^{(1,:)})\\ \vdots\\ \texttt{softmax}((\mathbf{v}^{\mathrm{T}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{W}+\mathbf{B})^{(C,:)})\end{bmatrix}\approx\eta\mathbf{I}_{C}+\frac{(1-\eta)}{C-1}\left(\mathbf{1}_{C}-\mathbf{I}_{C}\right), \tag{17}\] where \(\mathbf{I}_{C}\in\mathbb{R}^{C\times C}\) denotes an identity matrix, \(\mathbf{1}_{C}\in\mathbb{R}^{C\times C}\) an all-one matrix, and \(\eta\in(0,1)\) the prior probability of obtaining a correct annotation. For example, in a binary classification problem, the initial confusion matrix would approximately take the following values: \[\hat{\mathbf{P}}_{\mathbf{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\approx\begin{bmatrix}\eta&1-\eta\\ 1-\eta&\eta\end{bmatrix}. \tag{18}\] The outputs of the softmax functions represent the confusion matrix's rows. Provided that the initial AP model's last layer's weights \(\mathbf{W}\in\mathbb{R}^{H\times C\times C},H\in\mathbb{N}_{>0}\) satisfy \(\mathbf{v}^{\mathrm{T}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{W}\approx\mathbf{0}_{C}\in\mathbb{R}^{C\times C}\) for the hidden representation \(\mathbf{v}(\mathbf{x}_{n},\mathbf{a}_{m})\in\mathbb{R}^{H}\) of each instance-annotator pair, we approximate Eq. 17 by initializing the biases \(\mathbf{B}\in\mathbb{R}^{C\times C}\) of our AP model's output layer via \[\mathbf{B}\coloneqq\ln\left(\frac{\eta\cdot(C-1)}{1-\eta}\right)\mathbf{I}_{C}. \tag{19}\] By default, we set \(\eta=0.8\) to assume trustworthy annotators a priori. Accordingly, initial class-membership probability estimates are close to the annotation probability estimates.

**Annotator weights:** Reason number two has been noted by Cao et al. (2019), who proved that maximum-likelihood solutions fail when there are strong annotator correlations, i.e., annotators with significant statistical correlations in their annotation patterns. To address this issue, we explore the annotator correlations in the latent space of the learned annotator embeddings. For this purpose, we assume that annotators with similar embeddings share correlated annotation patterns. Recalling our example in Fig.
1, this assumption implies that annotators of the same latent group are located near each other. The left plot of Fig. 5 visualizes this assumption for a two-dimensional embedding space, where the eight annotators are arranged into three clusters as proxies of the three latent annotator groups. We aim to extend our loss function so that its evaluation is independent of the annotator groups' cardinalities. For our example, we view the three annotator groups as three independent annotators of equal importance. For this purpose, we extend the original likelihood function in Eq. 11 by annotator weights, such that we obtain the weighted likelihood function: \[\Pr(\mathbf{Z}\mid\mathbf{X},\mathbf{A};\boldsymbol{\theta},\boldsymbol{\omega},\mathbf{w})=\prod_{n=1}^{N}\prod_{m\in\mathcal{A}_{n}}\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m};\boldsymbol{\theta},\boldsymbol{\omega})^{w(\mathbf{a}_{m})}, \tag{20}\] where \(\mathbf{w}(\mathbf{A})\coloneqq(w(\mathbf{a}_{1}),\dots,w(\mathbf{a}_{M}))^{\mathrm{T}}\in\mathbb{R}_{\geq 0}^{M}\) denotes a vector of non-negative annotator weights. From a probabilistic perspective, we can interpret such a weight \(w(\mathbf{a}_{m})\) as the effective number of observations (or copies) per annotation of annotator \(\mathbf{a}_{m}\). Interpreting the annotators \(\mathbf{A}\) as samples from a continuous latent space, we define an annotator weight \(w(\mathbf{a}_{m})\) to be inversely proportional to the probability density of annotator \(\mathbf{a}_{m}\): \[w(\mathbf{a}_{m})\coloneqq\frac{\Pr(\mathbf{a}_{m}\mid\mathbf{A})^{-1}}{Z}\text{ with }Z\coloneqq M^{-1}\sum_{m=1}^{M}\Pr(\mathbf{a}_{m}\mid\mathbf{A})^{-1}\text{ provided that }\Pr(\mathbf{a}_{1}\mid\mathbf{A}),\dots,\Pr(\mathbf{a}_{M}\mid\mathbf{A})>0. \tag{21}\] The normalization term \(Z\in\mathbb{R}_{>0}\) ensures that the number of effective annotations remains equal to the number of annotators, i.e., \(\sum_{m=1}^{M}w(\mathbf{a}_{m})=M\). On the right side of our example in Fig. 5, we expect that an annotator's probability density is approximately proportional to the cardinality of the group to which the annotator belongs. As a result, we assign high (low) weights to annotators belonging to small (large) groups. Inspecting the exemplary annotator weights and adding the weights per annotator group, we observe that each group provides the same number of effective annotations, i.e., \(\sfrac{8}{3}\). More generally, we support our definition of the annotator weights by the following theorem.

Figure 5: Visualization of annotator embeddings (left) accompanied by an exemplary calculation of annotator probability densities and annotator weights (right).

**Theorem 1.** _Let there be \(G\in\{1,\ldots,M\}\) non-empty, disjoint groups of annotators \(\mathcal{A}\coloneqq\mathcal{A}_{1}\cup\cdots\cup\mathcal{A}_{G}\).
Further assume, the annotators within each group \(g\in\{1,\ldots,G\}\) share identical annotation patterns for the observed instances, i.e.,_ \[\forall n\in\{1,\ldots,N\},\forall\mathbf{a}_{m},\mathbf{a}_{l}\in\mathcal{A}_ {g}:\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m})=\Pr(z_{nm}\mid\mathbf{x}_{n}, \mathbf{a}_{l})\qquad(\dagger),\] _and the annotators' probability densities are proportional to their respective groups' cardinalities, i.e.,_ \[\forall\mathbf{a}_{m}\in\mathcal{A}:\Pr(\mathbf{a}_{m}\mid\mathbf{A})\propto \sum_{g=1}^{G}\delta(\mathbf{a}_{m}\in\mathcal{A}_{g})|\mathcal{A}_{g}|\qquad (\star).\] _Then, the true weighted log-likelihood function for all \(M\) annotators reduces to the log-likelihood for \(G\) annotators:_ \[\sum_{n=1}^{N}\sum_{m=1}^{M}w(\mathbf{a}_{m})\ln\left(\Pr(z_{nm}\mid\mathbf{x }_{n},\mathbf{a}_{m})\right)\propto\sum_{n=1}^{N}\sum_{g=1}^{G}\ln\left(\Pr(z _{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m_{g}})\right),\] _where \(\mathbf{a}_{m_{g}}\in\mathcal{A}_{g},m_{g}\in\{1,\ldots,M\}\) represents an arbitrary annotator of the \(g\)-th group._ Proof.: Applying assumption \((\star)\) of Theorem 1 to Eq. 21, the weight \(w(\mathbf{a}_{m})\) for an annotator \(\mathbf{a}_{m}\) is given by: \[w(\mathbf{a}_{m})\overset{(\star)}{=}\frac{M}{G}\sum_{g=1}^{G}\frac{\delta( \mathbf{a}_{m}\in\mathcal{A}_{g})}{|\mathcal{A}_{g}|}.\] Accordingly, the sums of the annotator weights are uniformly distributed across the \(G\) groups: \[\sum_{\mathbf{a}_{m}\in\mathcal{A}_{1}}w(\mathbf{a}_{m})=\cdots=\sum_{ \mathbf{a}_{m}\in\mathcal{A}_{G}}w(\mathbf{a}_{m})=\frac{M}{G}\qquad(\diamond).\] Inserting these annotator weights into the weighted log-likelihood function and making use of assumption \((\dagger)\) in Theorem 1, we get \[\sum_{n=1}^{N}\sum_{m=1}^{M}w(\mathbf{a}_{m})\ln\left(\Pr(z_{nm} \mid\mathbf{x}_{n},\mathbf{a}_{m})\right) =\sum_{n=1}^{N}\sum_{g=1}^{G}\sum_{\mathbf{a}_{m}\in\mathcal{A}_{ g}}w(\mathbf{a}_{m})\ln\left(\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m})\right)\] \[\overset{(\dagger)}{=}\sum_{n=1}^{N}\sum_{g=1}^{G}\left(\sum_{ \mathbf{a}_{m}\in\mathcal{A}_{g}}w(\mathbf{a}_{m})\right)\ln\left(\Pr(z_{nm} \mid\mathbf{x}_{n},\mathbf{a}_{m_{g}})\right)\] \[\overset{(\diamond)}{=}\sum_{n=1}^{N}\sum_{g=1}^{G}\frac{M}{G} \ln\left(\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m_{g}})\right)\propto\sum_ {n=1}^{N}\sum_{g=1}^{G}\ln\left(\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m_{g} })\right).\] Intuitively, Theorem 1 confirms that each group \(\mathcal{A}_{g}\), independent of its cardinality \(|\mathcal{A}_{g}|\), equally contributes to the weighted log-likelihood function. This way, we avoid any bias toward a large group of highly correlated annotators during learning. Typically, the assumptions \((\dagger)\) and \((\star)\) of Theorem 1 do not hold in practice because there are no annotator groups with identical annotation patterns. 
However, we aim to approximately fulfill both by relying on a kernel density estimation, which quantifies similarities between annotator embeddings \(\widetilde{\mathbf{A}}=(\widetilde{\mathbf{a}}_{1},\ldots,\widetilde{\mathbf{a}}_{M})^{\mathrm{T}}\), i.e., degrees of correlations, as the basis for the annotator probability density estimation: \[\Pr\left(\mathbf{a}_{m}\mid\mathbf{A}\right)\approx\Pr\left(\widetilde{\mathbf{a}}_{m}\mid\widetilde{\mathbf{A}},k_{\gamma}\right)\propto\sum_{l=1}^{M}k_{\gamma}\left(\texttt{no\_grad}\left(\widetilde{\mathbf{a}}_{l}\right),\texttt{no\_grad}\left(\widetilde{\mathbf{a}}_{m}\right)\right), \tag{22}\] where \(k_{\gamma}:\mathbb{R}^{R\times R}\to\mathbb{R}_{\geq 0}\) denotes a kernel function and \(\gamma\in\mathbb{R}_{>0}\) its kernel scale. The expression \(\texttt{no\_grad}(\widetilde{\mathbf{a}}_{m})\in\mathbb{R}^{R}\) indicates that no gradient regarding the learned annotator embedding \(\widetilde{\mathbf{a}}_{m}\) is computed, which is necessary to decouple the learning of embeddings from computing annotator weights. Otherwise, we would learn annotator embeddings optimizing the annotator weights instead of reflecting the annotation patterns. Although many kernel functions are conceivable, we will focus on the popular radial basis function: \[k_{\gamma}(\texttt{no\_grad}\left(\widetilde{\mathbf{a}}_{m}\right),\texttt{no\_grad}\left(\widetilde{\mathbf{a}}_{l}\right))\coloneqq\exp\left(-\gamma\left\|\texttt{no\_grad}\left(\widetilde{\mathbf{a}}_{m}\right)-\texttt{no\_grad}\left(\widetilde{\mathbf{a}}_{l}\right)\right\|_{2}^{2}\right), \tag{23}\] with \(||\cdot||_{2}\) as the Euclidean norm. Typically, the kernel scale \(\gamma\) needs to fit the observed data, i.e., annotator embeddings in our case. Therefore, defining it a priori is challenging, such that we treat \(\gamma\) as a learnable parameter subject to a prior distribution. Concretely, we employ the gamma distribution for this purpose: \[\Pr\left(\gamma\mid\alpha,\beta\right)\coloneqq\mathrm{Gam}\left(\gamma\mid\alpha,\beta\right)\coloneqq\frac{\beta^{\alpha}}{\Gamma(\alpha)}\gamma^{\alpha-1}\exp\left(-\beta\gamma\right), \tag{24}\] where \(\Gamma\) is the gamma function and \(\alpha\in\mathbb{R}_{>1},\beta\in\mathbb{R}_{>0}\) are hyperparameters. Based on experiments, we set \(\alpha=1.25,\beta=0.25\) such that the mode \(\nicefrac{{(\alpha-1)}}{{\beta}}=1\) defines the initial value of \(\gamma\) before optimization and the variance \(\nicefrac{{\alpha}}{{\beta^{2}}}=20\) is high in favor of flexible learning. As a weighted loss function, we finally get \[L_{\mathbf{X},\mathbf{A},\mathbf{Z},\alpha,\beta}(\boldsymbol{\theta},\boldsymbol{\omega},\gamma)\coloneqq-\frac{1}{|\mathbf{Z}|}\sum_{n=1}^{N}\sum_{m\in\mathcal{A}_{n}}\left(\hat{w}_{\gamma}(\mathbf{a}_{m})\ln\left(\mathbf{e}_{z_{nm}}^{\mathrm{T}}\mathbf{\hat{P}}_{\boldsymbol{\omega}}^{\mathrm{T}}(\mathbf{a}_{m},\mathbf{x}_{n})\mathbf{\hat{p}}_{\boldsymbol{\theta}}(\mathbf{x}_{n})\right)\right)-\ln\left(\mathrm{Gam}\left(\gamma\mid\alpha,\beta\right)\right), \tag{25}\] \[|\mathbf{Z}|\coloneqq\sum_{n=1}^{N}\sum_{m=1}^{M}\delta(z_{nm}\in\Omega_{Y}), \tag{26}\] where \(\hat{w}_{\gamma}(\mathbf{a}_{m})\) indicates that the annotator weights \(w(\mathbf{a}_{m})\) are estimated by learning the kernel scale \(\gamma\). The number of annotations \(|\mathbf{Z}|\) is a normalization factor, which accounts for potentially unevenly distributed annotations across mini-batches when using stochastic GD.
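For illustration, a minimal PyTorch sketch of the annotator weights (Eqs. 21-23) and the weighted loss (Eq. 25) could look as follows. It assumes that the class-membership probabilities, confusion matrices, and annotator embeddings have already been computed per observed annotation by the GT and AP models, that `gamma` is a learnable scalar tensor (e.g., an `nn.Parameter`), and all function and variable names are our own rather than those of the published code base.

```python
import torch

def annotator_weights(annotator_emb, gamma):
    """Weights inversely proportional to an RBF kernel density over the annotator embeddings (Eqs. 21-23)."""
    emb = annotator_emb.detach()                        # no_grad: decouple weight computation from embedding learning
    sq_dists = torch.cdist(emb, emb).pow(2)             # pairwise squared Euclidean distances
    density = torch.exp(-gamma * sq_dists).sum(dim=1)   # unnormalized kernel density per annotator
    weights = 1.0 / density                             # inverse density ...
    return weights * weights.numel() / weights.sum()    # ... normalized such that the weights sum to M

def weighted_loss(class_probs, conf_mats, annotations, annotator_ids, weights, gamma, alpha=1.25, beta=0.25):
    """Weighted negative log-likelihood of the observed annotations plus the negative log gamma prior (Eq. 25)."""
    # Annotation probabilities e_z^T P̂^T(x, a) p̂(x) for each observed (instance, annotator) pair.
    annotation_probs = torch.einsum("nc,ncz->nz", class_probs, conf_mats)
    nll = -torch.log(annotation_probs.gather(1, annotations.view(-1, 1)).squeeze(1) + 1e-12)
    data_term = (weights[annotator_ids] * nll).mean()   # mean over the annotations in the mini-batch
    prior_term = -((alpha - 1.0) * torch.log(gamma) - beta * gamma)  # -ln Gam(γ | α, β) up to an additive constant
    return data_term + prior_term
```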
Given the loss function in Eq. 25, we present the complete **end-to-end training algorithm** for MaDL in Algorithm 1. During each training step, we recompute the annotator weights and use them as the basis for the weighted loss function to optimize the AP and GT models' parameters. After training, the optimized model parameters \((\boldsymbol{\theta},\boldsymbol{\omega})\) can be used to make probabilistic predictions, e.g., class-membership probabilities \(\mathbf{\hat{p}}_{\boldsymbol{\theta}}(\mathbf{x})\) (cf. Fig. 3) and annotator confusion matrices \(\mathbf{\hat{P}}_{\boldsymbol{\omega}}(\mathbf{x},\mathbf{a})\) (cf. Fig. 3), or to decide on distinct labels, e.g., the class label \(\hat{y}_{\boldsymbol{\theta}}(\mathbf{x})\) (cf. Eq. 8) and the annotation error \(\hat{y}_{\boldsymbol{\theta},\boldsymbol{\omega}}(\mathbf{x},\mathbf{a})\) (cf. Eq. 9).

```
Algorithm 1: End-to-end training of MaDL.
input: instances X, annotators A, annotations Z, number of training epochs E, mini-batch size B,
       initial model parameters (θ, ω), prior annotation accuracy η, gamma distribution parameters (α, β);
 1: initialize biases B of the AP model's output layer using η (cf. Eq. 19);
 2: initialize kernel scale γ := (α − 1)/β;
 3: for epoch e ∈ {1, …, E} do
 4:   for sampled mini-batch (x_{i_1}, …, x_{i_B}) with annotations (z_{i_1}, …, z_{i_B}), {i_1, …, i_B} ⊂ {1, …, N} do
 5:     compute class-membership probabilities p̂_θ(x_b) for each instance x_b in the mini-batch (cf. Fig. 3);
 6:     compute annotator embeddings ã_m and confusion matrices P̂_ω(x_b, a_m) for each observed instance-annotator pair (cf. Fig. 3);
 7:     estimate the annotator weights ŵ_γ(a_m) from the kernel density over the annotator embeddings (cf. Eqs. 21–23);
 8:     compute the weighted loss L_{X,A,Z,α,β}(θ, ω, γ) on the mini-batch (cf. Eq. 25);
 9:     update (θ, ω, γ) via a gradient descent step;
10:   end for
11: end for
output: optimized model parameters (θ, ω);
```

## 5 Experimental Evaluation

This section investigates three RQs regarding the properties P1-P6 (cf. Section 3) of multi-annotator supervised learning. We divide the analysis of each RQ into four parts, which are (1) a takeaway summarizing the key insights, (2) a setup describing the experiments, (3) a qualitative study, and (4) a quantitative study. The qualitative studies intuitively explain our design choices about MaDL, while the quantitative studies compare MaDL's performance to related techniques. Note that we analyze each RQ in the context of a concrete evaluation scenario. Accordingly, the results provide potential indications for an extension to related scenarios. As this section's starting point, we overview the general experimental setup, whose code base is publicly available at [https://www.github.com/ies-research/multi-annotator-deep-learning](https://www.github.com/ies-research/multi-annotator-deep-learning).

### Experimental Setup

We base our experimental setup on the problem setting in Section 2. Accordingly, the goal is to evaluate the predictions of GT and AP models trained via multi-annotator supervised learning techniques. For this purpose, we perform experiments on several datasets with class labels provided by error-prone annotators, with models of varying hyperparameters, and in combination with a collection of different evaluation scores.

**Datasets:** We conduct experiments for the tabular and image datasets listed in Table 2. labelme and music are actual crowdsourcing datasets, while we simulate annotators for the other five datasets.
For the labelme dataset, Rodrigues and Pereira (2018) performed a crowdsourcing study to annotate a subset of 1000 out of 2688 instances of eight different classes as training data. This dataset consists of images, but due to its small training set size, we follow the idea of Rodrigues and Pereira (2018) and transform it into a tabular dataset by utilizing the features of a pretrained VGG-16 (Simonyan and Zisserman, 2015) as inputs. There are class labels obtained from 59 different annotators, and on average, about 2.5 class labels are assigned to an instance. music is another crowdsourcing dataset, where 700 of 1000 audio files are classified into ten music genres by 44 annotators, and on average, about 2.9 class labels are assigned to a file. We use the features extracted by Rodrigues et al. (2013) from the audio files for training and inference. The artificial toy dataset with two classes and two features serves to visualize our design choices about MaDL. We generate this dataset via a Gaussian mixture model. Frey and Slate (1991) published the letter dataset to recognize a pixel display, represented through statistical moments and edge counts, as one of the 26 capital letters in the English alphabet. The datasets fmnist, cifar10, and svhn represent typical image benchmark classification tasks, each with ten classes but different object types to recognize.

**Network Architectures:** Table 2 further lists the base network architectures selected to meet the requirements of the different datasets. These architectures are starting points for designing the GT and AP models, which we adjust according to the respective multi-annotator supervised learning technique. For the tabular datasets, we follow Rodrigues and Pereira (2018) and train a _multilayer perceptron_ (MLP) with a single fully-connected layer of 128 neurons as a hidden layer. The LeNet-5 architecture (LeCun and Cortes, 1998), a simple and popular convolutional neural network, serves as the basis for fmnist as a gray-scale image dataset, while we employ a ResNet-18 (He et al., 2016) for cifar10 and svhn as RGB image datasets. We implement all activation functions in the hidden layers as _rectified linear units_ (ReLU, Glorot et al. 2011).

| **Dataset** | **Annotators** | **Instances** | **Classes** | **Features** | **Base Network Architecture** |
|---|---|---|---|---|---|
| _Tabular datasets_ | | | | | |
| toy | simulated | 500 | 2 | 2 | MLP (Rodrigues and Pereira, 2018) |
| letter (Frey and Slate, 1991) | simulated | 20000 | 26 | 16 | MLP (Rodrigues and Pereira, 2018) |
| labelme (Rodrigues and Pereira, 2018) | real-world | 2688 | 8 | 8192 | MLP (Rodrigues and Pereira, 2018) |
| music (Rodrigues et al., 2013) | real-world | 1000 | 10 | 124 | MLP (Rodrigues and Pereira, 2018) |
| _Image datasets_ | | | | | |
| fmnist (Xiao et al., 2017) | simulated | 60000 | 10 | 28 × 28 | LeNet-5 (LeCun and Cortes, 1998) |
| cifar10 (Krizhevsky, 2009) | simulated | 60000 | 10 | 3 × 32 × 32 | ResNet-18 (He et al., 2016) |
| svhn (Netzer et al., 2011) | simulated | 90000 | 10 | 3 × 32 × 32 | ResNet-18 (He et al., 2016) |

Table 2: Overview of datasets and associated base network architectures.
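As a small, self-contained example of the tabular base architecture described above (the single 128-unit hidden layer and ReLU activations follow the text; everything else, including the function name, is our assumption):

```python
import torch.nn as nn

def make_tabular_gt_model(n_features: int, n_classes: int, n_hidden: int = 128) -> nn.Sequential:
    """MLP with one 128-unit ReLU hidden layer, sketching the GT base architecture for the tabular datasets."""
    return nn.Sequential(
        nn.Linear(n_features, n_hidden),
        nn.ReLU(),
        nn.Linear(n_hidden, n_classes),
        nn.Softmax(dim=-1),  # class-membership probabilities p̂_θ(x)
    )

# For example, make_tabular_gt_model(124, 10) would correspond to the music dataset in Table 2.
```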
**Annotator simulation:** For the five datasets without real-world annotators, we adopt simulation strategies from related work (Yan et al., 2014; Cao et al., 2019; Ruhling Cachay et al., 2021; Wei et al., 2022) and simulate annotators according to the following five types: _Adversarial_: annotators provide false class labels on purpose. In our case, such an annotator provides a false class label with a probability of 0.95. _Randomly guessing_: annotators provide class labels drawn from a uniform categorical distribution. As a result, such an annotator provides a correct class label with a probability of \(\nicefrac{{1}}{{C}}\). _Cluster-specialized_: annotators' performances considerably vary across the clusters determined via the \(k\)-means clustering algorithm. For images, we cluster the latent representations of the Resnet18 pretrained on ImageNet (Russakovsky et al., 2015). In total, there are \(k=10\) clusters. For each annotator, we randomly define five weak and five expert clusters. An annotator provides a correct class label with a probability of 0.95 for an expert cluster and with a probability of 0.05 for a weak cluster. _Common_: annotators are simulated based on the identical clustering employed for the cluster-specialized annotators. However, their APs vary less between the clusters. Concretely, we randomly draw a correctness probability value in the range \(\left[\nicefrac{{1}}{{C}},1\right]\) for each cluster-annotator pair. _Class-specialized_: annotators' performances considerably vary across classes to which instances can belong. For each annotator, we randomly define \(\left\lfloor\nicefrac{{C}}{{2}}\right\rfloor\) weak and \(\left\lceil\nicefrac{{C}}{{2}}\right\rceil\) expert classes. An annotator provides a correct class label with a probability of 0.95 for an expert class and with a probability of 0.05 for a weak class. We simulate annotation mistakes by randomly selecting false class labels. Table 3 lists four annotator sets (blueish rows) with varying numbers of annotators per annotator type (first five columns) and annotation ratios (last column). Each annotator set is associated with a concrete RQ. A copy flag indicates that the annotators in the respective types provide identical annotations. This way, we follow Wei et al. (2022), Cao et al. (2019), and Ruhling Cachay et al. (2021) to simulate strong correlations between annotators. For example, the entry "1 + 11 copies" of the annotator set correlated indicates twelve cluster-specialized annotators, of which one annotator is independent, while the remaining eleven annotators share identical annotation patterns, i.e., they are copies of each other. The simulated annotator correlations are not directly observable because the copied annotators likely annotate different instances. This is because of the annotation ratios, e.g., a ratio of 0.2 indicates that each annotator provides annotations for only 20 % of randomly chosen instances. The annotation ratios are well below 1.0 because, in practice (especially in crowdsourcing applications), it is unrealistic for every annotator to annotate every instance. **Evaluation scores:** Since we are interested in quantitatively assessing GT and AP predictions, we need corresponding evaluation scores. In this context, we interpret the prediction of APs as a binary classification problem with the AP model predicting whether an annotator provides the correct or a false class label for an instance. 
Next to categorical predictions, the GT and AP models typically provide probabilistic outputs, which we examine regarding their quality (Huseljic et al., 2021). We list our evaluation scores in the following, where arrows indicate which scores need to be maximized (\(\uparrow\)) or minimized (\(\downarrow\)):

| **Annotator set** | **Adversarial** | **Common** | **Cluster-specialized** | **Class-specialized** | **Random** | **Annotation Ratio** |
|---|---|---|---|---|---|---|
| independent (RQ1) | 1 | 6 | 2 | 1 | 0 | 0.2 |
| correlated (RQ2) | 11 copies | 6 | 1 + 11 copies | 11 copies | 0 | 0.2 |
| random-correlated (RQ2) | 1 | 6 | 2 | 1 | 90 copies | 0.2 |
| inductive (RQ3) | 10 | 60 | 20 | 10 | 0 | 0.02 |

Table 3: Simulated annotator sets for each RQ.

_Accuracy_: (ACC, \(\uparrow\)) is probably the most popular score for assessing classification performances. For the GT estimates, it describes the fraction of correctly classified instances, whereas, for the AP estimates, it is the fraction of (potential) annotations correctly identified as false or correct: \[\text{GT-ACC}(\mathbf{X},\mathbf{y},\hat{y}_{\boldsymbol{\theta}})\coloneqq\frac{1}{N}\sum_{n=1}^{N}\delta\left(y_{n}=\hat{y}_{\boldsymbol{\theta}}(\mathbf{x}_{n})\right), \tag{27}\] \[\text{AP-ACC}(\mathbf{X},\mathbf{y},\mathbf{Z},\hat{y}_{\boldsymbol{\theta},\boldsymbol{\omega}})\coloneqq\frac{1}{|\mathbf{Z}|}\sum_{n=1}^{N}\sum_{m\in\mathcal{A}_{n}}\delta\left(\delta\left(y_{n}\neq z_{nm}\right)=\hat{y}_{\boldsymbol{\theta},\boldsymbol{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\right). \tag{28}\] Maximizing both scores corresponds to the Bayes optimal predictions in Eq. 3 and Eq. 5.

_Balanced accuracy_: (BAL-ACC, \(\uparrow\)) is a variant of ACC designed for imbalanced classification problems (Brodersen et al., 2010). For the GT estimation, the idea is to compute the ACC score for each class of instances separately and then average them. Since our datasets are balanced in their distributions of class labels, we use this evaluation score only for assessing AP estimates. We may be confronted with highly imbalanced binary classification problems per annotator, where a class represents either a false or correct annotation. For example, an adversarial annotator provides predominantly false annotations. Therefore, we extend the definition of BAL-ACC by computing the ACC scores for each annotator-class pair separately and then averaging them.

_Negative log-likelihood_: (NLL, \(\downarrow\)) is not only used as a typical loss function for training (D)NNs but can also be used to assess the quality of probabilistic estimates: \[\text{GT-NLL}(\mathbf{X},\mathbf{y},\hat{\boldsymbol{p}}_{\boldsymbol{\theta}})\coloneqq-\frac{1}{N}\sum_{n=1}^{N}\ln\left(\hat{p}_{\boldsymbol{\theta}}^{(y_{n})}(\mathbf{x}_{n})\right), \tag{29}\] \[\text{AP-NLL}(\mathbf{X},\mathbf{y},\mathbf{Z},\hat{p}_{\boldsymbol{\theta},\boldsymbol{\omega}})\coloneqq\] \[-\frac{1}{|\mathbf{Z}|}\sum_{n=1}^{N}\sum_{m\in\mathcal{A}_{n}}\Big{(}\delta\left(y_{n}=z_{nm}\right)\ln\left(\hat{p}_{\boldsymbol{\theta},\boldsymbol{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\right)+\delta\left(y_{n}\neq z_{nm}\right)\ln\left(1-\hat{p}_{\boldsymbol{\theta},\boldsymbol{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\right)\Big{)}.
\tag{30}\] Moreover, NLL is a proper scoring rule (Ovadia et al., 2019) such that the best score corresponds to a perfect prediction.

_Brier score_: (BS, \(\downarrow\)), proposed by Brier (1950), is another proper scoring rule, which measures the squared error between predicted probability vectors and one-hot encoded target vectors: \[\text{GT-BS}(\mathbf{X},\mathbf{y},\hat{\boldsymbol{p}}_{\boldsymbol{\theta}})\coloneqq\frac{1}{N}\sum_{n=1}^{N}(\mathbf{e}_{y_{n}}-\hat{\boldsymbol{p}}_{\boldsymbol{\theta}}(\mathbf{x}_{n}))^{\text{T}}(\mathbf{e}_{y_{n}}-\hat{\boldsymbol{p}}_{\boldsymbol{\theta}}(\mathbf{x}_{n})), \tag{31}\] \[\text{AP-BS}(\mathbf{X},\mathbf{y},\mathbf{Z},\hat{p}_{\boldsymbol{\theta},\boldsymbol{\omega}})\coloneqq\frac{1}{|\mathbf{Z}|}\sum_{n=1}^{N}\sum_{m\in\mathcal{A}_{n}}\left(\delta\left(y_{n}=z_{nm}\right)-\hat{p}_{\boldsymbol{\theta},\boldsymbol{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\right)^{2}. \tag{32}\] In the literature, there exist many further evaluation scores, particularly for assessing probability calibration (Ovadia et al., 2019). As a comprehensive evaluation of probabilities is beyond this article's scope, we focus on proper scoring rules inducing calibration measures. Accordingly, we have omitted other evaluation scores, such as the expected calibration error (Naeini et al., 2015), which is a non-proper scoring rule.

**Multi-annotator supervised learning techniques:** By default, we train MaDL via the weighted loss function in Eq. 25 using the hyperparameter values from Section 4 and the most general architecture depicted in Fig. 3. In addition to ablations as part of analyzing the three RQs, we present a detailed ablation study on the hyperparameters of MaDL in Appendix A. We evaluate MaDL compared to a subset of the related techniques presented in Section 3. This subset consists of techniques that (1) provide probabilistic GT estimates for each instance, (2) provide probabilistic AP estimates for each instance-annotator pair, and (3) train a (D)NN as the GT model. Moreover, we focus on recent techniques with varying training algorithms and properties P1-P6 (cf. Section 3). As a result, we select _crowd layer_ (CL, Rodrigues and Pereira, 2018), _regularized estimation of annotator confusion_ (REAC, Tanno et al., 2019), _learning from imperfect annotators_ (LIA, Platanios et al., 2020), _common noise adaptation layers_ (CoNAL, Chu et al., 2021), and _union net_ (UNION, Wei et al., 2022). Further, we aggregate annotations through the majority rule as a _lower baseline_ (LB) and use the GT class labels as an _upper baseline_ (UB). We adopt the architectures of MaDL's GT and AP models for both baselines. The GT model then trains via the aggregated annotations (LB) or the GT class labels (UB). The AP model trains using the aggregated annotations (LB) or the GT class labels (UB) to optimize the annotator confusion matrices. Unless explicitly stated, no multi-annotator supervised learning technique can access annotator features containing prior knowledge.

**Experiment:** An experiment's run starts by splitting a dataset into train, validation, and test sets. For music and labelme, these splits are predefined, while for the other datasets, we randomly select 75 % of the samples for training, 5 % for validation, and 20 % for testing. Following Ruhling Cachay et al.
(2021), a small validation set with GT class labels allows a fair comparison by finding suitable hyperparameter values for the optimizer of the respective multi-annotator supervised learning technique. Otherwise, the default hyperparameter values may majorly affect the results. We employ the AdamW (Loshchilov and Hutter, 2019) optimizer, where nine combinations of the learning rates \(\{0.01,0.005,0.001\}\) and weight decays \(\{0.0,0.001,0.0001\}\) are tested. The optimizer's mini-batch size is set to 64. For the datasets music and labelme, we additionally perform experiments with 8 and 16 as mini-batch sizes due to their smaller number of instances and, thus, higher sensitivity to the mini-batch size. The number of training epochs is set to 100 for all techniques except for LIA, which we train for 200 epochs due to its EM algorithm. After training, we select the models with the best validation GT-ACC across the epochs. Each experiment is run five times with different parameter initializations and data splits (except for labelme and music). We report quantitative results as means and standard deviations over the best five runs determined via the validation GT-ACC. ### RQ1: Do class- and instance-dependent modeled APs improve learning? (Properties P1, P2) **Takeaway:** Estimating class- (property P1) and instance-dependent (property P2) APs leads to superior performances of the GT and AP models. This observation is especially true for GT models trained on datasets with real-world annotators whose annotation patterns are unknown. **Setup:** We address RQ1 by evaluating multi-annotator supervised learning techniques with varying AP assumptions. We simulate ten annotators for the datasets without real-world annotators according to the annotator set independent in Table 3. Each simulated annotator provides class labels for \(20\,\mathrm{\char 37}\) of randomly selected training instances. Next to the related multi-annotator supervised learning techniques and the two baselines, we evaluate six variants of MaDL denoted via the scheme MaDL(P1, P2). Property P1 refers to the estimation of potential class-dependent APs. There, we differ between the options class-independent (I), partially (P) class-dependent, and fully (F) class-dependent APs. We implement them by constraining the annotator confusion matrices' degrees of freedom. Concretely, class-independent refers to a confusion matrix approximated by estimating a single scalar, partially class-dependent refers to a confusion matrix approximated by estimating its diagonal elements, and fully class-dependent refers to estimating each matrix element individually. Property P2 indicates whether the APs are estimated as a function of instances (X) or not (\(\overline{\mathrm{X}}\)). Combining the two options of the properties P1 and P2 represents one variant. For example, MaDL(X, F) is the default MaDL variant estimating instance- and fully class-dependent APs. **Qualitative study:** Fig. 6 visualizes MaDL's predictive behavior for the artificial dataset toy. Thereby, each row represents the predictions of a different MaDL variant. Since this is a binary classification problem, the variant MaDL(X, P) is identical to MaDL(X, F), and MaDL(\(\overline{\mathrm{X}}\), P) is identical to MaDL(\(\overline{\mathrm{X}}\), F). 
The first column visualizes instances as circles colored according to their GT labels, plots the class-membership probabilities predicted by the respective GT model as contours across the feature space, and depicts the decision boundary for classification as a black line. The last four columns show the class labels provided by four of the ten simulated annotators. The instances' colors indicate the class labels provided by an annotator, their forms mark whether the class labels are correct (circle) or false (cross) annotations, and the contours across the feature space visualize the AP model's predicted annotation correctness probabilities. The GT models of the variants MaDL(\(\overline{\text{X}}\), F), MaDL(X, I), and MaDL(X, F) successfully separate the instances of both classes, whereas the GT model of MaDL(\(\overline{\text{X}}\), I) fails in this task. Likely, the missing consideration of instance- and class-dependent APs explains this observation. Further, the class-membership probabilities of the successful MaDL variants reflect instances' actual class labels but exhibit the overconfident behavior typical of deterministic (D)NNs, particularly for feature space regions without observed instances (Huseljic et al., 2021). Investigating the estimated APs for the adversarial annotator (second column), we see that each MaDL variant correctly predicts low APs (indicated by the white-colored contours) across the feature space. When comparing the AP estimates for the class-specialized annotator (fifth column), clear differences between MaDL(\(\overline{\text{X}}\), I) and the other three variants of MaDL are visible. Since MaDL(\(\overline{\text{X}}\), I) ignores any class dependency regarding APs, it cannot differentiate between classes of high and low APs. In contrast, the AP predictions of the other three variants reflect the class structure learned by the respective GT model and thus can differ between weak and expert classes. The performances of the cluster-specialized and common annotator depend on the regions in the feature space. Therefore, only the variants MaDL(X, I) and MaDL(X, F) can separate clusters of low and high APs. For example, both variants successfully identify the two weak clusters of the cluster-specialized annotator. Analogous to the class-membership probabilities, the AP estimates are overconfident for feature space regions without observed instances. Figure 6: Visualization of MaDL’s predictive behavior for the two-dimensional dataset toy. **Quantitative study:** Table 4 presents the numerical evaluation results for the two datasets with real-world annotators. There, we only report the GT models' test results since no annotations for the test instances are available to assess the AP models' test results. Table 5 presents the GT and AP models' test results for the four datasets with simulated annotators. Both tables indicate whether a technique models class-dependent (property P1) and/or instance-dependent (property P2) APs. Generally, training with GT labels as UB achieves the best performances, while the LB with annotations aggregated according to the majority rule leads to the worst ones. The latter observation confirms that leveraging AP estimates during training is beneficial. Moreover, these estimates are typically meaningful, corresponding to BAL-ACC values above 0.5. An exception is MaDL(\(\overline{\text{X}}\), I) because this variant only estimates by design a constant performance per annotator across the feature space. 
Comparing MaDL(X, F) as the most general variant to related techniques, we observe that it achieves competitive or superior results for all datasets and evaluation scores. Next to MaDL(X, F), CoNAL often delivers better results than the competitors. When we investigate the performances of the MaDL variants with instance-independent APs, we find that MaDL(\(\overline{\text{X}}\), F) achieves the most robust performances across all datasets. In particular, for the datasets with real-world annotators, the ACC of the respective GT model is superior. This observation suggests that modeling class-dependent APs (property P1) is beneficial. We recognize a similar trend for the MaDL variants with instance-dependent APs (property P2). Comparing each pair of MaDL variants with X and \(\overline{\text{X}}\), we observe that instance-dependent APs often improve GT and, in particular, AP estimates. The advantage of class- and instance-dependent APs is confirmed by CoNAL as a strong competitor of MaDL(X, F). LIA's inferior performance contrasts this, although LIA estimates class- and instance-dependent APs. The difference in training algorithms can likely explain this observation. While MaDL(X, F) and CoNAL train via an end-to-end algorithm, LIA trains via the EM algorithm, leading to higher runtimes and introducing additional sensitive hyperparameters, e.g., the number of EM iterations and training epochs per M step.

### RQ2: Does modeling annotator correlations improve learning? (Properties P3, P4)

**Takeaway:** Modeling correlations between annotators leads to better results in scenarios with many correlated spamming annotators (property P4). Capturing the correlations of beneficial annotators does not lead to consistently better results (property P3). However, estimating and leveraging APs during training becomes more critical in scenarios with correlated annotators.

**Setup:** We address RQ2 by evaluating multi-annotator supervised learning techniques with and without modeling annotator correlations. We simulate two annotator sets for each dataset without real-world annotators according to Table 3. The first annotator set correlated consists of the same ten annotators as in RQ1. However, we extend this set by ten additional copies of the adversarial, the class-specialized, and one of the two cluster-specialized annotators, so there are 40 annotators.
The second annotator set \begin{table} \begin{tabular}{|l|c||c||c|c||c|c|c|} \hline \multirow{2}{*}{**Technique**} & \multirow{2}{*}{**P1**} & \multirow{2}{*}{**P2**} & \multicolumn{4}{c||}{**Ground Truth Model**} & \multicolumn{4}{c|}{**Ground Truth Model**} \\ & & & ACC \(\uparrow\) & NLL \(\downarrow\) & BS \(\downarrow\) & ACC \(\uparrow\) & NLL \(\downarrow\) & BS \(\downarrow\) \\ \hline \hline \multicolumn{10}{|c||}{**MUSE**} & \multicolumn{4}{c|}{**LAPEDAE**} \\ \hline UB & ✓ & ✓ & 0.785\(\pm\)0.020 & 0.710\(\pm\)0.037 & 0.314\(\pm\)0.027 & 0.914\(\pm\)0.003 & 0.580\(\pm\)0.112 & 0.150\(\pm\)0.003 \\ LB & ✓ & ✓ & 0.646\(\pm\)0.045 & 1.096\(\pm\)0.103 & 0.492\(\pm\)0.051 & 0.810\(\pm\)0.015 & 0.724\(\pm\)0.155 & 0.294\(\pm\)0.024 \\ \hline CL & ✓ & ✗ & 0.675\(\pm\)0.015 & 1.672\(\pm\)0.040 & 0.524\(\pm\)0.021 & 0.857\(\pm\)0.011 & 1.774\(\pm\)1.155 & 0.250\(\pm\)0.014 \\ REAC & ✓ & ✗ & 0.705\(\pm\)0.023 & 0.893\(\pm\)0.081 & 0.410\(\pm\)0.033 & 0.843\(\pm\)0.006 & 0.833\(\pm\)0.008 & 0.254\(\pm\)0.006 \\ UNION & ✓ & ✗ & 0.682\(\pm\)0.013 & 1.396\(\pm\)0.143 & 0.501\(\pm\)0.027 & 0.855\(\pm\)0.004 & 1.074\(\pm\)0.340 & 0.248\(\pm\)0.011 \\ LIA & ✓ & ✓ & 0.658\(\pm\)0.023 & 1.158\(\pm\)0.047 & 0.498\(\pm\)0.020 & 0.813\(\pm\)0.010 & 0.976\(\pm\)0.234 & 0.295\(\pm\)0.009 \\ CoNAL & ✓ & ✓ & 0.708\(\pm\)0.031 & 0.964\(\pm\)0.081 & 0.423\(\pm\)0.035 & 0.866\(\pm\)0.004 & 2.740\(\pm\)1.304 & 0.247\(\pm\)0.023 \\ \hline MaDL(X, I) & ✗ & ✗ & 0.718\(\pm\)0.010 & **0.871\(\pm\)0.027** & **0.394\(\pm\)0.009** & 0.815\(\pm\)0.009 & **0.616\(\pm\)0.125** & 0.276\(\pm\)0.017 \\ MaDL(X, F) & ✗ & ✗ & 0.720\(\pm\)0.018 & **0.871\(\pm\)0.030** & 0.396\(\pm\)0.009 & 0.811\(\pm\)0.012 & 0.630\(\pm\)0.128 & 0.281\(\pm\)0.022 \\ MaDL(X, F) & ✗ & ✓ & **0.725\(\pm\)0.015** & 0.977\(\pm\)0.064 & 0.403\(\pm\)0.019 & 0.859\(\pm\)0.007 & 1.008\(\pm\)0.278 & **0.240\(\pm\)0.014** \\ MaDL(X, I) & ✗ & ✓ & 0.713\(\pm\)0.027 & 0.876\(\pm\)0.041 & 0.402\(\pm\)0.022 & 0.816\(\pm\)0.008 & **0.559\(\pm\)0.027** & 0.276\(\pm\)0.010 \\ MaDL(X, P) & ✗ & ✓ & 0.714\(\pm\)0.014 & 0.909\(\pm\)0.036 & 0.398\(\pm\)0.013 & 0.811\(\pm\)0.009 & 0.771\(\pm\)0.160 & 0.289\(\pm\)0.016 \\ MaDL(X, F) & ✓ & ✓ & **0.743\(\pm\)0.018** & 0.877\(\pm\)0.030 & **0.381\(\pm\)0.012** & **0.867\(\pm\)0.004** & 0.623\(\pm\)0.124 & **0.214\(\pm\)0.008** \\ \hline \end{tabular} \end{table} Table 4: Results regarding RQ1 for datasets with real-world annotators: Best and second best performances are highlighted per dataset and evaluation score while excluding the performances of the UB. random-correlated also consists of the same ten annotators as in RQ1 but is extended by 90 identical randomly guessing annotators. Each simulated annotator provides class labels for 20 % of randomly selected training instances. Next to the related multi-annotator supervised learning techniques and the two baselines, we evaluate two variants of MaDL denoted via the scheme MaDL(P3). Property P3 refers to the modeling of potential annotator correlations. There, we differ between the variant MaDL(W) using annotator weights via the weighted loss function (cf. Eq. 25) and the variant MaDL(\(\overline{\text{W}}\)) training via the loss function without any weights (cf. Eq 15). MaDL(W) corresponds to MaDL's default variant in this setup. 
\begin{table} \begin{tabular}{l|c|c||c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Technique**} & \multirow{2}{*}{**P1**} & \multirow{2}{*}{**P2**} & \multicolumn{4}{c|}{**Ground Truth Model**} & \multicolumn{4}{c}{**Annotator Performance Model**} \\ & & & ACC \(\uparrow\) & NLL \(\downarrow\) & BS \(\downarrow\) & ACC \(\uparrow\) & NLL \(\downarrow\) & BS \(\downarrow\) & BAL-ACC \(\uparrow\) \\ \hline \hline \multicolumn{10}{c}{**Letter (independent)**} \\ \hline UB & ✓ & ✓ & 0.961\(\pm\)0.003 & 0.130\(\pm\)0.006 & 0.059\(\pm\)0.004 & 0.770\(\pm\)0.001 & 0.485\(\pm\)0.003 & 0.315\(\pm\)0.002 & 0.709\(\pm\)0.001 \\ LB & ✓ & ✓ & 0.878\(\pm\)0.004 & 0.980\(\pm\)0.021 & 0.385\(\pm\)0.008 & 0.664\(\pm\)0.004 & 0.624\(\pm\)0.003 & 0.433\(\pm\)0.003 & 0.666\(\pm\)0.004 \\ \hline REAC & ✓ & � & 0.936\(\pm\)0.005 & 0.238\(\pm\)0.018 & 0.097\(\pm\)0.007 & 0.683\(\pm\)0.002 & 0.560\(\pm\)0.001 & 0.385\(\pm\)0.001 & 0.604\(\pm\)0.002 \\ CL & ✓ & � & 0.886\(\pm\)0.013 & 1.062\(\pm\)0.145 & 0.181\(\pm\)0.020 & 0.663\(\pm\)0.006 & 0.625\(\pm\)0.013 & 0.430\(\pm\)0.010 & 0.601\(\pm\)0.002 \\ UNION & ✓ & ✓ & 0.905\(\pm\)0.016 & 0.906\(\pm\)0.035 & 0.151\(\pm\)0.039 & 0.670\(\pm\)0.004 & 0.589\(\pm\)0.008 & 0.408\(\pm\)0.006 & 0.605\(\pm\)0.002 \\ LIA & ✓ & ✓ & 0.897\(\pm\)0.005 & 0.778\(\pm\)0.052 & 0.305\(\pm\)0.021 & 0.669\(\pm\)0.004 & 0.654\(\pm\)0.010 & 0.447\(\pm\)0.004 & 0.616\(\pm\)0.003 \\ CoSAL & ✓ & ✓ & 0.907\(\pm\)0.016 & 0.813\(\pm\)0.354 & 0.143\(\pm\)0.027 & 0.723\(\pm\)0.018 & 0.555\(\pm\)0.024 & 0.372\(\pm\)0.020 & 0.663\(\pm\)0.017 \\ \hline MaDL(\(\overline{\text{X}}\), I) & ✗ & ✗ & 0.934\(\pm\)0.003 & 0.269\(\pm\)0.035 & 0.100\(\pm\)0.004 & 0.607\(\pm\)0.001 & 0.627\(\pm\)0.000 & 0.444\(\pm\)0.000 & 0.500\(\pm\)0.000 \\ MaDL(\(\overline{\text{X}}\), P) & ✗ & ✓ & 0.935\(\pm\)0.005 & **0.235\(\pm\)0.015** & 0.099\(\pm\)0.006 & 0.692\(\pm\)0.001 & 0.556\(\pm\)0.001 & 0.381\(\pm\)0.001 & 0.606\(\pm\)0.003 \\ MaDL(\(\overline{\text{X}}\), P) & ✗ & ✓ & 0.933\(\pm\)0.005 & 0.255\(\pm\)0.025 & 0.100\(\pm\)0.005 & 0.691\(\pm\)0.002 & 0.556\(\pm\)0.001 & 0.381\(\pm\)0.001 & 0.606\(\pm\)0.002 \\ MaDL(\(\text{X}\), I) & ✗ & ✓ & **0.938\(\pm\)0.006** & 0.247\(\pm\)0.034 & **0.092\(\pm\)0.008** & **0.070\(\pm\)0.004** & **0.492\(\pm\)0.016** & 0.316\(\pm\)0.007 & **0.708\(\pm\)0.004** \\ MaDL(\(\text{X}\), P) & ✓ & ✓ & **0.940\(\pm\)0.004** & 0.242\(\pm\)0.045 & **0.090\(\pm\)0.004** & **0.770\(\pm\)0.000** & **0.496\(\pm\)0.020** & **0.316\(\pm\)0.009** & **0.708\(\pm\)0.005** \\ MaDL(\(\text{X}\), F) & ✓ & ✓ & 0.935\(\pm\)0.006 & 0.303\(\pm\)0.002 & 0.098\(\pm\)0.009 & 0.766\(\pm\)0.004 & **0.491\(\pm\)0.006** & 0.317\(\pm\)0.004 & 0.702\(\pm\)0.005 \\ \hline UB & ✓ & ✓ & 0.509\(\pm\)0.002 & 0.246\(\pm\)0.005 & 0.131\(\pm\)0.003 & 0.736\(\pm\)0.001 & 0.485\(\pm\)0.001 & 0.321\(\pm\)0.001 & 0.704\(\pm\)0.001 \\ LB & ✓ & ✓ & 0.883\(\pm\)0.001 & 0.903\(\pm\)0.003 & 0.385\(\pm\)0.001 & 0.644\(\pm\)0.007 & 0.645\(\pm\)0.005 & 0.453\(\pm\)0.004 & 0.585\(\pm\)0.007 \\ \hline CL & ✓ & ✗ & 0.892\(\pm\)0.002 & 0.312\(\pm\)0.008 & 0.158\(\pm\)0.004 & 0.674\(\pm\)0.002 & 0.58\(\pm\)0.001 & 0.402\(\pm\)0.001 & 0.623\(\pm\)0.001 \\ REAC & ✓ & ✗ & 0.894\(\pm\)0.003 & 0.309\(\pm\)0.011 & 0.155\(\pm\)0.004 & 0.703\(\pm\)0.001 & 0.535\(\pm\)0.001 & 0.364\(\pm\)0.000 & 0.641\(\pm\)0.001 \\ UNION & ✓ & ✗ & 0.893\(\pm\)0.002 & 0.305\(\pm\)0.006 & 0.155\(\pm\)0.003 & 0.674\(\pm\)0.002 & 0.570\(\pm\)0.002 & 0.395\(\pm\)0.002 & 0.622\(\pm\)0.001 \\ LIA & ✓ & ✓ & 0.858\(\pm\)0.002 & 1.017\(\pm\)0.016 & 0.442\(\pm\)0.008 
& 0.665\(\pm\)0.024 & 0.628\(\pm\)0.017 & 0.437\(\pm\)0.016 & 0.613\(\pm\)0.027 \\ CoSAL & ✓ & ✓ & 0.894\(\pm\)0.004 & 0.304\(\pm\)0.009 & 0.155\(\pm\)0.004 & 0.725\(\pm\)0.016 & 0.521\(\pm\)0.018 & 0.351\(\pm\)0.016 & 0.679\(\pm\)0.018 \\ \hline MaDL(\(\overline{\text{X}}\), I) & ✗ & ✗ & 0.896\(\pm\)0.003 & 0.340\(\pm\)0.006 & 0.161\(\pm\)0.004 & 0.590\(\pm\)0.000 & 0.605\(\pm\)0.000 & 0.500\(\pm\)0.000 \\ MaDL(\(\overline{\text{X}} **Qualitative study:** Fig. 7 visualizes MaDL(W)'s learned annotator embeddings and weights for the dataset letter with the two annotator sets, correlated and random-correlated, after five training epochs. Based on MaDL(W)'s learned kernel function, we create the two scatter plots via multi-dimensional scaling (Kruskal, 1964) for dimensionality reduction. This way, the annotator embeddings, originally located in an (\(R=16\))-dimensional space, are transformed into a two-dimensional space, where each circle represents one annotator embedding. A circle's color indicates to which annotator group the embedding belongs. The two bar plots visualize the mean annotator weight of the different annotator groups, again indicated by their respective color. Analyzing the scatter plot of the annotator set correlated, we observe that the annotator embeddings' latent representations approximately reflect the annotator groups' correlations. Concretely, there are four clusters. The center cluster corresponds to the seven independent annotators, one cluster-specialized annotator and six common annotators. The three clusters in the outer area represent the three groups of correlated annotators. The bar plot confirms our goal to assign lower weights to strongly correlated annotators. For example, the single independent cluster-specialized annotator has a weight of 4.06, while the eleven correlated cluster-specialized annotators have a mean weight of 0.43. We make similar observations for the annotator set random-correlated. The scatter plot shows that the independent annotators also form a cluster, separated from the cluster of the large group of correlated, randomly guessing annotators. The single adversarial annotator belongs to the cluster of randomly guessing annotators since both groups of annotators make many annotation errors and thus have highly correlated annotation patterns. Again, the bar plot confirms that the correlated annotators get low weights. Moreover, these annotator weights are inversely proportional to the size of a group of correlated annotators. For example, the 90 randomly guessing annotators have a similar weight in sum as the single class-specialized annotator. **Quantitative study:** Table 6 presents the GT and AP models' test performances for the four datasets with the annotator set correlated and Table 7 for the annotator set random-correlated. Both tables indicate whether a technique models correlations between annotators (property P3) and whether the authors of a technique demonstrated its robustness against spamming annotators (property P4). Analogous to RQ1, training with GT labels achieves the best performances (UB), while annotation aggregation via the majority rule leads to the worst ones (LB). The LB's significant underperformance confirms the importance of modeling APs in scenarios with correlated annotators. MaDL(W), as the default MaDL variant, achieves competitive and often superior results for all datasets and evaluation scores. 
In particular, for the annotator set random-correlated, MaDL(W) outperforms the other techniques, which are vulnerable to many randomly guessing annotators. This observation is also confirmed when we compare MaDL(W) to MaDL(\(\overline{\text{W}}\)). In contrast, there is no consistent performance gain of MaDL(W) over MaDL(\(\overline{\text{W}}\)) for the annotator set correlated. While CoNAL is competitive for the annotator set correlated, its performance strongly degrades for the annotator set random-correlated. The initial E step in LIA's EM algorithm estimates the GT class labels via a probabilistic variant of the majority rule. Similarly to the LB, such an estimate is less accurate for correlated and/or spamming annotators. Besides MaDL(W), only CL and UNION consistently outperform the LB by large margins for the annotator set random-correlated.

Figure 7: Visualization of MaDL(W)'s learned similarities between annotator embeddings and associated annotator weights.

### RQ3: Do annotator features containing prior information improve learning and enable inductively learning annotators' performances? (Properties P5, P6)

**Takeaway:** Annotator features containing prior information about annotators improve the learning of GT and AP models (property P5). Furthermore, we can use these annotator features to inductively estimate the performances of annotators unavailable during training (property P6).

**Setup:** We address RQ3 by evaluating multi-annotator supervised learning techniques with and without using annotator features containing prior information. For each dataset, we simulate 100 annotators according to the annotator set inductive in Table 3. However, only 75 annotators provide class labels for training. Each of them provides class labels for 2 % of randomly selected training instances. The lower annotation ratio is used to study the generalization across annotators sharing similar features. The remaining 25 annotators form a test set to assess AP predictions. We generate annotator features containing prior information by composing information about annotator type, class-wise APs, and cluster-wise APs. Fig. 8 provides examples for two annotators based on two classes and four clusters. We evaluate two variants of LIA, CoNAL, and MaDL, denoted respectively by the schemes LIA(P5), CoNAL(P5), and MaDL(P5). Property P5 refers to a technique's ability to consider prior information about annotators.
We differ between the variant with annotator features containing prior information (A) and the one using one-hot encoded features to differ \begin{table} \begin{tabular}{|l|c||c||c|c||c|c||c|c|} \hline \multirow{2}{*}{**Technique**} & \multirow{2}{*}{**P3**} & \multirow{2}{*}{**P4**} & \multicolumn{3}{c||}{**Ground Truth Model**} & \multicolumn{3}{c|}{**Annotator Performance Model**} \\ & & & ACC \(\uparrow\) & NLL \(\downarrow\) & BS \(\downarrow\) & ACC \(\uparrow\) & NLL \(\downarrow\) & BS \(\downarrow\) & BAL-ACC \(\uparrow\) \\ \hline \multicolumn{10}{|l|}{**LeftFFER (Correlated)**} \\ \hline UB & ✗ & ✓ & 0.962\(\pm\)0.004 & 0.129\(\pm\)0.004 & 0.055\(\pm\)0.003 & 0.887\(\pm\)0.002 & 0.305\(\pm\)0.004 & 0.173\(\pm\)0.002 & 0.757\(\pm\)0.002 \\ LB & ✗ & ✗ & 0.762\(\pm\)0.007 & 1.302\(\pm\)0.005 & 0.482\(\pm\)0.004 & 0.682\(\pm\)0.005 & 0.604\(\pm\)0.003 & 0.416\(\pm\)0.002 & 0.602\(\pm\)0.006 \\ \hline CL & ✗ & ✗ & 0.803\(\pm\)0.005 & 2.435\(\pm\)1.128 & 0.818\(\pm\)0.057 & 0.800\(\pm\)0.008 & 0.446\(\pm\)0.016 & 0.285\(\pm\)0.012 & 0.674\(\pm\)0.007 \\ REAC & ✗ & ✗ & 0.922\(\pm\)0.003 & **0.288\(\pm\)0.005** & 0.115\(\pm\)0.007 & 0.815\(\pm\)0.001 & 0.395\(\pm\)0.001 & 0.249\(\pm\)0.001 & 0.684\(\pm\)0.001 \\ UNION & ✓ & ✗ & 0.866\(\pm\)0.019 & 1.668\(\pm\)0.322 & 0.224\(\pm\)0.034 & 0.795\(\pm\)0.007 & 0.432\(\pm\)0.007 & 0.278\(\pm\)0.007 & 0.667\(\pm\)0.006 \\ LIA & ✗ & ✗ & 0.823\(\pm\)0.005 & 1.483\(\pm\)0.018 & 0.569\(\pm\)0.007 & 0.676\(\pm\)0.005 & 0.629\(\pm\)0.004 & 0.436\(\pm\)0.004 & 0.575\(\pm\)0.004 \\ CoNAL & ✓ & ✓ & 0.871\(\pm\)0.015 & 1.380\(\pm\)0.349 & 0.213\(\pm\)0.024 & 0.840\(\pm\)0.014 & 0.390\(\pm\)0.028 & 0.238\(\pm\)0.021 & 0.712\(\pm\)0.014 \\ \hline MaDL(W) & ✗ & ✗ & **0.944\(\pm\)0.006** & 0.293\(\pm\)0.082 & **0.083\(\pm\)0.009** & **0.883\(\pm\)0.002** & **0.314\(\pm\)0.001** & **0.178\(\pm\)0.002** & **0.751\(\pm\)0.003** \\ MaDL(W) & ✓ & ✓ & **0.947\(\pm\)0.003** & **0.282\(\pm\)0.069** & **0.089\(\pm\)0.004** & **0.087\(\pm\)0.001** & **0.003\(\pm\)0.004** & **0.175\(\pm\)0.002** & **0.756\(\pm\)0.001** \\ \hline UB & ✗ & ✓ & 0.995\(\pm\)0.002 & 0.240\(\pm\)0.015 & 0.131\(\pm\)0.003 & 0.860\(\pm\)0.002 & 0.333\(\pm\)0.002 & 0.193\(\pm\)0.002 & 0.741\(\pm\)0.002 \\ LB & ✗ & ✗ & 0.787\(\pm\)0.003 & 1.127\(\pm\)0.013 & 0.475\(\pm\)0.007 & 0.668\(\pm\)0.009 & 0.626\(\pm\)0.006 & 0.436\(\pm\)0.006 & 0.580\(\pm\)0.005 \\ \hline CL & ✗ & ✗ & 0.868\(\pm\)0.003 & 0.447\(\pm\)0.020 & 0.217\(\pm\)0.010 & 0.799\(\pm\)0.004 & 0.421\(\pm\)0.004 & 0.270\(\pm\)0.003 & 0.677\(\pm\)0.004 \\ REAC & ✗ & ✗ & 0.873\(\pm\)0.004 & 0.415\(\pm\)0.012 & 0.196\(\pm\)0.006 & 0.828\(\pm\)0.001 & 0.382\(\pm\)0.001 & 0.237\(\pm\)0.001 & 0.697\(\pm\)0.001 \\ UNION & ✗ & ✗ & 0.859\(\pm\)0.006 & 0.411\(\pm\)0.018 & 0.205\(\pm\)0.008 & 0.801\(\pm\)0.009 & 0.420\(\pm\)0.014 & 0.269\(\pm\)0.011 & 0.678\(\pm\)0.009 \\ LIA & ✗ & ✗ & 0.837\(\pm\)0.006 & 1.277\(\pm\)0.008 & 0.553\(\pm\)0.004 & 0.685\(\pm\)0.002 & 0.633\(\pm\)0.001 & 0.441\(\pm\)0.001 & 0.569\(\pm\)0.002 \\ CoNAL & ✓ & ✓ & 0.897\(\pm\)0.002 & 0.299\(\pm\)0.009 & 0.152\(\pm\)0.004 & 0.844\(\pm\)0.001 & 0.356\(\pm\)0.003 & 0.217\(\pm\)0.002 & 0.721\(\pm\)0.001 \\ \hline MaDL(W) & ✗ & ✗ & **0.904\(\pm\)0.002** & **0.272\(\pm\)0.007** & **0.139\(\pm\)0.003** & **0.863\(\pm\)0.003** & **0.337\(\pm\)0.004** & **0.201\(\pm\)0.004** & **0.737\(\pm\)0.004** \\ MaDL(W) & ✓ & ✓ & **0.903\(\pm\)0.002** & **0.273\(\pm\)0.004** & **0.141\(\pm\)0.002** & **0.863\(\pm\)0.003** & **0.338\(\pm\)0.003** & **0.202\(\pm\)0.003** & 
**0.738\(\pm\)0.003** \\ \hline \multicolumn{10}{|l|}{**Cifar10 (correlated)**} \\ \hline UB & ✗ & ✓ & 0.933\(\pm\)0.002 & 0.489\(\pm\)0.017 & 0.118\(\pm\)0.003 & 0.837\(\pm\)0.001 & 0.384\(\pm\)0.001 & 0.233\(\pm\)0.001 & 0.711\(\pm\)0.001 \\ LB & ✗ & ✗ & 0.652\(\pm\)0.014 & 1.309\(\pm\)0.016 & 0.540\(\pm\)0.008 & 0.602\(\pm\)0.011 & 0.623\(\pm\)0.003 & 0.436\(\pm\)0.003 & 0.541\(\pm\)0.008 between annotators' identities (\(\overline{\text{A}}\)). MaDL(\(\overline{\text{A}}\)) corresponds to MaDL's default variant in this setup. We do not evaluate CL, UNION, and REAC since these techniques cannot handle annotator features. **Qualitative study:** Fig. 8 visualizes AP predictions of MaDL(A) regarding two exemplary annotators for the dataset toy. The visualization of these AP predictions is analogous to Fig. 6. Neither of the two annotators provides class labels for training, and the plotted training instances show only potential annotations to visualize the annotation patterns. The vectors at the right list the annotator features containing prior information for both annotators. The colors reveal the meanings of the respective feature values. These meanings are unknown to MaDL(A), such that its AP predictions exclusively result from generalizing similar annotators' features and their annotations available during training. MaDL(A) correctly identifies the left annotator as adversarial because it predicts low (white) AP scores across the feature space regions close to training instances. For the right cluster-specialized annotator, MaDL(A) accurately separates the two weak clusters (feature space regions with predominantly crosses) with low AP estimates from the two expert clusters (feature space regions with predominantly circles) with high AP estimates. **Quantitative study:** Table 8 presents the GT and AP models' test performances for the four datasets with the simulated annotator set inductive. The table further indicates whether a technique processes prior information as annotator features (property P5) and whether a technique can inductively estimate the performances of annotators unavailable during the training phase (property P6). Note that the AP results refer to the aforementioned 25 test annotators. Hence, there are no results (marked as -) for techniques with AP models not fulfilling property P6.
For completeness, we provide the results for the 75 annotators providing class labels for training in Appendix A.

Table: Test performances of each technique's GT model (ACC, NLL, BS) and AP model (ACC, NLL, BS, BAL-ACC), together with its properties P3 and P4, for the datasets with random-correlated annotator sets.

As for RQ1 and RQ2, training with GT labels leads to the best performance results, whereas learning from annotations aggregated via the majority rule mostly results in the worst performances. Inspecting the results of MaDL(A)'s GT model compared to the other techniques, we observe competitive or partially superior results across all four datasets. Concerning its AP model, we further note that MaDL(A) provides meaningful AP estimates, indicated by BAL-ACC values greater than 0.5. Comparing the GT models' results of each pair of variants, performance gains for LIA and MaDL demonstrate the potential benefits of learning from annotator features containing prior information. In contrast, the GT models' results of CoNAL(A) and CoNAL(\(\overline{\text{A}}\)) hardly differ.

## 6 Conclusion

In this article, we made three main contributions. (1) We started with a formalization of the objectives in multi-annotator supervised learning. Focusing on AP estimation, we then presented six relevant properties (cf. P1-P6 in Section 3) for categorizing related techniques in this research area. (2) Considering these six properties, we proposed our framework MaDL. A modular, probabilistic nature and a weighted loss function modeling annotator correlations characterize its novelties. (3) We experimentally investigated the six properties via three RQs. The results confirmed MaDL's robust and often superior performance to related multi-annotator supervised learning techniques.

The findings of this article, with a focus on AP estimation, provide a starting point for several aspects of future research, some examples of which are given below. Although the annotator embeddings already contain information about the annotation patterns concerning instances and classes, MaDL is currently limited to computing annotator correlations on a global level, i.e., annotator weights are not an explicit function of instance-annotator pairs. For example, an extension in this direction may be valuable to quantify correlations in certain regions of the feature space. Leveraging AP estimates for additional applications, e.g., selecting the best annotators to obtain high-quality annotations during active learning (Herde et al., 2021), is also of great value. Furthermore, we have limited ourselves to class labels as annotations. Future investigations of learning with different annotation types, such as class labels with confidence scores (Berthon et al., 2021) or partial labels (Yu et al., 2022), are apparent. Another neglected aspect is the study of epistemic uncertainty (Huseljic et al., 2021). For example, the visualizations for the two-dimensional dataset in Fig. 6 show high certainty of the GT and AP models in feature space regions with no observed instances. However, meaningful epistemic uncertainty estimates are essential in many (safety-critical) applications (Hullermeier and Waegeman, 2021) and would improve the characterization of annotators' knowledge. During our experiments, we showed the potential benefit of annotator features. We had no access to a dataset with prior information from real-world annotators, so we needed a suitable simulation for these features. Therefore, and also noted by Zhang et al.
(2023), future research may acquire such prior information via crowdsourcing to verify their benefit. As the concentration of annotators may fluctuate or annotators may learn during the annotation process, taking time-varying APs into account is another potential avenue for future research (Donmez et al., 2010). Finally, there are already crowdsourcing approaches (Chang et al., 2017) and concepts (Calma et al., 2016) that support collaboration between annotators. Accordingly, developing techniques that consider or even recommend such collaborations is of practical value (Fang et al., 2012).

Figure 8: Visualization of MaDL(A)'s inductive AP estimates for two unknown annotators.

Table 8: Test performances of each technique's GT model (ACC, NLL, BS) and AP model (ACC, NLL, BS, BAL-ACC), together with its properties P5 and P6, for the datasets with the inductive annotator sets.

### Broader Impact Statement

Big data annotation is a driving force behind the success of machine learning (Zhou et al., 2017). Reducing the effort and cost required for this step is essential for its continued successful development. In this context, our proposed framework MaDL is a possible tool to leverage the workforce of multiple cost-efficient but error-prone annotators. However, as a central resource for data annotation, crowdsourcing can harm individuals or even entire communities. Some of these impacts include exploiting vulnerable individuals who participate in low-wage crowdsourcing tasks (Schlagwein et al., 2019), producing low-quality data (Daniel et al., 2018), and outsourcing jobs (Howe, 2008). On the one hand, multi-annotator supervised learning techniques can improve data quality and support awarding well-performing crowd workers. On the other hand, such a technique may intensify the already present competition between crowd workers (Schlagwein et al., 2019). It also requires tight monitoring to ensure that the assessments of crowd workers are fair. Besides the benefits of annotator features containing prior information, e.g., improved learning and recommending annotators, there are several risks. Collecting and leaking potentially sensitive personal data about the annotators is such a significant risk (Xia and McKernan, 2020). Therefore, the annotator features must contain only information relevant to the learning task. Moreover, a lack of control over this or other processes in crowdsourcing can lead to discrimination and bias based on gender, origin, and other factors (Goel and Faltings, 2019). For these reasons, it is crucial to consider and address the potential risks through responsible policies and practices when employing multi-annotator supervised learning techniques.

#### Acknowledgments

We thank Lukas Rauch for his insightful comments, which significantly improved this article.
2306.14522
Nonconvex Stochastic Bregman Proximal Gradient Method with Application to Deep Learning
The widely used stochastic gradient methods for minimizing nonconvex composite objective functions require the Lipschitz smoothness of the differentiable part. However, this requirement does not hold for problem classes including quadratic inverse problems and training neural networks. To address this issue, we investigate a family of stochastic Bregman proximal gradient (SBPG) methods, which only require smooth adaptivity of the differentiable part. SBPG replaces the upper quadratic approximation used in SGD with the Bregman proximity measure, resulting in a better approximation model that captures the non-Lipschitz gradients of the nonconvex objective. We formulate the vanilla SBPG and establish its convergence properties in the nonconvex setting without a finite-sum structure. Experimental results on quadratic inverse problems testify to the robustness of SBPG. Moreover, we propose a momentum-based version of SBPG (MSBPG) and prove that it has improved convergence properties. We apply MSBPG to the training of deep neural networks with a polynomial kernel function, which ensures the smooth adaptivity of the loss function. Experimental results on representative benchmarks demonstrate the effectiveness and robustness of MSBPG in training neural networks. Since the additional computational cost of MSBPG compared with SGD is negligible in large-scale optimization, MSBPG can potentially be employed as a universal open-source optimizer in the future.
Kuangyu Ding, Jingyang Li, Kim-Chuan Toh
2023-06-26T08:54:46Z
http://arxiv.org/abs/2306.14522v2
# Nonconvex Stochastic Bregman Proximal Gradient Method with Application to Deep Learning

###### Abstract

The widely used stochastic gradient methods for minimizing nonconvex composite objective functions require the Lipschitz smoothness of the differentiable part. However, this requirement does not hold for problem classes including quadratic inverse problems and training neural networks. To address this issue, we investigate a family of stochastic Bregman proximal gradient (SBPG) methods, which only require smooth adaptivity of the differentiable part. SBPG replaces the upper quadratic approximation used in SGD with the Bregman proximity measure, resulting in a better approximation model that captures the non-Lipschitz gradients of the nonconvex objective. We formulate the vanilla SBPG and establish its convergence properties in the nonconvex setting without a finite-sum structure. Experimental results on quadratic inverse problems testify to the robustness of SBPG. Moreover, we propose a momentum-based version of SBPG (MSBPG) and prove that it has improved convergence properties. We apply MSBPG to the training of deep neural networks with a polynomial kernel function, which ensures the smooth adaptivity of the loss function. Experimental results on representative benchmarks demonstrate the effectiveness and robustness of MSBPG in training neural networks. Since the additional computational cost of MSBPG compared with SGD is negligible in large-scale optimization, MSBPG can potentially be employed as a universal open-source optimizer in the future.

## 1 Introduction

In this paper, we present and analyze a family of nonconvex stochastic Bregman proximal gradient methods (SBPG) for solving the following generic stochastic minimization problem: \[\min_{\mathbf{x}\in\overline{C}}\ \mathbb{E}_{\mathbf{\xi}}[f(\mathbf{x},\mathbf{\xi})]+R(\mathbf{x}), \tag{1}\] where \(f(\cdot,\mathbf{\xi})\) is a nonconvex differentiable function on \(\overline{C}\), \(R\) is a proper lower-semicontinuous convex function, \(\mathbf{\xi}\) is a random variable, and \(\overline{C}\) is the closure of \(C\), which is a nonempty convex open subset of \(\mathbb{R}^{d}\). We denote \(F(\mathbf{x})\coloneqq\mathbb{E}_{\mathbf{\xi}}[f(\mathbf{x},\mathbf{\xi})]\), and \(\Phi(\mathbf{x})\coloneqq F(\mathbf{x})+R(\mathbf{x})\). This stochastic minimization problem, where an optimizer has limited access to the distribution of \(\mathbf{\xi}\) and can only draw samples from it, is prevalent in the fields of machine learning and statistics (Hastie et al., 2009; Shapiro et al., 2021; Zhang, 2004). In many instances, the smooth part of the objective function \(F(\mathbf{x})\) can be formulated as a finite-sum structure \(F(\mathbf{x})=\frac{1}{n}\sum_{i=1}^{n}f_{i}(\mathbf{x})\). However, when \(n\) is extremely large, calculating the true gradient for the smooth part of the objective function becomes extremely expensive. As a result, stochastic first-order methods, which trace back to the work of Robbins and Monro (1951), have become the prevailing approach for solving these large-scale optimization problems.
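To make the composite structure of (1) concrete, the following minimal sketch (our own illustrative instance, with hypothetical names such as `f_and_grad`; it is not the paper's experimental setup) instantiates the smooth part with a sparse quadratic inverse problem and the nonsmooth part with an \(\ell_{1}\) regularizer, and draws mini-batch stochastic gradients.

```python
import numpy as np

# Illustrative instance of problem (1): Phi(x) = E_xi[f(x, xi)] + R(x) with
#   f(x, (a, b)) = 0.25 * (<a, x>^2 - b)^2   and   R(x) = lam * ||x||_1.
# The gradient of f grows cubically in x, so it is not globally Lipschitz.
rng = np.random.default_rng(0)
d, n, lam = 20, 200, 0.1
A = rng.normal(size=(n, d))                  # measurement vectors a_i
x_true = np.zeros(d); x_true[:3] = [1.0, -2.0, 0.5]
b = (A @ x_true) ** 2                        # quadratic measurements b_i

def f_and_grad(x, idx):
    """Mini-batch value and gradient of the smooth part F on samples idx."""
    r = (A[idx] @ x) ** 2 - b[idx]           # residuals <a_i, x>^2 - b_i
    val = 0.25 * np.mean(r ** 2)
    grad = (A[idx] * (r * (A[idx] @ x))[:, None]).mean(axis=0)
    return val, grad

def Phi(x):
    return f_and_grad(x, np.arange(n))[0] + lam * np.abs(x).sum()

idx = rng.choice(n, size=32, replace=False)  # a mini-batch Xi_k
print(f_and_grad(np.ones(d), idx)[0], Phi(np.ones(d)))
```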
In particular, stochastic (proximal) gradient descent and its numerous variants (Duchi et al., 2011; Duchi and Singer, 2009; Gu et al., 2020; Kingma and Ba, 2014; Allen-Zhu, 2018; Wang et al., 2022) have been widely utilized in large-scale stochastic optimization for machine learning (LeCun et al., 2015; Shapiro et al., 2021; Zhang, 2004). From a modeling perspective, stochastic gradient descent can be viewed as minimizing a sequence of upper quadratic approximations of the nonconvex objective \(\Phi(\mathbf{x})\): \[\mathbf{x}^{k+1}=\operatorname*{argmin}_{\mathbf{x}\in\overline{C}}\left\{\underbrace{F(\mathbf{x}^{k},\mathbf{\Xi}_{k})+\langle\widetilde{\nabla}_{k},\,\mathbf{x}-\mathbf{x}^{k}\rangle+\frac{1}{2\alpha_{k}}\|\mathbf{x}-\mathbf{x}^{k}\|^{2}}_{F_{\mathbf{x}^{k}}(\mathbf{x}):\,model\,of\,F\,\,at\,\mathbf{x}^{k}}+R(\mathbf{x})\right\}, \tag{2}\] where \(F(\mathbf{x}^{k},\mathbf{\Xi}_{k})\coloneqq\frac{1}{|\mathbf{\Xi}_{k}|}\sum_{\mathbf{\xi}\in\mathbf{\Xi}_{k}}f(\mathbf{x}^{k},\mathbf{\xi})\), \(\mathbf{\Xi}_{k}\) is the set of samples of \(\mathbf{\xi}\) at the \(k\)-th iteration, and \(\widetilde{\nabla}_{k}\) is an estimator of the exact gradient \(\nabla F(\mathbf{x}^{k})\). This modeling perspective is well-known in deterministic optimization, and has been used in methods such as the Newton method, the Gauss-Newton method, bundle methods, and trust-region methods, as discussed in various sources such as Hiriart-Urruty and Lemarechal (1993); Nesterov (2003); Lin et al. (2007); Paren et al. (2022).

Despite being widely used, stochastic gradient methods (2) still have several well-known bottlenecks both in theory and practice. One of the crucial requirements for analyzing stochastic gradient methods is the assumption of Lipschitz continuity of the gradient of the differentiable part, which is essential for ensuring the convergence of the algorithm. However, this assumption does not always hold true. For example, even the seemingly simple function \(F(x)=x^{4}\) does not admit a globally Lipschitz gradient over \(\mathbb{R}\), which highlights the challenges of analyzing stochastic gradient methods. In addition, choosing the appropriate stepsize is another challenge in the practical usage of stochastic gradient methods. The stepsize has a significant impact on the convergence performance of the algorithm, and finding the optimal stepsize can be a time-consuming process. Engineers may have to conduct multiple experiments to determine the optimal stepsize, further compounding the complexity of the problem at hand. To address these issues, classical approaches often resort to either line search or more complicated inner loops, but these methods can negatively impact the efficiency of the algorithm or even become intractable in a stochastic setting. For instance, the stochastic proximal point algorithm (PPA) models the approximation of \(F(\mathbf{x})\) in (2) as \(F(\mathbf{x},\mathbf{\xi}_{k})+\frac{1}{2\alpha_{k}}\|\mathbf{x}-\mathbf{x}^{k}\|^{2}\) (Bertsekas, 2011; Bianchi, 2016; Patrascu and Necoara, 2017; Rockafellar, 1976), which makes the selection of the stepsize \(\alpha_{k}\) more robust than the original model (2). However, the application of stochastic PPA is limited due to the difficulty of solving the subproblems, particularly when dealing with complicated objective functions, such as training deep neural networks. In such cases, solving the subproblem is almost as difficult as solving the original problem, rendering the approach impractical. Recently, Bauschke et al. (2017); Lu et al.
(2018) have proposed using Bregman proximity measures to relax the assumption of gradient Lipschitz continuity to smooth adaptivity. The Bregman gradient method was first introduced as the mirror descent scheme by Nemirovskij and Yudin (1983) for minimizing convex nonsmooth functions. From the modeling perspective, Bregman methods consider the following subproblem at each iteration: \[\mathbf{x}^{k+1}=\operatorname*{argmin}_{\mathbf{x}\in\overline{C}}\left\{\underbrace{F(\mathbf{x}^{k},\mathbf{\Xi}_{k})+\langle\widetilde{\nabla}_{k},\,\mathbf{x}-\mathbf{x}^{k}\rangle+\frac{1}{\alpha_{k}}\mathcal{D}_{\phi}(\mathbf{x},\mathbf{x}^{k})}_{F_{\mathbf{x}^{k}}(\mathbf{x}):\,model\,\,of\,\,F\,\,at\,\mathbf{x}^{k}}+R(\mathbf{x})\right\}, \tag{3}\] where \(\mathcal{D}_{\phi}\) is the Bregman distance induced by the kernel function \(\phi\).

To convey the advantage of the Bregman proximity model, we present a toy example. Consider the objective function \(F(x)=x^{4}\), whose gradient is not globally Lipschitz continuous; we compare the effectiveness of the upper quadratic approximation model (2) and the Bregman proximity model (3). As is apparent from Figure 1(a), the Bregman proximity model (3) (\(F_{2}(x)\)) with the kernel function \(\phi(x)=\frac{1}{2}x^{2}+\frac{1}{4}x^{4}\) can provide a more suitable approximation for \(F(x)\) than the upper quadratic approximation model (2) (\(F_{1}(x)\)), as the yellow curve stays closer to the curve of the objective function \(F(x)=x^{4}\). With this improved approximation, the \(x^{k+1}\) generated by the Bregman gradient method makes more significant progress towards the optimal solution (\(x^{*}=0\)) than the \(x^{k+1}\) generated by the gradient descent method, as depicted in Figure 1(b).

While several stochastic extensions of Bregman methods based on the smooth adaptivity assumption have been developed recently, the current literature primarily focuses on stochastic convex problems (Dragomir et al., 2021; Hanzely and Richtarik, 2021; Lu, 2019). The only existing convergence analysis of Bregman methods for nonconvex problems (Latafat et al., 2022) requires a finite-sum structure and a novel equivalent consensus reformulation. Moreover, it is memory-intensive and requires essentially periodic computation of the full gradient, which is expensive for large-scale problems such as deep neural networks (Defazio and Bottou, 2019). As we can see, stochastic Bregman methods have not been fully explored in the context of modern large-scale nonconvex problems such as training neural networks, and rigorous numerical evaluations of their performance are limited. Furthermore, the current literature pays little attention to the robustness of stochastic Bregman methods, particularly in terms of selecting stepsizes and initial points, which can significantly impact their performance in large-scale problems.

In this paper, we consider stochastic Bregman proximal gradient methods (SBPG) for nonconvex problems (without a finite-sum structure requirement) with application to the training of deep neural networks. We establish the convergence result of a vanilla SBPG without the Lipschitz smoothness assumption for nonconvex problems. Moreover, we propose a momentum-based SBPG (denoted as MSBPG) and prove that it has improved convergence properties compared with the vanilla SBPG. We apply MSBPG to the training of deep neural networks with a polynomial kernel function, which ensures the smooth adaptivity of the loss function.
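The toy comparison around Figure 1 can be reproduced numerically. The sketch below (our own; the step sizes \(1/48\) and \(1/4\) follow the constants quoted in the caption of Figure 1) computes the minimizer of the quadratic model (2) and of the Bregman model (3) at \(x^{k}=1\).

```python
import numpy as np

# F(x) = x^4, current iterate x^k = 1.
F  = lambda x: x**4
dF = lambda x: 4 * x**3
x_k = 1.0

# Quadratic model (2) with L = 48 (Lipschitz constant of F' on [-0.5, 2]);
# its minimizer is a plain gradient step.
L_quad = 48.0
x_gd = x_k - dF(x_k) / L_quad

# Bregman model (3) with kernel phi(x) = x^2/2 + x^4/4 and L = 4: its minimizer
# solves phi'(x) = phi'(x^k) - F'(x^k)/L, i.e. x^3 + x = 1.
L_breg = 4.0
dphi = lambda x: x + x**3
rhs = dphi(x_k) - dF(x_k) / L_breg
roots = np.roots([1.0, 0.0, 1.0, -rhs])          # x^3 + x - rhs = 0
x_bg = float(roots[np.argmin(np.abs(roots.imag))].real)

print(f"gradient step : {x_gd:.4f}  F = {F(x_gd):.4f}")
print(f"Bregman step  : {x_bg:.4f}  F = {F(x_bg):.4f}")
```

The gradient step only moves to about \(0.92\), whereas the Bregman step reaches about \(0.68\), which is consistent with the behavior shown in Figure 1(b).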
According to our analysis, MSBPG can improve the robustness of training neural networks by mitigating the gradient explosion phenomenon, and improve generalization performance by adopting a Bregman proximity approximation to the loss function locally. For numerical illustration, we conduct experiments on quadratic inverse problems (QIP) and verify the vanilla SBPG's robustness to stepsize selection and the scaling of the initial point. We also conduct extensive experiments on training CNNs for image classification and LSTMs for language modeling by employing MSBPG, which is well-suited for solving large-scale problems. Experimental results on representative benchmarks show that our MSBPG has excellent generalization performance, outperforming the most frequently used optimization algorithms, including SGD (Robbins and Monro, 1951), Adam (Kingma and Ba, 2014), and AdamW (Loshchilov and Hutter, 2017). Furthermore, MSBPG is demonstrated to be robust to large stepsizes and the scaling of the initial point, which are the common reasons behind gradient explosion.

Figure 1: For the function \(F(x)=x^{4}\), which does not admit a globally Lipschitz continuous gradient, we restrict the feasible set to \([-0.5,2]\). Consider the models (2) and (3) of \(F\) at \(x^{k}=1\). The Lipschitz constant of \(F\) with respect to the kernel \(\phi(x)=\frac{1}{2}x^{2}\) is 48. The Lipschitz constant of \(F\) with respect to the kernel \(\phi(x)=\frac{1}{2}x^{2}+\frac{1}{4}x^{4}\) is 4. The figure in (b) is a zoomed-in version of the plot in (a) for the range \([0.6,1]\). The unique minimum of \(F(x)\) is at \(x=0\).

To summarize, our contributions are as follows:

1. We investigate the Stochastic Bregman Proximal Gradient (SBPG) method for solving nonconvex problems without a finite-sum requirement, which employs the Bregman distance to handle non-Lipschitz gradient continuity. We establish convergence results for the vanilla SBPG in the sense of expectation. Further, we propose a momentum-based SBPG (MSBPG) that is tailored for modern large-scale applications, and prove that it has improved convergence properties compared to the vanilla SBPG. To the best of our knowledge, this is the first time that the momentum technique has been integrated into a stochastic Bregman proximal gradient method.

2. We apply MSBPG to training deep neural networks (DNN), which leverages a suitable polynomial kernel function to ensure that the DNN's loss function is smooth adaptable with respect to the designed kernel function. MSBPG exhibits good convergence behavior and excellent generalization performance on extensive tasks. Moreover, MSBPG is found to be more robust than the traditional SGD, especially when it comes to stepsize selection and initialization. We highlight that MSBPG is a theoretically derived method that is able to ease the difficulty of stepsize selection, mitigate gradient explosion, and maintain excellent generalization performance simultaneously. This distinguishes MSBPG from many existing techniques that rely on intuition and empirical observations.

3. We demonstrate the efficiency and robustness of SBPG or MSBPG in a range of applications, including sparse quadratic inverse problems and large-scale deep neural networks. In the quadratic inverse problem, SBPG is more robust in terms of both stepsize and initial point selection.
In training deep neural networks, MSBPG has been found to achieve a superior generalization performance compared with some of the most frequently used optimizers such as Adam and AdamW, and also exhibits robustness to stepsize selection and initialization. These results highlight the potential of MSBPG as a powerful tool for optimizing complex and large-scale deep neural networks, thus offering a promising direction for future research in this area.

The remainder of this paper is organized as follows. In Section 2, we present notation, some related preliminaries and our problem setting. In Section 3, we first describe SBPG and establish its convergence results in the sense of expectation. Then, we propose a momentum-based SBPG (MSBPG) and prove its improved convergence properties. In Section 4, we adapt MSBPG to the training of deep neural networks and analyze its capacity for mitigating gradient explosion and improving generalization. In Section 5, we present numerical experiments that demonstrate the efficiency and robustness of vanilla SBPG on quadratic inverse problems and MSBPG on training deep neural networks. Finally, we give some concluding remarks in Section 6, summarizing our key contributions and outlining promising topics for future research.

## 2 Preliminaries and Problem setting

In this paper, vectors are represented using boldface letters like \(\mathbf{v}\), while scalars are represented using normal font. Given a proper, lower-semicontinuous function \(F:\mathbb{R}^{d}\to\bar{\mathbb{R}}:=[-\infty,\infty]\), \(dom\,F=\{\mathbf{x}:F(\mathbf{x})<\infty\}\). The Fenchel conjugate function of \(F\) is defined as \(F^{*}(\mathbf{y})=\sup\{\langle\mathbf{x},\,\mathbf{y}\rangle-F(\mathbf{x}):\mathbf{x}\in\mathbb{R}^{d}\}\). Given a set \(\mathcal{S}\subset\mathbb{R}^{d}\), \(\bar{\mathcal{S}}\) denotes its closure, and \(int\,\mathcal{S}\) denotes the set of its interior points. A function is of class \(\mathcal{C}^{k}(\mathcal{S})\) if it is \(k\) times differentiable and the \(k\)-th derivative is continuous on \(\mathcal{S}\). We say that \(F\) is level bounded if the set \(\{\mathbf{x}:F(\mathbf{x})<\alpha\}\) is bounded for any real number \(\alpha\). Given a matrix \(A\), \(\text{Vec}(A)\) denotes the vectorization of \(A\) by column order. \(\text{Mat}(\cdot)\) is the inverse operation of \(\text{Vec}(\cdot)\), which reshapes a vector back into its original matrix form. Define the operator \(\text{Diag}(\cdot)\) to map a vector into a diagonal matrix with diagonal elements equal to the corresponding entries of the vector. The Hadamard product is represented by the symbol \(\circ\). If we use the notation \(\|\cdot\|\) without any additional explanation, we assume that it refers to the Euclidean norm for vectors and the Frobenius norm for matrices. Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space. Given a random variable \(\boldsymbol{\xi}\) and a \(\sigma\)-algebra \(\mathcal{F}\), we write \(\boldsymbol{\xi}\vartriangleleft\mathcal{F}\) if \(\boldsymbol{\xi}\) is measurable over \(\mathcal{F}\). Let \(\{\boldsymbol{\xi}_{k}\}_{k\geq 0}\) be a stochastic process, and \(\{\mathcal{F}_{k}\}_{k\geq 0}\) be a filtration, where \(\mathcal{F}_{k}\) is the \(\sigma\)-algebra \(\mathcal{F}_{k}:=\sigma(\boldsymbol{\xi}_{0},\ldots,\boldsymbol{\xi}_{k-1})\) on \(\Omega\). The conditional expectation is denoted by \(\mathbb{E}[\cdot|\mathcal{F}_{k}]\).
For simplicity, we use the notation \(\mathbb{E}[\cdot]\) to denote \(\mathbb{E}[\cdot|\mathcal{F}_{\infty}]\). The sequence \(\{\boldsymbol{x}^{k}\}_{k\geq 0}\) generated by our proposed method is adapted to the filtration \(\{\mathcal{F}_{k}\}_{k\geq 0}\), i.e. \(\boldsymbol{x}^{k}\vartriangleleft\mathcal{F}_{k}\), for all \(k\geq 0\). The notation \(\widetilde{\nabla}_{k}\) represents an estimator of the exact gradient \(\nabla F(\boldsymbol{x}^{k})\), which satisfies \(\widetilde{\nabla}_{k}\vartriangleleft\mathcal{F}_{k+1}\). This estimator is applicable to both the vanilla and momentum cases. The stochastic error is denoted by \(\boldsymbol{\varepsilon}_{k}=\nabla F(\boldsymbol{x}^{k})-\widetilde{\nabla}_ {k}\). The unbiasedness of the stochastic error \(\boldsymbol{\varepsilon}_{k}\) is assumed throughout this paper, i.e., \(\mathbb{E}[\boldsymbol{\varepsilon}_{k}|\mathcal{F}_{k}]=0\). The following supermartingale convergence theorem is a fundamental tool in the analysis of stochastic algorithms. **Theorem 1**: _(Robbins and Monro, 1951) Let \(\left\{y_{k}\right\},\left\{u_{k}\right\},\left\{a_{k}\right\}\) and \(\left\{b_{k}\right\}\) be non-negative adapted processes with respect to the filtration \(\{\mathcal{F}_{k}\}\) such that \(\sum_{k}a_{k}<\infty,\sum_{k}b_{k}<\infty\), and for all \(k\), \(\mathbb{E}\left[y_{k+1}\mid\mathcal{F}_{k}\right]\leq\left(1+a_{k}\right)y_{k}- u_{k}+b_{k}\) almost surely. Then, \(\left\{y_{k}\right\}\) converges almost surely to a non-negative finite random variable and \(\sum_{k}u_{k}<\infty\) almost surely._ ### Smooth adaptable functions In this subsection, we introduce the concept of smooth adaptivity, which was initially proposed in Bolte et al. (2018). This concept is an extension of the idea of relative smoothness for convex functions introduced in Bauschke et al. (2017); Lu et al. (2018). We first give the definitions of kernel function and Bregman distance. **Definition 2**: _(Kernel function and Bregman distance). Let \(\mathcal{S}\) be a nonempty, convex and open subset of \(\mathbb{R}^{d}\). Associated with \(\mathcal{S}\), a function \(\phi:\mathbb{R}^{d}\to(-\infty,+\infty]\) is called a kernel function if it satisfies the following two conditions:_ 1. \(\phi\) _is proper, lower-semicontinuous and convex, with_ \(dom\,\phi\subset\bar{\mathcal{S}}\)_,_ \(dom\,\partial\phi=\mathcal{S}\)_._ 2. \(\phi\in\mathcal{C}^{1}(\mathcal{S})\) _and_ \(int\,dom\,\phi=\mathcal{S}\)_._ _Denote the class of kernel function associated with \(\mathcal{S}\) by \(\mathcal{M}(\mathcal{S})\). Given \(\phi\in\mathcal{M}(\mathcal{S})\), the Bregman distance (Bregman, 1967) generated by \(\phi\) is defined as \(\mathcal{D}_{\phi}(\boldsymbol{x},\boldsymbol{y}):dom\,\phi\times int\,dom\, \phi\to[0,+\infty)\), where_ \[\mathcal{D}_{\phi}(\boldsymbol{x},\boldsymbol{y})=\phi(\boldsymbol{x})-\phi( \boldsymbol{y})-\langle\nabla\phi(\boldsymbol{y}),\,\boldsymbol{x}-\boldsymbol {y}\rangle.\] Bregman distance measures the difference between the value of \(\phi\) at \(\boldsymbol{x}\) and its linear approximation at \(\boldsymbol{y}\) based on the gradient of \(\phi\) at \(\boldsymbol{y}\). Some basic properties of Bregman distance can be found in Chen and Teboulle (1993); Teboulle (2018). 
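As a small illustration of Definition 2 (our own sketch; the helper names are hypothetical), the following code evaluates the Bregman distance generated by the polynomial kernel \(\phi(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|^{2}+\frac{1}{4}\|\mathbf{x}\|^{4}\), one of the kernels discussed next.

```python
import numpy as np

def phi(x):
    s = np.dot(x, x)
    return 0.5 * s + 0.25 * s ** 2

def grad_phi(x):
    return (1.0 + np.dot(x, x)) * x

def bregman_dist(x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

x = np.array([1.0, -2.0])
y = np.array([0.5, 0.0])
# Nonnegative since phi is convex (and at least ||x - y||^2 / 2 here, because this
# particular phi is 1-strongly convex); in general it is not symmetric in x and y.
print(bregman_dist(x, y), bregman_dist(y, x), 0.5 * np.dot(x - y, x - y))
```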
Some kernel functions commonly used in optimization are \(\frac{1}{2}\|\boldsymbol{x}\|^{2}\), \(\frac{1}{2}\|\boldsymbol{x}\|^{2}+\frac{\alpha}{4}\|\boldsymbol{x}\|^{4}\), \(-\sum_{i=1}^{d}\log\boldsymbol{x}_{i}\) and \(\sum_{i=1}^{d}\boldsymbol{x}_{i}\log\boldsymbol{x}_{i}\), where \(\frac{1}{2}\|\boldsymbol{x}\|^{2}\in\mathcal{M}(\mathbb{R}^{d})\) recovers the classical half squared Euclidean distance. The kernel function \(\frac{1}{2}\|\boldsymbol{x}\|^{2}+\frac{\alpha}{4}\|\boldsymbol{x}\|^{4}\in \mathcal{M}(\mathbb{R}^{d})\) has found applications in various problems, such as quadratic inverse problems, non-negative matrix factorization, and low-rank minimization (Bolte et al., 2018; Dragomir et al., 2021a). The entropy function \(\sum_{i=1}^{d}\mathbf{x}_{i}\log\mathbf{x}_{i}\in\mathcal{M}(\mathbb{R}_{++}^{d})\) is commonly used in applications that involve probability constraints, where the resulting Bregman distance is known as the Kullback-Leibler (KL) divergence. Throughout the paper we will focus on the following pair of functions \((f,\phi)\) satisfying smooth adaptivity condition. We introduce this concept in the following definition: **Definition 3**: _(Smooth adaptivity). Given a kernel function \(\phi\in\mathcal{M}(\mathcal{S})\), a proper lower-semicontinuous function \(f:\mathbb{R}^{d}\to(-\infty,+\infty]\) with \(dom\,f\supset dom\,\phi\) that is \(\mathcal{C}^{1}\) on \(\mathcal{S}\). \(f\) is \(L\)-smooth adaptable with respect to \(\phi\) if there exists \(L>0\), such that \(L\phi+f\) and \(L\phi-f\) are convex on \(\mathcal{S}\)._ Alternative definition of smooth adaptivity is the two-side descent lemma (Bolte et al., 2018, Lemma 2.1). When both \(f\) and \(\phi\) belong to \(\mathcal{C}^{2}(\mathcal{S})\), we can verify their smooth adaptivity by comparing the Hessians of \(f\) and \(\phi\). **Lemma 4**: \(f\) _is \(L\)-smooth adaptable with respect to \(\phi\in\mathcal{M}(\mathcal{S})\), if and only if_ \[|f(\mathbf{x})-f(\mathbf{y})-\langle\nabla f(\mathbf{y}),\,\mathbf{x}-\mathbf{y}\rangle|\leq L \mathcal{D}_{\phi}(\mathbf{x},\mathbf{y}),\;\forall\,\mathbf{x},\mathbf{y}\in int\,dom\,\phi.\] _Moreover, when both \(f\) and \(\phi\) belong to \(\mathcal{C}^{2}(int\,dom\,\phi)\), then the above is equivalent to_ \[\exists L>0,\;L\nabla^{2}\phi(\mathbf{x})-\nabla^{2}f(\mathbf{x})\succeq 0,\text{ for all }\mathbf{x}\in int\,dom\phi.\] The following four-point identity is frequently employed in our proofs, and can be easily verified. **Lemma 5**: _(Four points identity) Given points \(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}\) and any convex function \(\phi\) which is differentiable at \(\mathbf{a}\) and \(\mathbf{b}\), then_ \[\langle\nabla\phi(\mathbf{a})-\nabla\phi(\mathbf{b}),\,\mathbf{c}-\mathbf{d}\rangle= \mathcal{D}_{\phi}(\mathbf{c},\mathbf{b})+\mathcal{D}_{\phi}(\mathbf{d},\mathbf{a})-\mathcal{ D}_{\phi}(\mathbf{c},\mathbf{a})-\mathcal{D}_{\phi}(\mathbf{d},\mathbf{b}).\] ### Bregman Proximal Mapping Throughout this paper, we make the following basic assumptions. **Assumption 1**: _(Basic requirements). In problem (1):_ 1. _For every fixed_ \(\mathbf{\xi}\)_,_ \(f(\cdot,\mathbf{\xi})\) _is a proper lower-semicontinuous function with_ \(dom\,\phi\subset dom\,f(\cdot,\mathbf{\xi})\)_, and it is_ \(\mathcal{C}^{1}\) _on_ \(int\,C\)_._ 2. _The Legendre kernel (Definition_ 6_)_ \(\phi\in\mathcal{M}(C)\) _is_ \(\mu\)_-strongly convex for some_ \(\mu>0\)_. 
For every fixed_ \(\mathbf{\xi}\)_,_ \(f(\cdot,\mathbf{\xi})\) _is_ \(L_{F}\)_-smooth adaptable with respect to_ \(\phi\)_, where_ \(L_{F}\) _is independent of_ \(\mathbf{\xi}\)_._ 3. \(R\) _is a proper, lower-semicontinuous and convex function with_ \(dom\,R\cap int\,C\neq\emptyset\)_._ 4. \(\inf_{\mathbf{x}\in\overline{C}}\{\Phi(\mathbf{x})\}>-\infty\)_._

Assumption 1 is a standard requirement for Bregman-type methods and is usually satisfied in practice. It ensures the well-definedness of Bregman-type methods, as shown in Bolte et al. (2018); Latafat et al. (2022). We also recall the definition of the Legendre function in Latafat et al. (2022), which imposes an additional supercoercivity condition on the concept in Rockafellar (1997).

**Definition 6**: _(Legendre kernel). Let \(\phi:\overline{C}\to(-\infty,\infty]\) be a proper lower-semicontinuous convex function. It is called essentially smooth if \(\operatorname{int}\,dom\,\phi\) is nonempty and \(\phi\) is differentiable on \(\operatorname{int}\,dom\,\phi\); moreover, \(\lim_{k\to\infty}\|\nabla\phi(\mathbf{x}^{k})\|=\infty\) whenever \(\{\mathbf{x}^{k}\}_{k\in\mathbb{N}}\) converges to a boundary point of \(\operatorname{dom}\,\phi\). The function \(\phi\) is called a Legendre function if it is essentially smooth, strictly convex on \(\operatorname{int}\,dom\,\phi\) and supercoercive, i.e. \(\lim_{\|\mathbf{x}\|\to\infty}\frac{\phi(\mathbf{x})}{\|\mathbf{x}\|}=\infty\)._

**Definition 7**: _Given a nonempty convex open set \(C\), a proper lower-semicontinuous convex function \(R\), a Legendre kernel function \(\phi\in\mathcal{M}(C)\), and \(\mathbf{x}\in\operatorname{int}\,dom\,\phi\), we denote the Bregman proximal mapping by \(\operatorname{Prox}_{R}^{\phi}:=(\nabla\phi+\partial R)^{-1}\nabla\phi\), which is defined as_ \[\operatorname{Prox}_{R}^{\phi}(\mathbf{x}):=\operatorname*{argmin}_{\mathbf{u}\in\overline{C}}\,\{R(\mathbf{u})+\mathcal{D}_{\phi}(\mathbf{u},\mathbf{x})\}. \tag{4}\]

Note that the objective function of (4) is strictly convex on \(dom\,\phi\cap dom\,R\), therefore (4) has at most one solution. To ensure that (4) is well-defined, the following result claims that \(\operatorname{Prox}_{\alpha R}^{\phi}(\mathbf{x})\) is well-defined for any \(\alpha>0\), and moreover \(\operatorname{Prox}_{\alpha R}^{\phi}(\mathbf{x})\in\operatorname{int}\,\operatorname{dom}\,\phi\) under standard assumptions. The proof can be found in Appendix A.

**Lemma 8**: _Suppose Assumption 1 holds. Then (4) has a unique solution. Moreover, the solution \(\operatorname{Prox}_{\alpha R}^{\phi}(\mathbf{x})\in C\)._

The following proposition for the Bregman proximal mapping generalizes the nonexpansive property of the classical proximal mapping (in the case \(\phi(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|^{2}\)). This property is commonly used in convergence proofs. The proof of the following proposition can be found in Appendix A.

**Proposition 9**: _Suppose Assumption 1 holds. Let \(\mathbf{x}_{i}^{+}:=\operatorname{Prox}_{R}^{\phi}(\nabla\phi^{*}(\mathbf{x}_{i}))\), \(i=1,2\). Then \(\|\mathbf{x}_{1}^{+}-\mathbf{x}_{2}^{+}\|\leq\frac{1}{\mu}\|\mathbf{x}_{1}-\mathbf{x}_{2}\|\)._

In this paper, we make the assumption that \(R\) and \(\phi\) are simple enough so that (4) either has a closed-form solution or admits an efficient subroutine to solve it. Using the definition of the Bregman proximal mapping, we can then define the Bregman gradient mapping associated with (1). This mapping measures the solution accuracy of the methods we propose.
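As an example of a Bregman proximal mapping (4) with a closed-form solution (our own illustration, not taken from the paper), take the entropy kernel on the positive orthant and let \(R\) be the indicator of the probability simplex; the mapping then reduces to a normalization of its argument.

```python
import numpy as np

# phi(u) = sum_i u_i*log(u_i) so that D_phi is the (generalized) KL divergence, and
# R = indicator of the probability simplex. Then (4) is a KL projection onto the
# simplex, whose minimizer is x / sum(x).
def bregman_prox_simplex(x):
    return x / x.sum()

def kl(u, x):
    return np.sum(u * np.log(u / x) - u + x)

x = np.array([0.2, 1.0, 3.0])
u = bregman_prox_simplex(x)

# brute-force sanity check on a grid over the simplex (two free coordinates)
grid = [np.array([a, c, 1 - a - c])
        for a in np.linspace(1e-3, 0.998, 60)
        for c in np.linspace(1e-3, 0.998, 60) if a + c < 0.999]
best = min(grid, key=lambda p: kl(p, x))
print(u, best)   # the grid minimizer is close to x / sum(x)
```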
Note that \(\phi\) is a Legendre kernel, which implies that \(\phi^{*}\in\mathcal{C}^{1}(\mathbb{R}^{d})\) is strictly convex and \((\nabla\phi)^{-1}=\nabla\phi^{*}\) (Rockafellar, 1997, Corollary 13.3.1, Theorem 26.5). Therefore, the following concept is well-defined.

**Definition 10** (Bregman Gradient Mapping): _Given \(\alpha>0\), a nonempty convex open set \(C\) and a Legendre kernel function \(\phi\in\mathcal{M}(C)\), the Bregman gradient mapping associated with (1) is defined as follows_ \[\mathcal{G}_{\alpha}(\mathbf{x})=\frac{\mathbf{x}-\operatorname{Prox}_{\alpha R}^{\phi}\left(\nabla\phi^{*}(\nabla\phi(\mathbf{x})-\alpha\nabla F(\mathbf{x}))\right)}{\alpha}.\] _To simplify notation, we use \(\mathcal{G}(\mathbf{x})\) to denote \(\mathcal{G}_{1}(\mathbf{x})\) when \(\alpha=1\)._

When the kernel function \(\phi(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|^{2}\), the resulting Bregman Gradient Mapping becomes equivalent to the classical Gradient Mapping (Nesterov, 2003, 2005), which measures the solution's accuracy for proximal gradient methods.

**Definition 11**: _(Limiting subdifferential (Rockafellar and Wets, 1998, Definition 8.3)) Consider a function \(f:\mathbb{R}^{d}\rightarrow\bar{\mathbb{R}}\) and a point \(\mathbf{x}\); the regular subdifferential is defined as_ \[\hat{\partial}f(\mathbf{x})=\{\mathbf{v}:f(\mathbf{y})\geq f(\mathbf{x})+\langle\mathbf{v},\,\mathbf{y}-\mathbf{x}\rangle+o(\|\mathbf{y}-\mathbf{x}\|)\}.\] _The limiting subdifferential is defined as_ \[\partial f(\mathbf{x})=\{\mathbf{v}:\mathbf{x}_{n}\rightarrow\mathbf{x},f(\mathbf{x}_{n})\to f(\mathbf{x}),\mathbf{v}_{n}\in\hat{\partial}f(\mathbf{x}_{n}),\,and\,\mathbf{v}_{n}\rightarrow\mathbf{v}\}.\]

By Fermat's rule (Rockafellar and Wets, 1998, Theorem 10.1), the set of critical points of \(\Phi\) is given by \[\operatorname{crit}\Phi=\left\{x\in\mathbb{R}^{d}:\;0\in\partial\Phi(x)\equiv\nabla F(x)+\partial R(x)\right\}.\] The Bregman Gradient Mapping can also be used to evaluate the solution accuracy for Bregman methods. Let \(\mathbf{x}^{+}=\operatorname{Prox}_{\alpha R}^{\phi}(\nabla\phi^{*}(\nabla\phi(\mathbf{x})-\alpha\nabla F(\mathbf{x})))\). From Definition 10 and equation (4), it can be easily verified by definition that \(0\in\partial\Phi(\mathbf{x})\Leftrightarrow 0=\mathcal{G}_{\alpha}(\mathbf{x}).\) Hence, \(0\in\partial\Phi(\mathbf{x}^{+})\) for any \(\alpha>0\). The proof of this result is omitted for brevity. Furthermore, if \(\nabla\phi\) is \(L_{\phi}\)-Lipschitz continuous, then the following proposition holds, implying that \(\|\mathcal{G}_{\alpha}(\mathbf{x})\|\) can be used as a reasonable criterion to measure the accuracy of \(\mathbf{x}\).

**Proposition 12**: _Suppose Assumption 1 holds and that \(\nabla\phi\) is \(L_{\phi}\) Lipschitz continuous. Then, we have the following inequality:_ \[\operatorname{dist}\left(0,\partial\Phi(\mathbf{x}^{+})\right)\leq(1+\alpha L_{F})L_{\phi}\|\mathcal{G}_{\alpha}(\mathbf{x})\|.\]

We also define the stochastic counterpart of Definition 10, which is commonly utilized to evaluate the accuracy of solutions for nonconvex stochastic proximal gradient methods, as discussed in Ghadimi et al. (2016).

**Definition 13**: _(Stochastic Bregman Gradient Mapping).
Given \(\alpha>0\), a nonempty convex open set \(C\) and a Legendre kernel function \(\phi\in\mathcal{M}(C)\), the stochastic Bregman gradient mapping associated with (1) is defined as follows_ \[\widetilde{\mathcal{G}}_{\alpha}(\mathbf{x}):=\frac{\mathbf{x}-\operatorname{Prox}_{\alpha R}^{\phi}\left(\nabla\phi^{*}\left(\nabla\phi(\mathbf{x})-\alpha\widetilde{\nabla}\right)\right)}{\alpha},\text{ where }\widetilde{\nabla}\text{ is an estimator of }\nabla F(\mathbf{x}).\]

## 3 Stochastic Bregman Proximal Gradient Method

In this section, we will study the Stochastic Bregman Proximal Gradient method (SBPG) with the following update scheme: \[\mathbf{x}^{k+1}=\operatorname*{argmin}_{\mathbf{x}\in\overline{C}}\;R(\mathbf{x})+\langle\widetilde{\nabla}_{k},\,\mathbf{x}-\mathbf{x}^{k}\rangle+\frac{1}{\alpha_{k}}\mathcal{D}_{\phi}(\mathbf{x},\mathbf{x}^{k}). \tag{5}\] We refer to the above method as the "vanilla" SBPG in this section, meaning that the method we study is a basic version without any additional techniques such as variance reduction, momentum, etc., except for the use of mini-batches. In this case, we make the following assumptions.

**Assumption 2**: _(Noise requirement). The estimator satisfies the following two conditions:_ \[\mathbb{E}[\widetilde{\nabla}_{k}|\mathcal{F}_{k}]=\nabla F(\mathbf{x}^{k})\quad\text{and}\quad\mathbb{E}[\|\widetilde{\nabla}_{k}-\nabla F(\mathbf{x}^{k})\|^{2}|\mathcal{F}_{k}]\leq\frac{\sigma^{2}}{m_{k}},\] _where \(m_{k}\) is the size of the mini-batch in the \(k\)-th iteration._

Note that we do not assume a finite-sum structure for \(F(\mathbf{x})\) in this section. The solution of (5) can be written in the form of the Bregman proximal mapping. This is stated in the following proposition.

**Proposition 14**: _Suppose Assumption 1 holds. Then the solution of (5) can be written as the following Bregman proximal mapping:_ \[\mathbf{x}^{k+1}=\mathrm{Prox}_{\alpha_{k}R}^{\phi}\left(\nabla\phi^{*}\left(\nabla\phi(\mathbf{x}^{k})-\alpha_{k}\widetilde{\nabla}_{k}\right)\right).\]

**Proof** From the optimality condition of the main subproblem (5), we have \[0\in\partial R(\mathbf{x}^{k+1})+\widetilde{\nabla}_{k}+\frac{1}{\alpha_{k}}\left(\nabla\phi(\mathbf{x}^{k+1})-\nabla\phi(\mathbf{x}^{k})\right).\] Let \(\mathbf{u}^{k+1}:=\mathrm{Prox}_{\alpha_{k}R}^{\phi}\left(\nabla\phi^{*}\left(\nabla\phi(\mathbf{x}^{k})-\alpha_{k}\widetilde{\nabla}_{k}\right)\right)\). From the definition of the Bregman proximal mapping, we have \[\mathbf{u}^{k+1}=\arg\min_{\mathbf{u}}\ \left\{\alpha_{k}R(\mathbf{u})+\mathcal{D}_{\phi}\left(\mathbf{u},\nabla\phi^{*}\left(\nabla\phi(\mathbf{x}^{k})-\alpha_{k}\widetilde{\nabla}_{k}\right)\right)\right\},\] which is equivalent to \[0\in\alpha_{k}\partial R(\mathbf{u}^{k+1})+\nabla\phi(\mathbf{u}^{k+1})-\nabla\phi\big{(}\nabla\phi^{*}(\nabla\phi(\mathbf{x}^{k})-\alpha_{k}\widetilde{\nabla}_{k})\big{)}.\] Note that the function \(\phi^{*}\) is the Fenchel conjugate of the Legendre kernel \(\phi\), which implies that \(\nabla\phi(\nabla\phi^{*}(\mathbf{w}))=\mathbf{w}\) for all \(\mathbf{w}\in\mathbb{R}^{d}\), as stated in (Rockafellar, 1997, Corollary 13.3.1, Theorem 26.5). Furthermore, since the objective function in (5) is strictly convex, there exists a unique solution to the inclusion above. By comparing the two inclusions, we can conclude that \(\mathbf{u}^{k+1}=\mathbf{x}^{k+1}\).
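The sketch below (our own worked example; names and hyperparameters are hypothetical) carries out one vanilla SBPG step in the form given by Proposition 14 for \(R=\lambda\|\cdot\|_{1}\) and the kernel \(\phi(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|^{2}+\frac{1}{4}\|\mathbf{x}\|^{4}\). For this pair, the composition \(\operatorname{Prox}_{\alpha R}^{\phi}(\nabla\phi^{*}(\mathbf{w}))\) with \(\mathbf{w}=\nabla\phi(\mathbf{x}^{k})-\alpha\widetilde{\nabla}_{k}\) reduces to a soft-thresholding of \(\mathbf{w}\) followed by a one-dimensional cubic equation, so \(\nabla\phi^{*}\) never has to be formed explicitly.

```python
import numpy as np

def grad_phi(x):
    return (1.0 + np.dot(x, x)) * x

def sbpg_step(x, g, alpha, lam):
    """One step x^{k+1} = Prox^phi_{alpha R}(grad_phi*(grad_phi(x) - alpha*g))."""
    w = grad_phi(x) - alpha * g                       # dual update
    v = np.sign(w) * np.maximum(np.abs(w) - alpha * lam, 0.0)   # soft threshold
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(x)
    roots = np.roots([1.0, 0.0, 1.0, -nv])            # c^3 + c = ||v||
    c = float(roots[np.argmin(np.abs(roots.imag))].real)
    return v / (1.0 + c ** 2)                         # then (1 + ||u||^2) u = v

# toy usage with a noisy gradient of F(x) = 0.25 * ||x||^4 (grad = ||x||^2 x)
rng = np.random.default_rng(1)
x = np.array([1.5, -1.0, 0.2])
g = np.dot(x, x) * x + 0.05 * rng.normal(size=3)
print(sbpg_step(x, g, alpha=0.2, lam=0.1))
```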
Based on Proposition 14 and the definition of \(\widetilde{\mathcal{G}}_{\alpha}(\mathbf{x})\), we can easily observe that \(\mathbf{x}^{k+1}=\mathbf{x}^{k}-\alpha_{k}\widetilde{\mathcal{G}}_{\alpha_{k}}(\mathbf{x}^{k})\). We can derive the following proposition, which bounds the difference between \(\mathcal{G}_{\alpha}(\mathbf{x})\) and \(\widetilde{\mathcal{G}}_{\alpha}(\mathbf{x})\), directly from Proposition 9. The proof is omitted for brevity.

**Proposition 15**: _Suppose Assumption 1 holds. At the \(k\)-th step, we have the estimation:_ \[\|\mathcal{G}_{\alpha_{k}}(\mathbf{x}^{k})-\widetilde{\mathcal{G}}_{\alpha_{k}}(\mathbf{x}^{k})\|\leq\frac{1}{\mu}\|\nabla F(\mathbf{x}^{k})-\widetilde{\nabla}_{k}\|=\frac{\|\mathbf{\varepsilon}_{k}\|}{\mu},\] _where \(\mathbf{\varepsilon}_{k}=\nabla F(\mathbf{x}^{k})-\widetilde{\nabla}_{k}\)._

Before presenting the main convergence result, we state the following one-step descent lemma.

**Lemma 16**: _Suppose Assumption 1 holds. The sequence generated by SBPG satisfies the following condition:_ \[\Phi(\mathbf{x}^{k+1})\leq\Phi(\mathbf{x}^{k})-\frac{1}{\alpha_{k}}\mathcal{D}_{\phi}(\mathbf{x}^{k},\mathbf{x}^{k+1})-\left(\frac{1}{\alpha_{k}}-L_{F}\right)\mathcal{D}_{\phi}(\mathbf{x}^{k+1},\mathbf{x}^{k})+\langle\mathbf{\varepsilon}_{k},\,\mathbf{x}^{k+1}-\mathbf{x}^{k}\rangle.\]

**Proof** By the optimality condition of (5), we obtain that \[0\in\partial R(\mathbf{x}^{k+1})+\widetilde{\nabla}_{k}+\frac{1}{\alpha_{k}}\left(\nabla\phi(\mathbf{x}^{k+1})-\nabla\phi(\mathbf{x}^{k})\right).\] Appealing to the convexity of \(R\), we have \[R(\mathbf{x})-R(\mathbf{x}^{k+1})\geq\big\langle-\widetilde{\nabla}_{k}-\frac{1}{\alpha_{k}}\left(\nabla\phi(\mathbf{x}^{k+1})-\nabla\phi(\mathbf{x}^{k})\right),\,\mathbf{x}-\mathbf{x}^{k+1}\big\rangle.\] By the four points identity and the definition of \(\mathbf{\varepsilon}_{k}\), we get \[R(\mathbf{x})-R(\mathbf{x}^{k+1})\geq\frac{1}{\alpha_{k}}\left[\mathcal{D}_{\phi}(\mathbf{x}^{k+1},\mathbf{x}^{k})+\mathcal{D}_{\phi}(\mathbf{x},\mathbf{x}^{k+1})-\mathcal{D}_{\phi}(\mathbf{x},\mathbf{x}^{k})\right]-\langle\nabla F(\mathbf{x}^{k}),\,\mathbf{x}-\mathbf{x}^{k+1}\rangle+\langle\mathbf{\varepsilon}_{k},\,\mathbf{x}-\mathbf{x}^{k+1}\rangle.\] Setting \(\mathbf{x}=\mathbf{x}^{k}\) in the above inequality, we obtain the following inequality: \[R(\mathbf{x}^{k})-R(\mathbf{x}^{k+1})\geq\frac{1}{\alpha_{k}}\left[\mathcal{D}_{\phi}(\mathbf{x}^{k+1},\mathbf{x}^{k})+\mathcal{D}_{\phi}(\mathbf{x}^{k},\mathbf{x}^{k+1})\right]-\langle\nabla F(\mathbf{x}^{k}),\,\mathbf{x}^{k}-\mathbf{x}^{k+1}\rangle+\langle\mathbf{\varepsilon}_{k},\,\mathbf{x}^{k}-\mathbf{x}^{k+1}\rangle.\] By the smooth adaptivity of \(F\), we have \[F(\mathbf{x}^{k+1})\leq F(\mathbf{x}^{k})+\langle\nabla F(\mathbf{x}^{k}),\,\mathbf{x}^{k+1}-\mathbf{x}^{k}\rangle+L_{F}\mathcal{D}_{\phi}(\mathbf{x}^{k+1},\mathbf{x}^{k}).\] Combining the two inequalities above, we complete the proof.

### Convergence analysis of SBPG

In this subsection, we establish the convergence results for SBPG, which is an extension of the convergence result in Ghadimi et al. (2016), in which the classical Lipschitz gradient assumption is required. In much of the literature, the bounded sequence assumption is required in the convergence analysis of stochastic algorithms. However, in this section, we relax this assumption and prove that under a certain condition, the sequence generated by (5) is bounded almost surely.
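For intuition, the following quick numerical check (our own, on the scalar toy problem with \(R=0\); it is not part of the paper's analysis) evaluates both sides of the inequality in Lemma 16 for one noisy step and confirms that it holds for the realized noise.

```python
import numpy as np

# Scalar toy: F(x) = x^4 with R = 0 and phi(x) = x^2/2 + x^4/4, for which L_F = 4.
rng = np.random.default_rng(0)
F, dF = lambda x: x**4, lambda x: 4 * x**3
phi, dphi = lambda x: 0.5*x**2 + 0.25*x**4, lambda x: x + x**3
D = lambda x, y: phi(x) - phi(y) - dphi(y) * (x - y)   # Bregman distance
L_F, alpha = 4.0, 0.2

x_k = 1.0
g_k = dF(x_k) + 0.3 * rng.normal()                     # noisy gradient estimate
eps_k = dF(x_k) - g_k

# With R = 0 the SBPG step solves dphi(x^{k+1}) = dphi(x^k) - alpha * g_k (a cubic).
rhs = dphi(x_k) - alpha * g_k
roots = np.roots([1.0, 0.0, 1.0, -rhs])
x_next = float(roots[np.argmin(np.abs(roots.imag))].real)

lhs = F(x_next)
ub = F(x_k) - D(x_k, x_next)/alpha - (1/alpha - L_F)*D(x_next, x_k) + eps_k*(x_next - x_k)
print(lhs, ub, lhs <= ub + 1e-12)                      # the inequality of Lemma 16
```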
We need the following result to bound the stochastic error term \(\langle\mathbf{\varepsilon}_{k},\,\mathbf{x}^{k+1}-\mathbf{x}^{k}\rangle\) in Lemma 16. **Lemma 17**: _Suppose Assumption 1, 2 hold. We have the following estimation of the error term:_ \[\mathbb{E}\left[\langle\mathbf{\varepsilon}_{k},\,\mathbf{x}^{k+1}-\mathbf{x}^{k}\rangle \right]\leq\frac{\alpha_{k}}{\mu}\mathbb{E}[\|\mathbf{\varepsilon}_{k}\|^{2}] \leq\frac{\alpha_{k}\sigma^{2}}{\mu m_{k}}.\] **Proof** Define \(\bar{\mathbf{x}}^{k+1}:=\text{Prox}_{\alpha_{k}R}^{\phi}(\nabla\phi^{*}(\nabla\phi( \mathbf{x}^{k})-\alpha_{k}\nabla F(\mathbf{x}^{k})))\). By Proposition 14 and the optimality condition for \(\bar{\mathbf{x}}^{k+1}\), we have \[0\in\partial R(\bar{\mathbf{x}}^{k+1})+\nabla F(\mathbf{x}^{k})+\frac{1}{\alpha_{k}}( \nabla\phi(\bar{\mathbf{x}}^{k+1})-\nabla\phi(\mathbf{x}^{k})).\] Similarly, \[0\in\partial R(\mathbf{x}^{k+1})+\widetilde{\nabla}_{k}+\frac{1}{\alpha_{k}}( \nabla\phi(\mathbf{x}^{k+1})-\nabla\phi(\mathbf{x}^{k})).\] By the monotonicity of \(\partial R\) and Lemma 5, we have \[\left\langle\bar{\mathbf{x}}^{k+1}-\mathbf{x}^{k+1},\,-\mathbf{\varepsilon}_{k}-\frac{1} {\alpha_{k}}(\nabla\phi(\bar{\mathbf{x}}^{k+1})-\nabla\phi(\mathbf{x}^{k+1}))\right\rangle \geq 0.\] Therefore, \[\left\langle\mathbf{x}^{k+1}-\bar{\mathbf{x}}^{k+1},\,\mathbf{\varepsilon}_{k}\right\rangle \geq\left\langle\bar{\mathbf{x}}^{k+1}-\mathbf{x}^{k+1},\,\frac{1}{\alpha_{k}}(\nabla \phi(\bar{\mathbf{x}}^{k+1})-\nabla\phi(\mathbf{x}^{k+1}))\right\rangle\geq\frac{\mu} {\alpha_{k}}\|\bar{\mathbf{x}}^{k+1}-\mathbf{x}^{k+1}\|^{2}.\] By Cauchy-Schwarz inequality, we get \(\|\bar{\mathbf{x}}^{k+1}-\mathbf{x}^{k+1}\|\leq\frac{\alpha_{k}}{\mu}\|\mathbf{ \varepsilon}_{k}\|\). Now, we are ready to prove Lemma 17. From the definition, we know that \(\bar{\mathbf{x}}^{k+1}\lhd\mathcal{F}_{k}\). Therefore, \(\mathbb{E}[\left\langle\mathbf{\varepsilon}_{k},\,\mathbf{x}^{k}-\bar{\mathbf{x}}^{k+1} \right\rangle]=\mathbb{E}[\mathbb{E}[\left\langle\mathbf{\varepsilon}_{k},\,\mathbf{x} ^{k}-\bar{\mathbf{x}}^{k+1}\right\rangle]\mathcal{F}_{k}]]=\mathbb{E}[\langle \mathbb{E}[\mathbf{\varepsilon}_{k}|\mathcal{F}_{k}],\,\mathbf{x}^{k}-\bar{\mathbf{x}}^{k+1 }\rangle]=0\), where the first equality is from the tower rule of conditional expectation, the second comes from the fact that \(\mathbf{x}^{k}-\bar{\mathbf{x}}^{k+1}\lhd\mathcal{F}_{k}\). Hence, \[\mathbb{E}\left[\left\langle\mathbf{\varepsilon}_{k},\,\mathbf{x}^{k+1}-\mathbf{x}^{k} \right\rangle\right]=\mathbb{E}\left[\left\langle\mathbf{\varepsilon}_{k},\,\mathbf{x }^{k+1}-\bar{\mathbf{x}}^{k+1}\right\rangle\right]-\mathbb{E}\left[\left\langle \mathbf{\varepsilon}_{k},\,\mathbf{x}^{k}-\bar{\mathbf{x}}^{k+1}\right\rangle\right]\leq \frac{\alpha_{k}}{\mu}\mathbb{E}[\|\mathbf{\varepsilon}_{k}\|^{2}]\leq\frac{\alpha _{k}\sigma^{2}}{\mu m_{k}},\] which completes the proof. \(\blacksquare\) **Lemma 18** (Bounded sequence): _Suppose Assumption 1, 2 hold. If \(\sum_{k}\frac{\alpha_{k}}{m_{k}}<\infty\), \(\sup_{k}\alpha_{k}\leq\bar{\alpha}<\frac{1}{L_{F}}\), then,_ 1. \(\sum_{k=0}^{\infty}\mathbb{E}[\mathcal{D}_{\phi}(\mathbf{x}^{k+1},\mathbf{x}^{k})]<\infty\)_._ 2. 
_If_ \(\Phi\) _is level bounded, then_ \(\{\mathbf{x}^{k}\}_{k\geq 0}\) _is bounded almost surely._

**Proof** By the Cauchy-Young inequality, we have \[|\langle\mathbf{\varepsilon}_{k},\,\mathbf{x}^{k}-\mathbf{x}^{k+1}\rangle|\leq\frac{\mu}{2\alpha_{k}}\|\mathbf{x}^{k}-\mathbf{x}^{k+1}\|^{2}+\frac{\alpha_{k}}{2\mu}\|\mathbf{\varepsilon}_{k}\|^{2}\leq\frac{1}{\alpha_{k}}\mathcal{D}_{\phi}(\mathbf{x}^{k},\mathbf{x}^{k+1})+\frac{\alpha_{k}}{2\mu}\|\mathbf{\varepsilon}_{k}\|^{2}.\] By Lemma 16, we have \[\left(\frac{1}{\alpha_{k}}-L_{F}\right)\mathcal{D}_{\phi}(\mathbf{x}^{k+1},\mathbf{x}^{k})\leq\Phi(\mathbf{x}^{k})-\Phi(\mathbf{x}^{k+1})+\frac{\alpha_{k}}{2\mu}\|\mathbf{\varepsilon}_{k}\|^{2}. \tag{6}\] Taking conditional expectations on both sides of (6), we get \[\mathbb{E}\left[\left(\frac{1}{\alpha_{k}}-L_{F}\right)\mathcal{D}_{\phi}(\mathbf{x}^{k+1},\mathbf{x}^{k})|\mathcal{F}_{k}\right]\leq\Phi(\mathbf{x}^{k})-\mathbb{E}[\Phi(\mathbf{x}^{k+1})|\mathcal{F}_{k}]+\frac{\alpha_{k}}{2\mu}\mathbb{E}[\|\mathbf{\varepsilon}_{k}\|^{2}|\mathcal{F}_{k}].\] Since \(\sum_{k\geq 0}\frac{\alpha_{k}}{2\mu}\mathbb{E}[\|\mathbf{\varepsilon}_{k}\|^{2}|\mathcal{F}_{k}]\leq\sum_{k\geq 0}\frac{\alpha_{k}\sigma^{2}}{2\mu m_{k}}<\infty\), applying Theorem 1, we have that \(\Phi(\mathbf{x}^{k})\) converges and \(\sum_{k\geq 0}\mathbb{E}\left[\left(\frac{1}{\alpha_{k}}-L_{F}\right)\mathcal{D}_{\phi}(\mathbf{x}^{k+1},\mathbf{x}^{k})|\mathcal{F}_{k}\right]<\infty\) almost surely. By the tower rule of conditional expectation, we have \(\sum_{k=0}^{\infty}\mathbb{E}[\mathcal{D}_{\phi}(\mathbf{x}^{k+1},\mathbf{x}^{k})]<\infty\). Since \(\Phi(\mathbf{x}^{k})\) converges almost surely, \(\{\Phi(\mathbf{x}^{k})\}_{k\geq 0}\) is bounded almost surely. By the level boundedness of \(\Phi\), we deduce that \(\{\mathbf{x}^{k}\}_{k\geq 0}\) is bounded almost surely.

Now, we present our main convergence result for the vanilla SBPG, which is in the sense of expectation.

**Theorem 19** (Convergence result in expectation): _Suppose Assumption 1, 2 hold, \(\alpha_{k}<\frac{1}{L_{F}}\min\{1,\frac{1}{\mu}\}\). Define a random variable \(r\) with the distribution \(\mathbb{P}\{r=k\}=\frac{\alpha_{k}}{\sum_{j=0}^{N-1}\alpha_{j}}\) for \(k=0,...,N-1\). Then,_ \[\mathbb{E}[\|\widetilde{\mathcal{G}}_{\alpha_{r}}(\mathbf{x}^{r})\|^{2}]\leq\frac{2\Phi(\mathbf{x}^{0})-2\Phi^{*}+2\sum_{k=0}^{N-1}\frac{\alpha_{k}\sigma^{2}}{\mu m_{k}}}{\mu\sum_{k=0}^{N-1}\alpha_{k}}.
\tag{7}\] _Moreover, if \(\Phi\) is level bounded, \(\sum_{k}\frac{\alpha_{k}}{m_{k}}<+\infty\) and \(\sum_{k}\alpha_{k}=+\infty\), then the sequence \(\{\mathbf{x}^{k}\}_{k\geq 0}\) is bounded almost surely, and the right hand side of (7) converges to zero._ **Proof** Note that \(\mathbf{x}^{k+1}=\mathbf{x}^{k}-\alpha_{k}\widetilde{\mathcal{G}}_{\alpha_{k}}(\mathbf{x} ^{k})\) and by the strongly convexity of \(\phi\), Lemma 16 yields \[\mu(\alpha_{k}-\frac{L_{F}\alpha_{k}^{2}}{2})\|\widetilde{ \mathcal{G}}_{\alpha_{k}}(\mathbf{x}^{k})\|^{2} \leq\frac{1}{\alpha_{k}}\mathcal{D}_{\phi}(\mathbf{x}^{k},\mathbf{x}^{k+1 })+\left(\frac{1}{\alpha_{k}}-L_{F}\right)\mathcal{D}_{\phi}(\mathbf{x}^{k+1},\bm {x}^{k})\] \[\leq\Phi(\mathbf{x}^{k})-\Phi(\mathbf{x}^{k+1})+\langle\mathbf{\varepsilon}_{ k},\,\mathbf{x}^{k+1}-\mathbf{x}^{k}\rangle.\] Taking expectations, telescoping from \(k=0...N-1\), and using Lemma 17, we obtain \[\sum_{k=0}^{N-1}\mu(\alpha_{k}-\frac{\mu L_{F}\alpha_{k}^{2}}{2})\mathbb{E}[\| \widetilde{\mathcal{G}}_{\alpha_{k}}(\mathbf{x}^{k})\|^{2}]\leq\Phi(\mathbf{x}^{0})- \Phi(\mathbf{x}^{N})+\sum_{k=0}^{N-1}\frac{\alpha_{k}\sigma^{2}}{\mu m_{k}}. \tag{8}\] By utilizing the inequality \(\alpha_{k}-\frac{\mu L_{F}\alpha_{k}^{2}}{2}\geq\frac{\alpha_{k}}{2}\), the condition \(\Phi(\mathbf{x}^{N})\geq\Phi^{*}\), and considering the definition of the random variable \(r\), we can derive (7) from (8). **Remark 20**: _We give some remarks for Theorem 19._ 1. _The mini-batch setting is a crucial component for ensuring the convergence, as it allows us to control the stochastic error term in Lemma_ 16 _and provide a bound for_ \(\mathbb{E}[\|\widetilde{\mathcal{G}}_{\alpha_{k}}(\mathbf{x}^{k})\|^{2}]\) _that converges to zero as_ \(k\) _tends to infinity. If_ \(m_{k}=1\) _for all_ \(k\)_, then the upper bound for_ \(\mathbb{E}[\|\widetilde{\mathcal{G}}_{\alpha_{k}}(\mathbf{x}^{k})\|^{2}]\) _will not converge to zero, no matter how_ \(\{\alpha_{k}\}\) _is selected._ 2. _In Ghadimi et al._ (_2016_)__, a similar convergence result is established for nonconvex stochastic proximal gradient methods, but our analysis differs in a crucial aspect in that we do not assume the Lipschitz continuity of_ \(F(\mathbf{x})\)_. Instead, we assume that_ \(F(\mathbf{x})\) _is smooth adaptable, which is a more relaxed assumption. Moreover, we provide a specific choice of stepsizes_ \(\{\alpha_{k}\}\) _and mini-batch sizes_ \(\{m_{k}\}\) _that guarantee the convergence of_ \(\mathbb{E}[\|\widetilde{\mathcal{G}}_{\alpha_{k}}(\mathbf{x}^{k})\|^{2}]\) _to_ \(0\)_, as well as the almost sure boundedness of the sequence._ Based on Proposition 15 and Theorem 19, we can derive the following convergence result using the measure \(\mathcal{G}_{\alpha_{r}}(\mathbf{x}^{r})\). This can be obtained by observing that \(\|\mathcal{G}_{\alpha_{r}}(\mathbf{x}^{r})\|^{2}\leq 2\|\widetilde{\mathcal{G}}_{ \alpha_{r}}(\mathbf{x}^{r})\|^{2}+2\|\mathcal{G}_{\alpha_{r}}(\mathbf{x}^{r})- \widetilde{\mathcal{G}}_{\alpha_{r}}(\mathbf{x}^{r})\|^{2}\). **Corollary 21**: _Under the conditions in Theorem 19, we have_ \[\mathbb{E}[\|\mathcal{G}_{\alpha_{r}}(\mathbf{x}^{r})\|^{2}]\;\leq\;\frac{4\Phi( \mathbf{x}^{0})-4\Phi^{*}+4\sum_{k=0}^{N-1}\frac{\alpha_{k}\sigma^{2}}{\mu m_{k}}} {\sum_{k=0}^{N-1}\mu\alpha_{k}}+\frac{2\sigma^{2}}{\mu^{2}m_{k}}.\] ### Momentum based Stochastic Bregman Gradient Descent Method Remark 20 suggests that increasing the mini-batch size \(m_{k}\) is almost necessary for the error bound in Theorem 19 to converge to zero. 
However, using a large mini-batch size in each iteration can be computationally expensive in the modern large-scale problems, e.g. training deep neural network. In this part, we resort to the momentum technique to address this issue. Specifically, we consider using a stochastic moving average estimator (SMAE) for the true gradient given by: \[\mathbf{v}^{k}=(1-\beta_{k})\mathbf{v}^{k-1}+\beta_{k}\widetilde{\nabla}_{k},\quad \text{where}\quad\mathbb{E}[\widetilde{\nabla}_{k}|\mathcal{F}_{k}]=\nabla F( \mathbf{x}^{k}), \tag{9}\] where \(\mathbf{v}^{k-1}\) can be viewed as the momentum which contain the information of all historical stochastic gradients, and \(\mathbb{E}[\|\widetilde{\nabla}_{k}\|^{2}|\mathcal{F}_{k}]\leq\frac{\sigma^{ 2}}{m_{k}}\). We expect that incorporating the SMAE technique can achieve a certain level of variance reduction without increasing the mini-batch size. In our approach, we utilize the gradient estimator \(\mathbf{v}^{k}\) within SBPG, and we refer to the resulting method as MSBPG. Specifically, we consider the following update scheme: \[\mathbf{x}^{k+1}=\operatorname*{argmin}_{\mathbf{x}\in\overline{\mathcal{C}}}\;R(\bm {x})+\langle\mathbf{v}^{k},\,\mathbf{x}-\mathbf{x}^{k}\rangle+\frac{1}{\alpha_{k}} \mathcal{D}_{\phi}(\mathbf{x},\mathbf{x}^{k}). \tag{10}\] We need the following assumption that the difference of gradients of \(F\) can be bounded by the Bregman distance. **Assumption 3**: _There exists \(\kappa>0\), such that \(\|\nabla F(\mathbf{x})-\nabla F(\mathbf{y})\|^{2}\leq\kappa\mathcal{D}_{\phi}(\mathbf{x}, \mathbf{y})\) for all \(\mathbf{x}\in dom\,\phi\), \(\mathbf{y}\in int\,dom\,\phi\)._ **Remark 22**: _This assumption generalizes the case of Lipschitz kernel function. Assume that \(F\) is \(L_{F}\) smooth adaptable to \(\phi\), it can be easily shown that if \(\phi\) has \(L_{\phi}\)-Lipschitz gradient, this assumption holds for \(\kappa\geq\frac{2L_{F}^{2}L_{\phi}^{2}}{\mu}\). In this paper, we are more interested in polynomial kernel functions. For functions with polynomially bounded growth rates, this assumption is not restrictive. For example, consider the one-dimensional objective function \(F(x)=\frac{1}{4}x^{4}\) and the kernel function \(\phi(x)=\frac{1}{2}x^{2}+\frac{1}{8}x^{8}\). Then, by (Lu et al., 2018, Proposition 2.1), we know that \(F\) is smooth adaptable with respect to \(\phi\). Simple algebra shows that \(\mathcal{D}_{\phi}(x,y)=\frac{1}{8}(x-y)^{2}(x^{6}+2x^{5}y+3x^{4}y^{2}+4x^{3}y^ {3}+5x^{2}y^{4}+6xy^{5}+7y^{6}+4)\) and \((F^{\prime}(x)-F^{\prime}(y))^{2}=(x-y)^{2}(x^{2}+xy+y^{2})^{2}\). Numerical computation shows that \((x^{6}+2x^{5}y+3x^{4}y^{2}+4x^{3}y^{3}+5x^{2}y^{4}+6xy^{5}+7y^{6}+4)-(x^{2}+xy+ y^{2})^{2}\geq 3.71\). Therefore, \((F^{\prime}(x)-F^{\prime}(y))^{2}\leq 8\mathcal{D}_{\phi}(x,y)\), which holds globally for any \(x\) and \(y\)._ Next we present a recursion lemma that allows us to estimate the accuracy of the SMAE. While similar lemmas have been proposed in the literature, such as in Wang et al. (2017), their bounds are not directly applicable in the Bregman setting. As a result, we have developed a version of the recursion lemma that is tailored to our specific context. 
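As a brief aside before the lemma, the snippet below numerically spot-checks the inequality claimed in Remark 22, namely \((F^{\prime}(x)-F^{\prime}(y))^{2}\leq 8\mathcal{D}_{\phi}(x,y)\) for \(F(x)=\frac{1}{4}x^{4}\) and \(\phi(x)=\frac{1}{2}x^{2}+\frac{1}{8}x^{8}\). It evaluates the gap on a finite grid only, so it is an illustration rather than a proof, and the grid range is an arbitrary choice of ours, not taken from the paper.

```python
import numpy as np

# Example from Remark 22: F(x) = x^4 / 4, phi(x) = x^2/2 + x^8/8.
def dF(x):
    return x ** 3

def phi(x):
    return 0.5 * x ** 2 + x ** 8 / 8.0

def dphi(x):
    return x + x ** 7

def bregman(x, y):
    # D_phi(x, y) = phi(x) - phi(y) - phi'(y) * (x - y)
    return phi(x) - phi(y) - dphi(y) * (x - y)

# Check (F'(x) - F'(y))^2 <= 8 * D_phi(x, y) on a finite grid (illustrative only).
xs = np.linspace(-3.0, 3.0, 601)
X, Y = np.meshgrid(xs, xs)
gap = (dF(X) - dF(Y)) ** 2 - 8.0 * bregman(X, Y)
print("max violation on the grid (should be <= 0):", gap.max())
```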
**Lemma 23**: _The following recursion holds_ \[\mathbb{E}[\|\mathbf{v}^{k}-\nabla F(\mathbf{x}^{k})\|^{2}|\mathcal{F}_{k}]\leq(1- \beta_{k})\|\mathbf{v}^{k-1}-\nabla F(\mathbf{x}^{k-1})\|^{2}+\beta_{k}^{2}\mathbb{E}[ \|\widetilde{\nabla}_{k}-\nabla F(\mathbf{x}^{k})\|^{2}|\mathcal{F}_{k}]+\frac{\| \nabla F(\mathbf{x}^{k-1})-\nabla F(\mathbf{x}^{k})\|^{2}}{\beta_{k}}.\] **Proof** Note that \(\mathbf{v}^{k}-\nabla F(\mathbf{x}^{k})=(1-\beta_{k})(\mathbf{v}^{k-1}-\nabla F(\mathbf{x}^{k- 1}))+(1-\beta_{k})(\nabla F(\mathbf{x}^{k-1})-\nabla F(\mathbf{x}^{k}))+\beta_{k}( \widetilde{\nabla}_{k}-\nabla F(\mathbf{x}^{k}))\), and \(\mathbb{E}[\widetilde{\nabla}_{k}-\nabla F(\mathbf{x}^{k})|\mathcal{F}_{k}]=0\). Then we have \[\mathbb{E}[\|\mathbf{v}^{k}-\nabla F(\mathbf{x}^{k})\|^{2}|\mathcal{F}_{k}]\] \[=(1-\beta_{k})^{2}\|\mathbf{v}^{k-1}-\nabla F(\mathbf{x}^{k-1})\|^{2}+(1- \beta_{k})^{2}\|\nabla F(\mathbf{x}^{k-1})-\nabla F(\mathbf{x}^{k})\|^{2}+\] \[\beta_{k}^{2}\mathbb{E}[\|\widetilde{\nabla}_{k}-\nabla F(\mathbf{x} ^{k})\|^{2}|\mathcal{F}_{k}]+2(1-\beta_{k})^{2}\langle\mathbf{v}^{k-1}-\nabla F( \mathbf{x}^{k-1}),\,\nabla F(\mathbf{x}^{k-1})-\nabla F(\mathbf{x}^{k})\rangle\] \[\leq(1-\beta_{k})^{2}\|\mathbf{v}^{k-1}-\nabla F(\mathbf{x}^{k-1})\|^{2}+ (1-\beta_{k})^{2}\|\nabla F(\mathbf{x}^{k-1})-\nabla F(\mathbf{x}^{k})\|^{2}+\] \[\beta_{k}^{2}\mathbb{E}[\|\widetilde{\nabla}_{k}-\nabla F(\mathbf{x} ^{k})\|^{2}|\mathcal{F}_{k}]+\beta_{k}(1-\beta_{k})\|\mathbf{v}^{k-1}-\nabla F( \mathbf{x}^{k-1})\|^{2}+\frac{(1-\beta_{k})^{3}}{\beta_{k}}\|\nabla F(\mathbf{x}^{k-1} )-\nabla F(\mathbf{x}^{k})\|^{2}\] \[=(1-\beta_{k})\|\mathbf{v}^{k-1}-\nabla F(\mathbf{x}^{k-1})\|^{2}+\beta_{k }^{2}\mathbb{E}[\|\widetilde{\nabla}_{k}-\nabla F(\mathbf{x}^{k})\|^{2}|\mathcal{F }_{k}]+\frac{(1-\beta_{k})^{2}\|\nabla F(\mathbf{x}^{k-1})-\nabla F(\mathbf{x}^{k})\|^ {2}}{\beta_{k}}\] \[\leq(1-\beta_{k})\|\mathbf{v}^{k-1}-\nabla F(\mathbf{x}^{k-1})\|^{2}+ \beta_{k}^{2}\mathbb{E}[\|\widetilde{\nabla}_{k}-\nabla F(\mathbf{x}^{k})\|^{2}| \mathcal{F}_{k}]+\frac{\|\nabla F(\mathbf{x}^{k-1})-\nabla F(\mathbf{x}^{k})\|^{2}}{ \beta_{k}}.\] This completes the proof. \(\blacksquare\) Now we are ready to provide the convergence result for our momentum based SBPG. **Theorem 24**: _Suppose Assumption 1, 2 and 3 hold. Let \(\alpha_{k}=c\mu\beta_{k+1}\) for any \(c\in(0,\frac{1}{2\sqrt{\mu}}]\). Then_ \[\mathbb{E}[\|\widetilde{\mathcal{G}}_{\alpha_{r}}(\mathbf{x}^{r})\|^{2}]\leq\frac{ \Phi^{0}-\Phi^{*}+c\|\mathbf{v}_{0}-\nabla F(\mathbf{x}^{0})\|^{2}+c\sum_{k=1}^{N} \frac{\beta_{k}^{2}\sigma^{2}}{m_{k}}}{\sum_{k=0}^{N-1}\frac{\mu\alpha_{k}}{8}}= \mathcal{O}\left(\frac{1}{\sum_{k=0}^{N-1}\alpha_{k}}+\frac{\sum_{k=0}^{N-1} \frac{\alpha_{k}^{2}}{m_{k+1}}}{\sum_{k=0}^{N-1}\alpha_{k}}\right),\] _where \(r\) is a random variable with distribution \(\mathbb{P}\{r=k\}=\frac{\alpha_{k}}{\sum_{k=0}^{N-1}\alpha_{k}}\), for \(k=0,...,N-1\)._ **Proof** From Lemma 16 and Cauchy-Young inequality, we have \[\Phi(\mathbf{x}^{k+1}) \leq\Phi(\mathbf{x}^{k})-\frac{1}{\alpha_{k}}\mathcal{D}_{\phi}(\mathbf{x }^{k},\mathbf{x}^{k+1})+\frac{\mu}{4\alpha_{k}}\|\mathbf{x}^{k}-\mathbf{x}^{k+1}\|^{2}+ \frac{\alpha_{k}}{\mu}\|\mathbf{\varepsilon}_{k}\|^{2}\] \[\leq\Phi(\mathbf{x}^{k})-\frac{1}{2\alpha_{k}}\mathcal{D}_{\phi}(\mathbf{x }^{k},\mathbf{x}^{k+1})+\frac{\alpha_{k}}{\mu}\|\mathbf{\varepsilon}_{k}\|^{2}.\] where we have defined \(\mathbf{\varepsilon}_{k}:=\nabla F(\mathbf{x}^{k})-\mathbf{v}^{k}\). 
Summing the above inequality over \(k=0,\ldots,N-1\) and rearranging the terms, we get \[\sum_{k=0}^{N-1}\frac{1}{2\alpha_{k}}\mathcal{D}_{\phi}(\mathbf{x}^{k},\mathbf{x}^{k+1} )\leq\Phi(\mathbf{x}^{0})-\Phi^{*}+\sum_{k=0}^{N-1}\frac{\alpha_{k}}{\mu}\|\mathbf{ \varepsilon}_{k}\|^{2}.\] By applying Lemma 23, we can obtain the following inequality: \[\beta_{k}\mathbb{E}[\|\mathbf{\varepsilon}_{k-1}\|^{2}]\ \leq\ \mathbb{E}[\|\mathbf{ \varepsilon}_{k-1}\|^{2}]-\mathbb{E}[\|\mathbf{\varepsilon}_{k}\|^{2}]+\beta_{k} ^{2}\mathbb{E}[\|\widetilde{\nabla}_{k}-\nabla F(\mathbf{x}^{k})\|^{2}]+\mathbb{ E}\left[\frac{\|\nabla F(\mathbf{x}^{k-1})-\nabla F(\mathbf{x}^{k})\|^{2}}{\beta_{k}} \right].\] Hence \[\sum_{k=0}^{N-1}\beta_{k+1}\mathbb{E}[\|\mathbf{\varepsilon}_{k}\|^{2}]=\sum_{k=1 }^{N}\beta_{k}\mathbb{E}[\|\mathbf{\varepsilon}_{k-1}\|^{2}]\leq\|\mathbf{\varepsilon }_{0}\|^{2}+\sum_{k=1}^{N}\beta_{k}^{2}\mathbb{E}[\|\widetilde{\nabla}_{k}- \nabla F(\mathbf{x}^{k})\|^{2}]+\sum_{k=1}^{N}\mathbb{E}\left[\frac{\|\nabla F(\bm {x}^{k-1})-\nabla F(\mathbf{x}^{k})\|^{2}}{\beta_{k}}\right].\] Since \(\frac{\alpha_{k}}{\mu}=c\beta_{k+1}\) for some constant \(c\), we get the following inequality: \[\sum_{k=0}^{N-1}\frac{1}{2\alpha_{k}}\mathbb{E}[\mathcal{D}_{\phi}(\mathbf{x}^{k}, \mathbf{x}^{k+1})]\leq\Phi(x^{0})-\Phi^{*}+c\left(\|\mathbf{\varepsilon}_{0}\|^{2}+ \sum_{k=1}^{N}\beta_{k}^{2}\mathbb{E}[\|\widetilde{\nabla}_{k}-\nabla F(\mathbf{x} ^{k})\|^{2}]+\sum_{k=1}^{N}\mathbb{E}\left[\frac{\|\nabla F(\mathbf{x}^{k-1})- \nabla F(\mathbf{x}^{k})\|^{2}}{\beta_{k}}\right]\right).\] By using Assumption 3, we obtain that \[\frac{\|\nabla F(\mathbf{x}^{k})-\nabla F(\mathbf{x}^{k+1})\|^{2}}{\beta_{k+1}}\leq \frac{\kappa}{\beta_{k+1}}\mathcal{D}_{\phi}(\mathbf{x}^{k},\mathbf{x}^{k+1}).\] Combining above two inequalities, we get \[\sum_{k=0}^{N-1}\frac{1}{2\alpha_{k}}\mathbb{E}[\mathcal{D}_{\phi}(\mathbf{x}^{k}, \mathbf{x}^{k+1})]\leq\Phi(\mathbf{x}^{0})-\Phi^{*}+c\left(\|\mathbf{\varepsilon}_{0}\|^{2 }+\sum_{k=1}^{N}\beta_{k}^{2}\mathbb{E}[\|\widetilde{\nabla}_{k}-\nabla F(\bm {x}^{k})\|^{2}]+\sum_{k=0}^{N-1}\frac{\kappa}{\beta_{k+1}}\mathbb{E}[\mathcal{ D}_{\phi}(\mathbf{x}^{k},\mathbf{x}^{k+1})]\right).\] Since \(c\leq\frac{1}{2\sqrt{\mu\mu}}\) and \(\frac{\alpha_{k}}{\mu}=c\beta_{k+1}\), we can deduce that \(\frac{c\kappa}{\beta_{k+1}}\leq\frac{1}{4\alpha_{k}}\). Using this condition, we obtain the inequality: \[\sum_{k=0}^{N-1}\frac{1}{4\alpha_{k}}\mathbb{E}[\mathcal{D}_{\phi}(\mathbf{x}^{k}, \mathbf{x}^{k+1})]\leq\Phi(\mathbf{x}^{0})-\Phi^{*}+c\left(\|\mathbf{\varepsilon}_{0}\|^{2 }+\sum_{k=1}^{N}\beta_{k}^{2}\mathbb{E}[\|\widetilde{\nabla}_{k}-\nabla F(\bm {x}^{k})\|^{2}]\right).\] Note that \(\mathcal{D}_{\phi}(\mathbf{x}^{k},\mathbf{x}^{k+1})\geq\frac{\mu}{2}\|\mathbf{x}^{k}-\mathbf{x} ^{k+1}\|^{2}=\frac{\mu\alpha_{k}^{2}}{2}\|\widetilde{\mathcal{G}}_{\alpha_{k}} (\mathbf{x}^{k})\|^{2}\) and by the definition of the random variable \(a\), we get \[\mathbb{E}[\|\widetilde{\mathcal{G}}_{\alpha_{k}}(\mathbf{x}^{a})\|^{2}]\leq\frac{ \Phi^{0}-\Phi^{*}+c\|\mathbf{\varepsilon}_{0}\|^{2}+c\sum_{k=1}^{N}\frac{\beta_{k}^ {2}\sigma^{2}}{m_{k}}}{\sum_{k=0}^{N-1}\frac{\mu\alpha_{k}}{8}}=\mathcal{O} \left(\frac{1}{\sum_{k=0}^{N-1}\alpha_{k}}+\frac{\sum_{k=0}^{N-1}\frac{\alpha_ {k}^{2}}{m_{k+1}}}{\sum_{k=0}^{N-1}\alpha_{k}}\right),\] which completes the proof. **Remark 25**: _Now we give some remarks for Theorem 24._ 1. 
_When the sequence_ \(\{\mathbf{x}_{k}\}\) _is bounded, an alternative assumption to Assumption_ 3 _is that_ \(C=\mathbb{R}^{d}\) _and_ \(\phi\) _has a locally Lipschitz gradient, as made in_ _(_Bolte et al._,_ 2018_, Theorem_ 4.1_)_ _and_ _(_Latafat et al._,_ 2022_, Theorem_ 4.7_)__. Under these assumptions, we can conclude that there exists a compact set_ \(\mathcal{U}\) _containing_ \(\{\mathbf{x}_{k}\}\)_. Therefore, there exists_ \(L_{\phi\mathcal{U}}>0\) _such that_ \(\nabla\phi\) _is Lipschitz continuous over_ \(\mathcal{U}\)_, and we can derive that_ \(\|\nabla F(\mathbf{x})-\nabla F(\mathbf{y})\|^{2}\leq L_{F}^{2}L_{\phi\mathcal{U}}^{2} \|\mathbf{x}-\mathbf{y}\|^{2}\leq\frac{2L_{F}^{2}L_{\phi\mathcal{U}}^{2}}{\mu}\mathcal{ D}_{\phi}(\mathbf{x},\mathbf{y})\) _holds._ 2. _Compared to the results presented in Theorem_ 19_, it is worth noting that even when keeping_ \(m_{k}\) _constant, we can still achieve convergence to zero error bound by carefully selecting the stepsize sequence_ \(\{\alpha_{k}\}\)_. A typical stepsize condition is that_ \(\sum_{k}\alpha_{k}=\infty\)_,_ \(\sum_{k}\alpha_{k}^{2}<\infty\)_, which coincides with the classical stepsize condition that guarantees a sufficient but not too fast decrease of the stepsize, as discussed in_ _Bertsekas and Tsitsiklis_ _(_2000_)__. Therefore, by incorporating the momentum technique, we can achieve an improved convergence with negligible additional computation costs. This desirable convergence property theoretically supports the use of SBPG with momentum for large-scale problems, such as deep neural networks, without the need to increase the mini-batch size._ ## 4 Application of MSBPG in deep neural networks In this section, we present a detailed description of MSBPG applied to training deep neural networks. Throughout this section, we assume that the optimization domain \(\overline{C}\) is the entire space \(\mathbb{R}^{d}\), so that \(\phi\in\mathcal{M}(\mathbb{R}^{d})\) and \(F\in\mathcal{C}^{1}(\mathbb{R}^{d})\). For simplicity, we omit the explicit mention of the feasible set \(\mathbb{R}^{d}\) in this section. The optimization problem we consider here is given by: \[\min_{\mathbf{W}}\ \underbrace{\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(\mathcal{DNN}( \mathbf{W},\mathbf{x}_{i}),y_{i})}_{F(\mathbf{W})}+\lambda\|\mathbf{W}\|_{1}, \tag{11}\] where \(\mathcal{DNN}(\mathbf{W},\mathbf{x})\) is the neural network function with training parameters \(\mathbf{W}\) and input data \(\mathbf{x}\), \(\mathcal{L}\) is the loss function that measures the difference between the output of the neural network \(\mathcal{DNN}(\mathbf{W},\mathbf{x}_{i})\) and the label \(y_{i}\), \(F(\mathbf{W})\) is the training loss evaluated on the training dataset \(\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\), and \(\lambda\|\mathbf{W}\|_{1}\) is the \(L_{1}\) regularization term that is often used to avoid overfitting in training deep neural networks (Ng, 2004). To illustrate the neural network function \(\mathcal{DNN}(\mathbf{W},\mathbf{x})\), in the \(L\)-layer fully connected neural network, we have \(\mathbf{W}=[\mathbf{W}_{1},\mathbf{W}_{2},\cdots,\mathbf{W}_{L}]\) and \[\mathcal{DNN}(\mathbf{W},\mathbf{x})=\sigma_{L}(\mathbf{W}_{L}(\sigma_{L-1}(\mathbf{W}_{L-1}(...(\sigma_{1}(\mathbf{W}_{1}\mathbf{x}))...)))), \tag{12}\] where \(\sigma_{i}\) is the nonlinear activation function. In this paper, we focus on smooth activation functions. 
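To make the setting concrete, the following sketch assembles the training objective in (11) for a small fully connected network of the form (12) in PyTorch. The layer sizes, the GELU activation, the cross-entropy loss, and the evaluation on a single random mini-batch are illustrative choices of ours, not prescriptions from the paper.

```python
import torch
import torch.nn as nn

# A small L-layer fully connected network with a smooth activation (cf. (12)).
class SmallDNN(nn.Module):
    def __init__(self, dims=(784, 256, 128, 10)):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]
        )
        self.act = nn.GELU()  # smooth activation

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = self.act(layer(x))
        return self.layers[-1](x)

def objective(model, x, y, lam=1e-4):
    """Mini-batch estimate of F(W) plus the L1 regularizer lambda * ||W||_1, as in (11)."""
    loss = nn.functional.cross_entropy(model(x), y)          # loss term F(W)
    l1 = sum(p.abs().sum() for p in model.parameters())      # ||W||_1 over all parameters
    return loss + lam * l1

# Example usage with random data (batch of 32 samples).
model = SmallDNN()
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
print(objective(model, x, y).item())
```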
At the \(k\)-th iteration, MSBPG has the following update scheme: \[\mathbf{v}_{k} = (1-\beta_{k})\mathbf{v}^{k-1}+\beta_{k}\widetilde{\nabla}_{k} \tag{13}\] \[\mathbf{W}^{k+1} = \underset{\mathbf{W}}{\text{argmin}}\ \langle\mathbf{v}^{k},\,\mathbf{W}-\mathbf{W}^{k} \rangle+\frac{1}{\alpha_{k}}\mathcal{D}_{\phi}(\mathbf{W},\mathbf{W}^{k})+\lambda\| \mathbf{W}\|_{1}, \tag{14}\] where \(\widetilde{\nabla}_{k}\) is mini-batch gradient computed by automatic differentiation (Griewank and Walther, 2008). Omitting all the constants, the subproblem takes the form of: \[\mathbf{W}^{k+1}=\operatorname*{argmin}_{\mathbf{W}}\ \phi(\mathbf{W})+\langle\mathbf{p}^{k},\, \mathbf{W}\rangle+\alpha_{k}\lambda\|\mathbf{W}\|_{1}, \tag{15}\] where \(\mathbf{p}^{k}=\alpha_{k}\mathbf{v}^{k}-\nabla\phi(\mathbf{W}^{k})\). Here we adopt the kernel function \(\phi(\mathbf{W})=\frac{1}{2}\|\mathbf{W}\|^{2}+\frac{\delta}{r}\|\mathbf{W}\|^{r}\) (\(r\geq 2\)) for training neural networks, and then we have an explicit solution for (15) in Proposition 26. **Proposition 26**: _Given \(\mathbf{p}^{k}\in\mathbb{R}^{d}\), positive constant \(\alpha_{k}\), \(\lambda\), and the kernel function \(\phi(\mathbf{W})=\frac{1}{2}\|\mathbf{W}\|^{2}+\frac{\delta}{r}\|\mathbf{W}\|^{r}\)\((r\geq 2,\ \delta>0)\). The solution of the subproblem (15) is given by_ \[\mathbf{W}^{k+1}=-t^{*}\mathbf{p}^{+},\] _where \(t^{*}\) is the unique positive real root of the equation_ \[(\delta\|\mathbf{p}^{+}\|^{r-2})t^{r-1}+t-1=0, \tag{16}\] _and \(\mathbf{p}^{+}\) is given by_ \[\mathbf{p}^{+}=\operatorname*{argmin}_{\mathbf{p}}\left\{\frac{1}{2}\|\mathbf{p}-\mathbf{p}^{ k}\|^{2}+\alpha_{k}\lambda\|\mathbf{p}\|_{1}\right\}\] _which has an explicit expression given by \(\mathbf{p}^{+}_{j}=\operatorname*{sign}(\mathbf{p}^{k}_{j})\max(|\mathbf{p}^{k}_{j}|- \alpha_{k}\lambda,0)\) for the \(j\)-th coordinate._ **Proof** _The optimality condition of (15) is given by_ \[0=\mathbf{W}^{k+1}(1+\delta\|\mathbf{W}^{k+1}\|^{r-2})+\mathbf{p}^{k}+\alpha_{k}\lambda \mathbf{\Gamma}^{k},\ \text{where}\ \mathbf{\Gamma}^{k}\in\partial\|\cdot\|_{1}(\mathbf{W}^{k+1}).\] _Let \(\mathbf{p}^{+}=\mathbf{p}^{k}+\alpha_{k}\lambda\mathbf{\Gamma}^{k}\). By the optimality condition, we have \(\mathbf{W}^{k+1}=-t\mathbf{p}^{+}\) for some positive scalar \(t\), and_ \[(-t-\delta\|\mathbf{p}^{+}\|^{r-2}t^{r-1}+1)\mathbf{p}^{+}=0.\] _If \(\mathbf{p}^{+}\neq 0\), then \(\delta\|\mathbf{p}^{+}\|^{r-2}t^{r-1}+t-1=0\). If \(\mathbf{p}^{+}=0\), then \(\mathbf{W}^{k+1}=-t\mathbf{p}^{+}=0\). Since \(t>0\), then we have \(\partial\|\cdot\|_{1}(\mathbf{W}^{k+1})=\partial\|\cdot\|_{1}(-t\mathbf{p}^{+})=- \partial\|\cdot\|_{1}(\mathbf{p}^{+})\). Recall the definition of \(\mathbf{p}^{+}\), we have_ \[\mathbf{p}^{+}=\mathbf{p}^{k}+\alpha_{k}\lambda\mathbf{\Gamma}^{k}\in\mathbf{p}^{k}-\alpha_{k }\lambda\partial\|\cdot\|_{1}(\mathbf{p}^{+}),\] _which is sufficient and necessary optimality condition of the convex optimization problem:_ \[\mathbf{p}^{+}=\operatorname*{argmin}_{\mathbf{p}}\left\{\frac{1}{2}\|\mathbf{p}-\mathbf{p}^{ k}\|^{2}+\alpha_{k}\lambda\|\mathbf{p}\|_{1}\right\}.\] _This completes the proof by noting the the above minimization problem is the well-known soft threshold operator, see for example Friedman et al. 
(2010)._ **Example 1**: _In the absence of regularization, that is, when \(\lambda=0\), then \(\mathbf{p}^{+}=\mathbf{p}^{k}\) and the update formula for MSBPG at the \(k\)-th iteration simplifies to \(\mathbf{W}^{k+1}=-t^{*}\mathbf{p}^{k}\), where \(t^{*}\) is the positive root of the equation (16). In this case, \(\mathbf{W}^{k+1}=t^{*}(\nabla\phi(\mathbf{W}^{k})-\alpha_{k}\mathbf{v}^{k})\)._ **Example 2**: _If we set the \(L_{1}\) regularization parameter \(\lambda\) to zero and choose the kernel function simply as the Euclidean distance, i.e. \(\delta=0\), then SBPG reduces to SGD with momentum. Specifically, we have \(t^{*}=1\) and the update_ \[\mathbf{W}^{k+1}=\mathbf{W}^{k}-\alpha_{k}\mathbf{v}^{k}.\] Determining degree of kernel functionWe now turn our attention to selecting the appropriate parameter \(r\) for the kernel function. Intuitively, in order to bound the Hessian of the loss function in (11), particularly when the number of layers \(L\) in (12) is large, \(r\) should also be chosen to be larger, so that \(\nabla^{2}F\preceq\frac{1}{\alpha}\nabla^{2}\phi\) holds globally for some \(\alpha>0\). However, in this case, a significant numerical issue may arise when computing \(\|\mathbf{W}\|^{r-2}\). This problem can be avoided if the deep neural network exhibits some special structure such that a moderate \(r\) can make \(F(\mathbf{W})\) smooth adaptable with respect to \(\phi(\mathbf{W})\). For simplicity of analysis, we assume all the given label \(y_{i}\) as zero and consider a sum of squares error loss function. Then, we have a two-layer model defined as follows: \[\min_{\mathbf{W}=(\mathbf{u},\mathbf{v})}\;F(\mathbf{W})=\frac{1}{2}\,\|\sigma\left(\mathrm{Mat }(\mathbf{u})(g(\mathbf{v}))\right)\|^{2}\,, \tag{17}\] where \(\mathbf{v}\in\mathbb{R}^{n}\), \(\mathbf{u}\in\mathbb{R}^{km}\), \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\), \(\mathrm{Mat}(\mathbf{u})\in\mathbb{R}^{k\times m}\) and \(\sigma(\cdot)\) is a coordinate-wise operator. Notably, any deep neural network can be recast as the two-layer model given by (17). For instance, if we define \(\mathbf{v}=(\mathbf{W}_{1},...,\mathbf{W}_{L-1})\), \(\mathbf{u}=\mathrm{Vec}(\mathbf{W}_{L})\), \(g(\mathbf{W}_{1},...,\mathbf{W}_{L-1})=\sigma_{L-1}(\mathbf{W}_{L-1}(...(\sigma_{1}(\mathbf{ W}_{1}\mathbf{x}))...))\), then model (12) can be reformulated as (17). We make the following assumptions in this section, which guarantees that we can find a polynomial kernel function \(\phi\) with a moderate degree, such that \(F\) in (17) is smooth adaptable to \(\phi\). **Assumption 4**: \(\sigma\) _is twice differentiable and \(\sigma^{\prime}\) and \(\sigma\cdot\sigma^{\prime\prime}\) are globally bounded._ **Assumption 5**: \(g\) _is twice differentiable. All partial derivatives of order zero, one, and two of \(g\) are globally bounded._ **Remark 27**: _Now we give some remarks on the above assumptions._ 1. _Assumption_ 4 _is typically valid for various commonly used smooth activation functions. For example, the sigmoid function_ \(\sigma(x)=\frac{1}{1+e^{-x}}\) _satisfies global boundedness for both_ \(\sigma\) _and_ \(\sigma^{\prime\prime}\)_. Certain activation function may not have bounded function value, such as GELU_ _(_Hendrycks and Gimpel_,_ 2016_)__, which takes the formulation of_ \(\sigma(x)=x\Phi(x)\) _where_ \(\Phi\) _is the standard Gaussian cumulative distribution function. Nonetheless, the product_ \(\sigma\cdot\sigma^{\prime\prime}\) _is globally bounded. 
Another type of activation function satisfying Assumption_ 4 _is the smoothed ReLU function, for example, the following smoothed ReLU function, which we will consider in our numerical experiments:_ \[\sigma_{\epsilon}(x)=\left\{\begin{array}{cc}0&x\leq 0\\ x^{3}\left(\frac{1}{\epsilon^{2}}-\frac{x}{2\epsilon^{3}}\right)&0<x\leq \epsilon\\ x-\frac{\epsilon}{2}&x>\epsilon.\end{array}\right.\] _We observe that as_ \(\epsilon\) _tends to zero,_ \(\sigma_{\epsilon}\) _converges to the ReLU function. It is straightforward to verify that_ \(\sigma_{\epsilon}\cdot\sigma^{\prime\prime}_{\epsilon}\) _is globally bounded. Specifically,_ \(\frac{3}{4}\) _is a uniform bound on_ \(\sigma_{\epsilon}\cdot\sigma^{\prime\prime}_{\epsilon}\) _for_ \(\epsilon\in(0,\frac{1}{2})\)_._ 2. _In many popular neural network frameworks, batch normalization (BN) layers (Ioffe and Szegedy, 2015) are often used before the fully connected layers. For example, in the VGG_ _(_Simonyan and Zisserman_,_ 2014_)_ _and ResNet_ _(_He et al._,_ 2016_)__, BN layers are usually used before the last linear layer. In this case, we can treat all layers except the last one as one layer, which can be modeled as (17). It is expected that the BN layer can make the function \(g\) sufficiently smooth, thereby satisfying Assumption 5._ By applying the chain rule, we can compute the Hessian of \(F\) and determine a suitable degree parameter \(r\) in the kernel function, which will ensure that \(\nabla^{2}F\) is bounded by \(\nabla^{2}\phi\) globally. Consequently, \(F\) is smooth adaptable with respect to \(\phi\). In order to compute the Hessian of \(F\), two formulas are required, which can be verified directly. **Lemma 28**: _Let \(\mathbf{u}\in\mathbb{R}^{km}\), \(\mathbf{g}\in\mathbb{R}^{m}\), \(\mathbf{A}\in\mathbb{R}^{n\times m}\) and \(\mathbf{b}\in\mathbb{R}^{k}\). Consider two linear maps: \(\mathbf{u}\mapsto\mathrm{Mat}(\mathbf{u})\mathbf{g}\) and \(\mathbf{u}\mapsto\mathbf{A}(\mathrm{Mat}(\mathbf{u}))^{T}\mathbf{b}\), then, the Jacobian of the two maps are given by_ \[J_{\mathbf{u}}\left[\mathrm{Mat}(\mathbf{u})\mathbf{g}\right]=\mathbf{g}^{T} \otimes\mathbb{I}_{k},\] \[J_{\mathbf{u}}[\mathbf{A}(\mathrm{Mat}(\mathbf{u}))^{T}\mathbf{b}]=\mathbf{A}\otimes \mathbf{b}^{T}.\] **Proposition 29**: _Suppose Assumptions 4 and 5 hold. Then, for any given \(\delta>0\) and any \(r\geq 4\), the function \(F\) defined in (17) is smooth adaptable with respect to \(\phi(\mathbf{W})=\frac{1}{2}\|\mathbf{W}\|^{2}+\frac{\delta}{r}\|\mathbf{W}\|^{r}\)._ **Proof** We denote \(\mathrm{Mat}(\mathbf{u})\) by \(\mathbf{M}\). The Jacobian of \(g\) is denoted by \(Jg\), while its transpose is denoted by \(J^{T}g\). \(\mathbb{I}_{k}\) is \(k\times k\) identity matrix. 
Using Lemma 28, we can compute the Jacobian and Hessian of \(F\) as follows: **Jacobian of \(F\) :** \[\begin{split}\frac{\partial F}{\partial\mathbf{u}}&=(g( \mathbf{v})\otimes\mathbb{I}_{k})\left[\sigma^{\prime}(\mathbf{M}g(\mathbf{v}))\circ \sigma(\mathbf{M}g(\mathbf{v}))\right],\\ \frac{\partial F}{\partial\mathbf{v}}&=J^{T}g(\mathbf{v}) \mathbf{M}^{T}\left[\sigma^{\prime}(\mathbf{M}g(\mathbf{v}))\circ\sigma(\mathbf{M}g(\mathbf{v})) \right].\end{split} \tag{18}\] **Hessian of \(F\) :** \[\begin{split}\frac{\partial^{2}F}{\partial\mathbf{u}^{2}}& =(1)+(2),\\ \text{where }&(1)=(g(\mathbf{v})\otimes\mathbb{I}_{k}) \operatorname{Diag}\bigl{(}\sigma(\mathbf{M}g(\mathbf{v}))\circ\sigma^{\prime\prime}( \mathbf{M}g(\mathbf{v}))\bigr{)}(\mathbf{g}^{T}(v)\otimes\mathbb{I}_{k})\\ &(2)=(g(\mathbf{v})\otimes\mathbb{I}_{k})\operatorname{Diag}\bigl{(} \sigma^{\prime}(\mathbf{M}g(\mathbf{v}))\circ\sigma^{\prime}(\mathbf{M}g(\mathbf{v}))\bigr{)}( \mathbf{g}^{T}(v)\otimes\mathbb{I}_{k}).\end{split} \tag{19}\] \[\begin{split}\frac{\partial^{2}F}{\partial\mathbf{u}\partial\mathbf{v}}& =(1)+(2)+(3),\\ \text{where }&(1)=\left(J^{T}g(\mathbf{v})\right)\otimes\left[ \sigma^{\prime}(\mathbf{M}g(\mathbf{v}))\circ\sigma^{\prime}(\mathbf{M}g(\mathbf{v}))\right]^{T }\\ &(2)=J^{T}g(\mathbf{v})\mathbf{M}^{T}\operatorname{Diag}[\sigma(\mathbf{M}g( \mathbf{v}))\circ\sigma^{\prime\prime}(\mathbf{M}g(\mathbf{v}))]\left(g^{T}(\mathbf{v})\otimes \mathbb{I}_{k}\right)\\ &(3)=J^{T}g(\mathbf{v})\mathbf{M}^{T}\operatorname{Diag}[\sigma^{\prime}( \mathbf{M}g(\mathbf{v}))\circ\sigma^{\prime}(\mathbf{M}g(\mathbf{v}))]\left(g^{T}(\mathbf{v}) \otimes\mathbb{I}_{k}\right).\end{split} \tag{20}\] \[\begin{split}\frac{\partial^{2}F}{\partial\mathbf{v}^{2}}& =(1)+(2)+(3),\\ \text{where }&(1)=D^{2}g(\mathbf{v})\left[\mathbf{M}^{T}[\sigma^{ \prime}(\mathbf{M}g(\mathbf{v}))\circ\sigma(\mathbf{M}g(\mathbf{v}))]\right]=\sum d_{i}\nabla^{ 2}g_{i}(\mathbf{v})\\ &(2)=J^{T}g(\mathbf{v})\mathbf{M}^{T}\operatorname{Diag}[\sigma(\mathbf{M}g( \mathbf{v}))\circ\sigma^{\prime\prime}(\mathbf{M}g(\mathbf{v}))]\mathbf{M}Jg(\mathbf{v})\\ &(3)=J^{T}g(\mathbf{v})\mathbf{M}^{T}\operatorname{Diag}[\sigma^{\prime}( \mathbf{M}g(\mathbf{v}))\circ\sigma^{\prime}(\mathbf{M}g(\mathbf{v}))]\mathbf{M}Jg(\mathbf{v}),\end{split} \tag{21}\] where \(\mathbf{d}=\mathbf{M}^{T}[\sigma^{\prime}(\mathbf{M}g(\mathbf{v}))\circ\sigma(\mathbf{M}g(\mathbf{v}))]\). Now, we are ready to prove this proposition. For any \(\mathbf{w}\in\mathbb{R}^{km+n}\) and \(\mathbf{h}=[\mathbf{h}^{u};\mathbf{h}^{v}]\in\mathbb{R}^{km+n}\), it suffices to prove that \(\langle\nabla^{2}F(\mathbf{w})\mathbf{h},\,\mathbf{h}\rangle=\mathcal{O}(\langle\nabla^{2} \phi(\mathbf{w})\mathbf{h},\,\mathbf{h}\rangle)\). From (19)(20)(21) and Assumption 4, 5, we can easily get \(\langle\nabla^{2}F(\mathbf{w})\mathbf{h},\,\mathbf{h}\rangle=\mathcal{O}((1+\|\mathbf{w}\|^{2} )\|\mathbf{h}\|^{2})\). On the other hand, \(\nabla^{2}\phi(\mathbf{w})=I(1+\|\mathbf{w}\|^{r-2})+(r-2)\|\mathbf{w}\|^{r-4}\mathbf{w}\mathbf{w}^ {T}\). Hence \(\langle\nabla^{2}\phi(\mathbf{w})\mathbf{h},\,\mathbf{h}\rangle\geq(1+\|\mathbf{w}\|^{r-2}) \|\mathbf{h}\|^{2}\). So, we only require \(r-2\geq 2\). This completes the proof. Layerwise kernel functionIn Proposition 26, the kernel function \(\phi(\mathbf{W})=\frac{1}{2}\|\mathbf{W}\|^{2}+\frac{\delta}{r}\|\mathbf{W}\|^{r}\) is used, which means we adopt the same Bregman distance for all layers of deep neural networks. 
However, different layers have different optimization property for deep neural networks (You et al., 2019), and computing \(\|\mathbf{W}\|^{r}\) with \(r>2\) may result in numerical issues for neural networks with millions of parameters, such as in VGG (Simonyan and Zisserman, 2014). To take advantage of the layerwise structure of neural networks, we design a layerwise kernel function for a \(L\)-layer neural network as follows: \[\phi(\mathbf{W})=\sum_{i=1}^{L}\phi_{i}(\mathbf{W}_{i}),\quad\phi_{i}(\mathbf{W}_{i})= \frac{1}{2}\|\mathbf{W}_{i}\|^{2}+\frac{\delta}{r}\|\mathbf{W}_{i}\|^{r}. \tag{22}\] Note that \(\delta\) and \(r\) can vary from layer to layer, here we take the same \(\delta\) and \(r\) for different layers for simplicity. Then, we have the Bregman distance taking the form \(\mathcal{D}_{\phi}=\sum_{i=1}^{L}\mathcal{D}_{\phi_{i}}\). By employing this Bregman distance in subproblem (14), our MSBPG algorithm can be implemented in a layerwise manner. See the details in Algorithm 1. ``` 1:Input: Total number of iterations \(K\), stepsize \(\alpha_{k}\), momentum parameter \(\beta_{k}\), \(\delta\) and \(r\) to determine the kernel function \(\phi\). 2:Initialize: Set \(\mathbf{W}=\mathbf{W}^{0}\), \(\mathbf{v}_{0}=0\). 3:for\(k=0,\cdots,K-1\)do 4: Compute mini-batch gradient \(\widetilde{\nabla}_{k}\); 5: Compute SMAE: \(\mathbf{v}^{k}=(1-\beta_{k})\mathbf{v}^{k-1}+\beta_{k}\widetilde{\nabla}_{k}\); 6:for\(i=1,\dots,L\)do 7:\(\mathbf{p}_{i}^{k}=\alpha_{k}\mathbf{v}_{i}^{k}-\nabla\phi(\mathbf{W}_{i}^{k})\); 8:\(\mathbf{p}_{i}^{+}=\operatorname*{argmin}_{\mathbf{p}_{i}}\{\frac{1}{2}\|\mathbf{p}_{i}- \mathbf{p}_{i}^{k}\|^{2}+\alpha_{k}\lambda\|\mathbf{p}_{i}\|_{1}\}\); 9: Solve \((\delta\|\mathbf{p}_{i}^{+}\|^{r-2})t_{i}^{r-1}+t_{i}-1=0\) to get \(t_{i}^{k}\); 10:\(\mathbf{W}_{i}^{k+1}=-t_{i}^{k}\mathbf{p}_{i}^{+}\); 11:endfor 12:endfor 13:Output:\(\mathbf{W}^{1},\cdots,\mathbf{W}^{K}\). ``` **Algorithm 1** Momentum based Stochastic Bregman Proximal Gradient (MSBPG) for training neural networks Mitigating gradient explosionIn the training of deep neural networks, gradient explosion is a common undesired phenomenon, where the gradients of the loss function grow exponentially from layer to layer, leading to numerical instability or even collapse of the training process (Hochreiter, 1991; Manchev and Spratling, 2020). The reasons for gradient explosion include selecting a large stepsize and choosing an improper initialization for the model's parameters (Pascanu et al., 2013). In the following, we will show that MSBPG provides a novel approach to mitigate gradient explosion. Considering MSBPG without regularization, the update rule is given by: \[\mathbf{W}_{i}^{k+1}=-t_{i}^{k}\mathbf{p}_{i}^{k}=t_{i}^{k}\left((1+\delta\|\mathbf{W}_{i} ^{k}\|^{r-2})\mathbf{W}_{i}^{k}-\alpha_{k}\mathbf{v}_{i}^{k}\right), \tag{23}\] where \(t_{i}^{k}\in(0,1)\) is the unique positive root of \[\left(\delta\left\|(1+\delta\|\mathbf{W}_{i}^{k}\|^{r-2})\mathbf{W}_{i}^{k}-\alpha_{k }\mathbf{v}_{i}^{k}\right\|^{r-2}\right)t^{r-1}+t-1=0. \tag{24}\] Combining (23) and (24), we have the following equivalent implicit update scheme for the \(i\)-th layer: \[\mathbf{W}_{i}^{k+1}=\frac{1+\delta\|\mathbf{W}_{i}^{k}\|^{r-2}}{1+\delta\|\mathbf{W}_{i} ^{k+1}\|^{r-2}}\mathbf{W}_{i}^{k}-\frac{\alpha_{k}}{1+\delta\|\mathbf{W}_{i}^{k+1}\|^ {r-2}}\mathbf{v}_{i}^{k}. 
\tag{25}\] It is observed in practice that with large stepsize or large initial point, the gradient \(\mathbf{v}_{i}^{k}\) tends to explode if no scaling is done, while the norm of the weight \(\|\mathbf{W}_{i}^{k+1}\|\) also tends to be large. In (25), we see that by scaling the gradient with \(\frac{1}{1+\delta\|\mathbf{W}_{i}^{k+1}\|^{r-2}}\), the weight \(\mathbf{W}_{i}^{k+1}\) is relieved from moving excessively in the direction of the gradient to avoid rapid growth of its norm. At the same time, we can also see that if the norm \(\|\mathbf{W}_{i}^{k+1}\|\) does not change drastically, the coefficient of \(\mathbf{W}_{i}^{k}\) in (25) will be maintained to be approximately 1. Thus in (25), we see an automatic scaling of the gradient to avoid rapid growth of the weight and hence also mitigating subsequent gradient explosion. Experimental results in Section 5.2 indeed verify MSBPG's ability to mitigate gradient explosion for training deep neural networks. An intuitive illustration of MSBPG's "pull-back" ability is given in Figure 11 in Appendix B, and this "pull-back" ability originates from the Bregman proximity model and the polynomial kernel function we adopt. Improving generalization capacityFrom (25) we can see that MSBPG employs a scaling for the gradient during the update of the parameters as a result of adopting a Bregman proximity model and a polynomial kernel function. This scaling not only helps to mitigate the gradient explosion phenomenon, but also can improve the generalization capacity of MSBPG. To be specific, at the beginning of the training process, the initial weight of each layer \(\mathbf{W}_{i}\) tends to have a larger norm \(\|\mathbf{W}_{i}\|\) and therefore MSBPG takes a cautious update at the beginning phase of training. As the training goes on, due to the effect of regularization, either \(L_{1}\) or \(L_{2}\) regularization, the norm of each layer's weight \(\|\mathbf{W}_{i}\|\) becomes smaller, and MSBPG can then take bolder update of the parameters. This implicit training strategy of MSBPG is in agreement with the idea of a heuristic deep learning training technique called "learning rate warm-up" (Gotmare et al., 2018), which benefits the training stability and generalization performance (Liu et al., 2019). Experimental results in Section 5.2 also testify the excellent generalization capacity of MSBPG. ## 5 Numerical experiments In this section, we conduct numerical experiments to showcase the effectiveness and robustness of MSBPG in comparison to modern solvers commonly used in deep learning. We assess the impact of stepsize and initial point selection on the performance of our method. Our experiments consist of two parts. In the first part, we use a quadratic inverse problem as a toy example to illustrate the capabilities of vanilla SBPG. The second part is the main focus of this section, where we evaluate the performance of MSBPG in training deep neural networks. The experiments for the quadratic inverse problem are conducted using MATLAB R2021b on a Windows workstation equipped with a 12-core Intel Xeon E5-2680 @ 2.50GHz processor and 128GB of RAM. For the deep learning experiments, we conducted the experiments using PyTorch running on a single RTX3090 GPU. ### Quadratic inverse problem The quadratic inverse problem, as formulated in Bolte et al. 
(2018), is given by: \[\min\left\{\Phi(\mathbf{x}):=\underbrace{\frac{1}{4}\sum_{i=1}^{n}(\langle A_{i}\mathbf{x},\,\mathbf{x}\rangle-b_{i})^{2}}_{F(\mathbf{x})}+\lambda R(\mathbf{x}):\mathbf{x}\in\mathbb{R}^{d}\right\},\] which has practical applications (Beck and Eldar, 2013) and includes the phase retrieval problem as a special case (Luke, 2017). In this experiment, we consider the \(L_{1}\) regularization \(R(\mathbf{x})=\|\mathbf{x}\|_{1}\) with \(\lambda=1\times 10^{-3}\), and solve the quadratic inverse problem using SBPG and the stochastic (Euclidean) proximal gradient (SPG) method (Bertsekas, 2011). Notably, SPG is a special case of SBPG, in which \(\phi(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|^{2}\). Since the smooth term \(F(\mathbf{x})\) in the objective function does not admit a globally Lipschitz continuous gradient, we employ the kernel function \(\phi(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|^{2}+\frac{1}{r}\|\mathbf{x}\|^{r}\) with \(r=4\). It has been shown in Lu et al. (2018) that any \(r\geq 4\) guarantees that \(F\) is \(\phi\)-smooth adaptable globally. Moreover, according to Bolte et al. (2018), the smooth adaptable constant \(L_{F}\) can be chosen such that \(L_{F}\geq\sum_{i=1}^{n}(3\|A_{i}\|^{2}+\|A_{i}\|\,|b_{i}|)\) for \(r=4\). In this experiment, we randomly generate the data by the following MATLAB commands: `ai = randn(d,1); Ai = ai*ai';` and `x_true = sprandn(d,1,density_x); b_i = x_true'*(Ai*x_true);`. The true solution for the quadratic inverse problem is chosen as a sparse vector \(\mathbf{x}^{*}\) that satisfies \(\langle A_{i}\mathbf{x}^{*},\,\mathbf{x}^{*}\rangle=b_{i}\) for \(i=1,\ldots,n\). We set the mini-batch size for all algorithms to be \(m=1\). To evaluate the effectiveness of each algorithm, we use the following criterion, which takes into account the possibility of critical points being local minima or saddle points: \[\epsilon_{k}=\max\left\{\|\mathcal{G}(\mathbf{x}^{k})\|,\;\epsilon_{\mathtt{obj}}:=\frac{\mathtt{obj}_{k}-\mathtt{obj}_{*}}{1+\mathtt{obj}_{*}}\right\},\] where \(\mathtt{obj}_{k}=\Phi(\mathbf{x}^{k})\) and \(\mathtt{obj}_{*}=\Phi(\mathbf{x}^{*})\). The term \(\|\mathcal{G}(\mathbf{x}^{k})\|\) measures the stationarity of the solution, while a small \(\epsilon_{\mathtt{obj}}\) indicates that the solution is a "nearly" global minimum. We conduct experiments on a problem with data size \(d=100\) and \(\mathtt{density\_x}=0.05\). All methods are run until they reach an accuracy of \(\epsilon_{k}\leq 0.01\) within a time limit of 30 seconds. To ensure statistical significance, we run each algorithm 10 times and report the median value. The results are presented in Figure 2. For Figure 2(a), we randomly select initial points within a ball centered at the origin with radius \(1\times 10^{-2}\), and use the stepsize schedule \(\alpha_{k}=\max\left\{10^{-4},\frac{\alpha_{0}}{\sqrt{1+k}}\right\}\), where \(\alpha_{0}\) is the initial stepsize. For Figure 2(b), we use a constant stepsize schedule of \(1\times 10^{-3}\). For Figure 2(c), we randomly select initial points within a ball centered at the origin with radius \(1\times 10^{-2}\) and use a constant stepsize schedule. To prevent excessively small stepsizes that can slow down all methods, we set a lower bound for the stepsize.
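Before turning to the results, the snippet below sketches one possible implementation of the vanilla SBPG iteration for this problem, combining the soft-thresholding step and the scalar equation of Proposition 26 (with \(\delta=1\), \(r=4\)). The problem sizes, the single-sample gradient scaling, the iteration count, and the bisection root-finder are our own illustrative choices and are not meant to reproduce the reported experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam, r = 20, 50, 1e-3, 4                       # small illustrative sizes
A = [np.outer(a, a) for a in rng.standard_normal((n, d))]
x_true = rng.standard_normal(d) * (rng.random(d) < 0.05)     # sparse ground truth
b = np.array([x_true @ Ai @ x_true for Ai in A])

def grad_phi(x):
    # phi(x) = ||x||^2/2 + ||x||^4/4, so grad phi(x) = (1 + ||x||^2) x
    return (1.0 + x @ x) * x

def solve_t(c, iters=60):
    # unique positive root of c*t^3 + t - 1 = 0 lies in (0, 1]; plain bisection
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if c * mid ** 3 + mid - 1.0 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sbpg_step(x, alpha, idx):
    # single-sample estimate of grad F(x) = sum_i (<A_i x, x> - b_i) A_i x
    g = n * (x @ A[idx] @ x - b[idx]) * (A[idx] @ x)
    p = alpha * g - grad_phi(x)
    p_plus = np.sign(p) * np.maximum(np.abs(p) - alpha * lam, 0.0)   # soft threshold
    t = solve_t(np.linalg.norm(p_plus) ** (r - 2))                   # Proposition 26, delta = 1
    return -t * p_plus

x = rng.standard_normal(d) * 1e-2                    # small random initial point
for k in range(2000):
    x = sbpg_step(x, alpha=1e-3, idx=rng.integers(n))
obj = 0.25 * sum((x @ Ai @ x - bi) ** 2 for Ai, bi in zip(A, b)) + lam * np.abs(x).sum()
print("final objective value:", obj)
```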
Figure 2(a) demonstrates that SBPG has a larger range of convergent stepsizes than SPG, indicating that SBPG is more robust in terms of stepsize selection. The impact of the initial stepsize on the performance of the algorithms is reported in this figure. Figure 2(b) shows that SBPG is much more robust than SPG in terms of initial point selection. Specifically, SBPG exhibits high resilience to initial point selection and avoids causing the training to collapse. Figure 2(c) reveals that a larger degree \(r\) in the kernel function increases the safe stepsize threshold. These observations are partly explained in Section 4. Since a large stepsize and a large radius of the initial point tend to lead to gradient explosion, Bregman proximal mapping helps to pull back the iterate and guide it towards a better solution. Figure 2: Comparison of SBPG and SPG in terms of their robustness with respect to stepsize and initial point selction. A method is considered non-convergent if it fails to reach an accuracy of \(\epsilon_{k}<10^{-2}\) within 30 seconds or if it collapses. Generally, choosing large stepsize and large radius for the initial point can cause an algorithm to collapse. The safe stepsize threshold is the maximum stepsize (constant schedule) that a method does not collapse. We run 10 tests for each algorithm and report the median of the results. ### Deep neural network For the evaluation of MSBPG's performance on training deep neural networks, we consider the model with \(L_{2}\) regularization here for its better generalization capacity: \[\min_{\mathbf{W}}\;\underbrace{\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(\mathcal{DNN}(\bm {W},\mathbf{x}_{i}),y_{i})}_{F(\mathbf{W})}+\lambda\|\mathbf{W}\|_{2}^{2}. \tag{26}\] We employ MSBPG to solve this large-scale problem. Following AdamW (Loshchilov and Hutter, 2017), we also conduct decoupled weight decay at the end of each iteration for \(L_{2}\) regularization and do not consider the \(L_{2}\) regularization term when solving the subproblems. The detailed algorithm is summarized in Algorithm 2. At iteration \(k\), MSBPG first uses automatic differentiation to compute the mini-batch gradient \(\widetilde{\nabla}_{k}\). Then, it maintains a bias-corrected gradient estimator \(\bar{\mathbf{v}}^{k}\)(Kingma and Ba, 2014) and use it to calculate the layerwise \(\mathbf{p}_{i}^{k}\). With \(\mathbf{p}_{i}^{k}\), MSBPG solves a univariate equation to get \(t_{i}^{k}\) and update the weight of the \(i\)-th layer to \(\mathbf{W}_{i}^{k}\). In the end, MSBPG conducts decoupled weight decay as \(L_{2}\) regularization. ``` 1:Input: Total number of training epochs \(K\), momentum coefficient \(\beta\), stepsize \(\alpha_{k}\), weight decay coefficient \(\gamma\), \(\delta\) and \(r\) to determine the kernel function \(\phi\). 2:Initialize: Set \(\mathbf{W}=\mathbf{W}^{0}\), \(\mathbf{v}^{0}=\mathbf{0}\). 
3:for\(k=1,\cdots,K\)do 4: Compute mini-batch gradient \(\widetilde{\nabla}^{k}\); 5:\(\mathbf{v}^{k}=\beta\mathbf{v}^{k-1}+(1-\beta)\widetilde{\nabla}^{k}\), \(\bar{\mathbf{v}}^{k}=\mathbf{v}^{k}/(1-\beta^{k})\); 6:for\(i=1,\cdots,L\)do 7:\(\mathbf{p}_{i}^{k}=\alpha_{k}\bar{\mathbf{v}}_{i}^{k}-\nabla\phi(\mathbf{W}_{i}^{k-1})\); 8: Solve \((\delta\|\mathbf{p}_{i}^{k}\|^{r-2})t_{i}^{r-1}+t_{i}-1=0\) to get \(t_{i}^{k}\); 9:\(\widetilde{\mathbf{W}}_{i}^{k}=-t_{i}^{k}\mathbf{p}_{i}^{k}\); 10:endfor 11:\(\mathbf{W}^{k}=\widetilde{\mathbf{W}}^{k}-\alpha_{k}\gamma\mathbf{W}^{k-1}\); 12:endfor 13:Output:\(\mathbf{W}^{1},\cdots,\mathbf{W}^{K}\) ``` **Algorithm 2** MSBPG with \(L_{2}\) regularization. We conducted extensive experiments on several representative benchmarks, including VGG16 (Simonyan and Zisserman, 2014) and ResNet34 (He et al., 2016) on the CIFAR10 dataset (Krizhevsky et al., 2009), ResNet34 (He et al., 2016) and DenseNet121 (Huang et al., 2017) on the CIFAR100 dataset (Krizhevsky et al., 2009), and LSTMs (Hochreiter and Schmidhuber, 1997) on the Penn Treebank dataset (Marcinkiewicz, 1994). We compare MSBPG with the most popular optimization algorithms used for training neural networks, including SGD (Sutskever et al., 2013), Adam (Kingma and Ba, 2014), and AdamW (Loshchilov and Hutter, 2017). Experimental results show that MSBPG has excellent convergence performance and the best generalization capacity, both for the task that SGD dominates (image classification with CNNs) and for the task that Adam dominates (language modeling with LSTMs). We also conducted experiments to compare MSBPG with SGD under different initial stepsizes and different scales of the initial point. Our experimental results demonstrate the robustness of MSBPG in training neural networks. Before getting into the details of our experiments, we first make a clarification about the activation function. The frequently used activation function ReLU in VGG, ResNet, and DenseNet takes the form \(\text{ReLU}(x)=\max(0,x)\), which is not continuously differentiable. Here we design a smoothing approximation of ReLU with coefficient \(\epsilon\), which is twice continuously differentiable and satisfies our Assumption 4, namely, \[\sigma_{\epsilon}(x)=\left\{\begin{array}{cc}0&x\leq 0\\ x^{3}\left(\frac{1}{\epsilon^{2}}-\frac{x}{2\epsilon^{3}}\right)&0<x\leq\epsilon\\ x-\frac{\epsilon}{2}&x>\epsilon.\end{array}\right.\] The gradient of this activation function takes the form \[\sigma^{\prime}_{\epsilon}(x)=\left\{\begin{array}{cc}0&x\leq 0\\ x^{2}(\frac{3}{\epsilon^{2}}-\frac{2x}{\epsilon^{3}})&0<x\leq\epsilon\\ 1&x>\epsilon.\end{array}\right.\] Note that as \(\epsilon\) tends to \(0\), this twice continuously differentiable activation function tends to the ReLU function. We conducted experiments with VGG16 on the CIFAR10 dataset, where we replaced all the activation functions in VGG16 by \(\sigma_{\epsilon}\) defined above. As shown in Figure 3, the performance of our algorithm MSBPG does not degrade as \(\epsilon\) tends to \(0\). Therefore, in the subsequent experiments, we use the original neural network architectures with the ReLU activation function to evaluate our method MSBPG (a short implementation sketch of \(\sigma_{\epsilon}\) is given below for reference). **CNNs on image classification.** We experimented with VGG16 and ResNet34 on the CIFAR10 dataset, and ResNet34 and DenseNet121 on the CIFAR100 dataset.
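For reference, the smoothed activation \(\sigma_{\epsilon}\) used in the ablation above can be written in a few lines of PyTorch. The sketch below is illustrative: the default value of \(\epsilon\) and the reliance on autograd instead of hand-coding \(\sigma^{\prime}_{\epsilon}\) are our own simplifications, not part of the reported training pipeline.

```python
import torch
import torch.nn as nn

class SmoothedReLU(nn.Module):
    """Twice continuously differentiable approximation of ReLU (sigma_eps above)."""
    def __init__(self, eps=0.1):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        e = self.eps
        mid = x ** 3 * (1.0 / e ** 2 - x / (2.0 * e ** 3))       # branch for 0 < x <= eps
        return torch.where(x <= 0, torch.zeros_like(x),
                           torch.where(x <= e, mid, x - e / 2.0))

# Sanity check: sigma_eps approaches ReLU as eps tends to 0.
x = torch.linspace(-1.0, 1.0, 11)
print(SmoothedReLU(eps=1e-3)(x))
print(torch.relu(x))
```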
SGD usually has better generalization performance than adaptive gradient algorithms such as Adam and AdamW when training CNNs on image classification tasks and therefore is the default optimizer in these scenarios (He et al., 2016; Zhou et al., 2020). We used two dominant experimental settings for training neural networks: reducing the stepsize to \(0.1\) times its original value near the end of training (Zhuang et al., 2020; Chen et al., 2021; Luo et al., 2019) and adopting a cosine annealing schedule for the stepsizes (Loshchilov and Hutter, 2016, 2017). Figure 3: Training and test accuracy (%) of VGG16 on the CIFAR10 dataset under the two frequently used training settings; here the activation function of VGG16 adopts the smoothed ReLU activation function \(\sigma_{\epsilon}\) with different choices of \(\epsilon\) (\(\epsilon=0\) denotes adopting the original ReLU activation function). The purpose of these two training strategies is to accelerate the convergence of the optimization algorithms so as to give a fair comparison of their generalization capacity. We use the default training hyperparameters of SGD, Adam, and AdamW in these settings (He et al., 2016; Zhuang et al., 2020; Chen et al., 2021), and set MSBPG's learning rate (initial stepsize) as 0.1, momentum coefficient \(\beta\) as 0.9, and weight decay coefficient \(\gamma\) as \(1\times 10^{-3}\). For the layerwise kernel function \(\phi_{i}(\mathbf{W}_{i})=\frac{1}{2}\|\mathbf{W}_{i}\|^{2}+\frac{\delta}{r}\|\mathbf{W}_{i}\|^{r}\), we set \(r=4,\ \delta=1\times 10^{-2}\) for VGG16 and \(r=6,\ \delta=1\times 10^{-3}\) for ResNet34 on the CIFAR10 dataset, and \(r=4,\ \delta=1\times 10^{-2}\) for ResNet34 and \(r=4,\ \delta=1\times 10^{-3}\) for DenseNet121 on the CIFAR100 dataset. From the experimental results in Figures 4, 5, 6, and 7, we can see that MSBPG attains 100% training accuracy in all the training settings, unlike Adam, which fails to fully converge under the training strategy of reducing the learning rate to 0.1 times its original value at the 150th epoch. Furthermore, MSBPG consistently achieves the best generalization performance for all experimental settings, and attains at least a 0.5% test accuracy improvement compared with the second-best optimization algorithm. This generalization advantage of MSBPG can be attributed to the Bregman proximity model we adopt. Figure 4: Training and test accuracy (%) of CNNs on the CIFAR10 dataset with the learning rate reduced to 0.1 times of the original value at the 150th epoch. Figure 5: Training and test accuracy (%) of CNNs on the CIFAR100 dataset with the learning rate reduced to 0.1 times of the original value at the 150th epoch. **LSTMs on language modeling.** To further evaluate the performance of MSBPG, we conducted experiments on LSTMs with the Penn Treebank dataset, and report the training and test perplexity (lower is better). Adam generally has better generalization capacity than SGD on language modeling (Fu et al., 2016; Siami-Namini et al., 2019), and therefore is the default optimization algorithm for training LSTMs.
For training hyperparameters, we use the default settings for SGD, Adam, and AdamW in training 1-, 2-, and 3-layer LSTMs (Zhuang et al., 2020; Chen et al., 2021). For MSBPG, we set its learning rate as 25, 80, and 80 for 1-, 2-, and 3-layer LSTMs, with momentum coefficient \(\beta=0.9\) and weight decay coefficient \(\gamma=2\times 10^{-6}\). For the layerwise kernel function \(\phi_{i}(\mathbf{W}_{i})=\frac{1}{2}\|\mathbf{W}_{i}\|^{2}+\frac{\delta}{r}\|\mathbf{W}_{i}\|^{r}\), we set \(r=4\) and \(\delta=1\times 10^{-6}\). From Figure 8 and Figure 9 we can see that MSBPG converges well on the training dataset for 1-, 2-, and 3-layer LSTMs with both training strategies. On the other hand, SGD with the cosine annealing learning rate schedule fails to fully converge on the training dataset, as shown in Figure 9. Moreover, MSBPG consistently achieves the best generalization performance for all the experiments, with a test perplexity at least 1 unit lower. This excellent generalization capacity again can be attributed to the Bregman proximity model we employ. Figure 6: Training and test accuracy (%) of CNNs on the CIFAR10 dataset with the learning rate using the cosine annealing schedule. Figure 7: Training and test accuracy (%) of CNNs on the CIFAR100 dataset with the learning rate using the cosine annealing schedule. Figure 8: Training and test perplexity (lower is better) of LSTMs on the Penn Treebank dataset with the learning rate reduced to 0.1 times of the original value at the 75th and 150th epochs. Figure 9: Training and test perplexity (lower is better) of LSTMs on the Penn Treebank dataset with the learning rate using the cosine annealing schedule. **Robustness to initial point scale and stepsize.** As demonstrated in Section 4, MSBPG can mitigate the problem of gradient explosion. Generally, choosing a large stepsize and a large initial point scale will lead to gradient explosion. Here we conduct experiments with VGG16 on CIFAR10 to verify MSBPG's robustness in training neural networks. To be specific, we compare the performance of MSBPG and SGD with different scales of the initial point and different stepsizes, since MSBPG and SGD have the same default learning rate (\(1\times 10^{-1}\)). Adaptive gradient algorithms, on the other hand, have a different scale of default learning rate (\(1\times 10^{-3}\)), and therefore we do not include them in our comparison here. For different choices of initial point scale and stepsize, we run the optimization algorithm for 50 iterations and report the best test accuracy. As we can see from Figure 10, MSBPG is more robust than SGD to large initial points and large stepsizes. Training deep neural networks, which have millions or billions of parameters, is sensitive to the scale of the initial point and the choice of stepsize. It can be seen from Figure 10 that SGD fails to converge with a slight increase of the initial point scale to 4.6 or an increase of the stepsize from 0.1 to 0.6. MSBPG, on the other hand, can converge with an initial point scale as large as 20 and a stepsize as large as 5. This robustness of MSBPG can ease the tuning of hyperparameters for training neural networks, and can also make the training process more robust to noises and errors. Figure 10: Test accuracy (%) of VGG16 on the CIFAR10 dataset with different initial point scales and stepsize choices. ## 6 Conclusion In this paper, we consider the problem of minimizing nonconvex composite objectives where the differentiable part does not satisfy Lipschitz smoothness, which is a fundamental assumption made by classical stochastic gradient methods.
To overcome this limitation, we investigate a family of stochastic Bregman proximal gradient (SBPG) methods that only require smooth adaptivity of the differentiable part. From a modeling perspective, SBPG replaces the upper quadratic approximation used in SGD with the Bregman proximity measure, which captures the non-Lipschitz geometry and results in a better approximation model. We first formulate the vanilla SBPG and establish its convergence properties in the nonconvex setting without a finite-sum structure. We then propose a momentum-based version of SBPG (MSBPG) that further improves its convergence properties, making it well-suited for large-scale applications. To demonstrate the effectiveness of MSBPG, we apply it to train deep neural networks with a polynomial kernel function that ensures the smooth adaptivity of the loss function. We also demonstrate the ability of MSBPG to alleviate gradient explosion during the training of deep neural networks. We conduct numerical experiments on quadratic inverse problems and the training of deep neural networks to validate the effectiveness of SBPG. The experimental results on sparse quadratic inverse problems show that SBPG is more robust than classical stochastic (proximal) gradient methods in terms of stepsize selection and initial point selection. Additionally, the experimental results on deep neural networks show that MSBPG outperforms state-of-the-art optimizers in terms of efficiency and robustness of stepsize selection, while achieving better generalization performance. In conclusion, our work demonstrates that MSBPG can constitute a valuable addition to the existing family of optimization methods for solving stochastic nonconvex optimization problems. The enhanced robustness, improved convergence results, ability to alleviate gradient explosion, and negligible extra computational cost make MSBPG a promising approach for a broad range of machine learning applications and beyond. ## Appendix A Proofs in Preliminaries **Proof of Lemma 8** First, we prove the uniqueness of the solution. Problem (4) is equivalent to the following problem: \[\arg\min_{\mathbf{u}\in\overline{C}}\;\Psi(\mathbf{u}):=\alpha R(\mathbf{u})+\phi(\mathbf{u})-\langle\nabla\phi(\mathbf{x}),\,\mathbf{u}\rangle.\] We have that \[\Psi(\mathbf{u})\geq\alpha R(\mathbf{u})+\phi(\mathbf{u})-\|\nabla\phi(\mathbf{x})\|\|\mathbf{u}\|\geq\|\mathbf{u}\|\Big{(}\frac{\alpha R(\mathbf{u})+\phi(\mathbf{u})}{\|\mathbf{u}\|}-\|\nabla\phi(\mathbf{x})\|\Big{)}.\] As \(\|\mathbf{u}\|\to\infty\), we have \(\Psi(\mathbf{u})\geq\|\mathbf{u}\|\Big{(}\frac{\alpha R(\mathbf{u})+\phi(\mathbf{u})}{\|\mathbf{u}\|}-\|\nabla\phi(\mathbf{x})\|\Big{)}\to\infty\), where we use the fact that \(\phi\) is supercoercive and \(R\) is convex. Since \(\Psi\) is a proper lower-semicontinuous convex function, by the modern form of the Weierstrass theorem (Rockafellar, 1997, Chapter 1), we know that the solution set of (4) is a nonempty compact set. Also note that \(\Psi\) is a strictly convex function, which implies the uniqueness of the solution. For any Legendre function \(\phi\), from (Rockafellar, 1997, Chapter 26), we have \(dom\,\partial\phi=int\,dom\,\phi\) with \(\partial\phi(\mathbf{x})=\{\nabla\phi(\mathbf{x})\}\) for all \(\mathbf{x}\in int\,dom\,\phi\).
The optimality condition implies that \(\partial\phi(\text{Prox}_{\alpha R}^{\phi}(\mathbf{x}))\) is nonempty, which automatically forces \(\text{Prox}_{\alpha P}^{\phi}(\mathbf{x})\in\text{int dom}\phi\). This completes the proof. \(\Box\) **Proof of Proposition 12** Note that \(\|\nabla\phi(\mathbf{x}^{+})-\nabla\phi(\mathbf{x})\|\leq L_{\phi}\|\mathbf{x}^{+}-\mathbf{x}\|\) and \(\|\nabla F(\mathbf{x}^{+})-\nabla F(\mathbf{x})\|\leq L_{F}L_{\phi}\|\mathbf{x}^{+}-\mathbf{x}\|\). By the definition of \(\mathbf{x}^{+}\), we have \[\nabla F(\mathbf{x}^{+})-\nabla F(\mathbf{x})+\frac{\nabla\phi(\mathbf{x})-\nabla\phi(\bm {x}^{+})}{\alpha}\in\nabla F(\mathbf{x}^{+})+\partial R(\mathbf{x}^{+}).\] Thus, we obtain \[\text{dist}\left(0,\partial\Phi(\mathbf{x}^{+})\right)\leq\left\|\nabla F(\mathbf{x}^ {+})-\nabla F(\mathbf{x})+\frac{\nabla\phi(\mathbf{x})-\nabla\phi(\mathbf{x}^{+})}{\alpha }\right\|\leq\left(L_{F}L_{\phi}+\frac{L_{\phi}}{\alpha}\right)\|\mathbf{x}^{+}- \mathbf{x}\|.\] Note that \(\|\mathbf{x}^{+}-\mathbf{x}\|=\alpha\|\mathcal{G}_{\alpha}(\mathbf{x})\|\), which completes the proof. \(\Box\) **Proof of Proposition 9** By the definition of \(\text{Prox}_{R}^{\phi}(\cdot)\), \(x_{i}\in\partial R(\mathbf{x}_{i}^{+})+\nabla\phi(\mathbf{x}_{i}^{+})\), \(i=1,2\). Since \(\partial R(\cdot)\) is monotone, then \(\langle\mathbf{x}_{1}-\mathbf{x}_{2}-(\nabla\phi(\mathbf{x}_{1}^{+})-\nabla\phi(\mathbf{x}_{2} ^{+}))\), \(\mathbf{x}_{1}^{+}-\mathbf{x}_{2}^{+}\rangle\geq 0\). From the \(\mu\)-strong convexity of \(\phi\), it follows that \(\langle\mathbf{x}_{1}-\mathbf{x}_{2},\,\mathbf{x}_{1}^{+}-\mathbf{x}_{2}^{+}\rangle\geq\langle \nabla\phi(\mathbf{x}_{1}^{+})-\nabla\phi(\mathbf{x}_{2}^{+})\), \(\mathbf{x}_{1}^{+}-\mathbf{x}_{2}^{+}\rangle\geq\mu\|\mathbf{x}_{1}^{+}-\mathbf{x}_{2}^{+}\|^ {2}\). Therefore, \(\|\mathbf{x}_{1}^{+}-\mathbf{x}_{2}^{+}\|\leq\frac{1}{\mu}\|\mathbf{x}_{1}-\mathbf{x}_{2}\|\). \(\Box\) ## Appendix B Additional Figures
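As a numerical companion to the proximal mapping \(\text{Prox}^{\phi}_{\alpha R}\) analyzed in Appendix A, the following is a minimal illustrative sketch (ours, not the paper's released implementation) of a single Bregman proximal gradient step for the special case \(R\equiv 0\) under the polynomial kernel \(\phi(\mathbf{w})=\frac{1}{2}\|\mathbf{w}\|^{2}+\frac{\delta}{r}\|\mathbf{w}\|^{r}\) with \(r=4\) used in the experiments; momentum and minibatching are omitted and all function names are assumptions. Since \(\nabla\phi(\mathbf{u})=(1+\delta\|\mathbf{u}\|^{r-2})\mathbf{u}\), inverting \(\nabla\phi\) reduces to a monotone one-dimensional root-finding problem, which keeps the per-step overhead negligible.

```
import numpy as np

def grad_phi(w, delta=1e-6, r=4):
    """Gradient of the kernel phi(w) = 0.5*||w||^2 + (delta/r)*||w||^r."""
    norm = np.linalg.norm(w)
    return (1.0 + delta * norm ** (r - 2)) * w

def bregman_prox_step(w, grad_f, alpha, delta=1e-6, r=4):
    """One Bregman proximal gradient step with R == 0 (illustrative sketch).

    Solves grad_phi(w_plus) = grad_phi(w) - alpha * grad_f for w_plus.
    Because grad_phi(u) = (1 + delta*||u||^(r-2)) * u, the solution is parallel
    to v = grad_phi(w) - alpha*grad_f, and its norm t solves the strictly
    increasing scalar equation t * (1 + delta * t^(r-2)) = ||v||,
    which is handled here by bisection.
    """
    v = grad_phi(w, delta, r) - alpha * grad_f
    v_norm = np.linalg.norm(v)
    if v_norm == 0.0:
        return np.zeros_like(w)
    lo, hi = 0.0, v_norm  # the root satisfies t <= ||v|| since t*(1+delta*t^(r-2)) >= t
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * (1.0 + delta * mid ** (r - 2)) < v_norm:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    return (t / v_norm) * v

# toy usage: one step on F(w) = 0.25*||w||^4, whose gradient is ||w||^2 * w
w = np.array([3.0, -4.0])
g = (np.linalg.norm(w) ** 2) * w
w_next = bregman_prox_step(w, g, alpha=0.1)
```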
2307.03377
Mitigating Negative Transfer with Task Awareness for Sexism, Hate Speech, and Toxic Language Detection
This paper proposes a novel approach to mitigate the negative transfer problem. In the field of machine learning, the common strategy is to apply the Single-Task Learning approach in order to train a supervised model to solve a specific task. Training a robust model requires a lot of data and a significant amount of computational resources, making this solution unfeasible in cases where data are unavailable or expensive to gather. Therefore, another solution, based on the sharing of information between tasks, has been developed: Multi-Task Learning (MTL). Despite the recent developments regarding MTL, the problem of negative transfer has still to be solved. Negative transfer is a phenomenon that occurs when noisy information is shared between tasks, resulting in a drop in performance. This paper proposes a new approach to mitigate the negative transfer problem based on the task awareness concept. The proposed approach diminishes negative transfer while improving performance over the classic MTL solution. Moreover, the proposed approach has been implemented in two unified architectures to detect Sexism, Hate Speech, and Toxic Language in text comments. The proposed architectures set a new state of the art on both the EXIST-2021 and HatEval-2019 benchmarks.
Angel Felipe Magnossão de Paula, Paolo Rosso, Damiano Spina
2023-07-07T04:10:37Z
http://arxiv.org/abs/2307.03377v1
# Mitigating Negative Transfer with Task Awareness for Sexism, Hate Speech, and ###### Abstract This paper proposes a novelty approach to mitigate the negative transfer problem. In the field of machine learning, the common strategy is to apply the Single-Task Learning approach in order to train a supervised model to solve a specific task. Training a robust model requires a lot of data and a significant amount of computational resources, making this solution unfeasible in cases where data are unavailable or expensive to gather. Therefore another solution, based on the sharing of information between tasks, has been developed: Multi-task Learning (MTL). Despite the recent developments regarding MTL, the problem of negative transfer has still to be solved. Negative transfer is a phenomenon that occurs when noisy information is shared between tasks, resulting in a drop in performance. This paper proposes a new approach to mitigate the negative transfer problem based on the task awareness concept. The proposed approach results in diminishing the negative transfer together with an improvement of performance over classic MTL solution. Moreover, the proposed approach has been implemented in two unified architectures to detect Sexism, Hate Speech, and Toxic Language in text comments. The proposed architectures set a new state-of-the-art both in EXIST-2021 and HatEval-2019 benchmarks. Multi-task Learning, Negative Transfer, Natural Language Processing, Deep Learning ## I Introduction Machine Learning has numerous applications in fields as diverse as Natural Language Processing (NLP) (e.g., named entity recognition and hate speech detection) [19, 26] or Computer Vision (CV) (e.g., object detection and object classification) [41]. Generally, a single model or an ensemble of models is trained to address all the desired tasks. These models are then fine-tuned and tweaked on the chosen task until they specialize, and their performance no longer increases. Despite producing satisfactory results, a Single-Task Learning (STL) strategy ignores knowledge that may be gathered from datasets of related tasks, allowing our model to generalize better on our original task. Furthermore, in many cases, more than the available data is needed to train a model robustly. Therefore, several strategies to transfer knowledge from one task to another have been developed [18]. Multi-Task Learning (MTL) [33, 49] is a new area of study that aims at exploiting the synergy between different tasks to reduce the amount of data or computational resources required for these activities. This approach aims at improving generalization by learning multiple tasks simultaneously. The _soft_[43, 47] or _hard parameter-sharing_[13, 14] strategies are two of the most commonly used methods for MTL employing neural networks. In soft parameter-sharing, task-specific networks are implemented, while feature-sharing methods handle cross-task communication to encourage the parameters to be similar. Since the size of the multi-task network grows linearly with respect to the number of tasks, an issue with soft parameter-sharing systems is given by scalability. In hard parameter-sharing, the parameter set is split into shared and task-specific operations. It is commonly implemented with a shared encoder and numerous task-specific decoding heads [49]. One of the benefits of this method is the minimization of overfitting [33]. 
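To make the hard parameter-sharing scheme just described concrete, the following is a minimal PyTorch-style sketch (ours, not taken from the paper): a single shared encoder feeds one task-specific linear head per task, so all tasks update the shared parameters while each prediction uses its own decoder. Module names and dimensions are illustrative.

```
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Hard parameter-sharing: one shared encoder, one linear head per task."""

    def __init__(self, input_dim, hidden_dim, num_classes_per_task):
        super().__init__()
        # Parameters of the encoder are shared by every task.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # One task-specific decoding head per task.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, c) for c in num_classes_per_task]
        )

    def forward(self, x, task_id):
        shared = self.encoder(x)             # task-agnostic representation
        return self.heads[task_id](shared)   # task-specific prediction

# toy usage: three binary tasks sharing one encoder
model = HardSharingMTL(input_dim=768, hidden_dim=256, num_classes_per_task=[2, 2, 2])
logits = model(torch.randn(4, 768), task_id=0)
```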
Multilinear relationship networks [20] enhanced this architecture by imposing tensor normal priors on the fully connected layers' parameter set. The branching sites in the network are set ad-hoc in these works, which can result in inefficient job groupings. To address this limitation, tree-based approaches [22, 38] have been proposed. Despite the improvement introduced by those works, jointly learning multiple tasks might lead to _negative transfer_[39, 46] if noisy information is shared among the tasks. During training, the hard parameter-sharing encoder learns to construct a generic representation that focuses on extracting specific features from the input data. Nevertheless, a subset of these features may provide critical information for a given decoder head but introduces noise to another decoder to solve its respective task. Hence, negative transfer refers to situations in which the transfer of information results in a decrease in the overall model performance. In this work, we propose a new approach to overcome the negative transfer problem based on the concept of Task Awareness (TA). This approach enables the MTL model to take advantage of the information regarding the addressed task. The overarching goal is for the model to handle its internal weight for its own task prioritization. Unlike the State-Of-The-Art (SOTA) approaches (later presented in Section II), the proposed solution does not require a recursive structure, saving time and resources. Moreover, we designed two mechanisms based on the TA approach and implemented them in the creation of two Multi-Task Learning TA (MTL-TA) architectures to address SOTA challenges: Sexism, Hate Speech, and Toxic Language detection. The source code is publicly available.1 Footnote 1: [https://github.com/AngelFelipeMP/Mitigating-Negative-Transfer-with-TA](https://github.com/AngelFelipeMP/Mitigating-Negative-Transfer-with-TA) The main contributions of our work are as follows: * We propose the use of the TA concept to mitigate the negative transfer problem during MTL training. * Design of the Task-Aware Input (TAI) mechanism to grant the MTL models with task awareness ability to mitigate negative transfer and even improve results compared with traditional MTL models. * Design of the Task Embedding (TE) mechanism to give MTL models task recognition capability to diminish negative transfer and improve the results over classic MTL solutions. * Creation and validation of two unified architectures to detect Sexism, Hate Speech, and Toxic Language in text comments. * Our proposed method outperforms the SOTA on two public benchmarks for Sexism and Hate Speech detection: (i) EXIST-2021 and (ii) HatEval-2019 datasets. The rest of the paper is structured as follows. Section II presents the related works of transfer learning and MTL. Section III describes the details of our proposed method. Section IV illustrates the experiment setup. Section V discusses and evaluates the experimental results. Section VI presents the limitation of our approach. Finally, conclusions and future work are drawn in Section VII. ## II Related Work Transfer learning is a widespread technique in machine learning based on the idea that a model created for one task can be improved by transferring information from another task [27, 44]. Training a model from scratch requires a large quantity of data and resources, but there are some circumstances where gathering training data is prohibitively expensive or impossible. 
As a result, there is the need to construct high-performance learners trained with more easily accessible data from different tasks. Transfer learning techniques allow us to improve the results of target tasks through information extracted from related tasks. These techniques have been effectively used for a variety of machine learning applications, including NLP [31, 34, 42, 43] and CV [11, 18]. The MTL framework [33, 49], which seeks to learn many tasks at once even when they are distinct, is a closely related learning technique to transfer learning. This approach works well and can take advantage of sharing information among tasks. Still, if the tasks are not sufficiently related, it can lead to negative transfer. The problem of negative transfer consists of performance degradation caused by noisy information being shared between tasks. To solve this issue, several approaches for balancing learning between different tasks have been proposed based on a re-weighing of the losses (for instance, via Homoscedastic uncertainty [17], Gradient normalization [9] and Adversarial training [36]) or task prioritization [15, 35, 52]. Further recent approaches [48, 50, 51] make use of the initial predictions obtained through multi-task networks to improve, once or repeatedly, each task output, overcoming a characteristic of the previously mentioned methods that computed all the task outputs for a given input at once. Those last approaches culminate to be very time-consuming and require a lot of computational resources due to their recursive nature. This paper proposes two unified architectures to detect Sexism, Hate Speech, and Toxic Language in text comments. Abburi, Parikh, Chhaya, _et al._[1] represents the first semi-supervised multi-task approach for sexism classification. The authors addressed three tasks based on labels achieved through unsupervised learning or weak labeling. The neural multi-task architecture they proposed allows shared learning across multiple tasks via common weight and a combined loss function. The method outperforms several SOTA baselines. Wu, Fei, and Ji [47] proposed an MTL innovative approach to solve Aggressive Language Detection (ALD) together with text normalization. The authors propose a shared encoder to learn the common features between the two tasks and a single encoder dedicated to learning the task-relevant features. The proposed model achieved a significant improvement in performance concerning the ALD task. Those last approaches inspired the mechanism we propose in this paper. The main commonality is to have additional mechanisms added to the MTL models to improve the representation sent to the task heads. The main difference with respect to the TA approach we propose is that we enrich the model with the ability to discover by itself which task it will perform. It allows the MTL-TA models to create a suitable representation for each task head. In addition, the MTL-TA modes do not need to learn an auxiliary task, resulting in more efficiency. In fact, the TA approach allows the MTL models, at each step, to try to optimize over the task at hand. The key idea is to learn a task-relevant latent representation of the data, efficiently solving many NLP tasks [16, 43]. The resulting mechanisms are proposed in the following section. ## III Proposed Approach This section describes the details of the MTL-TA models. We first introduce the notion of TA and explain how it can be beneficial in diminishing the negative transfer [39, 46] for multi-task joint training [33]. 
Secondly, two different TA mechanisms are proposed in order to incorporate the task self-awareness capability into MTL models. The mainstream approach to supervised multi-task is the hard parameter-sharing method [49]. The model is composed of an encoder and \(N\) decoders or task heads, where \(N\) corresponds to the number of tasks the model is simultaneously trained [45]. During execution, the encoder receives input and creates a task-agnostic latent representation that is sent to a certain task head, which is in charge of producing the final prediction. The lack of a closer relationship between the latent representation generated by the encoder and the tasks degrades the overall MTL model performance [39]. For the same input, the optimal latent representation for task heads are likely to be different [14]. Furthermore, the encoder representation can get prone to more demanding tasks or with a larger data volume during training [33]. These model performance deteriorations are the reflex of the negative transfer phenomenon [39, 46], where a task head receives an inaccurate input representation to solve its respective task. We propose two TA mechanisms to mitigate negative transfer when solving multiple NLP tasks by applying the MTL approach [49]. These mechanisms tailor, depending on the specific task that is addressed, the input representation that is sent to its respective head. In addition, our proposed MTL model still takes advantage of the generalization improvements the multi-task joint training provided. Hence, the encoder and other MTL model parts located before the task heads are updated during training for every task. It should be noted that all our proposed MTL models belong to the MTL-TA class, and they follow the conventional MTL paradigm. Therefore, only the specific task head attached to the input data is considered during the task parameter updating. ### _Task-Aware Input_ The first mechanism we designed to introduce task awareness into MTL models is Task-Aware Input (TAI). To compel the encoder to generate a suitable representation for each task head, we proposed to modify the MTL conventional input for NLP tasks. The TAI includes a Text Snippet (TS) plus a Task Description (TD), as shown in Fig. 1. The TS is a text chunk whose length varies according to the task. It is usually the integral input for the MTL encoders. The TD is a piece of text describing what a specific head is dealing with, such as 'Sexism Detection' and 'Hate Speech Detection'. The new modified input provides context for the encoder to generate a task-centered representation. The MTL model endowed with the TAI mechanism is referred as MTL Task-Aware Input (MTL-TAI). ### _Task Embedding_ The second mechanism we designed to convey MTL models with the TA capability was named Task Embedding (TE). We proposed to insert an additional building block between the encoder and the task heads, which we call Task Embedding Block (TEB), as displayed in Fig. 2. It receives two inputs: (i) the Task Identification Vector (TIV) and (ii) the latent encoder representation. The TIV is a unidimensional one-hot vector whose size is proportional to the number of task heads. Each TIV location is related to one of the task heads. The TEB is composed of Learning Units (LU) that encompass a linear layer followed by a ReLU layer. The number of LUs is a hyperparameter that depends on the task and data, among other factors. 
The TEB objective is to generate a suitable representation for the task the MTL model is solving at a specific time. Hence, depending on the task, the TEB will retrieve a different output for the same exact encoder representation. It relies on the TIV to indicate for which task the TEB will generate a representation. The TIV has the number one in the location that corresponds to the task the model is about to solve. The remainder of the vector is populated with zeros, as Fig. 2 reflects. The MTL model equipped with the TE mechanism is referred as MTL Task Embedding (MTL-TE). ## IV Experimental Setup This section first describes the tasks and the datasets used to evaluate our approach. It then presents the implementation details and models for reference. Finally, we share the settings for the experiments. Fig. 1: Multi-Task Learning (MTL) model including Task-Aware Input (TAI) mechanism (MTL-TAI). Fig. 2: Multi-Task Learning (MTL) model including Task Embedding (TE) mechanism (MTL-TE). ### _Data_ Our approach for selecting the datasets for Sexism, Hate Speech, and Toxic Language detection was based on two requirements: (i) being publicly available; (ii) having been used to evaluate a high number of ML models. We use three datasets - EXIST-2021 [32], DETOXIS-2021 [37], and HateEval-2019 [2] - which we describe below. **EXIST-2021**[32]: The dataset was created for the sExism Identification in Social neTworks (EXIST) shared task at Iberian Languages Evaluation Forum (IberLEF) 2021. The dataset consists of 11345 annotated social media text posts in English and Spanish from Twitter and Gab.com (Gab), an uncensored social media platform. The dataset development was supervised and monitored by experts in gender issues. The EXIST was the first challenge on Sexism detection in social media, whose objective was to identify sexism in a wide sense, from explicit misogyny to more implicit sexist behaviors. The challenge received 70 official runs for the Sexism identification task. It is a binary classification where the samples belong to the Sexist class or the Not-Sexist class. The official evaluation metric was accuracy, and data was split into training and test sets. Table I shows the data distribution. **DETOXIS-2021**[37]: The dataset was collected for the DEtection of Toxicity in comments In Spanish (DETOXIS) shared task at IberLEF 2021. The objective of the shared task was toxic language detection in comments to various online news articles regarding immigration. The proposed annotation methodology focused on diminishing the subjectivity of toxicity labeling considering contextual information (e.g., linguistic features and conversational threads). The team that worked on the data annotation was composed of trained annotators and expert linguists. The dataset consists of 4354 text comments from Twitter in Spanish and provides labels for Toxic Language detection. The task is characterized as a binary classification where the samples are divided between the Toxic and Not-Toxic classes. More than 30 teams evaluated their machine learning model in the collected dataset in the participation for DETOXIS shared task. The official data evaluation metric was F1-score in the Toxic class, and the data were divided into training and test sets. Table II shows the data distribution. **HatEval-2019**[2]: The dataset was constructed for the Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) shared task, which was part of the International Workshop on Semantic Evaluation (SemEval) 2019. 
The dataset comprises 19600 tweets published in English and Spanish and supplies labels for Hate Speech detection. The data collection methodology employed different gathering strategies: (i) monitoring likely victims of hate accounts; (ii) downloading the records of recognized haters; (iii) filtering Twitter streams with keywords. The annotation was performed by experts and crowdsourced contributors tested for reliable annotation. The task was defined as a binary classification where the samples are associated with the Hateful class or the Not-Hateful class. The data is composed of training, development, and test sets, and the official evaluation metric was the F1-macro, which is the unweighted mean of the F1-score calculated for the two classes. HatEval was one of the most popular shared tasks in SemEval 2019, with more than 100 submitted runs for Hate Speech detection. We can see the dataset distribution in Table III. ### _Implementation Details_ The encoder was constructed using a popular BERT [10] version for Spanish called BETO [7], followed by max and mean pooling calculation over its output. BETO has 12 self-attention layers, each with 12 attention-heads, using 768 as the hidden size with around 110 million parameters. BETO receives a text sequence and returns a hidden representation dimensionally equivalent to its hidden size for each token that belongs to the sequence. The latent encoder representation is created by a concatenation of max pooling and mean pooling calculation on the entire 768-dimensional sequence of tokens returned by BETO. Regarding the TE approach, the TEB preserves the same dimension of the latent encoder representation. The task heads are linear classifiers whose input dimension corresponds to the latent encoder representation, and the output depends on the task. In the case of binary classification, the linear classifier returns two values, and the higher value corresponds to the predicted class. Furthermore, the TDs for the EXIST-2021 [32], DETOXIS-2021 [37], and HatEval-2019 [2] datasets are, respectively, the following pieces of text: 'Sexism detection', 'Toxic Language detection', and 'Hate Speech detection'. The models were trained using the optimization algorithm AdamW [21] with a linear decay learning rate schedule and a learning rate varying from 5e-6 to 1e-4. In the learning process, we trained our model for 15 epochs with a dropout of 0.3 and batch size of 64. Additionally, we experimented with 1 up to 3 LUs. Similar to the early stopping strategy [8], we adopted the model with the best performance within the epochs based on the task's official metric. ### _Comparison Models_ We compare our approach with two types of models: (i) Baselines and (ii) SOTA models. The baselines are the two models that we implemented: * **MTL** is the classic MTL model. It is constructed with the same architecture as the MTL-TA model (described in Section III), but it does not include the TAI mechanism. Therefore, the MTL model receives only the TS as input. * **STL** is the classic STL model. It has the same architecture as the MTL model, yet it encompasses only one task head. Hence, to compare this model type with the MTL models, it is necessary to train one model for each one of the addressed tasks. The SOTA are the models which currently achieved the best performance on the datasets considered in our experiments: * **AI-UV**[23]: is a deep learning architecture based on the combination of different Transformers models [40]. 
It takes advantage of ensemble methods and, during training, applies data augmentation mechanisms. It is the SOTA for EXIST-2021 [32]. * **SINAI**[30]: is a BERT base model [10] trained using the MTL hard parameter-sharing method. In spite of addressing five tasks and six datasets, the model was focused on Toxic Language detection, while the other tasks were used as auxiliary tasks. It is the SOTA for DETOXIS-2021 [37]. * **Atalaya**[28]: is a model based on Support Vector Machines [6]. It was trained on several representations computed from FastText [5] sentiment-oriented word vectors, such as tweet embeddings [24], bag-of-characters [5], and bag-of-words [4]. It is the SOTA for HatEval-2019 [2]. ### _Experimental Settings_ We conducted two experiments to evaluate our TA approach for mitigating negative transfer [39, 46], as described below. Cross-Validation ExperimentTo assess whether the TAI and TE mechanisms were capable of reducing the negative transfer during MTL training, we performed a cross-validation experiment. Therefore, for each one of the datasets described in Subsection IV-A, we aggregate the different sets that compose the dataset in a unique set. Then, we run 5-fold cross-validation on the STL, MTL, MTL-TAI, and MTL-TE models. Official Training-Test SplitIn order to compare our approach to the SOTA models [23, 28, 30] in the utilized datasets, we carried out an experiment using the official training-test split of the respective datasets. We trained our models with the training set or a combination of the training and development sets when the last was available. After that, we evaluated the models in the test partitions. In both experiments, we use only the data samples in the Spanish language and evaluate the models employing the dataset's respective official metrics (described in Section IV-A). We explored versions that combined two and three tasks for the MTL models. Furthermore, models whose results were the highest regarding the evaluation metrics were selected. Finally, we applied the t-test to calculate the 95% confidence interval for the experiments results. ## V Results and Analysis This section presents the experiment's results and the comparison among the evaluated models described in Section IV. ### _Cross-Validation Experiment_ Table IV shows the cross-validation results. It is organized into three parts in the following order: model type, model's task heads, and model's performance. Regarding the Baseline models (described in Section IV-C), results show that the MTL training approach suffered negative transfer on nearly all occasions. The MTL model showed improvement over the STL model only for the Sexism detection task when the model was trained for Sexism and Hate Speech detection and when it was trained on the three tasks. Apart from that, the STL model achieved superior performance in the rest of the explored combinations. It probably happened because the negative transfer restrained the learning process of the MTL model on all the other occasions. According to our results, the TA mechanisms worked well to diminish negative transfer. The MTL-TAI model equipped with the TA mechanism and the MTL-TE model equipped with the TE mechanism on all occasions achieved superior performance than the classic MTL model, as shown in Table IV. The MTL-TAI and MTL-TE models also overcame results obtained by the STL model for the three evaluated tasks. In general, the MTL-TE model performs better than the MTL-TAI model. 
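The following PyTorch-style sketch is our own illustrative reconstruction of the MTL-TE configuration compared above (Sections III and IV): a BETO encoder pooled with max and mean pooling, a Task Embedding Block made of Learning Units (Linear + ReLU), and one linear head per task. How the one-hot TIV enters the TEB is not fully specified in the text, so concatenating it with the pooled representation is an assumption, as is the Hugging Face checkpoint name.

```
import torch
import torch.nn as nn
from transformers import AutoModel

class MTLTaskEmbedding(nn.Module):
    """Sketch of MTL-TE: shared encoder + Task Embedding Block + task heads.

    Assumption: the one-hot Task Identification Vector (TIV) is concatenated
    with the pooled encoder representation before the Learning Units.
    """

    def __init__(self, encoder_name="dccuchile/bert-base-spanish-wwm-cased",
                 num_tasks=3, num_learning_units=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)  # BETO (assumed checkpoint)
        hidden = self.encoder.config.hidden_size                # 768
        rep_dim = 2 * hidden                                    # max pooling + mean pooling
        # Task Embedding Block: stacked Learning Units keeping the latent dimension.
        layers, in_dim = [], rep_dim + num_tasks
        for _ in range(num_learning_units):
            layers += [nn.Linear(in_dim, rep_dim), nn.ReLU()]
            in_dim = rep_dim
        self.teb = nn.Sequential(*layers)
        # One binary linear classifier per task.
        self.heads = nn.ModuleList([nn.Linear(rep_dim, 2) for _ in range(num_tasks)])
        self.num_tasks = num_tasks

    def forward(self, input_ids, attention_mask, task_id):
        tokens = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        pooled = torch.cat([tokens.max(dim=1).values, tokens.mean(dim=1)], dim=-1)
        tiv = torch.zeros(pooled.size(0), self.num_tasks, device=pooled.device)
        tiv[:, task_id] = 1.0                                    # one-hot TIV
        rep = self.teb(torch.cat([pooled, tiv], dim=-1))
        return self.heads[task_id](rep)
```

The MTL-TAI variant would instead keep the plain encoder-plus-heads structure and make the input itself task-aware, for example by prepending the task description ('Sexism detection', 'Toxic Language detection', or 'Hate Speech detection') to the text snippet before tokenization.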
### _Official Training-Test Split_ Table V, following the same organization as Table IV, presents the experiment carried out on the three datasets using their respective official training-test split. We see in Table V that the MTL training was not beneficial for the classic MTL model when addressing the sexism detection task. The model achieved lower accuracy compared with the STL model. We believe it was again due to the negative transfer phenomenon. Nevertheless, because of the TA mechanisms, the MTL-TAI and MTL-TE models mitigated the negative transfer presented in the classic MTL training, achieving higher accuracy than the STL model and the EXIST-2021 SOTA (AI-UPV [23]). The MTL training improves the result for Toxic Language detection over the STL baseline for the training-test experiment. In general, the MTL, MTL-TAI, and MTL-TE models achieved similar results, meaning there were low negative transfer levels for this task during the formal MTL training. We see in Table V that for the training and test experiment, the MTL training improved the result of Hate Speech detection. The MTL model obtained a higher F1-macro than the HatEval-2019 SOTA (Atalaya [28]) and the STL Baseline. The MTL models with the TA mechanisms improved the results even more. They mitigate the negative transfer in the traditional MTL training, and both models achieved superior F1-macro than the conventional MTL model. ### _Overall Analysis_ Analyzing Tables IV and V, we see evidence that the STL model was a competitive baseline to compare our TA approach. Therefore, the STL models achieved close or better results than the SOTA models for the training-test experiment. The STL achieved the same results as the EXIST-2021 SOTA (AI-UPV [23]) and comparable results to the DETOXIS-2021 SOTA (SINAI [30]). Furthermore, the STL obtained better results than the HatEval-2019 SOTA (Attalaya [28]). Summarizing the results of the two experiments, the MTL-TA models (MTL-TAI & MTL-TEB) outperformed both the STL and the classic MTL models. It shows that our proposed TA approach could mitigate the negative transfer presented in the conventional MTL training. ## VI Limitations In this section, we mention the main limitations of our MTL-TA models. First, the two models depending on a powerful encoder to achieve good performance. It could be a problem for low-resource computation systems that cannot afford to use deep learning architectures such as Transformers [40] for the encoder. Secondly, dealing with a higher number of tasks means having more task heads - increasing the number of model parameters. Therefore, MTL-TA models will require more computational power to be fine-tuned. Finally, we wonder if the MTL-TA models have their ability to adapt to unseen tasks (e.g., few-shot learning and instruction-based prompts) reduced due to the fine-tuning process utilizing information about the tasks. ## VII Conclusion and Future Work We proposed the TA strategy to address the negative transfer [39] problem during MTL training. The proposed method has been translated into two mechanisms: TAI and TE. The TAI mechanism is the inclusion of the TD information to enrich the input of the MTL model encoder. The TE mechanism is the introduction of the TEB, an extra component that receives the representation generated by the encoder plus a TIV representation. The TD and the TIV provide information regarding the task the MTL model will perform at that precise moment. 
The objective of the TAI and TE is to enable the MTL model to construct task-dependent representations for the task heads to diminish negative transfer during MTL training and improve the MTL model performance. We proposed two MTL models, the MTL-TAI equipped with the TAI mechanisms and the MTL-TE that includes the TE mechanism. Our two experiments show that the TA capability reduces negative transfer during traditional MTL training and improves performance over standard MTL solutions. We achieved competitive results compared with SOTA for the two proposed MTL-TA models for the addressed tasks: Sexism, Hate Speech, and Toxic Language detection. In particular, the proposed models set a new SOTA on two public benchmarks: (i) EXIST-2021 [32] and (ii) HatEval-2019 [2] datasets, demonstrating a general performance improvement of the proposed approach with respect to both the STL and classic MTL model. The TA mechanisms proved to be a valid approach to mitigate the negative transfer [46] problem in the MTL training. This research demonstrated how an MTL approach equipped with TA mechanism leads to performance improvement in several NLP tasks. This approach has been demonstrated to be feasible in cases where we have a scarcity of labeled data. In future studies, it would be interesting to deepen the analyses to find out how many labeled samples or volumes of information it is worth applying MTL rather than using STL. Further analyses regarding the enrichment of the MTL model input with low-level task supervision are worth it. In this scenario, the decoder receives all or a subgroup of the encoder's hidden representations instead of just the last one. It would be interesting to analyze the impact of different encoder representations in an MTL model. We also plan to apply MTL with TA to other scenarios, such as sexism identification under the learning with disagreement regime [29], where it is necessary to learn from all the labels provided by the annotators rather than the aggregated gold label. This new paradigm is gaining importance in NLP, especially for tasks where often there is not only one correct label. Finally, we would like to research unsupervised techniques to improve the suggested models and tackle the same problems (detecting Hate Speech, Toxic Language, and Sexism). For instance, Latent Dirichlet Allocation [3], Self-Organizing Maps [25], and K-Means Clustering [12] could be considered. ## Acknowledgments Angel Felipe Magnossao de Paula has received a mobility grant for doctoral students by the Universitat Politecnica de Valencia. The work of Paolo Rosso was in the framework of the FairTransNLP-Stereotypes research project (PID2021-124361OB-C31) on Fairness and Transparency for equitable NLP applications in social media: Identifying stereotypes and prejudices and developing equitable systems, funded by MCIN/AEI/10.13039/501100011033 and by ERDF, EU A way of making Europe. Damiano Spina is the recipient of an Australian Research Council DECRA Research Fellowship (DE200100064).
2305.12403
Spatio-temporal Diffusion Point Processes
Spatio-temporal point process (STPP) is a stochastic collection of events accompanied with time and space. Due to computational complexities, existing solutions for STPPs compromise with conditional independence between time and space, which consider the temporal and spatial distributions separately. The failure to model the joint distribution leads to limited capacities in characterizing the spatio-temporal entangled interactions given past events. In this work, we propose a novel parameterization framework for STPPs, which leverages diffusion models to learn complex spatio-temporal joint distributions. We decompose the learning of the target joint distribution into multiple steps, where each step can be faithfully described by a Gaussian distribution. To enhance the learning of each step, an elaborated spatio-temporal co-attention module is proposed to capture the interdependence between the event time and space adaptively. For the first time, we break the restrictions on spatio-temporal dependencies in existing solutions, and enable a flexible and accurate modeling paradigm for STPPs. Extensive experiments from a wide range of fields, such as epidemiology, seismology, crime, and urban mobility, demonstrate that our framework outperforms the state-of-the-art baselines remarkably, with an average improvement of over 50%. Further in-depth analyses validate its ability to capture spatio-temporal interactions, which can learn adaptively for different scenarios. The datasets and source code are available online: https://github.com/tsinghua-fib-lab/Spatio-temporal-Diffusion-Point-Processes.
Yuan Yuan, Jingtao Ding, Chenyang Shao, Depeng Jin, Yong Li
2023-05-21T08:53:00Z
http://arxiv.org/abs/2305.12403v2
# Spatio-temporal Diffusion Point Processes ###### Abstract. Spatio-temporal point process (STPP) is a stochastic collection of events accompanied with time and space. Due to computational complexities, existing solutions for STPPs compromise with conditional independence between time and space, which consider the temporal and spatial distributions separately. The failure to model the joint distribution leads to limited capacities in characterizing the spatio-temporal entangled interactions given past events. In this work, we propose a novel parameterization framework for STPPs, which leverages diffusion models to learn complex spatio-temporal joint distributions. We decompose the learning of the target joint distribution into multiple steps, where each step can be faithfully described by a Gaussian distribution. To enhance the learning of each step, an elaborated spatio-temporal co-attention module is proposed to capture the interdependence between the event time and space adaptively. For the first time, we break the restrictions on spatio-temporal dependencies in existing solutions, and enable a flexible and accurate modeling paradigm for STPPs. Extensive experiments from a wide range of fields, such as epidemiology, seismology, crime, and urban mobility, demonstrate that our framework outperforms the state-of-the-art baselines remarkably. Further in-depth analyses validate its ability to capture spatio-temporal interactions, which can learn adaptively for different scenarios. The datasets and source code are available online: [https://github.com/tsinghua-fib-lab/Spatio-temporal-Diffusion-Point-Processes](https://github.com/tsinghua-fib-lab/Spatio-temporal-Diffusion-Point-Processes). Spatio-temporal point processes, Diffusion models, Co-attention + Footnote †: journal: Jingtao Ding is the corresponding author ([email protected], [email protected]). ## 1. Introduction Spatio-temporal point process (STPP) is a stochastic collection of points, where each point denotes an event \(x=(t,s)\) associated with time \(t\) and location \(s\). STPP is a principled framework for modeling sequences consisting of spatio-temporal events, and has been applied in a wide range of fields, such as earthquakes and aftershocks (Brock et al., 2013; Chen et al., 2014), disease spread (Wang et al., 2015; Wang et al., 2016), urban mobility (Zhou et al., 2016; Wang et al., 2016; Wang et al., 2016), and emergencies (Wang et al., 2015; Wang et al., 2016). Spatio-temporal point processes have been widely studied in the literature (Brock et al., 2013; Chen et al., 2014; Chen et al., 2014; Chen et al., 2014; Chen et al., 2014; Chen et al., 2014; Wang et al., 2016) with rich theoretical foundations (Chen et al., 2014; Chen et al., 2014; Chen et al., 2014). Due to computational complexities, a general approach for STPPs is to characterize the event time and space with distinct models.
Conventional STPP models (Chen et al., 2014; Chen et al., 2014; Chen et al., 2014) mainly capture relatively simple patterns of spatio-temporal dynamics, where the temporal domain is modeled by temporal point process models, such as Poisson process (Chen et al., 2014), Hawkes process (Huang et al., 2015), and Self-correcting process (Huang et al., 2015), and the spatial domain is usually fitted by kernel density estimators (KDE) (Wang et al., 2015). With the advance of neural networks, a series of neural architectures are proposed to improve the fitting accuracy (Chen et al., 2014; Chen et al., 2014; Chen et al., 2014). However, they still adopt the approach of separate modeling. For example, Chen et al. (Chen et al., 2014) use neural ODEs and continuous-time normalizing flows (CNFs) to learn the temporal distribution and spatial distribution, respectively. Zhou et al. (Zhou et al., 2016) apply two independent kernel functions for time and space, whose parameters are obtained from neural networks, to build the density function. However, for STPPs, the time and space where an event occurs are highly dependent and entangled with each other. For example, in seismology, earthquakes are spatio-temporal correlated due to crust movements (Wang et al., 2015), which occur with a higher probability close in time and space to previous earthquakes. Take urban mobility as another example, people are more likely to go to work during the day, while tend to go for entertainment at night. Therefore, it is crucial to learn models that can address the spatio-temporal joint distribution conditioned on the event history. However, it is non-trivial due to the following two challenges: 1. **Spatio-temporal joint distributions for STPPs usually have tremendous sample spaces, which are highly intractable.** Directly fitting requires huge training samples, which is prohibitive in practice. The general approach is to decompose the target distribution into conditionally dependent distributions (Beng et al., 2015; Chen et al., 2017), fitting the temporal density \(p^{*}(t)\)1 and conditional density \(p^{*}(s|t)\) separately. However, the characterization of \(p^{*}(s|t)\) is largely limited to certain model structures, such as KDEs and CNFs, which are less expressive. Footnote 1: We use the common star superscript to denote conditional dependence on the history. 2. **The occurrence of events is usually associated with complex coupling correlations between time and space.** Driven by different generation mechanisms, the occurrence of events exhibits distinct spatio-temporal dependencies across various fields. How to effectively capture the underlying dependence for an event still remains an open problem. Solving the above two challenges calls for a new modeling paradigm for STPPs. In this paper, we propose a novel parameterization framework, Spatio-Temporal Diffusion Point Processes (DSTPP), which is capable of leaning spatio-temporal joint distributions effectively. By leveraging denoising diffusion probabilistic modeling, we manage to decompose the original complex distribution into a Markov chain of multiple steps, where each step corresponds to a minor distribution change and can be modeled faithfully by a Gaussian distribution (Zhu and Chen, 2017; Zhu and Chen, 2017). The target distribution is learned throughout the combination of all steps, where the predicted joint distribution obtained from the previous step acts as the condition for the next-step learning. 
In this way, conditioned on the already predicted results, the modeling of time and space becomes conditionally independent at the current step: given the outcome of the previous step, the current-step distribution factorizes over time and space.

## 2. Preliminaries

The location of an event can be represented as longitude-latitude coordinates in continuous space.
It can also be associated with discrete labels, such as the neighborhoods of crime events. Let \(x_{i}=(t_{i},s_{i})\) denote the \(i_{th}\) spatio-temporal event, written as the pair of occurrence time \(t\in\mathbb{T}\) and location \(s\in\mathbb{S}\), where \(\mathbb{T}\times\mathbb{S}\subseteq\mathbb{R}\times\mathbb{R}^{d}\). Then a spatio-temporal point process can be defined as a sequence \(S=\{x_{1},x_{2},...,x_{L}\}\), where the number of events \(L\) is also stochastic. Let \(H_{t}=\{x_{i}|t_{i}<t,x_{i}\in S\}\) denote the event history before time \(t\); modeling STPPs is concerned with parameterizing the conditional probability density function \(p(t,s|H_{t})\), which denotes the conditional probability density of the next event happening at time \(t\) and space \(s\) given the history \(H_{t}\). **Discussion on shortcomings.** In existing methods for STPPs, given the event history, space and time are assumed to be conditionally independent (Grover and Leskovec, 2000; Grover and Leskovec, 2000; Grover and Leskovec, 2001; Grover and Leskovec, 2002; Grover and Leskovec, 2003) or unilaterally dependent (Grover and Leskovec, 2000; Grover and Leskovec, 2003), i.e., the space is dependent on the time through \(p(s|t)\). These dependence restrictions degrade the model's predictive performance on entangled space and time interactions conditioned on history. Besides, most approaches require integration operations when calculating the likelihood, or limit intensity functions to integrable forms, leading to a trade-off between accuracy and efficiency. We compare the shortcomings of existing approaches in Table 1\({}^{2}\), which motivate us to design a more flexible and effective model. Footnote 2: TPP models can be used for STPPs where the space acts as the marker. ### Denoising Diffusion Probabilistic Models Diffusion models (Hoh and Zhang, 2017) generate samples by learning a distribution that approximates a data distribution. The distribution is learned by gradually reversing a process of adding noise, recovering the actual value starting from Gaussian noise. At each step of the denoising process, the model learns to predict a slightly less noisy value. Let \(x^{0}\sim q(x^{0})\) denote a multivariate variable from a specific input space \(X\subseteq\mathbb{R}^{D}\), and consider a probability density function \(p_{\theta}(x^{0})\) that aims to approximate \(q(x^{0})\). Diffusion models are latent variable models, which are defined by two processes: the forward diffusion process and the reverse denoising process. Let \(x^{k}\) for \(k=1,2,...,K\) denote a sequence of latent variables in \(\mathbb{R}^{D}\); the forward diffusion process is defined by a Markov chain: \[q(x^{1:K}|x^{0})=\prod_{k=1}^{K}q(x^{k}|x^{k-1})\enspace, \tag{1}\] where \(q(x^{k}|x^{k-1})\coloneqq\mathcal{N}(x^{k};\sqrt{1-\beta_{k}}\,x^{k-1},\beta_{k}\mathbf{I})\), and \(\beta_{1},...,\beta_{K}\in(0,1)\) is a given increasing variance schedule representing the noise level at each step. \(x^{k}\) can be sampled in closed form as \(q(x^{k}|x^{0})=\mathcal{N}(x^{k};\sqrt{\overline{a}_{k}}\,x^{0},(1-\overline{a}_{k})\mathbf{I})\), where \(a_{k}\coloneqq 1-\beta_{k}\) and \(\overline{a}_{k}=\prod_{j=1}^{k}a_{j}\). Then a noisy observation at the \(k_{th}\) step can be expressed as \(x^{k}=\sqrt{\overline{a}_{k}}\,x^{0}+\sqrt{1-\overline{a}_{k}}\,\epsilon\), where \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\) and \(x^{0}\) is the clean observation.
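In code, the closed-form corruption \(q(x^{k}|x^{0})\) amounts to only a few lines; the sketch below is our illustration (with an assumed linear variance schedule and illustrative names), drawing a noisy observation \(x^{k}\) directly from a clean one.

```
import torch

def make_schedule(K, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule beta_1..beta_K and its cumulative products."""
    betas = torch.linspace(beta_start, beta_end, K)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)  # \bar{a}_k = prod_{j<=k} a_j
    return betas, alpha_bars

def q_sample(x0, k, alpha_bars):
    """Draw x^k ~ q(x^k | x^0) = N(sqrt(abar_k) x^0, (1 - abar_k) I)."""
    noise = torch.randn_like(x0)
    abar = alpha_bars[k - 1]  # the schedule is 1-indexed in the text
    xk = torch.sqrt(abar) * x0 + torch.sqrt(1.0 - abar) * noise
    return xk, noise  # the noise is the regression target of the denoiser

# toy usage: corrupt a (time interval, 2-D location) event at step k = 50
betas, alpha_bars = make_schedule(K=200)
x0 = torch.tensor([0.7, 0.1, -0.3])
xk, eps = q_sample(x0, k=50, alpha_bars=alpha_bars)
```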
On the contrary, the reverse denoising process recovers \(x^{0}\) starting from \(x^{K}\), where \(x^{K}\sim\mathcal{N}(x^{K};0,\mathbf{I})\). It is defined by the following Markov chain with learned Gaussian transitions: \[\begin{split}& p_{\theta}(x^{0:K})\coloneqq p(x^{K})\prod_{k=1}^{ K}p_{\theta}(x^{k-1}|x^{k})\enspace,\\ & p_{\theta}(x^{k-1}|x^{K})\coloneqq\mathcal{N}(x^{k-1};\mu_{ \theta}(x^{k},k),\sigma_{\theta}(x^{k},k)\mathbf{I})\enspace,\end{split} \tag{2}\] \(p_{\theta}(x^{k-1}|x^{k})\) aims to remove the Gaussian noise added in the forward diffusion process. The parameter \(\theta\) can be optimized by minimizing the negative log-likelihood via a variational bound: \[\min_{\theta}\mathbb{E}_{q(x^{0})}\leq\min_{\theta}\mathbb{E}_{q(x^{k})}[- \text{log}p(x^{K})-\sum_{k=1}^{K}\text{log}\frac{p_{\theta}(x^{k-1}|x^{k})}{q( x^{k}|x^{k-1})}]\enspace. \tag{3}\] Ho et al. (Ho et al., 2017) show that the denoising parameterization can be trained by the simplified objective: \[\mathcal{E}_{x^{0}\sim q(x^{0}),\epsilon\sim\mathcal{N}(0,\mathbf{I})}[\| \epsilon-\epsilon_{\theta}(x_{k},k)\|^{2}]\enspace, \tag{4}\] where \(x^{k}=\sqrt{\overline{a}_{k}}x^{0}+(1-\overline{a}_{k})\epsilon\). \(\epsilon_{\theta}\) needs to estimate Gaussian noise added to the input \(x^{k}\), which is trained by MSE loss between the real noise and predicted noise. Therefore, \(\epsilon_{\theta}\) acts as the denoising network to transform \(x^{k}\) to \(x^{k-1}\). Once trained, we can sample \(x^{k-1}\) from \(p_{\theta}(x^{k-1}|x^{k})\) and progressively obtain \(x^{0}\) according to Equation (2). ## 3. Spatio-temporal diffusion point processes Figure 2 illustrates the overall framework of DSTPP, which consists of two key modules, the spatio-temporal self-attention encoder, and the spatio-temporal diffusion model. The spatio-temporal encoder learns an effective representation of the event history, then it acts as the condition to support the spatio-temporal denoising diffusion process. We first present the spatio-temporal encoder in Section 3.1. Then we formulate the learning of the spatio-temporal joint distribution as a denoising diffusion process, and introduce the diffusion process and inverse denoising process in Section 3.2. We describe how to train this model and perform sampling in Section 3.3. Finally, We demonstrate the detailed architecture of the denoising network parametrization in Section 3.4. Figure 2. The overview of the proposed DSTPP framework. ``` 0:\(h_{i-1}\) 0:\(x_{i}^{0}\sim q(x_{i}^{0})\), \(k\sim\text{Uniform}(1,2,...,K)\) \(\epsilon\sim\mathcal{N}(0,I)\) Take gradient descent step on \(\nabla_{\phi,\theta}\|\epsilon-\epsilon_{\theta}(\sqrt{\overline{\epsilon}_{k}} x_{i}^{0}+\sqrt{1-\overline{a}_{k}}\epsilon,h_{i-1},k)\|^{2}\) ``` 0: Converged ``` **Algorithm 1** Training for each spatio-temporal event \(x_{i}=(\tau_{i},s_{i})\) ### Spatio-temporal Encoder To model the spatio-temporal dynamics of events and obtain effective sequence representations, we design a self-attention-based spatio-temporal encoder. The input of the encoder is made up of events \(x=(t,s)\). To obtain a unique representation for each event, we use two embedding layers for the time and space separately. 
For the space \(s\in\mathbb{R}^{n}\), we utilize a linear embedding layer; for the timestamp, we apply a positional encoding method following (Zhu et al., 2017): \[[e_{t}]_{j}=\begin{cases}cos(t/10000^{\frac{t-1}{M}})&\text{if $j$ is odd}\\ sin(t/10000^{\frac{t-1}{M}})&\text{if $j$ is even}\end{cases}, \tag{5}\] where \(e_{t}\) denotes the temporal embedding and \(M\) is the embedding dimension. For the spatial domain, we use linear projection to convert continuous or discrete space into embeddings as follows: \[e_{s}=W_{e}s \tag{6}\] where \(W_{e}\) contains learnable parameters. We use \(W_{e}\in\mathcal{R}^{M\times D}\) if the space \(s\) is defined in the continuous domain \(\mathbb{R}^{D},D\in\{1,2,3\}\). We use \(W_{e}\in\mathcal{R}^{M\times N}\) if the spatial information is associated with discrete locations represented by one-hot ID encoding \(s\in\mathbb{R}^{N}\), where \(N\) is the number of discrete locations. In this way, we obtain real-value vectors \(e_{s}\) for both continuous and discrete spatial domains. For each event \(x=(t,s)\), we obtain the spatio-temporal embedding \(e_{st}\) by adding the positional encoding \(e_{t}\) and spatial embedding \(e_{s}\). The embedding of the \(S=\{(t_{i},s_{i})\}_{i=1}^{I}\) is then specified by \(E_{st}=\{e_{st,1},e_{st,2},...,e_{st,L}\}\in\mathbb{R}^{L\times M}\), where \(e_{st,i}=e_{s,i}+e_{t,i}\). In the meantime, we also keep the temporal embedding \(E_{t}=\{e_{t_{1}},e_{t,2},...,e_{t,L}\}\) and spatial embedding \(E_{s}=\{e_{s_{1}},e_{s_{2}},...,e_{s,L}\}\), respectively, with the goal of capturing characteristics of different aspects. If only spatio-temporal representation is available, the model may fail when dealing with cases where the temporal and spatial domains are not entangled. With learned representations from different aspects, we did not simply sum them together. Instead, we concatenate them and enable the model to leverage representations adaptively. After the initial spatial embedding and temporal encoding layers, we pass \(E_{st}\), \(E_{s}\), and \(E_{t}\) through three self-attention modules. Specifically, the scaled dot-product attention (Shen et al., 2017) is defined as: \[\begin{split}\text{Attention}(Q,K,V)=\text{Softmax}(\frac{QK^{T} }{\sqrt{d}})\\ S=\text{Attention}(Q,K,V)V\end{split}, \tag{7}\] where \(Q,K,\) and \(V\) represent queries, keys, and values. In our case, the self-attention operation takes the embedding \(E\) as input, and then converts it into three matrices by linear projections: \[Q=EW^{Q},K=EW^{K},V=EW^{V}, \tag{8}\] where \(W^{Q},W^{K}\), and \(W^{V}\) are weights of linear projections. Finally, we use a position-wise feed-forward network to transform the attention output \(S\) into the hidden representation \(h(t)\). For three embeddings \(E_{s},E_{t}\) and \(E_{st}\) containing information of different aspects, we all employ the above self-attentive operation to generate hidden spatial representation \(h_{s}(t)\), temporal representation \(h_{t}(t)\), and spatial-temporal representation \(h_{st}(t)\). As a result, the hidden representation \(h_{i-1}\) in Figure 2 is a collection of the three representations. ### Spatio-temporal Diffusion and Denoising Processes Conditioned on the hidden representation \(h_{i-1}\) generated by the encoder, we aim to learn a model of the spatio-temporal joint distribution of the future event. The learning of such distribution is built on the diffusion model (Han et al., 2017), and the values of space and time are diffused and denoised at each event. 
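Before detailing the diffusion steps, a small sketch of the event embedding used by the encoder of Section 3.1 (Equations (5)-(6)) is given below. It is our illustration rather than the released implementation; in particular, the exact pairing of sine and cosine dimensions follows the usual convention and may differ in minor details from Equation (5), and the attention blocks that follow are standard and therefore omitted.

```
import torch

def time_encoding(t, M):
    """Sinusoidal encoding of event timestamps (cf. Eq. (5)); returns (len, M)."""
    j = torch.arange(M, dtype=torch.float32)               # dimension index
    base = torch.tensor(10000.0)
    freqs = torch.pow(base, -(j - (j % 2)) / M)            # each sin/cos pair shares a frequency
    angles = t.unsqueeze(-1) * freqs                       # (len, M)
    return torch.where(j % 2 == 0, torch.sin(angles), torch.cos(angles))

class EventEmbedding(torch.nn.Module):
    """Spatial linear embedding (Eq. (6)) added to the temporal encoding."""

    def __init__(self, space_dim, M):
        super().__init__()
        self.W_e = torch.nn.Linear(space_dim, M, bias=False)
        self.M = M

    def forward(self, t, s):
        e_t = time_encoding(t, self.M)   # temporal embedding
        e_s = self.W_e(s)                # spatial embedding
        return e_s + e_t, e_s, e_t       # e_st together with the two separate parts

# toy usage: a sequence of 5 events with 2-D locations and M = 16
emb = EventEmbedding(space_dim=2, M=16)
e_st, e_s, e_t = emb(torch.rand(5), torch.rand(5, 2))
```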
### Spatio-temporal Diffusion and Denoising Processes Conditioned on the hidden representation \(h_{i-1}\) generated by the encoder, we aim to learn a model of the spatio-temporal joint distribution of the future event. The learning of such a distribution is built on the diffusion model (Han et al., 2017), and the values of space and time are diffused and denoised at each event. Specifically, for each event \(x_{i}=(\tau_{i},s_{i})\) in the sequence, where \(\tau_{i}\) denotes the time interval since the last event, we model the diffusion process as a Markov process over the spatial and temporal domains as \((x_{i}^{0},x_{i}^{1},...,x_{i}^{K})\), where \(K\) is the number of diffusion steps. From \(x_{i}^{0}\) to \(x_{i}^{K}\), we add small amounts of Gaussian noise step by step to the space and time values until they are corrupted into pure Gaussian noise. The process of adding noise is similar to image scenarios, where the noise is applied independently on each pixel (Han et al., 2017). We diffuse separately on the spatial and temporal domains by the following probabilities: \[\begin{split}& q_{st}(x_{i}^{k}|x_{i}^{k-1})\coloneqq\big(q(\tau_{i}^{k}|\tau_{i}^{k-1}),\,q(s_{i}^{k}|s_{i}^{k-1})\big)\enspace,\\ & q(x^{k}|x^{k-1})\coloneqq\mathcal{N}(x^{k};\sqrt{1-\beta_{k}}x^{k-1},\beta_{k}\mathbf{I})\enspace,\end{split} \tag{9}\] where \(\alpha_{k}=1-\beta_{k}\) and \(\overline{\alpha}_{k}=\prod_{s=1}^{k}\alpha_{s}\). On the contrary, we formulate the reconstruction of the point \(x_{i}=(\tau_{i},s_{i})\) as reverse denoising iterations from \(x_{i}^{K}\) to \(x_{i}^{0}\) given the event history. In addition to the history representation \(h_{i-1}\), the denoising processes of time and space are also conditioned on each other's values obtained at the previous step. The predicted values of the next step are modeled in a conditionally independent manner, which is formulated as follows: \[p_{\theta}(x_{i}^{k-1}|x_{i}^{k},h_{i-1})=p_{\theta}(\tau_{i}^{k-1}|\tau_{i}^{k},s_{i}^{k},h_{i-1})\,p_{\theta}(s_{i}^{k-1}|\tau_{i}^{k},s_{i}^{k},h_{i-1})\enspace. \tag{10}\] In this way, we manage to disentangle the modeling of the spatio-temporal joint distribution into conditionally independent modeling, which enables effective and efficient modeling of the observed spatio-temporal distribution. The overall reverse denoising process is formulated as follows: \[p_{\theta}(x_{i}^{0:K}|h_{i-1})\coloneqq p(x_{i}^{K})\prod_{k=1}^{K}p_{\theta}(x_{i}^{k-1}|x_{i}^{k},h_{i-1})\enspace. \tag{11}\] For the continuous-space domain, the spatio-temporal distribution can be predicted by Equation (11). For the discrete-space domain, we add a rounding step at the end of the reverse process, \(p_{\theta}(s_{i}|s_{i}^{0})\), to convert the real-valued embedding \(s_{i}^{0}\) to the discrete location ID \(s_{i}\). ### Training and Inference Training. For a spatio-temporal point process, training should optimize the parameters \(\theta\) that maximize the log-likelihood: \[\sum_{i=1}^{L}\text{log}\,p_{\theta}(x_{i}^{0}|h_{i-1})\enspace, \tag{12}\] where \(L\) is the number of events in the sequence. Based on a similar derivation to the one in the preliminary section, we train the model with a simplified loss function for the \(i_{th}\) event and diffusion step \(k\) as follows (Kang et al., 2017): \[\mathcal{L}=\mathbb{E}_{x_{i}^{0},\epsilon,k}[\|\epsilon-\epsilon_{\theta}(\sqrt{\overline{\alpha}_{k}}x_{i}^{0}+\sqrt{1-\overline{\alpha}_{k}}\epsilon,h_{i-1},k)\|^{2}]\enspace, \tag{13}\] where \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\). Samples at each diffusion step \(k\) for each event are included in the training set. We train the overall framework, consisting of the ST encoder and the ST diffusion module, in an end-to-end manner. The pseudocode of the training procedure is shown in Algorithm 1.
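Below is a minimal sketch of the per-event training step of Equation (13) and Algorithm 1. The `CondNoisePredictor` is only a stand-in for the co-attention denoising network of Section 3.4, and the history representation `h` is random here purely for illustration; in the actual framework it would come from the spatio-temporal encoder, with gradients also flowing into the encoder for end-to-end training.

```python
import torch
import torch.nn as nn

K, M = 200, 32                               # diffusion steps and hidden size (assumed)
betas = torch.linspace(1e-4, 0.02, K)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

class CondNoisePredictor(nn.Module):
    """Stand-in for epsilon_theta(x^k, h_{i-1}, k) used in Eq. (13)."""
    def __init__(self, x_dim, h_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + h_dim + 1, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))
    def forward(self, x_k, h, k):
        k_feat = (k.float() / K).unsqueeze(-1)            # crude diffusion-step encoding
        return self.net(torch.cat([x_k, h, k_feat], dim=-1))

x_dim = 3                                    # event x_i = (tau_i, s_i) with 2-D space
eps_theta = CondNoisePredictor(x_dim, M)
opt = torch.optim.Adam(eps_theta.parameters(), lr=1e-3)

# one training step for a batch of events (Algorithm 1)
x0 = torch.randn(16, x_dim)                  # clean (tau_i, s_i) values
h = torch.randn(16, M)                       # history representation h_{i-1} (placeholder)
k = torch.randint(0, K, (16,))               # a random diffusion step per event
eps = torch.randn_like(x0)
x_k = alpha_bar[k].sqrt().unsqueeze(-1) * x0 + (1 - alpha_bar[k]).sqrt().unsqueeze(-1) * eps

loss = ((eps - eps_theta(x_k, h, k)) ** 2).mean()   # simplified objective, Eq. (13)
opt.zero_grad(); loss.backward(); opt.step()        # one gradient descent step
```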
Inference. To predict future spatio-temporal events with the trained DSTPP, we first obtain the hidden representation \(h_{i-1}\) by employing the spatio-temporal self-attention encoder on the past \(i-1\) events. Then, we can predict the next event starting from Gaussian noise \(s_{i}^{K}\), \(\tau_{i}^{K}\sim\mathcal{N}(0,\mathbf{I})\) conditioned on \(h_{i-1}\). Specifically, the reconstruction of \(x_{i}^{0}\) from \(x_{i}^{K}=(s_{i}^{K},\tau_{i}^{K})\) is formulated as follows: \[\begin{split}& s_{i}^{k-1}=\frac{1}{\sqrt{\alpha_{k}}}\Big(s_{i}^{k}-\frac{\beta_{k}}{\sqrt{1-\overline{\alpha}_{k}}}\epsilon_{\theta}(x_{i}^{k},h_{i-1},k)\Big)+\sqrt{\beta_{k}}\,z_{s}\enspace,\\ &\tau_{i}^{k-1}=\frac{1}{\sqrt{\alpha_{k}}}\Big(\tau_{i}^{k}-\frac{\beta_{k}}{\sqrt{1-\overline{\alpha}_{k}}}\epsilon_{\theta}(x_{i}^{k},h_{i-1},k)\Big)+\sqrt{\beta_{k}}\,z_{t}\enspace,\end{split} \tag{14}\] where \(z_{s}\) and \(z_{t}\) are both stochastic variables sampled from a standard Gaussian distribution. \(\epsilon_{\theta}\) is the trained reverse denoising network, which takes in the previous denoising result \(x_{i}^{k}\), the hidden representation of the sequence history \(h_{i-1}\), and the diffusion step \(k\). Algorithm 2 presents the pseudocode of the sampling procedure. ### Co-attention Denoising Network We design a co-attention denoising network to capture the interdependence between the spatial and temporal domains, which facilitates the learning of spatio-temporal joint distributions. Specifically, it performs spatial and temporal attention simultaneously at each denoising step to capture fine-grained interactions. Figure 3 illustrates the detailed network architecture. Each step of the denoising process shares the same structure, which takes in the previously predicted values \(s_{i}^{k+1}\) and \(\tau_{i}^{k+1}\), and the denoising step \(k\) with positional encoding. Meanwhile, the network also integrates the hidden representation \(h_{i-1}\) to achieve conditional denoising. Temporal attention aims to generate a context vector by attending to certain parts of the temporal input and certain parts of the spatial input, and so does spatial attention. We calculate the mutual attention weights, i.e., \(\alpha_{s}\) and \(\alpha_{t}\), for space and time based on the condition \(h_{i-1}\) and the current denoising step \(k\) as follows: \[\begin{split}& e_{k}=\text{SinusoidalPosEmb}(k)\enspace,\\ &\alpha_{s}=\text{Softmax}(W_{sa}\text{Concat}(h_{i-1},e_{k})+b_{sa})\enspace,\\ &\alpha_{t}=\text{Softmax}(W_{ta}\text{Concat}(h_{i-1},e_{k})+b_{ta})\enspace,\end{split} \tag{15}\] where \(W_{sa},W_{ta},b_{sa},b_{ta}\) are learnable parameters. \(\alpha_{s}\) and \(\alpha_{t}\) measure the mutual dependence between time and space, which is influenced by the event history and the current denoising step. Then we integrate the spatio-temporal condition \(h_{i-1}=\{h_{s,i-1},h_{t,i-1}\}\) into the previously predicted values \(s_{i}^{k+1}\) and \(\tau_{i}^{k+1}\) by feed-forward neural networks, where each layer is formulated as follows: \[\begin{split}& x_{s,i}=\sigma(W_{s}s_{i}^{k+1}+b_{s}+W_{sh}h_{s,i-1}+b_{sh}+e_{k})\enspace,\\ & x_{t,i}=\sigma(W_{t}\tau_{i}^{k+1}+b_{t}+W_{th}h_{t,i-1}+b_{th}+e_{k})\enspace,\end{split} \tag{16}\] where \(W_{s}\in\mathbb{R}^{M\times D}\), \(W_{t}\in\mathbb{R}^{M\times 1}\), \(W_{sh},W_{th}\in\mathbb{R}^{M\times M}\), and \(b_{s},b_{t},b_{sh},b_{th}\in\mathbb{R}^{M\times 1}\) are learnable parameters of the linear projections, and \(\sigma\) denotes the ReLU activation function. Finally, the outputs of spatial attention and temporal attention are calculated as follows: \[\begin{split}& x_{i}=[x_{s,i},x_{t,i}]\enspace,\\ &\epsilon_{s,i}^{k}=\sum\alpha_{s}x_{i}\enspace,\quad\epsilon_{t,i}^{k}=\sum\alpha_{t}x_{i}\enspace,\end{split} \tag{17}\] where \(\epsilon_{s,i}^{k}\) and \(\epsilon_{t,i}^{k}\) are the predicted noise at step \(k\) for the \(i_{th}\) event. We can obtain the predicted values \(s_{i}^{k}\) and \(\tau_{i}^{k}\) at step \(k\) according to Equation (14). Then the predicted values \(s_{i}^{k}\) and \(\tau_{i}^{k}\) are fed into the denoising network again to iteratively predict the results towards the clean values of space and time. In this way, the interdependence between time and space is captured adaptively and dynamically, facilitating the learning of the spatio-temporal joint distribution.
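The following is a rough sketch of one step of the co-attention denoising network following Equations (15)-(17). The sinusoidal step embedding, the hidden sizes, and the output heads that map the attended context back to the data dimensions are assumptions made for illustration; the module plays the role of \(\epsilon_{\theta}\) in the Equation (14) updates.

```python
import torch
import torch.nn as nn

M, D = 32, 2     # hidden size and spatial dimension (assumed)

def sinusoidal_pos_emb(k, dim=M):
    """SinusoidalPosEmb(k) for the diffusion step k, as in Eq. (15)."""
    j = torch.arange(dim // 2, dtype=torch.float32)
    freqs = k / (10000 ** (j / (dim // 2)))
    return torch.cat([torch.sin(freqs), torch.cos(freqs)])

class CoAttentionDenoiser(nn.Module):
    """One denoising step: predicts spatial and temporal noise (eps_s, eps_t)."""
    def __init__(self):
        super().__init__()
        self.att_s = nn.Linear(2 * M + M, 2)   # W_sa over Concat(h_{i-1}, e_k), Eq. (15)
        self.att_t = nn.Linear(2 * M + M, 2)   # W_ta over Concat(h_{i-1}, e_k)
        self.proj_s = nn.Linear(D, M)          # W_s s_i^{k+1} + b_s, Eq. (16)
        self.proj_t = nn.Linear(1, M)          # W_t tau_i^{k+1} + b_t
        self.proj_sh = nn.Linear(M, M)         # W_sh h_{s,i-1} + b_sh
        self.proj_th = nn.Linear(M, M)         # W_th h_{t,i-1} + b_th
        self.out_s = nn.Linear(M, D)           # assumed output head back to space dim
        self.out_t = nn.Linear(M, 1)           # assumed output head back to time dim

    def forward(self, s_k, tau_k, h_s, h_t, k):
        e_k = sinusoidal_pos_emb(k)
        h = torch.cat([h_s, h_t])
        a_s = torch.softmax(self.att_s(torch.cat([h, e_k])), dim=-1)   # alpha_s
        a_t = torch.softmax(self.att_t(torch.cat([h, e_k])), dim=-1)   # alpha_t
        x_s = torch.relu(self.proj_s(s_k) + self.proj_sh(h_s) + e_k)   # Eq. (16)
        x_t = torch.relu(self.proj_t(tau_k) + self.proj_th(h_t) + e_k)
        x = torch.stack([x_s, x_t])                                    # x_i = [x_{s,i}, x_{t,i}]
        eps_s = self.out_s((a_s.unsqueeze(-1) * x).sum(dim=0))         # Eq. (17)
        eps_t = self.out_t((a_t.unsqueeze(-1) * x).sum(dim=0))
        return eps_s, eps_t

net = CoAttentionDenoiser()
h_s, h_t = torch.randn(M), torch.randn(M)      # spatial / temporal history representations
eps_s, eps_t = net(torch.randn(D), torch.randn(1), h_s, h_t, k=100)
```

At sampling time, the returned `eps_s` and `eps_t` would be plugged into the Equation (14) updates to obtain \(s_{i}^{k-1}\) and \(\tau_{i}^{k-1}\), iterating from \(k=K\) down to \(k=1\).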
Figure 3. Network architecture of the spatio-temporal co-attention mechanism. Each step in the denoising process shares the same network structure, with spatio-temporal hidden representations as conditions. ## 4. Experiments In this section, we perform experiments to answer the following research questions: * **RQ1:** How does the proposed model perform compared with existing baseline approaches? * **RQ2:** Is the joint modeling of spatial and temporal dimensions effective for STPPs, and what is the spatio-temporal interdependence like during the denoising process? * **RQ3:** How does the total number of diffusion steps affect the performance? * **RQ4:** How can we gain a deeper understanding of the reverse denoising diffusion process? ### Experimental Setup #### 4.1.1. **Datasets** We perform extensive experiments on synthetic and real-world datasets from the STPP literature. All datasets are obtained from open sources and contain up to thousands of spatio-temporal events. Varying across a wide range of fields, we use one synthetic dataset and three real-world datasets: earthquakes in Japan, COVID-19 spread, bike sharing in New York City, and a simulated Hawkes Gaussian Mixture Model process (Bradley et al., 2017). Besides, we use a real-world dataset, Atlanta Crime Data, whose spatial locations are discrete neighborhoods. We briefly introduce them here; further details can be found in Appendix A. **(1) Earthquakes.** Earthquakes in Japan with a magnitude of at least 2.5 from 1990 to 2020, recorded by the U.S. Geological Survey3. Footnote 3: [https://earthquake.usgs.gov/earthquakes/search/](https://earthquake.usgs.gov/earthquakes/search/) **(2) COVID-19.** Publicly released by The New York Times (2020), recording daily infected cases of COVID-19 in New Jersey state4. We aggregate the data at the county level. **(3) Citibike.** Bike sharing in New York City collected by a bike sharing service. The start of each trip is considered an event. **(4) HawkesGMM5.** This synthetic dataset uses a Gaussian Mixture Model to generate spatial locations. Events are sampled from a multivariate Hawkes process. **(5) Crime6.** It is provided by the Atlanta Police Department, recording robbery crime events. Each event is associated with the time and the neighborhood. Footnote 4: [https://github.com/nytimes/covid-19-data](https://github.com/nytimes/covid-19-data) Footnote 5: [https://github.com/facebookresearch/neural_stpp/blob/main/toy_datasets.py](https://github.com/facebookresearch/neural_stpp/blob/main/toy_datasets.py) #### 4.1.2. **Baselines** To evaluate the performance of our proposed model, we compare it with commonly-used methods and state-of-the-art models. The baselines can be divided into three groups: spatial baselines, temporal baselines, and spatio-temporal baselines.
It is common for previous methods to model the spatial domain and temporal domain separately, so spatial baselines and temporal baselines can be combined freely for STPPs. We summarize the three groups as follows6: Footnote 6: Appendix B provides more details of the used baselines. * **Spatial baselines:** We use conditional kernel density estimation (Condition KDE) (Bradley et al., 2017), Continuous normalizing flow (CNF), and Time-varying CNF (Bradley et al., 2017) (TVCNF) (Bradley et al., 2017). The three methods all model continuous spatial distributions. * **Temporal baselines:** We include commonly used TPP models. Classical TPP models include the Poisson process (Zhu et al., 2017), Hawkes Process (Han et al., 2017), and Self-correcting process (Krause et al., 2017). We also incorporate neural TPP models, including Recurrent Marked Temporal Point Process (RMTPP) (Bradley et al., 2017), Neural Hawkes Process (NHP) (Zhu et al., 2017), Transformer Hawkes Process (THP) (Zhu et al., 2017), Self-attentive Hawkes Process (SAHP) (Zhu et al., 2017). Besides, we also compare with intensity-free approaches: Log Normal Mixture model (LogNormMix) (Zhu et al., 2017), and Wasserstein GAN (WGAN) (Zhu et al., 2017). * **Spatio-temporal baselines.** We include state-of-the-art spatio-temporal baselines, including Neural Jump Stochastic Differential Equations (NJSDE) (Zhu et al., 2017), Neural Spatio-temporal Point Process (NSTPP) (Bradley et al., 2017), and DeepSTPP (Zhu et al., 2017). #### 4.1.3. **Evaluation Metrics** We evaluate the performance of models from two perspectives: likelihood comparison and event prediction comparison. We use negative log-loglikelihood (NLL) as metrics, and the time and space are evaluated, respectively. Although the exact likelihood cannot be obtained, we can write the variational lower bound (VLB) according to Equation (3) and utilize it as the NLL metric instead. Thus, the performance on exact likelihood is even better than the reported variational lower bound. The models' predictive ability for time and space is also important in practical applications (Zhu et al., 2017). Since time intervals are real values, we use a common metric, Root Mean Square Error (RMSE), to evaluate time prediction. The spatial location can be defined in \(D\)-dimensional space, so we use Euclidean distance to measure the spatial prediction error. We refer the readers to Appendix C.1 for more details of the used evaluation metrics. ### Overall performance Table 2 and Table 3 show the overall performance of models on NLL and prediction, respectively. Figure 4 shows the prediction performance of models in discrete-space scenarios. From these results, we have the following observations: * **Unreasonable parametric assumptions for point processes destroy the performance severely.** The worst performance of the self-correcting process indicates the assumption that the occurrence of past events inhibits the occurrence of future events, does not match realities. On the contrary, the Hawkes process, which assumes the occurrence of an event increases the probability of event occurrence in the future, outperforms other classical models (Poisson and Self-correcting), with an obvious reduction of temporal NLL. Nevertheless, the self-exciting assumption can still fail when faced with cases where previous events prevent subsequent events. Therefore, classical models that require certain assumptions, cannot cover all situations with different dynamics. 
* **It is necessary to capture the spatio-temporal interdependence.** NSTPP models the dependence of space on time by \(p(s|t)\) \begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{EarthJuake} & \multicolumn{2}{c|}{COVID-19} & \multicolumn{2}{c|}{Citibike} & \multicolumn{2}{c}{HawkesGMM} \\ \cline{2-9} Model & Spatial \(\downarrow\) & Temporal \(\downarrow\) & Spatial \(\downarrow\) & Temporal \(\downarrow\) & Spatial \(\downarrow\) & Temporal \(\downarrow\) & Spatial \(\downarrow\) & Temporal \(\downarrow\) \\ \hline Conditional KDE & 2.21\(\pm\)0.105 & -(1) & 2.31\(\pm\)0.084 & - & 2.74\(\pm\)0.001 & - & 0.236\(\pm\)0.001 & - \\ CNF & 1.35\(\pm\)0.000 & - & 2.05\(\pm\)0.014 & - & 2.15\(\pm\)0.000 & - & 0.427\(\pm\)0.002 & - \\ TVCNF & 1.34\(\pm\)0.008 & - & 2.04\(\pm\)0.004 & - & 2.19\(\pm\)0.025 & - & 0.431\(\pm\)0.008 & - \\ Possion & - & -0.146\(\pm\)0.000 & - & -0.876\(\pm\)0.0021 & - & -0.626\(\pm\)0.000 & - & 1.34\(\pm\)0.000 \\ Hawkes & - & -0.514\(\pm\)0.000 & - & -2.06\(\pm\)0.000 & - & -1.06\(\pm\)0.001 & - & 0.880\(\pm\)0.000 \\ Self-correcting & - & 13.8\(\pm\)0.533 & - & 7.13\(\pm\)0.062 & - & 7.11\(\pm\)0.010 & - & 4.59\(\pm\)0.135 \\ RMTPP & - & 0.0930\(\pm\)0.051 & - & -1.30\(\pm\)0.022 & - & 1.24\(\pm\)0.001 & - & 1.52\(\pm\)0.002 \\ NHP & - & -0.676\(\pm\)0.001 & - & -2.30\(\pm\)0.001 & - & -1.14\(\pm\)0.001 & - & 0.580\(\pm\)0.000 \\ THP & - & -0.976\(\pm\)0.011 & - & -2.12\(\pm\)0.002 & - & -1.49\(\pm\)0.003 & - & -0.402\(\pm\)0.001 \\ SAHP & - & -0.229\(\pm\)0.007 & - & -1.37\(\pm\)0.118 & - & -1.02\(\pm\)0.007 & - & -1.25\(\pm\)0.136 \\ LogNormMix & - & -0.341\(\pm\)0.071 & - & -2.01\(\pm\)0.025 & - & -1.06\(\pm\)0.005 & - & 0.630\(\pm\)0.004 \\ \hline NJSDE & 1.65\(\pm\)0.012 & 0.0950\(\pm\)0.203 & 2.21\(\pm\)0.005 & -1.82\(\pm\)0.002 & 2.63\(\pm\)0.001 & -0.804\(\pm\)0.059 & 0.395\(\pm\)0.001 & 1.77\(\pm\)0.030 \\ NJTPP & 0.885\(\pm\)0.037 & -0.623\(\pm\)0.004 & 1.90\(\pm\)0.017 & -2.25\(\pm\)0.002 & 2.38\(\pm\)0.053 & -1.09\(\pm\)0.004 & 0.285\(\pm\)0.011 & 0.824\(\pm\)0.005 \\ DeepSTTP & 4.92\(\pm\)0.007 & -0.174\(\pm\)0.001 & 0.361\(\pm\)0.01 & -1.09\(\pm\)0.01 & **-4.94\(\pm\)0.018** & -1.13\(\pm\)0.002 & 0.519\(\pm\)0.001 & 0.322\(\pm\)0.002 \\ \hline DSTPP (ours) & **0.413\(\pm\)0.006** & **-1.10\(\pm\)0.020** & **0.350\(\pm\)0.029** & **-2.66\(\pm\)0.003** & 0.529\(\pm\)0.011 & **-2.43\(\pm\)0.010** & **0.200\(\pm\)0.047** & **-1.63\(\pm\)0.002** \\ \hline \hline \end{tabular} \({}^{(b)}\) Spatial baselines and temporal baselines can be combined freely for modeling spatio-temporal domains. \end{table} Table 2. Performance evaluation for negative log-likelihood per event on test data. \(\downarrow\) means lower is better. Bold denotes the best results and underline denotes the second-best results. Figure 4. The performance of models on discrete-space datasets for both time and space of the next event. 
\begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{EarthJuake} & \multicolumn{2}{c|}{COVID-19} & \multicolumn{2}{c|}{Citibike} & \multicolumn{2}{c}{HawkesGMM} \\ \cline{2-9} Model & Spatial \(\downarrow\) & Temporal \(\downarrow\) & Spatial \(\downarrow\) & Temporal \(\downarrow\) & Spatial \(\downarrow\) & Temporal \(\downarrow\) & Spatial \(\downarrow\) & Temporal \(\downarrow\) \\ \hline Conditional KDE & 11.3\(\pm\)0.658 & - & 0.688\(\pm\)0.047 & - & 0.718\(\pm\)0.001 & - & 1.54\(\pm\)0.006 & - \\ CNF & 8.48\(\pm\)0.054 & - & 0.559\(\pm\)0.000 & - & 0.722\(\pm\)0.000 & - & 71663\(\pm\)0.0516 & - \\ TVCNF & 8.11\(\pm\)0.001 & - & 0.560\(\pm\)0.000 & - & 0.705\(\pm\)0.000 & - & 2.03\(\pm\)0.000 & - \\ Possion & - & 0.631\(\pm\)0.017 & - & 0.463\(\pm\)0.021 & - & 0.438\(\pm\)0.001 & - & 2.81\(\pm\)0.070 \\ Hawkes & - & 0.544\(\pm\)0.010 & - & 0.672\(\pm\)0.0088 & - & 0.534\(\pm\)0.011 & - & 2.63\(\pm\)0.002 \\ Self-correcting & - & 11.22\(\pm\)0.046 & - & 2.83\(\pm\)0.141 & - & 10.7\(\pm\)0.169 & - & 9.72\(\pm\)0.159 \\ RMTPP & - & 0.424\(\pm\)0.009 & - & 1.32\(\pm\)0.024 & - & 2.07\(\pm\)0.015 & - & 3.38\(\pm\)0.012 \\ NHP & - & 1.86\(\pm\)0.023 & - & 2.13\(\pm\)0.100 & - & 2.36\(\pm\)0.056 & - & 2.82\(\pm\)0.028 \\ THP & - & 2.44\(\pm\)0.021 & - & 0.611\(\pm\)0.003 & - & 1.46\(\pm\)0.009 & - & 5.35\(\pm\)0.002 \\ SAHP & - & 0.409\(\pm\)0.002 & - & 0.184\(\pm\)0.024 & - & 0.203\(\pm\)0.010 & - & 2.75\(\pm\)0.049 \\ LogNormMix & - & 0.593\(\pm\)0.005 & - & 0.165\(\pm\)0.011 & - & 0.350\(\pm\)0.013 & - & 2.79\(\pm\)0.021 \\ WGAN & - & 0.481\(\pm\)0.007 & - & 0.124\(\pm\)0.002 & - & -0.238\(\pm\)0.003 & - & 2.83\(\pm\)0.048 \\ \hline NJSDE & 9.98\(\pm\)0.024 & 0.465\(\pm\)0.009 & 0.641\(\pm\)0.009 & 0.137\(\pm\)0.001 & 0.707\(\pm\)0.001 & 0.264\(\pm\)0.005 & 1.624\(\pm\)0.003 & 2.25\(\pm\)0.007 \\ NJTPP & 8.11\(\pm\)0.000 & 0.547\(\pm\)0.01 also achieves remarkably significant improvement across various datasets. In terms of models' predictive power, our model also achieves optimal performance, with remarkable improvements compared to the second-best model. In addition, as Figure 4 shows, DSTPP delivers better predictive performance compared with other solutions in modeling discrete-space scenarios. The flexible framework that requires no parameter assumptions and MC estimations enables DSTPP to achieve superior performance. ### Analysis of Spatio-temporal Interdependence To gain a deeper understanding of the spatio-temporal interdependence in the denoising process, we perform an in-depth analysis of co-attention weights. Specifically, the analysis is conducted on two representative datasets: Earthquake and Synthetic-Independent, where the Earthquake dataset is highly spatio-temporal entangled, and the Synthetic-Independent dataset is totally spatio-temporal independent. Appendix A provides the generation details of the synthetic dataset. We use these two datasets to validate whether the designed co-attention mechanism can learn different interdependence between time and space. At each step of the denoising process, we calculate attention weights of the temporal and spatial dimensions on themselves and each other. Figure 6 shows how attention weights change as denoising proceeds. As shown in Figure 6(a), at the early stage, temporal and spatial domains do not assign attention weights to each other, and the attention weights on themselves are close to one. 
At the final stage (step \(\geq 150\)), the two domains start to assign attention weights to each other. At last, for the temporal domain, the attention weights on time and space are approximately 0.83 and 0.17; for the spatial domain, the attention weights are close to evenly divided (0.52 and 0.48), suggesting that the spatial domain is more dependent on the temporal domain. In the later stage of the denoising iterations, the model learns a distribution closer to the real case; thus, it is reasonable that the spatial and temporal domains assign more attention weights to each other. Figure 6(b) displays different results: the two domains share almost no attention weights to each other, indicating that the model has successfully learned the independent relationship. Figure 6(a) and (b) together validate the effectiveness of the co-attention mechanism, which can adaptively learn various interaction mechanisms between time and space. ### Ablation Studies **Co-attention Mechanism.** In order to examine the effectiveness of the co-attention mechanism, we degrade our DSTPP into a base framework, DSTPP-Ind, which models the distributions of space and time independently in the denoising process. To be specific, we replace \(p_{\theta}(t_{i}^{k-1}|t_{i}^{k},s_{i}^{k},h_{i-1})\) and \(p_{\theta}(s_{i}^{k-1}|t_{i}^{k},s_{i}^{k},h_{i-1})\) in Equation (10) with \(p_{\theta}(t_{i}^{k-1}|t_{i}^{k},h_{i-1}),p_{\theta}(s_{i}^{k-1}|s_{i}^{k},h_{ i-1})\), where space and time are not conditionally dependent on each other. Figure 5 shows the performance comparison of DSTPP and DSTPP-Ind in continuous-space settings. We can observe that DSTPP trained by incorporating the joint modeling of time and space performs consistently better than DSTPP-Ind with independent modeling. These results indicate the necessity to capture the interdependence between time and space, and meanwhile, validate the effectiveness of the spatio-temporal co-attention design. Due to the space limit, we leave other results in Appendix D. ### Analysis of Reverse Diffusion Processes To gain a deeper understanding of the denoising process, We visualize the spatial distribution during the reverse denoising iterations in Figure 7. As we can observe, at the beginning of the denoising process, the spatial distribution displays a Gaussian noise. With Figure 5. Ablation study on the joint spatio-temporal modeling. DSTPP-Ind denotes the degraded version of DSTPP, where spatial and temporal domains are independent. Figure 6. Spatial and temporal attention weights in the denoising iterations for two datasets with different spatio-temporal interdependence. Best viewed in color. progressive denoising iterations, the data distribution deforms gradually and becomes more concentrated. Finally, at the last step, the spatial distribution fits perfectly with the ground truth distribution. It indicates that our DSTPP is able to learn the generative process of spatial distribution successfully. Besides, the denoising process is not a linear change, where the distribution changes during the last 50 steps are more significant than the previous steps. Combined with results in Section 4.3, where the interdependence between spatial and temporal domains is effectively captured in the latter stage, it is reasonable that the denoising effect is improved significantly during this period. ## 5. 
Related Work **Spatio-temporal Point Processes.** Temporal point process models (Gross and Hinton, 2006; Goyal et al., 2017; Goyal et al., 2018; Goyal et al., 2019; Goyal et al., 2019) can be directly used for STPPs, where the space is considered as the event marker. Kernel density estimation methods are also used to model continuous-space distributions in STPP models (Gross and Hinton, 2006; Goyal et al., 2017; Goyal et al., 2019; Goyal et al., 2019; Goyal et al., 2019). Most existing solutions follow an intensity-based paradigm, and their main challenge is how to choose a good parametric form for the intensity function. There exists a trade-off between the modeling capability of the intensity function and the cost to compute the log-likelihood. Some intensity-free models (Goyal et al., 2019; Goyal et al., 2019) are proposed to tackle this problem; however, the probability density function either is unavailable (Goyal et al., 2019) or still has certain model restrictions (Goyal et al., 2019). Another drawback of existing models is that they can only model either the continuous-space domain or the discrete-space domain, which largely limits their usability in real-world scenarios. Recently, a line of advances have been developed for the generative modeling of point processes. For example, generative adversarial networks (Goyal et al., 2019; Goyal et al., 2019) are used to learn to generate point processes in a likelihood-free manner. Reinforcement learning (Goyal et al., 2019; Goyal et al., 2019) approaches and variational autoencoders (Goyal et al., 2019; Goyal et al., 2019) are also included to explore the generative performance of TPPs. Some works also use noise contrastive learning (Goyal et al., 2019; Goyal et al., 2019) instead of MLE. We are the first to learn point processes within the paradigm of diffusion models, which successfully address limitations in previous existing solutions. **Denoising Diffusion Probabilistic Models.** Denoising diffusion probabilistic models (DDPM) (Goyal et al., 2019; Goyal et al., 2019; Goyal et al., 2019), are a class of deep generative models, which are inspired by non-equilibrium thermodynamics. Due to their powerful generative capabilities, diffusion models have been used in a wide range of applications including image generation (Gross and Hinton, 2006; Goyal et al., 2019; Goyal et al., 2019; Goyal et al., 2019), time series prediction and imputation (Goyal et al., 2019; Goyal et al., 2019), audio generation (Goyal et al., 2019; Goyal et al., 2019; Goyal et al., 2019), text generation (Goyal et al., 2019; Goyal et al., 2019; Goyal et al., 2019), 3D point cloud generation (Goyal et al., 2019; Goyal et al., 2019), and trajectory generation (Goyal et al., 2019; Goyal et al., 2019). In this paper, we first introduce the diffusion model to the domain of spatio-temporal point processes. ## 6. Conclusion In this paper, we propose a novel framework to directly learn spatio-temporal joint distributions with no requirement for independence assumption and Monte Carlo sampling, which has addressed the structural shortcomings of existing solutions. The framework also poses desired properties like easy training and closed-form sampling. Extensive experiments on diverse datasets highlight the impact of our framework against state-of-the-art STPP models. 
As for future work, it is promising to apply our model in urban system (Goyal et al., 2019; Goyal et al., 2019) as well as large-scale natural systems, such as climate changes and ocean currents, which are concerned with highly complex spatio-temporal data. ###### Acknowledgements. This work was supported in part by the National Key Research and Development Program of China under grant 2020YFA0711403, the National Nature Science Foundation of China under U22B2057, 61971267, and U1936217, and BNRist. Figure 7. Visualization of the spatial distribution at different stages in the denoising process (the first five columns in blue color). The last column in red color presents the real distribution. Starting from Gaussian noise, our DSTPP model gradually fits the spatial distribution of ground truth. Best viewed in color.
2307.13164
Malware Resistant Data Protection in Hyper-connected Networks: A survey
Data protection is the process of securing sensitive information from being corrupted, compromised, or lost. A hyperconnected network, on the other hand, is a computer networking trend in which communication occurs over a network. However, what about malware. Malware is malicious software meant to penetrate private data, threaten a computer system, or gain unauthorised network access without the users consent. Due to the increasing applications of computers and dependency on electronically saved private data, malware attacks on sensitive information have become a dangerous issue for individuals and organizations across the world. Hence, malware defense is critical for keeping our computer systems and data protected. Many recent survey articles have focused on either malware detection systems or single attacking strategies variously. To the best of our knowledge, no survey paper demonstrates malware attack patterns and defense strategies combinedly. Through this survey, this paper aims to address this issue by merging diverse malicious attack patterns and machine learning (ML) based detection models for modern and sophisticated malware. In doing so, we focus on the taxonomy of malware attack patterns based on four fundamental dimensions the primary goal of the attack, method of attack, targeted exposure and execution process, and types of malware that perform each attack. Detailed information on malware analysis approaches is also investigated. In addition, existing malware detection techniques employing feature extraction and ML algorithms are discussed extensively. Finally, it discusses research difficulties and unsolved problems, including future research directions.
Jannatul Ferdous, Rafiqul Islam, Maumita Bhattacharya, Md Zahidul Islam
2023-07-24T23:25:06Z
http://arxiv.org/abs/2307.13164v1
# Malware-Resistant Data Protection in Hyper-connected Networks: A survey ###### Abstract Data protection is the process of securing sensitive information from being corrupted, compromised, or lost. A hyper-connected network, on the other hand, is a computer networking trend in which communication occurs over a network. However, what about malware? Malware is malicious software meant to penetrate private data, threaten a computer system, or gain unauthorized network access without the user's consent. Due to the increasing applications of computers and dependency on electronically saved private data, malware attacks on sensitive information have become a dangerous issue for individuals and organizations across the world. Hence, malware defense is critical for keeping our computer systems and data protected. Many recent survey articles have focused on either malware detection systems or single attacking strategies variously. To the best of our knowledge, no survey paper demonstrates malware attack patterns and defense strategies combinedly. Through this survey, this paper aims to address this issue by merging diverse malicious attack patterns and machine learning (ML) based detection models for modern and sophisticated malware. In doing so, we focus on the taxonomy of malware attack patterns based on four fundamental dimensions: the primary goal of the attack, method of attack, targeted exposure and execution process, and types of malware that perform each attack. Detailed information on malware analysis approaches is also investigated. In addition, existing malware detection techniques employing feature extraction and ML algorithms are discussed extensively. Finally, it discusses research difficulties and unsolved problems, including future research directions. Data Protection; Malware Analysis; Malware Attack; Malware Detection; Feature Extraction; Machine Learning Algorithms. ## 1 Introduction In this digital world, preserving the security of sensitive data can be challenging for internet users due to the threat of unauthorized computer system access by malware attacks. Malicious programs or malware are any undesirable programs or files that are secretly injected into other hosts and are committed to destroying the operating system or network and leading to the exfiltration of private data and other harmful effects. Cyber attackers utilize various malicious programs, such as ransomware, spyware, adware, rootkit, worm, horse, botnets, trojans, and viruses at different times to convert the contents of a computer's file system without the victim's consent. Malware attacks can compromise a computer system using various techniques, including spreading from compromised systems, deceiving users into downloading malicious files and attracting users to access malicious websites. Malware targets include end-user computers, network equipment (such as routers and switches), servers, and logic controllers. Today's modern internet is afflicted by the growth and intelligence of an ever-increasing quantity of malware [1]. The digital revolution has grown increasingly important in our daily lives due to its better efficiency, rapid communication, and incomparable simplicity. People can share information and transfer money with just a click. 
Unfortunately, dealing with malware attacks remains a challenge for the netizen, even with advances in technology and cybersecurity, because the threat actors are always trying to discover new ideas for making cash or rising trades by thieving private data, bank accounts, and credit statements from many public and private organizations. This increases the considerable safety risk of data privacy for the user. According to Cybersecurity Statistics, 653% of malicious activity was claimed in July 2020 alone, compared to the same month in 2019 [2]. Another report by the US FBI has shown that in 2021, the number of malicious threats climbed by around 300% due to the growing number of internet users, particularly during the COVID -19 pandemic [3]. Therefore, this creates a significant potential for vast financial losses globally. Cybersecurity Ventures estimated that in 2021 the worldwide fiscal loss due to cybercrime was about $6 trillion. Moreover, to smooth data accessibility and the distribution of computer resources, many organizations, governments, and enterprises usually gather and keep sensitive data in a host machine. If malware attacks infect an organization's host computers, they may share sensitive data and many things to blend in during the execution processes. Hence, identifying harmful run-time attempts and other attacks is critical to protecting sensitive data while sharing in hyper-connected networks. Researchers have presented many solutions to control and mitigate malware attacks using machine learning or other techniques to defend the security and confidentiality of private information. However, these processes can be challenging because today's malware threats are sneaker, more detrimental, and spread to other hosts silently. Hence, it is more difficult to spot and eliminate than earlier generations of malware [4]. Furthermore, cyber attackers use emerging sophisticated and different obfuscation techniques nonstop to design advanced malware variants, such as polymorphism, metamorphism, oligotrophic, etc., to fool security experts [5]. They develop sophisticated programming in such a method that disturbs the operation process. In addition, while executing in a controlled environment, advanced malware may detect and bypass anti-malware tools and hide destructive features. Hence, malware developers and anti-malware detection systems are locked in a never-ending arms race [6]. Furthermore, we regularly observe a rising number of zero-day attacks due to the dynamic nature of malware attacks. Zero-day attacks are attacks that have never been seen before. All the above factors are making the malware detection process more difficult. However, although many surveys in malware research have already been done [5],[7]-[16], these are either outdated or limited in scope. More specially, no comprehensive survey identifies both diverse malware attacks and their detection methods, which is the central issue of this review paper. This paper has conducted comprehensive and in-depth surveys of the existing papers focused on an overview of malware, the taxonomy of malware attack patterns, and three malware detection techniques, namely static-based, dynamic-based, as well as hybrid-based. Various feature extraction procedures and classification algorithms are discussed and reviewed to find an effective and robust method for classifying and identifying malicious programs. 
This survey is important because it is a concise framework that encompasses all areas of malware and gives vast relevant information. ### Contributions Malware is considered one of the leading threats among internet users today, particularly during this COVID-19 pandemic. Therefore, it is crucial to analyze and detect malware to defend against malicious attacks and stop their detrimental acts. The key contributions of this paper are listed below: * A taxonomy of malware attack patterns is developed that divides them into categories such as Polymorphic & Metamorphic attacks, Ransomware attacks, Fileless malware attacks, Advanced Persistence Threats (APT), and Zero-day attacks based on the targeted exposure, the attack techniques, execution process, and the forms of malware used to carry out the attack. * A comparative analysis has been presented using static, dynamic, and hybrid analysis methods and their tools and techniques to capture malware behavior. * Various types of static and dynamic feature extraction methods are discussed and reviewed. * A comparative evaluation of various machine-learning methods used in malware classification has been presented. * We have explored various research difficulties and challenges in this study, all of which are important in the performance of malware classifiers. ### Scope Selecting an appropriate and efficient malware detection approach is an open challenge. Hence, new approaches and experimental studies are very crucial in the anti-malware community. As a result, this survey paper will serve as a benchmark for developing a prototype for detecting cyber-attacks and responding effectively. In addition, this paper is designed to help cybersecurity experts who are keen on using ML techniques to automate the malware analysis process. ### Organization Before surveying this study, Section 2 illustrates an overall comparison of our survey with previous surveys, followed by Section 3 on the basics of malware. Sections 4 and 5 provide a broad review of existing research, including malware attack taxonomy and malware detection procedures, respectively. Section 6 summarizes and compares the papers on malware detection methods based on the surveyed input features and ML classifiers. Section 7 focuses on the research issues and challenges, which effectively encapsulate the strong and weak points of the findings or an assessment of the survey's excellence. It also discusses some limitations that remain unsolved. Finally, Section 8 brings this paper to a conclusion, including future research directions. Figure 1 presents the complete structure of this paper. ## 2 Comparison with related surveys This section summarizes the reviewed literature on malware analysis from 2011 to 2021 and analyzes the shortcomings that we aim to address in our work shown in Table 1. It will help researchers to construct a baseline for developing a method to counter such attacks. There has already been much research done on the use of machine-learning approaches for malware classification. Shabtai et al. (2009) produced the first study on this issue. Souri and Hosseini (2018) [7] offered an arrangement of machine learning-based malware-hunting techniques. Their study varies from our proposed paper in that they do not explore which features are taken seriously. Bazrafshan et al. 
(2013) [8] and Basu (2016) [9] concentrated on malware detection using machine learning and other techniques; however, they used a restricted number of feature types, whereas this paper suggests a higher number of feature types, Figure 1: Complete outline of the paper highlighting the broader scope of our research. A complete investigation of the development and present scenario for the detection of malicious code based on ML techniques was proposed by Singh and Singh (2021) [10], Sihwail et al. (2018) [11], Aslan and Samet (2020) [5], Abijah Roseline and Geetha (2021) [12], and Abusitta et al. (2021) [13]. However, they did not mention anything about the attack pattern. Other studies, for example, Gandotra et al. (2014) [14], Singla et al. (2015) [15], Yu et al. (2018) [16], Choudhary and Sharma (2020) [17], and Caviglione et al. (2021) [6] examined various machine learning methods for malware classification based on static and dynamic detection techniques, whereas our survey focused on hybrid malware detection including static and dynamic approaches. Moreover, some other surveys focused on only various dynamic analysis tools and techniques and malware classification taxonomy like Egele et al. (2012) [18], Or-Meir et al. (2019) [19], and Talukder and Talukder (2020) [20]. However, all researchers explained only a single malicious attack pattern. For example, Berrueta et al. (2019) [22], Harun Oz et al. (2021) [23], and Moussaileb et al. (2021) [24] mainly focused on the classification and detection of cryptographic ransomware employing ML algorithms. Sharma and K. Sahay (2014) [25] proposed a detailed survey for detecting and classifying polymorphic and metamorphic malware to protect data from their corresponding attacks. Kaur and Singh (2014) [27] conducted a detailed survey on zero-day attacks. Sibi Chakravararthy et al. (2019) [26] concentrated on innovative Advanced Persistent Threat (APT) and the study of Sudhakar and Kumar (2020) [28] examines all fileless malware's behavior. It is apparent from the literature that most recent studies do not provide in-depth research or offer remedies in a particular area. To tackle this problem, this paper presents detailed information on feature extraction, ML algorithms, and malware detection techniques to make our study as simple and informative as possible for the reader. However, most of the survey articles focus solely on the same attack in different ways. For instance, some researchers focus on ransomware attacks, others emphasize zero-day attacks and so on for other attacks. Our survey addresses this gap by combining and simplifying their work's diverse attack patterns. Thus, our survey \begin{table} \begin{tabular}{l| is comprehensive and unique as it provides wide-ranging insights into various attack patterns and malware detection systems, as opposed to other surveys. ## 3 Overview of malware This section provides an overview of malware, including how malware is defined, malware categories, malware spreading mediums, and sources of collecting malware datasets. ### What is malware? Malware, termed "malicious software" is a type of computer program with the intent of harming or exploiting another software application or compatible devices. Malware began to appear in the 1980s and the very first computer malware, dubbed Brain, was launched in 1986 [29]. A malicious program can be distributed in various formats, including executables, binaries, shellcodes, scripts, and software. 
In this paper, we use the terms "binary code", "malignant scripts" or "malicious program" to represent malware [19]. Malware attacks can break security holes, penetrate deeper into devices, propagate across networks, and interrupt an organization's main ongoing operations. Malware is the primary cause of most attacks, including significant data breaches that result in widespread fraud and identity theft. Ransomware attacks, which cost thousands of dollars, are also driven by malware. Cybercriminals target individuals, businesses, and even government agencies with malicious attacks [30]. ### Categories of malware Malware can be categorized into several forms depending on its aims and the distribution of information systems. Table 2 depicts a brief description of diverse sorts of malware. _Classification of malware by malicious behavior_ The classifications and concepts listed above are suitable when describing malware to non-professionals. However, for analysis, it is more crucial to concentrate on malware behavior rather than malware type which is as follows [19]. _Stealing information_-Theft of information is the most typical malicious action, which can involve financial information, personal information, passcodes, or access credentials. According to the CIA triad, data confidentiality is undermined by information theft, which is most associated with malware like trojans, spyware, etc. _Creating a vulnerability_-Malware can generate new vulnerabilities by disabling anti-virus software, installing spyware, altering usernames, modifying firewall policies, degrading software to an outdated version, and other methods. This activity endangers the system's security and is linked to RATs (Remote Access Trojans) and Bots. _Denying service-_ A denial-of-service attack can be risky when services are frequently visited, as they are now because it reduces service availability. Hackers can refuse service in a variety of ways. _Executing commands from the C&C_- Malware developers occasionally use the C&C server to send and receive information to and from the victim's computer which allows it to carry out malicious activities. This type of behavior, which is commonly linked with bots, ransomware, and RATs, threatens the system's integrity. _Deceiving the user-_ Fraud can be exploited by bad actors to enter secured systems and/or manipulate data for their benefit. This activity is linked to Trojans, RATs, and scareware, and it put at risk the integrity and confidentiality of the system. _Stealing computing resources-_ Apart from doing the computations required to mine bitcoins, crypto miners generally enable the system to run normally and do not interact with the system's information. This activity puts at risk the system's integrity and availability. _Spreading_-A common behavior seen in worms and viruses is spreading. Figure 2 depicts the malicious behavior that each type of malware exhibits. The link between the standard malware taxonomy and the behavioral taxonomy we presented here is depicted in this diagram. ### Propagation medium of malware Malware propagates throughout host systems due to human actions, both directly and indirectly, and cybercriminals are constantly devising new ways to infiltrate a victim's system to access sensitive data. Some of the most popular sources for malware propagation are as follows: _Drive-by Download_ - Anyone connected to the network and engaging in surfing websites is a possible victim. Users can upload malware-infected-pirated programs that appear genuine [1]. 
_Network-_ Malicious actors sometimes use the network as a source of malicious code to perform the attack operation because malware can propagate throughout any network [34]. _Hacked websites_-Hackers can upload harmful code to a legitimate site if the user is unaware of system vulnerabilities in the web's configuration. _Backdoors-_ Backdoors are means of access to a computer through which malware programs are installed. A well-known backdoor is FinSpy. Once implemented on a system, it allows an attacker to upload and execute malicious scripts on the machine remotely [6]. _Ads-_ Advertisements are another tricky source of malicious attacks for cybercriminals. A user can infect the computer with malware by clicking on advertisements on relevant websites. _Phishing-_ Phishing is a popular source of malware to conduct the attack procedure. The attackers present themselves as legitimate entity and urge you to offer private information. Users may become infected with malware by clicking \begin{table} \begin{tabular}{l l l} \hline \hline Malware & Brief description & Examples \\ types & & \\ \hline Virus [21] & It carries out harmful operations such as removing or altering system or user data files. & Sector virus, Brain boot, Elk Cloner, etc. \\ Worm [31] & A worm is computer software that copies itself and spreads over networks. & Morris, Blaster, Melissa, Stuxnet, My doom, Sasser, etc. \\ Rootkit [31] & A rootkit is a piece of malware that owns a victim’s machine from a distance without being detected by the user. & NTRootkit, SONY \\ Trojan & A phishing program that tries to pass itself off as a horzess [21] & BMG, Copy Protection \\ Horse [21] & A phishing program that tries to pass itself off as a harmless application, a good thing, or even fun to stay hidden and carry out its nefarious activities. & Trojan-Banker, \\ & Adware displays advertising to the user automatically. & Plankton, Fireball, Dollar \\ Adware [21] & Adware can inject this advertising into other computer software or web links; in certain situations, it can even replace a prevailing advertisement with a new one. & Finisher, internet optimizer, Look2Me, etc. \\ Spyware [31] & Spyware monitors user activity invisibly and without the user’s knowledge. & Finisher, internet optimizer, Look2Me, etc. \\ Botnet [32] & Bot malware compromises computer systems to exploit their resources. & Agobot, Mirai, Conficker, Zeus, Waledac, etc. \\ Backdoor [33] & A computer program is created to avoid a computer’s security features and implant them on a device, allowing a threat actor to enter the computer. & – \\ Ransomw & The ransomware code locks the users’ personal information are [31] & WannaCry, CryptoLocker, Cryptowall, etc. \\ \hline \hline \end{tabular} \end{table} Table 2: Classification of malware depending on the purpose and information-sharing system. Figure 2: Correlation between malware types with malicious behavior. on these kinds of links or movies [30]. _Removable drives_- Flash drives and hard disks are examples of removable drives. These are the most common methods for malware propagating from one machine to another. Removable drives can distribute any malware, including viruses, worms, and ransomware [6]. _Software downloads_- Software downloads can also be a source of malware as users can obtain a wide range of useful software via the internet. ### Malware datasets To understand malware's tricks and strategies, researchers need to collect malware datasets. 
One method of gathering samples is by online sites of anti-malware projects like MalShare, Malware DB, VirusShare, etc. Additionally, some specific organizations and research workgroups sometimes try sharing their malware sample data to address the lack of publicly available data sources, for example, Microsoft, Ember, etc. Table 3 presents some open-source links that can be used to gather malware samples. ## 4 Taxonomy of attack pattern An attack pattern is a conceptualization technique that outlines how a specific sort of detected threat is carried out. It helps security experts and designers to understand how their systems can be exploited and how to protect them \begin{table} \begin{tabular}{l l l} \hline \hline Sources & Description & Dataset weblink \\ \hline Microsoft & It was released by Microsoft and contained more than 20,000 pieces & [https://www.kaggle.com/c](https://www.kaggle.com/c) \\ Malware & of malware from nine different malware types with a combination of & Malware-classification \\ Classification & 10,868 Bytes files and 10,868 ASM files [35]. & \\ Challenge & & \\ \hline EMBER & Endgame Malware Benchmark for Research has 1 million binary & [https://github.com/endgame](https://github.com/endgame) \\ dataset (2018) & samples, including 900K training data and 200k test data. The dataset is available for developing models that can identify malicious & Windows PE files [36]. \\ VirusShare & The VirusShare dataset is a great publicly available source for researching malware [37]. & VirusShare.com \\ Malshare & The MalShare Project is a collaborative venture that offers public & MalShare \\ & access to malware samples and allows 2000 calls per day. This project benefits everyone by providing free resources and permitting & \\ & high-volume use [38]. & \\ SoReL-20M & Sophos and Reversing Labs have compiled a database containing 20 & [https://github.com/sophos-](https://github.com/sophos-) \\ & million malicious files as well as 10 million disabled malware & ai/SOREL-20M \\ & applications, in response to the lack of reliable data [39]. & \\ SANDBOX & The Cuckoo Sandbox produced malicious public files for computer & [https://github.com/ocata](https://github.com/ocata) \\ & security researchers, based on a study of Windows API calls [40]. & k/malware.api class \\ MalwareBazaar & The MalwareBazaar classifies specimens according to the date, & MalwareBazaar | Malware \\ & data type, signature, and other information [41]. & \\ InQuest & This malware database provides a variety of malicious apps and & \\ Malware & information about their analysis [42]. & \\ Contagio & Contagio is a collection of the newest malicious files, attacks, & \\ Malware & discoveries, and assessments [43]. & \\ Dump & & \\ theZoo & A project called “The Zoo” was created to make malware detection & [https://github.com/vtisf/](https://github.com/vtisf/) \\ & publicly accessible [44]. & theZoo \\ VirusSign & VirusSign provides 100,000 different types of malware every day for & [https://www.virussign.c](https://www.virussign.c) \\ & researchers [45]. & \\ \hline \hline \end{tabular} \end{table} Table 3: List of some publicly available open sources for collecting malware samples. successfully. The key points should be included in an attack pattern namely-Name and classification of the attack pattern, targeted exposures, or vulnerabilities, attacking method, attacker goal, and consequences, and how malware attack works [46]. Figure 3 shows the taxonomy of malware attack patterns. 
It is helpful to generate a taxonomy to characterize the huge range of malicious attacks systematically. Depending on the attack pattern, malware includes a broad range of threats or attacks such as: * Polymorphic and metamorphic attack * Ransomware attack * Fileless malware attack * Advanced Persistence Threat (APT) * Zero-day attack and much more. ### Polymorphic and metamorphic attack A polymorphic attack is a stealth strategy used by malware to create an unlimited number of new, distinct types of malwares such as trojans, viruses, worms, bots, or keyloggers which modifies itself with new variations for each attack [47]. This makes it harder for anti-malware software to detect and stop the attacks. Operating systems, server applications, and network services are the main targeted exposure of this attack. Attack execution process - Polymorphic attacks can be implemented in a variety of diverse ways, such as Exploit mutation and shellcode polymorphism. In general, a polymorphic attack has three main elements [48] which are outlined below: _Attack vector-_ An attack vector is used to exploit the vulnerability of the target's host to get malware onto the computer or to build a mutated gene. The mutation or polymorphism attempts to conceal its true purpose, and in some cases, to make it more harmful. _Attack body-_ After the flaw has been exploited, the malicious code performs the targeted damage using the shellcode polymorphism technique. Shellcode is an element of the payload used to exploit a software flaw, and it typically contains assembly instructions that enable the attacker's remote connection. The most frequent directive in shellcode is to run a shell, which allows the code to execute commands on the computer. Other common functions include creating a privileged user profile, initiating a reversing connection to the attacker's computer, and performing various destructive activities, such as memory shifting, instruction swapping, command reordering, garbage insertions, and finally encryption. _Polymorphic Decryptor-_ This section contains the program that decodes the shellcode. It decrypts the encrypted assault body and takes control of it. The Decryptor is polymorphic and can be obfuscated in a variety of ways. Metamorphic attack- Metamorphic malware is more complex than polymorphic malware because it employs dynamic code concealment instead of the encryption key, where the code alters with each repetition of the malicious process [49]. This continual modification makes it extremely difficult for anti-virus programs to recognize, isolate, and remove this malware[50]. There are several types of polymorphic and metamorphic families such as VirLock, Locky, Cerber, Crysis, Kelihos Botnet, Beebone, etc. ### Ransomware attack Ransomware is a type of virus that typically targets consumers, the public, and corporate organizations to demand ransom money from the victims. The primary purpose of ransomware is to encrypt all data on a device, making it impossible for the victim to access the data. Cybercriminals use a variety of tactics to gain access to consumers' or institutions' documents and assets to extort a ransom. These ransom demands can come in the form of payments in real-time, or the data will be permanently destroyed if the victim does not pay up [51]. The attacker accepts payments from clients in virtual currencies such as Bitcoin, making it harder to track their names and location. 
Cybercriminals often use email phishing and brute force attacks to get an early footing on sensitive information and then use ransomware to take control of systems [52]. Attack Execution process- Figure 4 demonstrates the five stages of the ransomware attack chain. The following are the major stages of a ransomware attack chain: _Infection-_ The first phase is when the attacker uses various attack methods to try and get malware onto a target device. This could be done through ads, hacked websites, drive-by downloads, etc. [53], [23]. Infection can occur physically or virtually to back up data and computer memories. _Key generation_- Once infected, the ransomware communicates with a remote command and control server to receive instructions from the attackers on how to carry out its malicious activities. This includes retrieving a secret key and other information about the victim machine. _Scanning_- In this phase, the malware looks for files to encrypt on the local machine and networked devices.

Figure 3: Taxonomy of malware attack pattern

_Encryption_- Ransomware now conducts attacks, which include encrypting data or blocking computers to restrict the users from using their contents or computer. _Extortion_- Finally, the ransomware shows a ransom letter to notify the victim of the attack. The ransom note reveals the facts of the attack and payment instructions. Developing picture, textual, or web pages is a typical way to keep track of notes [51],[23]. Reveton, 2012; CryptoLocker, 2013; Petya, 2016; WannaCry, 2017; PureLocker, 2018; LockerGoga, 2019; Corona ransomware, 2020; RaaS Ransomware, 2021 are some examples of ransomware families that occurred in the corresponding years.

### Fileless malware attack

Fileless malware does not require the traditional form of malicious executables to be placed on the victim's system. This payload is delivered through alternative means such as command scripts (e.g., JavaScript, PowerShell, batch commands) or Remote Desktop Protocol (RDP) connections [54]. Fileless malware is so named because it does not produce extra files like standard malware that uses files to infect a host, instead, it inserts malicious software into the main memory. Hence, it is also called memory-based malware. As a result, fileless malware attacks are extremely effective and successful [55]. An updated report disclosed that fileless malware infections increased by 888% in 2020 [56]. The memory, documents, and Windows registry are all potential targets for fileless malware. The attacker aims to gain access to sensitive data by exploiting vulnerabilities in the unencrypted version of the victim organization's software. They use two widely used Windows tools or programs - scripts and installation processes - to inject malicious code into memory without being detected [57] namely: PowerShell scripts and Windows Management Instrumentation (WMI). PowerShell is a program that can be used to translate simple text files into commands for Windows.
Since it has access to all the files on a computer, malicious PowerShell operations are difficult to detect. The Windows Management Instrumentation (WMI) is another native tool that can be applied to launch fileless assaults. WMI is used to pass instructions to PowerShell [58]. SQL Slammer, Kovter, Phase Bot, Poweliks, Lurk Trojan, etc., are some examples of fileless malware families. Attack execution process-Figure 5 shows the three stages attack chain of fileless malware which are as follows: _The entry phase_-To begins, the attacker uses a variety of attack vectors, such as an infected system, a suspicious URL, a vulnerable webpage, or a malignant attachment in a malicious or spoofing email, in which the hacker tells their victims to click on a link or an attachment to obtain access to their system. Figure 4: Ransomware attack chain Figure 5: Attack chain stages of fileless malware _Execution_- Second, malicious code can try to stay alive by creating plugins or using WMI and Jscript to create a backdoor. Alternatively, the code could install malicious scripts directly into memory to stay active. _Compromised system to exfiltrate data_-Third, PowerShell can be used to install and implement malware or backdoors into memory without leaving any traces on the computer, which can be used to hijack data from targets [28]. ### Advanced Persistent Threat (APT) An advanced persistent threat (APT) is an attack that uses persistent malware (e.g., Stuxnet, Flame, Duqu, and Project Sauron) to gain access to a system or network over an extended period [26]. APT malware is usually built to last a long time, thus the label "persistent" [59]. It mainly focuses on stealing data from an organization, rather than damaging the system or network [60]. The majority APTs are built for specific hacking attacks and leverage sophisticated attacking methodologies such as zero-days and social engineering like Water holing and Spear phishing, spam email, etc. _Attack execution process_- Figure 6 represents the five stages of Advance Persistence Threat (APT) to perform the data-stealing process [61],[61] which are as follows: _Infiltration_- In the first phase, cybercriminals most commonly access their victim's computer system using social engineering, spear phishing, or zero-day malware. _Installation_- The attacker implant malware and certain other remote management tools on the victim's computer, which allows them to control the computer remotely. _Expansion_- At this stage, cybercriminals have gained direct control over other workspaces, server software, and network components, which gives them access to login details, such as account usernames and passwords, to gain access to crucial business information. _Encryption_- In this phase, the attackers steal resources and information from the victim's computer, encrypting and compressing it for future exfiltration. _Exfiltration_- Finally, the cyber attacker exfiltrates the stolen data from the victim's network. They'll then try to remove any forensic proof of the data transmission. ### Zero-day attack A zero-day attack is a type of cyberattack that uses vulnerabilities in operating systems and presents a severe hazard to internet security. Polymorphic worms, viruses, Trojans, and other malware can be used in zero-day attacks [62]. Using this method, attackers can take useful info, such as legal papers and company information. Before a weakness in hardware and software is patched, zero-day attacks are carried out. 
No fingerprints are left behind by zero-day assaults, making them impossible to identify [63]. Cyber attackers apply various methods for launching and executing zero-day attacks [64] including:
* Spam emails and phishing
* Implanting exploit tools in advertisements and malicious sites
* Infecting a computer, networks, or servers.

Attack execution process- Poor computer or security settings, anti-virus software faults, and programming mistakes by professionals are the targets for zero-day attacks. Figure 7 demonstrates a zero-day attack that performs its execution in four phases [65], which are discussed below: _Searching vulnerabilities-_ Criminals look for vulnerabilities in software to exploit them, and hackers look for ways to attack essential systems or users before the developers can fix the problem. _Exploiting-_ One way to exploit a web browser vulnerability is to send emails to people, trying to trick them into visiting websites that are infected with viruses or other malware. _Launching attack-_ The zero-day exploit is a vulnerability that a hostile party has discovered, which can be used to launch attacks. The malware used in these attacks is usually complicated to detect. _Execution and exfiltration-_ The zero-day assault is executed once the zero-day exploit has been installed onto devices. In this phase, malware can harvest sensitive information like user credentials and passwords, destroy data, and ultimately take control of the computer.

## 5 Malware detection

Identifying threats or malware by scanning computers and other files is called malware detection. Malware detection consists of several phases to identify and categorize malware. First, malware analysis is performed statically or dynamically to check whether the suspected file is malicious. Then, features are gathered. Following that, feature selection and representation are completed. Lastly, the malware classifiers are trained using classification methods. A schematic representation of the malware detection process employing ML methods is shown in Figure 8.

### Malware analysis

The key objective of malware analysis is to check whether a specified file or system is malevolent [66]. Three significant approaches are used to perform malware analysis, namely static analysis, dynamic analysis, and hybrid analysis.

#### 5.1.1 Static analysis

Static malware analysis doesn't involve running the code. It uses signatures (series of bytes) to identify malware [67]. A static analysis examines the static characteristics of a suspected file. This includes characters, passwords, signatures, and information. Signature-based detection approaches are widely preferred in cyberspace due to their simplicity, user-friendliness, low false-positive rates, and minimal processing complexity [12]. However, they require more human interaction and can detect only known malware. Different disassemblers are used to convert the malicious or binary files into assembly code, such as WinDbg, IDA Pro, Capstone, and OllyDbg [68]. Table 4 gives a quick summary of static analysis tools.

Figure 7: Zero-day attack chain

#### 5.1.2 Dynamic analysis

Dynamic analysis is an effective tool for observing the behaviour of a program at run-time, identifying errors and anomalies that would be difficult to detect in static code review [77]. This method generates a behaviour report on the malware-infected file's behaviours, such as its interface with the network, registry, and file system.
To avoid damage to the host Windows OS, this technique conducts the study in a distinct virtual system. After the execution, it keeps track of specific features like instructions, API calls, and system calls for individual executables. HookMe and Microsoft's Detours techniques in a sandbox are used to generate the logs of the run-time behaviour of malware. The log profiles are taken outside the sandbox to further process and calculate the frequencies and parameters of API calls [78]. Monitoring techniques for dynamic analysis- Analysis techniques are a type of investigation procedure that can be used in a specific tool [79]. The following techniques are used in the run-time behavioral study [19]: _Function call analysis-_ All processes rely on function calls to execute their functions. A basic instruction that uses a function by calling its name is known as a function call. The hooking mechanism can be used to grab function calls. _Execution control-_ After the malware is run, it should check for updates to see if the malware has changed or if the operating system has changed. If there is a problem with the malware, it can be stopped before it does any damage using various techniques including debugging, binary instrumentation, stealth breakpoints, and so on [19]. _Information flow tracking-_ Behavioral analysis tools use a technique called information flow tracking to record the flow of data inside malware during operation. This is also known as taint analysis because it uses tainted data [80]. \begin{table} \begin{tabular}{l l} \hline \hline Name of the tool & Explanation \\ \hline IDA Pro [69] & IDA Pro displays information about malicious software, which can help identify hackers. \\ PeView [70] & This tool provides information about the operating system files, including the headers that identify different types of software. The PE header information is then used in malware analysis to differentiate between harmful and malignant programs. \\ Yara [71] & The Yara tool can be used to identify strings in executable files that may be indicative of malicious behavior. \\ PEid [72] & This tool can determine if the malware is encrypted and, if so, which packer toolkit was used to encrypt it. \\ Radare [73] & Radare is a toolkit that can be used to reverse engineer various types of software, including Linux, Android, Windows, and macOS. \\ IOC Finder [74] & Indicator of Compromise (IOC) gives details on the computers that have been hacked. It creates a log report in MS Word or Web page format with details on a particular host’s network and device data. \\ OlyDump [75] & OlyDump is a tool for extracting code from the system database. This method is beneficial for analyzing packaged binaries that are hard to deconstruct. \\ CFF Explorer [76] & This program displays detailed information about the executable file. \\ \hline \hline \end{tabular} \end{table} Table 4: Static analysis tools. Figure 8: Schematic representation of the malware detection process employing ML methods. #### Capturing the traced report- Tracing is a way to analyze the behavior of malware after it's been executed, and it can provide lots of useful information. Monitoring tools for dynamic analysis- Various monitoring tools and different control environments are used to perform the techniques mentioned above and better understand the trace file. Table 5 provides a short explanation of dynamic malware investigation tools. 
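To make the API-call monitoring step more concrete, the following is a minimal sketch of how API-call frequencies could be computed once a sandbox log has been exported for post-processing. The report layout and the small API vocabulary used here are hypothetical illustrations rather than the format of any specific sandbox; real tools such as Cuckoo nest the calls per monitored process, so the extraction path would need to be adapted.

```python
import json
from collections import Counter

# Hypothetical log layout: {"api_calls": ["CreateFileW", "RegSetValueExW", ...]}.
# The vocabulary below is only an illustrative subset of monitored Windows APIs.
API_VOCABULARY = ["CreateFileW", "WriteFile", "RegSetValueExW",
                  "VirtualAlloc", "CreateRemoteThread", "InternetOpenA"]

def api_frequency_vector(report_path):
    """Return a fixed-length vector counting each monitored API in one run-time trace."""
    with open(report_path) as fh:
        report = json.load(fh)
    counts = Counter(report.get("api_calls", []))
    return [counts.get(api, 0) for api in API_VOCABULARY]

# Example usage: vec = api_frequency_vector("sample_report.json")
```

Count vectors of this kind can then be fed to the classifiers discussed later in this section.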
#### 5.1.3 Hybrid analysis The hybrid approach combines dynamic and static investigation elements to produce a more detailed understanding of malicious files. To provide full disassembled explanations and supplemental strings/API call sequences, it combines real-time information with in-depth static analysis of code dumping. ### Feature extraction The first step in malware detection is to obtain malicious software files. This is done through static and dynamic analysis of executable files [90]. Feature extraction is used to transform large, ambiguous data into a feature set that impacts the system's productivity, resilience, and precision. Figure 9 displays the taxonomy of feature extraction techniques. The feature extraction process is classified into two categories: \(\bullet\) Static approach and \begin{table} \begin{tabular}{l l} \hline \hline **Tools** & **Description** \\ \hline Process Explorer [14] & Process Explorer is a useful application for monitoring and managing processes on a Windows system, providing precise information on the system’s running activities. \\ Capture [14] & Capture is an analysis tool that uses three monitors to provide supplementary data about system activity including the file system, the registry, and the processes monitor respectively. It is a system memory or kernel-mode program. \\ VAMPiRE [81] & VAMPiRE is a tool used to halt malware execution occasionally and observe the behavior of malware and the operating system. \\ Wireshark, & Wireshark and Tshark are used to examine network traffic. They can capture all incoming and outgoing packets between the computer and other devices on the network. \\ Vis [83] & Vis is a tool to trace the report left by the malicious process using the volatile memory acquisition technique that changes the OS slightly to dodge detection by malware. \\ Regshot [84] & The Regshot utility is employed to log the registry modifications produced by the executing sample. \\ TQana [85] & TQana is a framework capable of detecting a malicious Internet Explorer browser that helps to study the dynamic behavior of spyware. \\ Memoryze [86] & Memoryze is a computer forensics tool that runs on commands. It can handle the entire memory dump. \\ TCPview [87] & A type of Windows networking device that displays information about all a computer’s UDP and TCP connections. \\ ApateDNS [88] & This device allows the analyst to collect DNS queries performed by malware without being instructed. It spoofs DNS replies for a certain IP address. \\ Sandboxes [89] & A sandbox is a software tool that can be used to analyze malware. It employs a variety of static and dynamic analysis approaches to produce a complete report. Different sandboxes include Cuckoo, CwSandbox, Anubis, GFI Sandbox, Parsa, etc. Additionally, sandboxes are often combined with other tools to retrieve various attributes from malware such as tcp dump for network activity and volatile for memory dumping. \\ \hline \hline \end{tabular} \end{table} Table 5: Tools for dynamic analysis Static feature extraction method- Static analysis is used to extract static features from binary files. These features can be used to identify malicious behavior. Two data sets are used to gather this information: the logic configuration of the application program or the machine language file recovered by changing and deconstructing the application code. 
There are several approaches, reported in the reviewed articles, for extracting features that aid in malware detection: _Byte sequence n-grams model-_ N-grams are a common technique for creating sequences of bytes from binary files. They can also be defined as byte codes and can be included in an executable file's characteristics, code, or information. Many researchers have used this approach to improve malware detection and categorization accuracy. For example, Saxe and Berlin's (2015) [91] study found that their algorithm using byte sequences was able to accurately identify malware with a 95% accuracy rate, and 0.1% FPR. Nataraj et al. (2011) [92] found that malware from the same family often shares similar visual characteristics, which can be detected using their technique. Yajamanam et al. (2018) [93] added an investigation into this, and the obtained accuracy was 92%. The same technique was applied by Rhodia et al. (2019) [94][85], where the binary is transformed into images, and found that it was very accurate. In several aspects, this study broadens and improves the strategy implemented by Yajamanam et al. For instance, the performance is compared between image-based and non-image-based analysis. Lin et al. (2015) [95] suggested a genetic approach that used about 790,000 n-grams to classify malware and obtained 90% efficiency. Furthermore, many other works rely on n-gram features for classifying malware [96], [97], [98], [99], [100], [101]. However, in most circumstances, byte sequences are unreliable. _Opcode n-gram features-_ Opcodes are short pieces of code that tell a computer what to do. They are like machine code, but they are easier to understand and can be preprocessed to provide extra information about a program (such as its name). Malware scripts are often locked up so that it's hard to figure out the sequence of bytes that make up the code, but opcodes make this easier [13]. An opcode performs numerical, logical, and data-manipulation operations and can be used to determine the difference between malicious and genuine software. For example, Shabtai et al. (2012) [102] suggested an opcode n-gram feature-based malware detection framework, with n extending from 1 to 6. Anderson et al. (2012) [97] also use the transition matrix from one opcode to the other as a feature. Moreover, Santos et al. (2013) [103] used normalized opcode n-gram frequency bands to characterize executables. According to Yuxin et al. (2019) [104], the malware was identified using a deep learning algorithm that relied on static patterns and bytecode. Also, malware can be classified using static analyses by studying the opcode in the articles [101], [100], [105], [106], [107]. _Portable Executables (PE)-_ The features of the PE method can be collected from the metadata saved in PE file types on a Windows system, and static analysis of PE structural information can help determine whether a file has been tampered with or corrupted to carry out malicious actions. To tackle encrypted malware, Wang and Wu (2011) [72] proposed a general Packing Detection Framework (PDF) that can be used to analyze Portable Executable (PE) files to determine if they are compressed or packed and was successful in 94.5% of packing detection cases based on analyzing 3784 non-packed executables and 1056 packed executables.

Figure 9: Taxonomy of feature extraction methods

The study by Kim et al. (2016) [108] found that using PE headers as features improved classification performance.
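A corresponding sketch for the PE structural features discussed in this paragraph is given below. It relies on the third-party pefile parser and reads only a handful of header fields, whereas the cited studies typically combine many more attributes; the selection here is purely illustrative.

```python
import pefile  # third-party parser for the Windows PE format

def pe_header_features(path):
    """Collect a few structural PE-header fields commonly used as static features."""
    pe = pefile.PE(path)
    return {
        "number_of_sections": pe.FILE_HEADER.NumberOfSections,
        "size_of_image": pe.OPTIONAL_HEADER.SizeOfImage,
        "entry_point": pe.OPTIONAL_HEADER.AddressOfEntryPoint,
        "timestamp": pe.FILE_HEADER.TimeDateStamp,
    }

# Example usage: features = pe_header_features("suspicious_sample.exe")
```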
A PE's static analysis can offer much useful information, including sectors, importers, keywords, and compilers [109], [110], [91], [111]. _String analysis-_ String features are retrieved from executables using clear text from program files such as window frames, message boxes, get versions, libraries, etc. These strings are readable characters encrypted in PE and non-PE executable code. Static analysis of a PE can be used to check for strings, including code fragments, creator signatures, data types, and system-relevant data [100], [91]. Printable strings are binary features that indicate whether a string is present in an executable. Dahl et al. (2013) [112] and Huang and Stokes (2016) [113] extracted uninitialized objects spilled from pictures of a folder in storage as usable characters. Islam et al. (2013) [114] used the string program in IDA Pro5 to get understandable words or strings from the entire file. However, most malicious programs do not depend on printed strings to perform tasks. _Image-based analysis-_ Nataraj et al. (2011) [92] pioneered a technique for visualizing malware binaries, by converting every value into a digital image with counts 0-255. This digital image is then used to characterize a malware image. The technique is approximately 40 times faster than traditional methods, and its accuracy is 98 percent. Furthermore, Le et al. (2018) [115] found a malicious program that can transform an entire executable into a sequence of images. The method was tested on a large sample of binaries and yielded an accuracy of 98.8%. A similar technique was used by Rhodia et al. (2019) [94] and looked at ways to improve the strategy used by Yajamanam et al. (2018) by comparing its performance between image-based and non-image-based analyses. _Function Call Graph (FCG)-_ A graph of function calls within a software program is created by static analysis. IDA Pro or Radare2 can later retrieve this information [33]. Some articles use the function call feature to detect and classify malware. For example, Kinable et al. (2011) [116] proposed a way to identify malicious code based on the structural similarities between its function call graphs. The method was used to compare the graphs of 1050 different malware samples. Islam et al. (2013) [114] found that the length of an executable's code can be used to identify different malware variants. They counted the sum of code bytes in executables to get this information. Furthermore, Hassen and Chan (2017) [117] suggested a rapid and accurate malware classification approach based on feature vector extraction from the Function Call Graph model achieves an overall accuracy of 0.979 on a smaller dataset. _Control Flow Graph-_ A graph of the program's flow is composed of nodes, which represent system calls and API calls. The control flow graph is used to represent the program's behavior and to identify the relationships between the different parts of the program [66]. Control flow graphs are used in various articles to identify malicious software. For example, Eskandari and Hashemi (2011) [118] described a technique for detecting metamorphic malware using control flow graphs and achieved 97% accuracy in identifying these types of malware. Later, Faruki et al. (2012) [119] looked at PE files to see if there were any abnormal patterns in the API calls, they contained They then used a CFG to reconstruct API calls and used n-gram processing to transform these API calls into input vectors ranging from 1 to 4. 
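As an illustration of the image-based representation described above, the following minimal sketch converts a binary into a grey-scale image in the spirit of Nataraj et al. [92]; the fixed image width of 256 pixels is an arbitrary choice for the example, not a value prescribed by the cited works.

```python
import numpy as np
from PIL import Image

def binary_to_grayscale(path, width=256):
    """Render a binary file as a grey-scale image: one byte becomes one pixel (0-255)."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = len(data) // width
    image = data[:rows * width].reshape(rows, width)   # drop the trailing partial row
    return Image.fromarray(image, mode="L")

# Example usage: binary_to_grayscale("suspicious_sample.bin").save("suspicious_sample.png")
```

The resulting images can then be classified with the CNN- or K-NN-based approaches surveyed above.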
_Entropy-_ The entropy of a byte series measures its numerical variation in terms of information theory concepts. Zero entropy indicates that the same characters have been reused throughout the code block, while a byte sequence with high entropy contains many different values. To spot malware, Sorokin and Jun (2011) [120] investigated how entropy varies among folders by comparing them to a training set. Baysa et al. (2013) [121] improved on earlier research by using wavelet methods to detect places where the entropy levels changed significantly. In addition, the paper by Wojnowicz et al. (2016) [122] calculated the entropy of a document's wavelet-based energy distribution, and then used a variety of logistic regression models to see how much of a change in entropy would make the document seem malicious.

Dynamic (run-time) feature extraction method- Dynamically analyzed function calls or behavior data types are stated as dynamic features like APIs and system calls, function parameter analysis, Control Flow Graphs (CFG), network activity, registry activity, and file system activity, which are as follows: _APIs and system calls-_ APIs and system calls represent malware behavior, and most run-time behavioral analyses trust the usage of API calls as the key feature in identifying malicious process activities [123], [100], [85]. In addition, using an emulator, Bai et al. (2014) [110] and Santos et al. (2013) [103] extracted the API calls dynamically. Similarly, by running an executable in a virtual machine, Islam et al. (2013) [114], Dahl et al. (2013) [112], and Uppal et al. (2014) [124] captured API function calls and associated variables for malware classification. In addition, a behavior-based model was projected by Galal et al. (2015) [125], Ki et al. (2015) [126], Liang et al. (2016) [127], and Xiaofeng et al. (2019) [128] to identify the malware's run-time activities. However, the combined model outperformed the separate models, reaching 96.7 percent accuracy. A few other authors employed system calls as features to investigate the malware samples. For example, the study by Kolosnjaji et al. (2016) [129] looked at how to find out which file systems or operating systems were initiated by an application program. Moreover, Anderson et al. (2012) [97] and Huang and Stokes (2016) [113] classify operating system calls into large groups, each representing a functionally related set of system operations, such as painting the display or writing files. The same features are used to classify the malware in other papers, e.g., Elhadi et al. (2013) [130] and Mao et al. (2015) [131]. _Network activity-_ Monitoring the PE's communication with the networks can provide useful information, such as how to communicate with a central server. Information on used networks, TCP/UDP channels, database queries, and DNS-level interactions can all be useful tools. Many reviewed articles collected this type of information using dynamically extracted network activities as a feature set [132], [133], [127], [134]. In addition, Bekerman et al. (2015) [135] developed a framework that analyzes traffic on the network to identify malicious programs. Arivudainambi et al. (2019) [136] focused on building a model that was used to detect malware samples from network artifacts and achieved 99% accuracy. In addition, a Probabilistic Neural Network (PNN) framework was proposed by Rabbani et al. (2020) [137] for identifying malicious activity in network attacks, which resulted in a 96.5% detection accuracy.
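Before turning to memory- and registry-based features, the block-wise entropy feature described earlier in this subsection can be sketched in a few lines. The 1 KiB block size is an illustrative choice, and high-entropy blocks are only a heuristic indicator of packing or encryption rather than proof of malicious content.

```python
import math
from collections import Counter

def block_entropies(path, block_size=1024):
    """Shannon entropy (bits per byte, between 0 and 8) of consecutive blocks of a file."""
    data = open(path, "rb").read()
    entropies = []
    for start in range(0, len(data), block_size):
        block = data[start:start + block_size]
        counts = Counter(block)
        total = len(block)
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(entropy)
    return entropies  # unusually high values often accompany packed or encrypted regions
```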
_Memory and Registry activity-_ At runtime, the contents of the computer's main memory can be used to infer the behavior of a computer program. In addition, the data saved in various registers during execution can provide valuable information about the context of a program, and it is one of the most important ways for a program to communicate with the Windows OS. Ghiasi et al. (2015) [138] proposed a method for detecting behavior similarity between two sets of data by analyzing the memory and registering data. Yucel et al. (2020) [139] also developed strategies for capturing malicious activities based on executable file memory pictures. Liu (2020) [140] proposed a way to mitigate the effects of Adversarial Examples (AE) using adversarial training and data visualization. Vasan (2020) [141] also proposed an ensemble of CNN models to classify malware based on images. Furthermore, the research of Singh and Singh (2020) [32] and Escudero Garcia and DeCastro-Garcia, (2021) [142] focused on behavior-based malware detection methods using optimum feature sets. The researchers detected malware with 99.54 % and 98% accuracy. _Instruction traces or run-time traces-_A dynamic run-time trace is a sequence of CPU commands that are executed while the code is running. This differs from static instruction traces, which are arranged as they exist in the binary format. Anderson et al. (2011) [97] proposed an infection recognition system relying on examining maps built from the code tracing acquired during the targeted executable's run-time. Again, Storlie et al. (2014) [143] demonstrated malicious file findings depending on automatically generated instruction traces investigation. Carlin et al. (2017) [144] also described a method for extracting program run-time traces both from legitimate and malignant executable files using dynamic response on virtual computers. Moreover, Ali et al. (2017) [89] developed a machine-learning algorithm to identify malicious files, and their results showed that it was 99% accurate. Alaeiyan et al. (2019) [145] proposed a method for detecting anomalies that relied on run-time variables. ### ML-based malware classification There has been increased interest in using machine learning techniques to predict and categorize malware over the last decade. A workflow in an ML model is an ongoing procedure that includes obtaining available data, purifying, and formatting it, constructing models, evaluating them, and putting them into operations. Figure 10 shows the workflow of machine learning that illustrates how the ML model works for automatic detection and classification. Numerous machine learning classifiers were used to train the system in the literature. A classification algorithm and the training data or selected features develop a machine learning model. A quick comparison of the different ML classifiers is presented in Table 6. \begin{table} \begin{tabular}{l l l l} \hline \hline ML algorithms & Key idea & Strength & Weakness \\ \hline SVM (Support & An SVM is a popular machine learning algorithm used & Provides better & The \\ Vector & to classify data such as ”malignant” and ”benign.” This & classification accuracy and & training \\ Machine) & approach relies on finding the best set of points that & can handle large datasets & time in \\ [146] & divides the data into these two groups. & and nonlinear patterns. & SVM \\ & & becomes & very long. \\ \hline \hline \end{tabular} \end{table} Table 6: Different machine learning algorithms, their strengths, and weaknesses. 
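To make the workflow of Figure 10 concrete, the following is a minimal, self-contained sketch of the training and evaluation loop using scikit-learn. Randomly generated placeholder features stand in for vectors produced by the extraction methods above; the sketch illustrates the general procedure rather than reproducing any of the surveyed models.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

# Placeholder data: in practice X holds static/dynamic feature vectors, y the labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))           # 200 samples, 16 features each
y = rng.integers(0, 2, size=200)         # 1 = malicious, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

predictions = model.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, predictions, labels=[0, 1]).ravel()
print("accuracy:", accuracy_score(y_test, predictions))
print("false-positive rate:", fp / (fp + tn))   # the FPR metric emphasised throughout the survey
```

Swapping the SVC for one of the other classifiers in Table 6, or for an ensemble of them, follows the same pattern.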
This section provides a survey of the various machine-learning algorithms used by analyzed articles to identify and categorize malware based on its various attributes. Nataraj et al. (2011) [92], Le et al. (2018) [115], and Rhodia et al. (2019) [94] all offered methods for malware families classification utilizing neural networks and grayscale images as static features, but somewhat distinct classifiers were used for malware classification. In this regard, Nataraj et al. proposed a K-NN classifier to identify malware images and detected 98% of images from 25 families. The Convolutional Neural Network (CNN) algorithm was applied by Le et al. to predict malicious programs using 10568 binary data and was found to be 98.8% accurate. Rhodia et al. compared the outcomes of natural image-dependent deep learning (DL) to a basic K-NN technique and found that a basic K-NN technique performed better than deep learning when it came to classifying malware. Euh (2020) [151] proposed an approach to identify malicious programs using a tree-based ensemble model depending on n-grams of bytes and opcodes, APIs, and a WEM (Window Entropy Map). The same feature extraction techniques as n-gram of byte and opcode sequence were used in other studies Lin et al. (2015) [95], Shabtai et al. (2012) [102], Santos et al. 2013 [103], Raff et al. (2018) [152], and Yuxin et al. (2019) [104] but these are distinct from each other in terms of using different classifiers and the achieved accuracy levels. For example, Santos et al. (2013) developed an n-grams-based file signature using an SVM classifier. The study by Shabtai et al. (2012) looked at eight different classifiers and found that the best performance was achieved with a 96% accuracy rate and a 0.1% false-positive rate. Yuxin et al., (2019), used DT, SVM, and the K-NN algorithms as classifiers, and found that the DT algorithm was the most effective, with an overall 95.8% accuracy. Kim et al. (2016) [108] described an approach that uses static features like PE headers and ML classifiers like SVM and Stochastic Gradient Descent (SGD) to detect malware. It was 99 percent accurate and had a 0.2 percent false-positive rate. Nagano and Uda (2017) [153] also suggested a mechanism to spot malware with the help of K-NN and SVM classifiers which yielded an efficiency of 99%. Besides these, different classifiers were used in different papers using static features. For example, Raff et al. (2017) [99] used Logistic Regression, Wang and Wu (2011) [72] used SVM, Faruki et al. (2012) [119] used RT, DT, NB, and Elhadi et al. (2013) [130] employed Graph Matching Algorithm to notice harmful threats. Galal et al. (2015) [125] and Ki et al. (2015) [126] presented a behavior-based strategy to identify run-time malware by analyzing API call sequences. Several classification techniques, for example, SVM, RF, and DT algorithms were used in the study by Galal et al., and the Multiple Sequence Alignment algorithm (MSA), and Longest Common Subsequence (LCS) techniques were used by Ki et al. to detect malware. Their achieved accuracy level was 97.19% and 99.8% respectively. In addition, Xiaofeng et al. (2019) [128] used the API call technique to extract the features and RF algorithm to train and classify the malicious program. Mohaisen et al. 
(2015) [133], Pektas Acarman (2017) [154], Singh and Singh (2020) [32], and Escudero Garcia and DeCastro-Garcia (2021) [142] presented different approaches to classify malware families that relied on different attributes like network, registry, and file system activities applying different ML algorithms. In this case, Singh and Singh (2020) used ensemble machine learning methods and attained the greatest level of accuracy of 99.54%. Pektas and Acarman (2017) developed online machine-learning algorithms (CW, ARROW, PA-I & II, NHERD) to classify malware. Hyperparameter optimization algorithms (Bayesian and Random type) were used by Escudero Garcia and DeCastro-Garcia (2021) and the accuracy was greater than 98%. Arivudainambi et al. (2019) [136] focused on building a model for malware detection utilizing Neural Network (NN), Principal Component Analysis (PCA), and Convolutional neural network (CNN with an accuracy of 99%. In addition, a Probabilistic Neural Network (PNN) framework was suggested by Rabbani et al. (2020) [137] for identifying malicious activity in network attacks which resulted in a 96.5% performance measure. Namavar et al. (2020) [155] applied the ensemble learning approach based on behavioral features and have a 99.65% accuracy rate. In addition, Vasan, (2020) [141] and Damasevicius (2021) [156] used ensemble learning and achieved an accuracy of 99% and 99.99 % respectively. But Amer and Zelinka (2020) [157] used Markov chain representation and K-means. Their detection performance was 99.9 % and an FP rate of 0.010. A malware classification system is also based on a hybrid framework in the articles by Islam et al. (2013) [114], Shijo et al. (2015) [158], Mangialardo et al. (2015) [159], Kolosnjaji et al. (2016) [129], Ali et al. (2017) [89], Huda et al. (2018) [78], Kumar et al. (2019) [160], and Gupta & Rani (2020) [161], Damodaran et al. (2017) [162], Han et al. (2019b) [163] that relied on dissimilar ML algorithms like SVM, RF, DT, IB1, CNN, XGBoost, K-NN, etc. The maximum precision obtained from most of the experimental findings was more than 99%. For example, the study of Huda et al. (2018) used multiple classifiers such as MR+SVM, Fisher+SVM, and MRED+SVM and demonstrated an accuracy of 99.49%. Kumar et al. (2019) obtained 99.74 % accuracy for the proposed hybrid strategy, and Gupta and Rani (2020) achieved a 99.5% accuracy rate using a variety of classifiers, including Neural Networks, Random Forests, Decision Trees, and XGBoost. ## 6 Comparative analysis and findings This section provides a comparative analysis of different methods for detecting malware, with a focus on developing an effective and novel machine-learning model with the lowest false-positive rate. To select such a model, we have done extensive literature reviews to find the best methods and summarized what is known about them. Table 7 shows which malware detection algorithms are most used, as well as the variety of malware features that are used. In addition, Table 8 presents the summary of the presented papers based on some important factors such as malware analysis methods, feature extraction strategies, and ML classifiers. Figure 14 shows the proportion of data analysis techniques that have been used in studies, with dynamic analysis being the most popular (46%). The hybrid analysis came in second (29%), followed by static analysis (25%). Figure 15 depicts how well different malware detection methods work. 
The SVM (23%) is the most successful, followed by RF (18%), DT (16%), KNN (14%), Boost (9%), NB (5%), and NN (4%). The SMV (static-based) strategy is the most accurate. \begin{table} \begin{tabular}{l l l l l} \hline Authors & Feature types & No. of samples & Classification & Accuracy \\ & & (M = Malware, & algorithms & \\ & & B = Benign) & & \\ \hline \multicolumn{5}{c}{The static-based malware detection approach} \\ \hline Le et al. (2018) [115] & Grayscale image & M = 10568 & CNN & 98.8\% \\ Bhodia et al. (2019) & Grayscale image & – & K-NN, DT & 99.60\% \\ [94] & Lin et al. (2015) [95] & N-gram features & M = 4288 & SVM & 95\% \\ Yuxin et al. (2019) & N-grams opcode & M= 400 & DBN, SVM, and & 95.8\% \\ [104] & & k–means & & \\ Euh (2020) [151] & N-gram of bytes and opcodes, APIs, and WEM & M = 122,963 & XGBoost, random & 98.5\% \\ & & APIs, and WEM & & forest, AdaBoost, & \\ Kim et al. (2016) [108] & PE headers & M = 27,000 and & SVM, CART, and SGD & 99\% \\ & & B = 11,000 & & \\ Nagano and Uda & DLL import, assembly code, & M = 3600 & SVM and & 99\% \\ (2017) [153] & and hex dump. & & K-NN & \\ Eskandari and Hashemi & Control Flow Graphs & M = 2140, & RF & 97 \%. \\ (2011) [119] & & B = 2305 & & \\ Ahmadi et al. (2016) & Strings, opcode, API function calls, frequency of keywords & M = 21741 & XGBoost, & 99.76 \% \\ Hassen and Chan (2017) [117] & Function Call Graph model & Small dataset & Minha’s signature & 97.9\% \\ \hline \multicolumn{5}{c}{The dynamic-based malware detection approach} \\ \hline Galal et al. (2015) & API call sequences & M = 2000, B = & DT, RF, SVM & 97.19\% \\ [125] & & 2000 & & \\ Ki et al. (2015) [126] & API call sequences & M = 23080 & MSA and LCS & 99.8 \% \\ Xiaofeng et al. (2019) & API call sequences & M = 1430, B = & RF and RNN & 96.7\% \\ [128] & & 1352 & & \\ Mohaisen et al. (2015) [136] & API calls, network, & M = & SVM, logical & 98\% \\ & registry, memory, and & (400-115,0) & regression, and & \\ & system files. & 00) & hierarchical clustering & \\ Singh and Singh (2020) & API calls, PSI, file operations, & M=16489, B= & Ensemble & 99.54 \% \\ [32] & registry, and network activities. & 8422 & machine & \\ Escudero Garcia and DeCastro-Garcia, (2021) [142] & API calls, network traffic, file system, and registry. & M = 9999, & Bayesian and & 98\% \\ & & B = 9995 & Random type & \\ & & & optimization & \\ Ali et al. (2017) [89] & Run-time features & M = 150000, B & SVM, DT, and & 99\% \\ & & = 87000 & Boosted DT & \\ \hline \end{tabular} \end{table} Table 7: A side-by-side comparison of the most recent reviewed papers on malware detection based on feature types, number of samples, classification models, and accuracy [K-NN = k-nearest neighbor, Classification and Regression Tree = CART, Stochastic Gradient Descent = SGD, Random Forest = RF, logical regression = LRA, Principal Component Analysis = PCA, Support Vector Machine = SVM, Decision Tree = DT]. Furthermore, the different features used by different malware classification tools can impact malware detection accuracy. Figure 16 demonstrates the feature selection process in three types of malware detection techniques. Some researchers prefer to use specific or single features to develop a malware detection technique, while others use multiple features. The study found that the author Ki et al. used single API calls to achieve the highest accuracy of 99.8%. Ahmadi et al. 
achieved the highest accuracy of 99.76 % using multiple static features, while Damasevicius achieved the highest accuracy using dynamic features. This study looked at the accuracy of malware detection methods and found that most of the recent studies have used SVM with other classifiers and ensemble learning models to achieve great accuracy levels as shown in Figures 17 and 18 respectively. It was also found that a hybrid malware detection method with ML methods can be more effective than using just one type of detection method. For this technique, Kumar et al. achieved a maximum accuracy of 99.74 % as shown in Figure 19.

Figure 14: Comparison of analysis techniques
Figure 15: Comparison of ML algorithms
Figure 16: Comparison of feature selection methods
Figure 17: Accuracy comparing chart in static methods
Figure 18: Accuracy comparing chart in dynamic malware detection methods

Finally, it has been found that SVM and API calls are the most used classifiers and feature types for static, dynamic, and hybrid malware detection methods. Furthermore, a single classification technique is ineffective for constructing a model with multiple malware features. Nevertheless, SVM outperformed the others in static analysis. Ensemble methods also performed exceptionally well during dynamic and hybrid analysis. Hence, the ensemble of algorithms would be most appropriate for finding and classifying malware in terms of accuracy and economic analysis. This method also can attain more effective feature selection to overcome encryption issues.

## 7 Research issues and challenges

Malware detection is a never-ending procedure. It's getting more difficult day by day. As the number of computer users grows, criminals are building sophisticated malicious activities that are hard to notice. Along with the sophistication of malware, the rate at which it is generated is a huge concern in preventing malware attacks. As a result, the conflict between malicious actors and experts becomes more intense as technology advances. However, after analyzing various malware detection approaches, some major research concerns or limitations have been discovered. Some specific challenges in malware detection that remain unsolved include:

* Real-time monitoring is a continuing contest. Several recent studies have used data for detecting anomalous files that are not ideal for monitoring.
* The parameterization of used algorithms is another important component in the effectiveness of malware classifiers. It is a significant contributor to the malware classifier's accuracy. This issue is not treated in-depth in the proposed method that has been reviewed.
* Malware attackers create adversarial strategies to compel the classification model to mislead the training data (e.g., by feeding it incorrect data). The model should understand adversarial tactics, resulting in a more efficient and reliable detection scheme.
* Most malware detection methods are vulnerable to False-Positive Rates (FPs) and False Negative Rates (FNs). Some features and signatures in malicious and benign files can be similar, raising FPs and FNs.
* ML algorithms are sensitive to overfitting and bias in practice. This results in lower Detection Rates (DRs) and higher FPs.

## 8 Conclusion and future work

When sensitive data is shared through hyper-connected networks, a run-time malware attack may compromise data privacy.
To protect data from malware threats, this paper presents an exhaustive literature review of attack pattern taxonomy and machine learning-based malware detection and classification methods. These different types of attacks pattern are presented broadly in section 4. This provides information on how devices can be compromised, and how to protect them. In addition, the malware detection and classification techniques have been discussed in section 5. A summary of the most recent reviewed papers on malware detection based on some important factors is also presented in Table 7 and Table 8. According to the findings, SVM and API calls are extensively used in classification models and feature types respectively for static, dynamic, and hybrid malware attack detection. We found that the SVM method has the highest accuracy among machine learning-based static malware detection approaches. However, when building a model with multiple malware features, a single classification technique is unproductive. During dynamic and hybrid analysis, ensemble methods consistently performed well. Moreover, Table 1 demonstrates the summary and comparison of our works with others. Finally, this paper explores the most pressing research dilemmas that researchers face. However, focusing future research on the parameterization of utilized algorithms could be beneficial in malware classification effectiveness. This issue is not explored in full in the proposed method. In addition, special emphasis on real-time malware identification and attack protection on a vast dataset to avoid scalability may develop future research on accurate malware detection because existing solutions only focus on a limited amount of data. ## Acknowledgments This research study was supported by the CSCRC (Cyber Security Cooperative Research Centre Limited), which is funded partially by the Australian Government's Cooperative Research Centre's Program.
2303.01097
Small-scale dynamo with finite correlation times
Fluctuation dynamos occur in most turbulent plasmas in astrophysics and are the prime candidates for amplifying and maintaining cosmic magnetic fields. A few analytical models exist to describe their behaviour but they are based on simplifying assumptions. For instance the well-known Kazantsev model assumes an incompressible flow that is delta-correlated in time. However, these assumptions can break down in the interstellar medium as it is highly compressible and the velocity field has a finite correlation time. Using the renewing flow method developed by Bhat and Subramanian (2014), we aim to extend Kazantsev's results to a more general class of turbulent flows. The cumulative effect of both compressibility and finite correlation time over the Kazantsev spectrum is studied analytically. We derive an equation for the longitudinal two-point magnetic correlation function in real space to first order in the correlation time $\tau$ and for an arbitrary degree of compressibility (DOC). This generalised Kazantsev equation encapsulates the original Kazantsev equation. In the limit of small Strouhal numbers $St \propto \tau$ we use the WKB approximation to derive the growth rate and scaling of the magnetic power spectrum. We find the result that the Kazantsev spectrum is preserved, i.e. $M_k(k)\sim k^{3/2}$. The growth rate is also negligibly affected by the finite correlation time; however, it is reduced by the finite magnetic diffusivity, and the DOC together.
Yann Carteret, Dominik Schleicher, Jennifer Schober
2023-03-02T09:30:28Z
http://arxiv.org/abs/2303.01097v2
# Small-scale dynamo with finite correlation times ###### Abstract Fluctuation dynamos occur in most turbulent plasmas in astrophysics and are the prime candidates for amplifying and maintaining cosmic magnetic fields. A few analytical models exist to describe their behaviour but they are based on simplifying assumptions. For instance the well-known Kazantsev model assumes an incompressible flow that is delta-correlated in time. However, these assumptions can break down in the interstellar medium as it is highly compressible and the velocity field has a finite correlation time. Using the renewing flow method developed by Bhat and Subramanian (2014), we aim to extend Kazantsev's results to a more general class of turbulent flows. The cumulative effect of both compressibility and finite correlation time over the Kazantsev spectrum is studied analytically. We derive an equation for the longitudinal two-point magnetic correlation function in real space to first order in the correlation time \(\tau\) and for an arbitrary degree of compressibility (DOC). This generalised Kazantsev equation encapsulates the original Kazantsev equation. In the limit of small Strouhal numbers \(St\propto\tau\) we use the WKB approximation to derive the growth rate and scaling of the magnetic power spectrum. We find the result that the Kazantsev spectrum is preserved, i.e. \(M_{k}(k)\sim k^{3/2}\). The growth rate is also negligibly affected by the finite correlation time; however, it is reduced by the finite magnetic diffusivity, and the DOC together. ## I Introduction The vast majority of the baryonic matter is in a plasma state, and therefore a complete description of the Universe needs to include a proper treatment of the electromagnetic force [2]. From observations it is known that the Universe is highly magnetised. Indeed, magnetic fields are observed in almost all astrophysical bodies as for instance in asteroids [3], planets [4], stars [5; 6], galaxies [7; 8] or the intergalactic medium [9; 10; 11]. Due to the broad range of objects, the typical strength and correlation length of these magnetic fields are distributed over several orders of magnitude. As an example in Milky-Way like galaxies, the observed magnetic fields are of a few tens \(\mu\)G in strength and correlated on kilo-parsec scales [12]. The most popular mechanism to explain the observed magnetic fields is the dynamo process which converts the kinetic energy of the flow to magnetic energy. In the absence of large-scale motions, small-scale or fluctuation dynamos [13] amplify the initial magnetic field exponentially [14]; a process which is most efficient on the smallest scales of the system. In the kinematic stage of the dynamo, the magnetic field lines are frozen into the plasma. Due to the turbulent motion of the flow, the action of the small-scale dynamo is to randomly twist, stretch, and fold these lines which makes the magnetic field strength grow. However, activating the dynamo requires an already existing seed field. Although unclear, it is generally assumed that these seed fields were generated in the early Universe [15] or through astrophysical processes such as the Biermann battery [16]. Schober _et al._[17] also highlighted that the small-scale dynamo can only amplify the magnetic field for magnetic Reynolds numbers \(R_{\rm M}\sim UL/\eta\) (\(U\) and \(L\) are respectively the typical velocity and length scale of the system) larger than a few hundred. 
In the non-linear regime after saturation on the smallest scales, the peak of the magnetic energy shifts from smaller to larger scales and the magnetic energy increases following a power-law [18]. The exact behaviour of the dynamo depends on the magnetic Prandtl number \(P_{\rm M}=\nu/\eta\) and on the type of turbulence [18; 19; 20]. The small-scale dynamo is a key process in astrophysics. Indeed, the strength of the magnetic fields predicted from the early Universe is not consistent with the observed typical value of a few \(\mu\)G in the inter-cluster medium [21], and references therein] or in high redshift galaxies [22]. Small-scale dynamos could then provide an explanation for the fast amplification of magnetic fields in the radiation-dominated phase of the early Universe [23], in young galaxies [24], and galaxy clusters [14] as they can act on time-scales much shorter than the age of the system. In the context of supernova-driven turbulence, it is expected to give rise to the far-infrared-radio correlation in galaxies [25] and potentially even dwarf galaxies [26]. Small-scale dynamos might also be involved in the formation of the first stars [27; 28; 29] and black holes [30; 31; 32]; and thus could also affect the epoch of reionization. An early theoretical description of the small-scale dynamo is given by Kazantsev [33]. His equation describes the time evolution of the two-point magnetic correlation function under the assumption of a Gaussian incompressible flow that is delta-correlated in time. Its derivation indicates that the magnetic power spectrum scales as \(M_{k}(k)\sim k^{3/2}\) for \(q\ll k\ll k_{\eta}\), where \(k_{\eta}\) is the wavenumber above which diffusion of the magnetic field dominates. Following Kazantsev's work many authors have tried to extend this model [see e.g. 1, 34; 35; 36]. Although some astrophysical objects host plasma that is well described by an incompressible flow (as for instance neutron stars [37]); Kazantsev's assumptions strongly simplify the behaviour of most astrophysical bodies. Indeed the majority of the plasma in the Universe is highly compressible as indicated by observations of compressive interstellar turbulence [38]. Moreover, in realistic flows the correlation time \(\tau\) should be of the order of the smallest eddy turnover time. Thus the assumptions involved in the Kazantsev [33] derivation do not allow for an accurate description of all types of fluctuation dynamos. In this work we aim to study the small-scale dynamo for the general case of a flow that is compressible and with finite correlations in time. Zeldovich _et al._[39] pointed out that the so-called renovating flows represent a solvable analytical model to study the impact of the correlation time on small-scale dynamos. In this context, Bhat and Subramanian [1] developed a method to study the dynamo of incompressible flows. They found that the Kazantsev spectrum was not strongly affected by a finite correlation time, i.e. \(M_{k}(k)\sim k^{3/2}\). However, the growth rate of the dynamo is reduced. On the other hand, Schekochihin _et al._[40] found that a compressible flow that is delta-correlated in time also preserves the Kazantsev spectrum where compressibility also reduces the growth rate of the dynamo. 
As far as we know, although there are clues that the Kazantsev spectrum should be preserved in the interstellar medium (compressible and correlated in time flow), there is no previous theoretical study that demonstrates formally that the combined actions have no effect on the \(M_{k}(k)\sim k^{3/2}\) spectrum. Rogachevskii and Kleeorin [41] used a path integral method to solve the induction equation and show that a dynamo can be activated for compressible flows that are correlated in time. Their results admit solutions consistent with the Kazantsev spectrum. The present work assumes a simplified random flow that is compressible and correlated in time. We present here a generalisation of the previous work by Bhat and Subramanian [1] by including the effect of compressibility. The paper is organised as follows: in Sec. II we briefly review the original Kazantsev theory. In Sec. III we present the renewing flow method used by Bhat and Subramanian [1]. In Sec. IV we give the derivation of the original Kazantsev equation (incompressible and delta-correlated in time flow) with the use of the renovating flow method. In Sec. V we present our generalisation of the Kazantsev equation for a compressible flow that is correlated in time and study the WKB solutions in Sec. VI. Finally, we insert our results in the current context and draw our conclusions in Sec. VII. ## II Kazantsev theory Dynamos in the context of an isotropic flow have been hypothesised since the fifties [see e.g. 42; 43]; however the first one to give a complete theoretical framework was Kazantsev [33]. In his work an isotropic and homogeneous flow that is delta-correlated in time was proposed. In this section we review the basics of the derivation of the Kazantsev equation and its results, in particular we follow Subramanian [35] for the formalism. We rewrite the velocity field as \[\mathbf{u}=\langle\mathbf{u}\rangle+\delta\mathbf{u}, \tag{1}\] where \(\langle\mathbf{u}\rangle\) is the mean and \(\delta\mathbf{u}\) the fluctuations. If we assume the fluctuations to be isotropic, homogeneous, Gaussian random with zero mean and delta-correlated in time we can set the correlation function to be \[T_{ij}(r)\delta(t_{1}-t_{2})=\langle\delta u_{i}(\mathbf{x},t_{1})\delta u_{j}( \mathbf{y},t_{2})\rangle, \tag{2}\] with \(r=|\mathbf{x}-\mathbf{y}|\). Any two-point correlation function can be expressed through longitudinal and transverse components [44] as \[T_{ij}(r)=\hat{r}_{ij}T_{\rm L}(r)+\hat{P}_{ij}T_{\rm N}(r), \tag{3}\] with \(\hat{r}_{ij}=r_{i}r_{j}/r^{2}\) and \(\hat{P}_{ij}=\delta_{ij}-\hat{r}_{ij}\). For a divergence-free vector field (in the case of velocity: an incompressible flow \(\nabla\cdot\mathbf{u}=0\)) we can even show that the two components are related by \[T_{\rm N}=T_{\rm L}+\frac{r}{2}\frac{\rm d}{{\rm d}r}T_{\rm L}. \tag{4}\] A similar decomposition can be performed for the magnetic field. Since \(\mathbf{B}\) is divergence-free, the magnetic correlation function can be expressed as \[M_{ij}(r) = \langle\delta B_{i}(\mathbf{x})\delta B_{j}(\mathbf{y})\rangle, \tag{5}\] \[= \big{(}\hat{r}_{ij}+\hat{P}_{ij}\big{)}M_{\rm L}+\hat{P}_{ij} \frac{r}{2}\frac{\rm d}{{\rm d}r}M_{\rm L}.\] The time derivative of the two-point magnetic correlation function is thus given by \[\frac{\partial M_{ij}}{\partial t}=\left\langle\frac{\partial B_{i}}{ \partial t}B_{j}\right\rangle+\left\langle\frac{\partial B_{j}}{\partial t}B_{ i}\right\rangle-\frac{\partial\langle B_{i}B_{j}\rangle}{\partial t}. 
\tag{6}\] Inserting this expression in the induction equation \[\frac{\partial\mathbf{B}}{\partial t}=\mathbf{\nabla}\times(\mathbf{u}\times\mathbf{B}-\eta \nabla\times\mathbf{B}), \tag{7}\] and using the averaged induction equation \[\frac{\partial\langle\mathbf{B}\rangle}{\partial t}=\mathbf{\nabla}\times(\langle\mathbf{u }\rangle\times\langle\mathbf{B}\rangle-[\eta+T_{\rm L}(0)]\mathbf{\nabla}\times\langle\mathbf{B} \rangle), \tag{8}\] Subramanian [35] found an equation for the time evolution of the longitudinal two-point magnetic correlation function \[\frac{\partial M_{\rm L}}{\partial t} = 2\kappa_{\rm diff}M_{\rm L}^{\prime\prime}+2\bigg{(}\frac{4\kappa _{\rm diff}}{r}+\kappa_{\rm diff}^{\prime}\bigg{)}M_{\rm L}^{\prime} \tag{9}\] \[+\frac{4}{r^{2}}\bigg{(}T_{\rm N}-T_{\rm L}-rT_{\rm N}^{\prime}- rT_{\rm L}^{\prime}\bigg{)}M_{\rm L}.\] In this expression \(\kappa_{\rm diff}\equiv\eta+T_{\rm L}(0)-T_{\rm L}(r)\) and a prime denotes a derivative with respect to \(r\). If we further suppose that the time and spatial dependencies are separable, we can use the ansatz \[M_{\rm L}(r,t)=\frac{1}{r^{2}\sqrt{\kappa_{\rm diff}}}\psi(r)e^{2\Gamma t}. \tag{10}\] This form is convenient as it highlights a formal similarity to quantum mechanics. We insert the ansatz into Eq. (9) and find \[-\kappa_{\rm diff}\frac{\mathrm{d}^{2}\psi}{\mathrm{d}r^{2}}+U(r)\psi=-\Gamma\psi. \tag{11}\] This equation has the form of a Schrödinger equation and is often referred to as the Kazantsev equation in the literature; in this work, however, we will refer to Eq. (9) as the Kazantsev equation instead. The function \(U(r)\) is equivalent to a potential and is given by \[U(r)\equiv\frac{\kappa_{\rm diff}^{\prime\prime}}{2}-\frac{(\kappa_{\rm diff}^{\prime})^{2}}{4\kappa_{\rm diff}}+\frac{2\kappa_{\rm diff}}{r^{2}}+\frac{2T_{\rm N }^{\prime}}{r}+\frac{2(T_{\rm L}-T_{\rm N})}{r^{2}}. \tag{12}\] Note that in the derivation of this equation we did not assume at any point that the flow is incompressible. Schekochihin _et al._ [36] studied the Kazantsev equation in Fourier space in the sub-diffusion limit such that \(k_{f}\ll k\ll k_{\eta}\), with \(k_{f}\) being the forcing scale of a single-scale flow and \(k_{\eta}\) the Fourier conjugate of the magnetic diffusion length scale (the scale at which the magnetic diffusion is important). If incompressibility is assumed, the Kazantsev equation can be rewritten as [see e.g. 34; 36] \[\frac{\partial M_{k}}{\partial t}=\frac{\gamma}{5}\bigg{(}k^{2}\frac{\partial^ {2}M_{k}}{\partial k^{2}}-2k\frac{\partial M_{k}}{\partial k}+6M_{k}\bigg{)} -2\eta k^{2}M_{k}, \tag{13}\] where \(\gamma\) is a constant that characterises the flow and \(M_{k}(k,t)\) represents the magnetic power spectrum. Compared to \(M_{\rm L}(r)\) it characterises the magnetic correlation function in Fourier space; formally we have the following relation \[\langle\hat{B}_{i}(\mathbf{k},t)\hat{B}_{j}^{*}(\mathbf{k}^{\prime},t^{ \prime})\rangle=(2\pi)^{3}\hat{M}_{ij}(k,t)\delta^{3}(\mathbf{k}-\mathbf{k}^{\prime}) \delta(t-t^{\prime})\] \[\qquad=(2\pi)^{3}\frac{M_{k}(k,t)}{4\pi k^{2}}\bigg{(}\delta_{ij} -\frac{k_{i}k_{j}}{k^{2}}\bigg{)}\delta^{3}(\mathbf{k}-\mathbf{k}^{\prime})\delta(t-t ^{\prime}), \tag{14}\] with \(\hat{A}^{*}\) being the complex conjugate of the Fourier transform \(\hat{A}\).
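The qualitative content of Eq. (13) is easy to probe numerically. The sketch below is an added illustration, not part of the paper's derivation: the values of \(\gamma\), \(\eta\), the grid and the initial spectrum are arbitrary, assumed choices. It integrates Eq. (13) with explicit finite differences in the variable \(x=\ln k\); once the growing eigenmode is established, the measured growth rate should be close to the value \(3\gamma/4\) quoted in the next paragraph, and the spectrum follows the shape given in Eq. (15).

```python
# Minimal numerical sketch (not from the paper): explicit integration of the
# Fourier-space Kazantsev equation, Eq. (13), in the variable x = ln k.
# gamma, eta, the grid and the initial spectrum are illustrative assumptions.
import numpy as np

gamma, eta = 1.0, 1.0e-6                    # flow constant and resistivity
x = np.linspace(0.0, np.log(1.0e4), 400)    # x = ln k, k from 1 (forcing) to 1e4
k = np.exp(x)
dx = x[1] - x[0]
dt = 0.2 * dx**2 * 5.0 / gamma              # explicit-Euler stability margin
M = np.exp(-(x - 2.0)**2)                   # arbitrary smooth initial spectrum

def advance(M, nsteps):
    for _ in range(nsteps):
        Mx = np.gradient(M, dx)             # k dM/dk
        Mxx = np.gradient(Mx, dx)           # so that k^2 d2M/dk2 = Mxx - Mx
        dMdt = gamma / 5.0 * (Mxx - 3.0 * Mx + 6.0 * M) - 2.0 * eta * k**2 * M
        M = M + dt * dMdt
    return M

M = advance(M, 30000)                       # let the growing eigenmode establish
E1 = np.sum(M * k) * dx                     # magnetic energy, int M(k) dk
M = advance(M, 5000)
E2 = np.sum(M * k) * dx
print("measured growth rate :", np.log(E2 / E1) / (5000 * dt))
print("Kazantsev prediction :", 0.75 * gamma)   # 3*gamma/4, delta-correlated flow
```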
The solution of the Fourier space Kazantsev equation is given by \[M_{k}(k,t)=M_{0}e^{\gamma\lambda t}k^{3/2}K_{\rm Mc}(k/k_{0}), \quad{\rm Mc}=\sqrt{5(\lambda-\frac{3}{4})}, \tag{15}\] where \(K_{\rm Mc}\) is the Macdonald function, \(\lambda\) the normalised growth rate and \(k_{0}=(\gamma/(10\eta))^{1/2}\). The magnetic power spectrum thus scales mostly as \(M_{k}(k)\sim k^{3/2}\) in the sub-diffusion limit, which we refer to as the Kazantsev spectrum. The magnetic spectrum grows exponentially in time, with a growth rate given by \(3\gamma/4\) for an incompressible flow that is delta-correlated in time.
## III The renewing flow method
The renewing or renovating flow method was first proposed by Steenbeck and Krause. Zeldovich _et al._ [39] highlighted that it provides an alternative to the unphysical assumption of velocities that are delta-correlated in time but remains analytically solvable. Several authors have since used the method to obtain relevant results with finite correlation times [see e.g. 1; 50]. In this work we employ the operator splitting method, used by Gilbert and Bayly (1992) to recover the mean-field dynamo equations. Following the approach of Bhat and Subramanian [1], for a non-helical flow we impose a velocity field of the form \[\mathbf{u}=\mathbf{a}\sin{(\mathbf{q}\cdot\mathbf{x}+\psi)}. \tag{16}\] We split the time into intervals of length \(\tau\), which is the correlation time of the flow. In each of these \(\tau\)-intervals we draw randomly \(\mathbf{a}\), \(\mathbf{q}\) and \(\psi\) such that the flow is overall isotropic, homogeneous, and with a zero mean. Note that the flow is static only in intervals of the type \([(n-1)\tau,n\tau]\) (\(n\) being an integer) and renovates for each \(\tau\)-interval. In order to apply the operator splitting method we further divide the \(\tau\)-intervals into two sub-intervals of duration \(\tau/2\). In the first one the diffusion of the magnetic field is set to zero and the velocity is doubled; in the second one the velocity is set to zero and the diffusion acts with twice its original value. Using the induction equation (Eq. 7) we need to solve the following problem \[\frac{\partial\mathbf{B}}{\partial t} = \mathbf{\nabla}\times 2\mathbf{u}\times\mathbf{B},\ \ t\in[(n-1)\tau,(n-1)\tau+\tau/2],\] \[\frac{\partial\mathbf{B}}{\partial t} = -2\eta\mathbf{\nabla}\times\mathbf{\nabla}\times\mathbf{B},\ \ t\in[(n-1)\tau+\tau/2,n\tau]. \tag{17}\] The validity and convergence of the operator splitting method are beyond the scope of this work; we refer interested readers to Holden _et al._ **First sub-interval:** we consider only the ideal induction equation. In this case, due to magnetic flux freezing, the magnetic field is given by the standard Cauchy solution (see e.g. Sec. 3.3 of Schekochihin _et al._) \[B_{i}(\mathbf{x},t)=\frac{J_{ij}(\mathbf{x}_{0})}{|J_{ij}|}B_{j}(\mathbf{x}_{0},t_{0}), \tag{18}\] where we define \(\mathbf{x}(\mathbf{x}_{0},t_{0})\) to be the Lagrangian position at a time \(t\) of a fluid element with an initial position \(\mathbf{x}_{0}\) at time \(t_{0}\).
The matrix \(J_{ij}\) is given by the coordinate transformation, namely \[J_{ij}=\frac{\partial x_{i}}{\partial x_{0,j}}, \tag{19}\] and \(|\cdot|\) denotes the determinant of the matrix. **Second sub-interval:** we consider only the diffusion of the magnetic field. It is straightforward to solve the equation of diffusion in Fourier space where we denote the Fourier transform of \(A\) by \(\hat{A}\). We find the solution \[\hat{B}_{i}(\mathbf{k},t)=e^{-\eta\mathbf{k}^{2}\tau}\hat{B}_{j}(\mathbf{k},t_{1}), \tag{20}\] with \(t_{1}=t_{0}+\tau/2\). We express the total magnetic field evolution in Fourier space, from Eq. (18) and Eq. (20), as \[\hat{B}_{i}(\mathbf{k},t)=e^{-\eta\mathbf{k}^{2}\tau}\int e^{-i\mathbf{k}\cdot\mathbf{x}}\frac {J_{ij}(\mathbf{x}_{0})}{|J_{ij}|}B_{j}(\mathbf{x}_{0},t_{0})\ \mathrm{d}^{3}\mathbf{x}, \tag{21}\] which describes the successive evolution through the two sub-intervals. We are now ready to give an expression for the two-point correlation function of the magnetic field in Fourier space \[\Big{\langle}\hat{B}_{i}(\mathbf{k},t)\hat{B}_{h}^{*}(\mathbf{p},t) \Big{\rangle}=e^{-\eta\tau(\mathbf{k}^{2}+\mathbf{p}^{2})}\bigg{\langle}\int\frac{J_{ ij}(\mathbf{x}_{0})}{|J_{ij}|}\frac{J_{hl}(\mathbf{y}_{0})}{|J_{hl}|}\] \[\times B_{j}(\mathbf{x}_{0},t_{0})B_{l}(\mathbf{y}_{0},t_{0})e^{-i(\mathbf{k} \cdot\mathbf{x}-\mathbf{p}\cdot\mathbf{y})}\ \mathrm{d}^{3}\mathbf{x}\mathrm{d}^{3}\mathbf{y}\bigg{\rangle}, \tag{22}\] where \(\langle\cdot\rangle\) denotes an average over the parameter space of the velocity flow and \(A^{*}\) is the complex conjugate. We can change the integration variables \(\{\mathbf{x},\mathbf{y}\}\rightarrow\{\mathbf{x}_{0},\mathbf{y}_{0}\}\) such that the determinants of the two Jacobian matrices cancel. We can also argue that the initial magnetic field is no longer correlated with the renewing flow in the next sub-interval, which allows us to split the averages. The final expression is then given by \[\Big{\langle}\hat{B}_{i}(\mathbf{k},t)\hat{B}_{h}^{*}(\mathbf{p},t) \Big{\rangle}=e^{-\eta\tau(\mathbf{k}^{2}+\mathbf{p}^{2})}\int\langle B_{j}(\mathbf{x}_{0 },t_{0})B_{l}(\mathbf{y}_{0},t_{0})\rangle\] \[\times\Big{\langle}J_{ij}(\mathbf{x}_{0})J_{hl}(\mathbf{y}_{0})e^{-i(\mathbf{ k}\cdot\mathbf{x}-\mathbf{p}\cdot\mathbf{y})}\Big{\rangle}\ \mathrm{d}^{3}\mathbf{x}_{0}\mathrm{d}^{3}\mathbf{y}_{0}. \tag{23}\] Note that in this expression \(\mathbf{x}\) and \(\mathbf{y}\) are functions of the initial positions. As the flow is overall isotropic and homogeneous we expect that for an initial state of the magnetic field, which is also isotropic and homogeneous, these properties are conserved. Under such assumptions the two-point magnetic correlation function takes the following form \[\langle B_{i}(\mathbf{x},t)B_{j}(\mathbf{y},t)\rangle=M_{ij}(r,t), \tag{24}\] where \(r=|\mathbf{x}-\mathbf{y}|\). We can further introduce a new set of integration variables \(\{\mathbf{x}_{0},\mathbf{y}_{0}\}\rightarrow\{\mathbf{r}_{0}\equiv\mathbf{x}_{0}-\mathbf{y}_{0}, \mathbf{y}_{0}\}\). We rewrite the exponential part inside the integral as \[-i\left[\mathbf{k}\cdot(\mathbf{x}-\mathbf{x}_{0})-\mathbf{p}\cdot(\mathbf{y}-\mathbf{y}_{0})+\mathbf{k} \cdot\mathbf{r}_{0}+(\mathbf{k}-\mathbf{p})\cdot\mathbf{y}_{0}\right]. 
\tag{25}\] For now we assume that the evolution tensor, which is given by \[R_{ijhl}\equiv\Big{\langle}J_{ij}(\mathbf{x}_{0})J_{hl}(\mathbf{y}_{0})e^{-i[\mathbf{k} \cdot(\mathbf{x}-\mathbf{x}_{0})-\mathbf{p}\cdot(\mathbf{y}-\mathbf{y}_{0})]}\Big{\rangle}\,, \tag{26}\] is independent of \(\mathbf{y}_{0}\), which is convenient as we can rewrite Eq. (23) in the following form \[\Big{\langle}\hat{B}_{i}(\mathbf{k},t)\hat{B}_{h}^{*}(\mathbf{p},t)\Big{\rangle}=(2 \pi)^{3}\delta^{3}(\mathbf{p}-\mathbf{k})e^{-2\eta\tau\mathbf{p}^{2}}\] \[\times\int e^{-i\mathbf{p}\cdot\mathbf{r}_{0}}R_{ijhl}M_{jl}(r_{0},t_{0}) \ \mathrm{d}^{3}\mathbf{r}_{0}, \tag{27}\] once the integration over \(\mathrm{d}^{3}\mathbf{y}_{0}\) is performed. Note that the Dirac delta function appears from the integration over \(\mathbf{y}_{0}\), since the exponential is the only dependency on \(\mathbf{y}_{0}\) and can be taken out of the flow-parameter average. We assumed that \(R_{ijhl}\) only depends on \(\mathbf{r}_{0}\) because this form of the equation is more compact; we will show in further sections (see Sec. IV.3) that this assumption is valid, at least for the cases we consider.
## IV Kazantsev equation from the renewing flow method
In his initial work, Kazantsev considered a flow that is delta-correlated in time and incompressible. This case is the easiest to treat, with equations that remain more or less tractable. We use this simplified treatment to present a detailed calculation in the framework of the renewing flow method. With the renewing flow method we consider the velocity field to be known, which constitutes the main difference from previous works on the topic.
### Velocity flow parameters
The first step is to give a suitable parametrisation of \(\mathbf{a}\), \(\mathbf{q}\) and \(\psi\) to ensure the statistical isotropy and homogeneity of the flow. We further impose an incompressible flow, which translates here to the requirement that \(\mathbf{a}\) and \(\mathbf{q}\) are orthogonal to each other. **Homogeneity:** we draw \(\psi\) in each \(\tau\)-interval from a uniform distribution in the range \([0,2\pi]\). **Isotropy:** we fix the value of \(q\), the norm of \(\mathbf{q}\). The wavevector \(\mathbf{q}\) is randomly drawn from a sphere of radius \(q\). The velocity orientation \(\mathbf{a}\) is randomly drawn in the plane perpendicular to \(\mathbf{q}\) such that \(\langle\mathbf{u}\rangle\)=0. In order to simplify the computations we change the averaging ensemble. Instead of averaging over the direction of \(\mathbf{a}\) we prefer to use a new vector \(\mathbf{A}\) which has a fixed norm and a direction drawn randomly. Then \(\mathbf{A}\) and \(\mathbf{q}\) define a plane in which we can project the component of \(\mathbf{A}\) that is orthogonal to \(\mathbf{q}\). This is performed by \[\tilde{P}_{ij}\equiv\delta_{ij}-\hat{q}_{i}\hat{q}_{j},\qquad\qquad a_{i}= \tilde{P}_{ij}A_{j}, \tag{28}\] where \(\hat{q}_{i}\equiv q_{i}/q\) are the normalised components of \(\mathbf{q}\). Note also that we adopt the Einstein summation convention. Since \(\mathbf{A}\) and \(\mathbf{q}\) are two independent vectors this parametrisation ensures \(\left\langle\mathbf{u}\right\rangle=0\).
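As a quick consistency check of this construction, the short Monte-Carlo sketch below (an added illustration, not part of the paper; the sample size, norms and evaluation point are arbitrary assumptions) draws \(\psi\), \(\mathbf{q}\) and \(\mathbf{A}\) as described above, builds \(a_{i}=\tilde{P}_{ij}A_{j}\), and verifies numerically that \(\mathbf{a}\cdot\mathbf{q}=0\) (incompressibility), that \(\langle\mathbf{u}\rangle\approx 0\), and that \(\langle\mathbf{a}^{2}\rangle\) approaches the value \(2A^{2}/3\) derived in the next paragraph.

```python
# Monte-Carlo sketch (illustrative assumptions only) of the incompressible
# renovating-flow parametrisation: a_i = P~_ij A_j with P~_ij = delta_ij - q^_i q^_j.
import numpy as np

rng = np.random.default_rng(0)
N, A_norm, q_norm = 200_000, 1.0, 2.0

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

q_hat = random_unit_vectors(N)                # direction of q, uniform on the sphere
A = A_norm * random_unit_vectors(N)           # A: fixed norm, random direction
a = A - np.sum(A * q_hat, axis=1, keepdims=True) * q_hat   # projection orthogonal to q
psi = rng.uniform(0.0, 2.0 * np.pi, size=N)   # random phase in each tau-interval

x = np.array([0.3, -1.2, 0.7])                # arbitrary fixed evaluation point
u = a * np.sin(q_norm * (q_hat @ x) + psi)[:, None]

print("max |a.q|      :", np.abs(np.sum(a * q_hat, axis=1)).max())   # ~ 0
print("<u> at point x :", u.mean(axis=0))                            # ~ (0, 0, 0)
print("<a^2>          :", np.mean(np.sum(a * a, axis=1)),
      "  (expected 2 A^2 / 3 =", 2 * A_norm**2 / 3, ")")
```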
We directly see that \(a\) is not fixed in this context; however, we can evaluate it from \(A\) as \[\left\langle a^{2}\right\rangle = \left\langle a_{i}a_{i}\right\rangle=\left\langle\tilde{P}_{il}A _{l}\tilde{P}_{ih}A_{h}\right\rangle, \tag{29}\] \[\underset{\text{average of }\mathbf{A}}{=} \frac{A^{2}}{3}\left\langle\tilde{P}_{il}\tilde{P}_{ih}\delta_{lh} \right\rangle\underset{\text{average of }\mathbf{q}}{=}\frac{2A^{2}}{3},\] where we used the fact that \(\left\langle A_{i}A_{j}\right\rangle=A^{2}\delta_{ij}/3\) for a vector of fixed norm and random direction.
### Two-point velocity correlation functions
In order to reconstruct the original Kazantsev equation (9) we only need to compute the second-order velocity correlator. We use the definition of Bhat and Subramanian [1] \[T_{ij}=\frac{\tau}{2}\left\langle u_{i}(\mathbf{x})u_{j}(\mathbf{y})\right\rangle \underset{\text{average of }\psi}{=}\frac{\tau}{4}\left\langle a_{i}a_{j}\cos \left(\mathbf{q}\cdot\mathbf{r}\right)\right\rangle. \tag{30}\] The factor \(\tau/2\) is required here as the flow is correlated in time. It also ensures that in the limit \(\tau\to 0\) we recover the Kazantsev equation. The initialisation of \(\mathbf{a}\) and \(\mathbf{q}\) allows us to give an exact formula for this correlator. Using Eqs. (28) and (29) we average over the directions of \(\mathbf{A}\) to eliminate it; the remaining average is thus only over the directions of \(\mathbf{q}\), with \[T_{ij} = \frac{\tau}{4}\left\langle\tilde{P}_{il}A_{l}\tilde{P}_{jh}A_{h} \cos\left(\mathbf{q}\cdot\mathbf{r}\right)\right\rangle=\frac{A^{2}\tau}{12}\left\langle \tilde{P}_{ij}\cos\left(\mathbf{q}\cdot\mathbf{r}\right)\right\rangle \tag{31}\] \[= \frac{a^{2}\tau}{8}\left[\delta_{ij}+\frac{1}{q^{2}}\partial_{i} \partial_{j}\right]\left\langle\cos\left(\mathbf{q}\cdot\mathbf{r}\right)\right\rangle,\] where we use the following notation \(\partial_{i}\equiv\partial/\partial r_{i}\). If we recall the proper definition of the average we can write \[\left\langle\cos\left(\mathbf{q}\cdot\mathbf{r}\right)\right\rangle \equiv \frac{1}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\sin\left(\theta\right) \cos\left(\mathbf{q}\cdot\mathbf{r}\right)\,\mathrm{d}\theta\mathrm{d}\phi \tag{32}\] \[= \frac{1}{2}\int_{0}^{\pi}\sin\left(\theta\right)\cos\left(qr\cos \left(\theta\right)\right)\,\mathrm{d}\theta\] \[= j_{0}(qr),\] where \(j_{0}(x)\) is the spherical Bessel function.
### Computation of the evolution tensor
In order to evaluate \(R_{ijhl}\) we first need to have an expression for \(J_{ij}\). As we required that \(\mathbf{a}\) and \(\mathbf{q}\) are orthogonal we have \[\frac{\mathrm{d}(\mathbf{q}\cdot\mathbf{x}+\psi)}{\mathrm{d}t}\equiv\frac{\mathrm{d} \phi}{\mathrm{d}t}=2\mathbf{q}\cdot\mathbf{u}=0, \tag{33}\] such that \(\phi\) is constant along the trajectory of a fluid element. So the equation \(\mathrm{d}\mathbf{x}/\mathrm{d}t=2\mathbf{u}\) can be easily integrated [56] and gives \[x_{i}-x_{0,i}=a_{i}\tau\sin\left(\mathbf{q}\cdot\mathbf{x}_{0}+\psi\right) \tag{34}\] for the Lagrangian positions. Using Eq. (19), it is straightforward to evaluate \(J_{ij}\); from the last relation \[J_{ij}(\mathbf{x}_{0})=\delta_{ij}+\tau a_{i}q_{j}\cos\left(\mathbf{q}\cdot\mathbf{x}_{0}+ \psi\right). \tag{35}\] Bhat and Subramanian [1] motivated an expansion of the exponential of the evolution tensor (Eq. 26) in the limit of small Strouhal numbers \(St=qa\tau\ll 1\).
In the context of small-scale turbulent dynamos the magnetic spectrum in the kinematic regime peaks around the resistive scale [57; 58; 59], which can be evaluated to be \(r_{\eta}\sim l_{0}/R_{\mathrm{M}}^{1/2}\), with \(l_{0}\) being the integral scale of the flow [60]. In the case considered here, the flow has only one typical scale \(1/q\), thus \(r_{\eta}\sim 1/(qR_{\mathrm{M}}^{1/2})\). We used \(R_{\mathrm{M}}\sim a/(q\eta)\) for the magnetic Reynolds number, which is usually very high in astrophysical objects [see Tab. 1 of 55], such that \(qr_{\eta}\) is very small and hence \(\sin\left(\mathbf{q}\cdot\mathbf{r}_{\eta}\right)\sim\mathbf{q}\cdot\mathbf{r}_{\eta}\). The phase of the exponential in Eq. (26) is then given by \(aq\tau p_{\eta}r_{\eta}\sim qa\tau=St\). Since the terms in the vicinity of the resistive scale contribute most to the magnetic spectrum, the expansion of \(\sin\left(\mathbf{q}\cdot\mathbf{x}_{0}+\psi\right)-\sin\left(\mathbf{q}\cdot\mathbf{y}_{0}+ \psi\right)=2\sin\left(\mathbf{q}\cdot\mathbf{r}_{0}/2\right)\cos\left(\mathbf{q}\cdot(\bm {x}_{0}+\mathbf{y}_{0})/2+\psi\right)\) is reasonable. In this section, we only keep terms up to second order in \(\tau\) and we will see that this leads to the original Kazantsev equation (9). Eq. (26) for \(R_{ijhl}\) can then be rewritten in the form \[R_{ijhl}=\bigg{\langle}J_{ij}(\mathbf{x}_{0})J_{hl}(\mathbf{y}_{0})\big{[}1-i\tau\beta \sigma-\frac{\tau^{2}\beta^{2}\sigma^{2}}{2!}\big{]}\bigg{\rangle}, \tag{36}\] where \(\beta=\sin\left(\mathbf{q}\cdot\mathbf{x}_{0}+\psi\right)-\sin\left(\mathbf{q}\cdot\mathbf{y}_{0 }+\psi\right)\) and \(\sigma=\mathbf{a}\cdot\mathbf{p}\). To continue further we make use of the average over \(\psi\) and we also introduce the notation \(\phi_{x_{0}}=\mathbf{q}\cdot\mathbf{x}_{0}+\psi\). In fact, if we average a function of the type \(\cos\left(n\phi_{x_{0}}+m\phi_{y_{0}}\right)\) or \(\sin\left(n\phi_{x_{0}}+m\phi_{y_{0}}\right)\), with \(n\) and \(m\) being two integers, we find that it always vanishes except when \(n=-m\). In particular this highlights the fact, as we hypothesised in Sec. III, that \(R_{ijhl}\) is only dependent on \(\mathbf{r}_{0}=\mathbf{x}_{0}-\mathbf{y}_{0}\). Term-by-term evaluation of the average over \(\psi\) of Eq. (36) leads to the following expression \[R_{ijhl}=\bigg{\langle}\delta_{ij}\delta_{hl}+\frac{\tau^{2}a_{i}q_{j }a_{h}q_{l}}{2}\cos\left(\mathbf{q}\cdot\mathbf{r_{0}}\right)\] \[-i\frac{\tau^{2}\sigma}{2}\sin\left(\mathbf{q}\cdot\mathbf{r_{0}}\right)( \delta_{hl}a_{i}q_{j}+\delta_{ij}a_{h}q_{l})\] \[-\frac{\tau^{2}\sigma^{2}}{2}(1-\cos\left(\mathbf{q}\cdot\mathbf{r_{0}} \right))\delta_{ij}\delta_{hl}\bigg{\rangle}. \tag{37}\] Each term can then be matched with Eq. (30) to obtain \[R_{ijhl}=\delta_{ij}\delta_{hl}-2\tau\partial_{l}\partial_{j}T_{ ih}+2i\tau p_{m}(\delta_{hl}\partial_{j}T_{im}\] \[+\delta_{ij}\partial_{l}T_{hm})-2\tau p_{n}p_{m}\delta_{ij}\delta _{hl}(T_{nm}(0)-T_{nm}), \tag{38}\] where we replaced \(q_{i}\) by suitable derivatives with respect to the components of \(\mathbf{r}_{0}\) and \(\sigma\) by \(a_{m}p_{m}\). This expression for \(R_{ijhl}\) cannot be simplified further and we need to go back to Eq. (27) and perform the integration.
### Derivation of the Kazantsev equation
The original Kazantsev equation (9) describes the evolution of the two-point magnetic field correlation function in real space. Instead of evaluating \(\hat{M}_{ih}(\mathbf{p},t)\) we therefore take its inverse Fourier transform.
Formally we get \[M_{ih}(r,t)=\int e^{-2\eta\tau\mathbf{p^{2}}}e^{i\mathbf{p}(\mathbf{r}-\mathbf{r}_{0})}R_{ijhl }M_{jl}(r_{0},t_{0})\ \frac{\mathrm{d}^{3}\mathbf{r}_{0}\mathrm{d}^{3}\mathbf{p}}{(2\pi)^{3}}. \tag{39}\] In order to further simplify this expression we assume that \(\eta\) is small, such that the exponential can also be expanded giving \(\exp(-2\eta\tau\mathbf{p^{2}})\sim 1-2\eta\tau\mathbf{p^{2}}\). This expansion is justified in the context of negligible \(\eta\) or large \(R_{\mathrm{M}}\). Terms like \(\eta\tau^{2}\) are also ignored, so the part \(-2\eta\tau\mathbf{p}^{2}\) only contributes from the \(\delta_{ij}\delta_{hl}\) term in the expression of \(R_{ijhl}\) as it is the only term that does not depend on \(\tau^{2}\). Once again we rewrite components of the wavevector (here \(\mathbf{p}\)) as derivatives with respect to the position (here \(\mathbf{r}\)) such that \(p_{j}\to-i\partial_{j}\). We adopt the notation \([\cdot]_{ij}\) for partial derivatives with respect to \(r_{i}\) and \(r_{j}\). In the limit \(\tau\to 0\) we can divide both sides by \(\tau\) and replace \((M_{ih}(r,t)-M_{ih}(r,t_{0}))/\tau\to\partial M_{ih}(r,t)/\partial t\) such that from Eq. (39) we arrive at [see 61, for a detailed calculation] \[\frac{\partial M_{ih}(r,t)}{\partial t}=2\left[M_{il}T_{jh}\right] _{jl}+2\left[M_{jh}T_{il}\right]_{jl}-2\left[M_{ih}T_{jl}\right]_{jl}\] \[-2\left[M_{jl}T_{ih}\right]_{jl}+2\left[M_{ih}(T_{\mathrm{L}}(0)+ \eta)\right]_{jj}. \tag{40}\] Note that \(T_{\mathrm{L}}(0)\) appears from \(T_{nm}(0)=\delta_{nm}T_{\mathrm{L}}(0)\). This result is very important for the formalism as we started from equation 37 that tracks the evolution of an initial state to the equation 40 for the two-point magnetic field correlation function that depends on other quantities evaluated at the same space-time positions. We can even simplify the computation further by contracting Eq. (40) with \(\hat{r}_{ih}\) on both sides in order to get an equation for \(M_{\mathrm{L}}(\mathbf{r},t)\). We would like to refer the reader to Tab. 1 for detailed expressions of different contractions that enter the computation. Using incompressibility we finally find \[\frac{\partial M_{\mathrm{L}}(r,t)}{\partial t}=\frac{2}{r^{4}} \partial_{r}\left(r^{4}(\eta+T_{\mathrm{L}}(0)-T_{\mathrm{L}})\partial_{r}M_{ \mathrm{L}}\right)\] \[-\frac{2}{r}\left(r\partial_{r}^{2}T_{\mathrm{L}}+4\partial_{r}T_ {\mathrm{L}}\right)M_{\mathrm{L}}, \tag{41}\] which is exactly the incompressible Kazantsev equation 9 in the limit of a flow that is delta-correlated in time. In comparison to previous works [e.g. 34; 41], the input here is the velocity field that is used to solve directly the induction equation. ## V Generalised Kazantsev equation In this section we will derive the equivalent of the Kazantsev equation in the context of the renewing flow method. Previous studies have analysed separately the effects of the finite correlation time [1] and the compressibility [40] of the flow. By generalised we mean that we relax the incompressibility assumption used in the previous work of Bhat and Subramanian [1]. Our new equations then include the contributions from the time correlation of the flow as well as its degree of compressibility. ### Lagrangian positions In the case of an incompressible flow \(\xi\equiv\mathbf{a}\cdot\mathbf{q}\) was set to \(0\) (see Sec. IV). We can introduce a degree of compressibility by relaxing this condition; allowing \(\xi\) to be non-zero with \(\xi\in[-aq;aq]\). 
This allows us to include the nontrivial contribution from the compressibility of the flow. We can no longer apply the same reasoning as before (see Sec. IV.3), since this time we have \(\mathrm{d}\phi/\mathrm{d}t=2\xi\sin\left(\phi\right)\). If we integrate this expression over the first sub-interval, we find \[|\tan\left(\phi/2\right)|=e^{\xi\tau}|\tan\left(\phi_{0}/2\right)|. \tag{42}\] We defined \(\phi=\mathbf{q}\cdot\mathbf{x}+\psi\) to be the phase of the velocity field at the final position (after a time \(\tau/2\)) and \(\phi_{0}=\mathbf{q}\cdot\mathbf{x}_{0}+\psi\) to be the phase of the initial position. Furthermore, by integrating the velocity field we get \[x_{i}-x_{0,i}=\int\frac{\mathrm{d}x_{i}}{\mathrm{d}t}\ \mathrm{d}t=\frac{a_{i}}{\xi}( \phi-\phi_{0}). \tag{43}\] However, this formula cannot be inverted directly. The idea is thus to use Eq. (42) to isolate \(\phi\) in order to plug it into Eq. (43) such that we get an expression for the Lagrangian positions \(\mathbf{x}\) that depends only on the initial position \(\mathbf{x}_{0}\). We have imposed a peculiar velocity field that is periodic with respect to the variable \(\phi\) with a period of \(2\pi\). It is then expected that the displacement \(\mathbf{x}-\mathbf{x}_{0}\) also possesses this periodicity. Furthermore, the velocity field is static in a \(\tau\)-interval, which means that fluid elements are permanently pushed in the direction of \(\mathbf{a}\) until they reach a zero of the velocity field and stop moving. As a result a fluid element with initial position \(\phi_{0}\in[n\pi,(n+1)\pi]\) will have a position after a time \(\tau/2\) such that \(\phi\in[n\pi,(n+1)\pi]\), where \(n\) is an integer. Eq. (42) can thus be inverted, leading to \[\phi/2-\pi\lfloor\phi/(2\pi)+1/2\rfloor=\arctan\big{(}e^{\xi\tau}\tan \left(\phi_{0}/2\right)\big{)}. \tag{44}\] Recall that in Sec. IV.3 we motivated an expansion with respect to a small Strouhal number \(St\). We motivate the same idea here as \(|\xi\tau|=|aq\tau\cos\left(\gamma\right)|<St\), with \(\gamma\) being the angle between \(\mathbf{a}\) and \(\mathbf{q}\). With a similar argument we can show that any new term depends directly on \(St\) raised to higher powers. In order to include effects due to finite correlation times we keep terms up to fourth order in \(\tau\). The expansion of the right-hand side of Eq. (44) also produces a floor function that cancels the one on the left-hand side. We are now ready to plug the expression of \(\phi\) from this expansion into Eq. (43) \[x_{i} = x_{0,i}+a_{i}\tau\bigg{(}\sin\left(\phi_{0}\right)+\frac{\xi\tau }{4}\sin\left(2\phi_{0}\right) \tag{45}\] \[+\frac{\xi^{2}\tau^{2}}{12}\big{(}\sin\left(3\phi_{0}\right)- \sin\left(\phi_{0}\right)\big{)}\] \[+\frac{\xi^{3}\tau^{3}}{96}\big{(}3\sin\left(4\phi_{0}\right)-4 \sin\left(2\phi_{0}\right)\big{)}\bigg{)},\] which has the desired limit for \(\xi\to 0\). It is straightforward to show that the Jacobian is then given by \[J_{ij} = \delta_{ij}+a_{i}q_{j}\tau\bigg{(}\cos\left(\phi_{0}\right)+ \frac{\xi\tau}{2}\cos\left(2\phi_{0}\right) \tag{46}\] \[+\frac{\xi^{2}\tau^{2}}{12}\big{(}3\cos\left(3\phi_{0}\right)- \cos\left(\phi_{0}\right)\big{)}\] \[+\frac{\xi^{3}\tau^{3}}{24}\big{(}3\cos\left(4\phi_{0}\right)-2 \cos\left(2\phi_{0}\right)\big{)}\bigg{)}.\]
### Fourth order velocity two-point function
In order to include finite correlation times we have to consider terms up to the fourth order in \(\tau\).
The evolution tensor \(R_{ijhl}\) is then given by \[R_{ijhl}=\bigg{\langle}J_{ij}J_{hl}[1-i\tau\beta\sigma-\frac{\tau^{2}\beta^{ 2}\sigma^{2}}{2!}+i\frac{\tau^{3}\beta^{3}\sigma^{3}}{3!}+\frac{\tau^{4}\beta ^{4}\sigma^{4}}{4!}]\bigg{\rangle}, \tag{47}\] where the Jacobian matrices are given in Eq. (46) and \[\beta = \sin\left(\phi_{x}\right)-\sin\left(\phi_{y}\right)+\frac{\tau \xi}{4}\big{(}\sin\left(2\phi_{x}\right)-\sin\left(2\phi_{y}\right) \tag{48}\] \[+\frac{\tau^{2}\xi^{2}}{12}\big{(}\sin\left(3\phi_{x}\right)-\sin \left(3\phi_{y}\right)-\sin\left(\phi_{x}\right)+\sin\left(\phi_{y}\right)\big{)}\] \[+\frac{\tau^{3}\xi^{3}}{96}\big{(}3\sin\left(4\phi_{x}\right)-3 \sin\left(4\phi_{y}\right)-4\sin\left(2\phi_{x}\right)+4\sin\left(2\phi_{y} \right)\big{)}.\] The evolution tensor now has dependencies on \(a_{i}a_{j}a_{h}a_{l}\) due to the inclusion of \(\tau^{3}\) and \(\tau^{4}\) terms. It motivates the introduction of fourth order two-point correlators that are defined in Bhat and Subramanian [61] by \[T_{ijhl}^{x^{2}y^{2}} = \tau^{2}\langle u_{i}(\mathbf{x})u_{j}(\mathbf{x})u_{h}(\mathbf{y})u_{l}(\bm {y})\rangle,\] \[T_{ijhl}^{x^{3}y} = \tau^{2}\langle u_{i}(\mathbf{x})u_{j}(\mathbf{x})u_{h}(\mathbf{x})u_{l}(\bm {y})\rangle,\] \[T_{ijhl}^{x^{4}} = \tau^{2}\langle u_{i}(\mathbf{x})u_{j}(\mathbf{x})u_{h}(\mathbf{x})u_{l}(\bm {x})\rangle, \tag{49}\] where the factor \(\tau^{2}\) is included due to the time correlation of the flow. We can carry out the average over \(\psi\) such that the fourth order correlators are given by \[T_{ijhl}^{x^{2}y^{2}} = \frac{\tau^{2}}{8}\langle a_{i}a_{j}a_{h}a_{l}(\cos\left(2\mathbf{q} \cdot\mathbf{r}\right)+2)\rangle,\] \[T_{ijhl}^{x^{3}y} = \frac{3\tau^{2}}{8}\langle a_{i}a_{j}a_{h}a_{l}\cos\left(\mathbf{q} \cdot\mathbf{r}\right)\rangle,\] \[T_{ijhl}^{y^{4}} = \frac{3\tau^{2}}{8}\langle a_{i}a_{j}a_{h}a_{l}\rangle. \tag{50}\] Note that \(\mathbf{r}\) is still given by \(\mathbf{r}=\mathbf{x}-\mathbf{y}\). Similarly to the second order velocity two-point correlation function we would like an expression for the fourth order correlators in the case of an isotropic, homogeneous and non-helical velocity field. Following the ideas of De Karman and Howarth [44], Batchelor [62], Landau and Lifshitz [63], it can be shown that [64] \[T_{ijhl}(r)=\hat{r}_{ijhl}\overline{T}_{\rm L}(r)+\hat{P}_{(ij}\hat{P}_{hl)} \overline{T}_{\rm N}(r)+\hat{P}_{(ij}\hat{r}_{hl)}\overline{T}_{\rm LN}(r), \tag{51}\] where \(\hat{r}_{ijhl}=r_{i}r_{j}r_{h}r_{l}/r^{4}\) and \(\hat{P}_{ij}=\delta_{ij}-r_{i}r_{j}/r^{2}\). This formula has been derived and used by Bhat and Subramanian [1]. The bracket (\(\cdot\)) operator denotes here the summation over all the different terms, formally \(\hat{P}_{(ij}\hat{P}_{hl)}=\hat{P}_{ij}\hat{P}_{hl}+\hat{P}_{ih}\hat{P}_{jl}+ \hat{P}_{il}\hat{P}_{jh}\) and \(\hat{P}_{(ij}\hat{r}_{hl)}=\hat{P}_{ij}\hat{r}_{hl}+\hat{P}_{ih}\hat{r}_{jl}+ \hat{P}_{il}\hat{r}_{jh}+\hat{P}_{hl}\hat{r}_{ij}+\hat{P}_{hl}\hat{r}_{ij}+ \hat{P}_{jl}\hat{r}_{ih}+\hat{P}_{jh}\hat{r}_{il}\). Knowing that it is straightforward to show that in the case of an incompressible flow the transverse, longitudinal and mixed terms are related by \[6\overline{T}_{\rm LN}=2\overline{T}_{\rm L}+r\partial_{r}\overline{T}_{\rm L}, \quad 4\overline{T}_{\rm N}=4\overline{T}_{\rm LN}+r\partial_{r}\overline{T}_{\rm LN}. \tag{52}\] These two relations will be especially useful to check if our generalised Kazantsev equation has the right form when assuming incompressibility. 
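As a brief aside, the Lagrangian map of Eq. (45) from the previous subsection is easy to test numerically. The sketch below is an added illustration with arbitrary, assumed parameter values (it is not part of the paper's derivation): it integrates \(\mathrm{d}\mathbf{x}/\mathrm{d}t=2\mathbf{u}\) over the first sub-interval for one realisation of the compressible flow and compares the resulting displacement with the expansion (45).

```python
# Sketch (illustrative values, not from the paper): direct integration of
# dx/dt = 2 u with u = a sin(q.x + psi) over the first sub-interval [0, tau/2],
# compared with the small-Strouhal-number expansion of Eq. (45).
import numpy as np
from scipy.integrate import solve_ivp

a = np.array([0.30, 0.10, 0.05])        # amplitude vector (not orthogonal to q)
q = np.array([1.00, 0.50, -0.20])       # wavevector
psi, tau = 0.7, 0.1                     # phase and correlation time (St << 1)
x0 = np.array([0.20, -0.40, 1.00])      # initial position of the fluid element
xi = a @ q                              # compressibility parameter xi = a.q

sol = solve_ivp(lambda t, x: 2.0 * a * np.sin(q @ x + psi),
                (0.0, tau / 2.0), x0, rtol=1e-10, atol=1e-12)
numeric = sol.y[:, -1] - x0

phi0 = q @ x0 + psi
analytic = a * tau * (np.sin(phi0)
                      + xi * tau / 4.0 * np.sin(2.0 * phi0)
                      + xi**2 * tau**2 / 12.0 * (np.sin(3.0 * phi0) - np.sin(phi0))
                      + xi**3 * tau**3 / 96.0 * (3.0 * np.sin(4.0 * phi0)
                                                 - 4.0 * np.sin(2.0 * phi0)))

print("direct integration  :", numeric)
print("expansion, Eq. (45) :", analytic)   # should agree to O((xi*tau)^4)
```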
### Generalised equation The compressibility effects are characterised by the introduction of \(\xi\) and \(\xi^{2}\) in the evolution tensor and the Jacobian matrices. These factors are not necessarily fixed between two \(\tau\)-intervals. To treat them we just need to recall that \(\xi=a_{i}q_{i}\), such that the methodology explained in Secs. IV.3 & IV.4 can still be applied. Surprisingly we find that the compressibility only affects the fourth order correlators, and Eq. (40) still holds for a velocity field that is delta-correlated in time in the compressible case. The resulting equation is given by \[\frac{\partial M_{ih}}{\partial t} = 2[M_{jh}T_{il}]_{jl}-2[M_{ih}T_{jl}]_{jl}+2[M_{il}T_{jh}]_{jl}-2[M_ {jl}T_{ih}]_{jl}+2[M_{ih}T_{\rm L}(0)]_{jj}\ \ \xi^{0}\tau^{2}\ {\rm terms}\] (53) \[+2\left[M_{ih}\eta\right]_{jj}\right\}\ {\rm term\ due\ to\ resistive\ exponential\] \[+\tau\bigg{(}[M_{jl}\widetilde{T}_{ihmn}]_{mnjl}+[M_{ih}( \widetilde{T}_{mnst}+T_{mnst}^{x^{4}}/12)]_{mnst}-[M_{jh}\widetilde{T}_{imns}] _{mnsj}-[M_{il}\widetilde{T}_{hmns}]_{mnsl}\bigg{)}\ \ \xi^{0}\tau^{4}\ {\rm terms}\] \[\ \ \[+M_{\rm L}\bigg{(}-\frac{4}{r}\partial_{r}T_{\rm L}-\frac{4}{r} \partial_{r}T_{N}-\frac{4}{r^{2}}T_{\rm L}+\frac{4}{r^{2}}T_{N}+\tau\big{\{} \frac{4}{3r}\partial_{r}^{3}\overline{T}_{\rm L}+\frac{4}{3r}\partial_{r}^{3} \overline{T}_{\rm LN}+\frac{20}{3r^{2}}\partial_{r}^{2}\overline{T}_{\rm L}\] \[\qquad\qquad-\frac{12}{3r^{2}}\partial_{r}^{2}\overline{T}_{\rm LN }-\frac{16}{3r^{2}}\partial_{r}^{2}\overline{T}_{\rm N}+\frac{8}{3r^{3}} \partial_{r}\overline{T}_{\rm L}-\frac{104}{3r^{3}}\partial_{r}\overline{T}_{ \rm LN}+\frac{48}{3r^{3}}\partial_{r}\overline{T}_{\rm N}-\frac{8}{3r^{4}} \overline{T}_{\rm L}-\frac{56}{3r^{4}}\overline{T}_{\rm LN}\] \[\qquad\qquad+\frac{80}{3r^{4}}\overline{T}_{\rm N}-\frac{10}{48r} \partial_{r}^{3}T_{\rm L}^{x^{2}y^{2}}-\frac{10}{48r}\partial_{r}^{3}T_{\rm LN }^{x^{2}y^{2}}-\frac{50}{48r^{2}}\partial_{r}^{2}T_{\rm L}^{x^{2}y^{2}}+\frac {30}{48r^{2}}\partial_{r}^{2}T_{\rm LN}^{x^{2}y^{2}}+\frac{40}{48r^{2}} \partial_{r}^{2}T_{\rm N}^{x^{2}y^{2}}\] \[\qquad\qquad-\frac{20}{48r^{3}}\partial_{r}T_{\rm L}^{x^{2}y^{2}} +\frac{260}{48r^{3}}\partial_{r}T_{\rm LN}^{x^{2}y^{2}}-\frac{120}{48r^{3}} \partial_{r}T_{\rm N}^{x^{2}y^{2}}+\frac{20}{48r^{4}}T_{\rm L}^{x^{2}y^{2}}+ \frac{140}{48r^{4}}T_{\rm LN}^{x^{2}y^{2}}-\frac{200}{48r^{4}}T_{\rm N}^{x^{2} y^{2}}\Big{)},\] where \(K=5C(0)/48\) is a constant with \(C(r)=\partial_{r}^{2}(T_{\rm L}^{x^{2}y^{2}})+4\partial_{r}(T_{\rm L}^{x^{2}y^{2}})/r-10\partial_{r}(T_{\rm LN }^{x^{2}y^{2}})/r+2T_{\rm LN}^{x^{2}y^{2}}/r^{2}-22T_{\rm L}^{x^{2}y^{2}}/r^{2} +22T_{\rm L}^{x^{2}y^{2}}/r^{2}+22T_{\rm N}^{x^{2}y^{2}}/r^{2}\). This equation has the most generic form if we assume only isotropy, homogeneity and non-helicity of the velocity flow in the vicinity of small \(St\). In order to solve this equation we should define the boundary conditions. The magnetic field correlation function should go to zero for infinitely large space scales. Also we would require \(M_{\rm L}\) to be finite in \(r=0\) such that the auto-correlation of the magnetic field is a local maxima. These two conditions can be summarised by \[\lim_{r\to 0}\partial_{r}M_{\rm L}(r,t)=0,\qquad\lim_{r\to\infty}M_{\rm L}(r,t)=0. \tag{56}\] Note that if we assume incompressibility in Eq. (55) we retrieve Eq. (17) in Bhat and Subramanian [1]. Except for its length, the general aspect of the equation is unchanged for an arbitrary degree of compressibility (DOC). 
The most interesting difference arises in the terms that depend on \(M_{\rm L}\). In the incompressible case these terms cancel perfectly, but not when the DOC is non-zero. We can already anticipate that these terms will control the growth rate in time of the magnetic correlation function.
### Small-scale limit
In this section we discuss the limit of Eq. (55) for length scales much smaller than the turbulent forcing scale (i.e. \(z\equiv qr\ll 1\)). The Kazantsev spectrum \(M_{k}(k)\sim k^{3/2}\) is predicted in the range \(q\ll k\ll k_{\eta}\). Since we consider large \(R_{\rm M}\), it is sufficient to expand our generalised equation in the limit of small \(z\). We introduce two different cases, which correspond to two initialisations for \(\mathbf{a}\) and \(\mathbf{q}\). The first case is used to give a detailed derivation. The second case is more general but gives rise to a lengthy calculation, so we will only present the results.
#### Two independent vectors
First consider the case where \(\mathbf{a}\) and \(\mathbf{q}\) are perfectly independent. It is straightforward to evaluate Eq. (30) and Eq. (50) knowing that \(\langle a_{i}a_{j}a_{h}a_{l}\rangle=\delta_{(ij}\delta_{hl)}/15\) for a random vector, and we can directly plug the expansion of the correlators' components into Eq. (55). These considerations strongly simplify our generalised Kazantsev equation such that it reduces to the following expression in the limit \(z\ll 1\) \[\frac{\partial M_{\rm L}}{\partial t} = q^{2}T_{\rm L}(0)\bigg{[}\big{(}\frac{2\eta}{T_{\rm L}(0)}+\frac {z^{2}}{3}\big{)}\partial_{z}^{2}M_{\rm L}+\big{(}\frac{8\eta}{zT_{\rm L}(0)}+2 z\big{)}\partial_{z}M_{\rm L}+\frac{8}{3}M_{\rm L}\bigg{]} \tag{57}\] \[+\frac{a^{4}q^{4}\tau^{3}}{160}\bigg{[}\frac{z^{4}}{10}\partial_{ z}^{4}M_{\rm L}+\frac{8z^{3}}{5}\partial_{z}^{3}M_{\rm L}+\frac{958z^{2}}{135} \partial_{z}^{2}M_{\rm L}+\frac{404z}{45}\partial_{z}M_{\rm L}+\frac{32}{27}M_{ \rm L}\bigg{]}.\] We can further assume that \(\tilde{M}_{\rm L}\) is independent of time, using the ansatz \(M_{\rm L}(r,t)=\tilde{M}_{\rm L}(z)e^{\gamma\tilde{t}}\), where \(\tilde{t}=tT_{\rm L}(0)q^{2}\) and \(\gamma\) is a normalised growth rate. We also set \(\bar{\tau}=\tau T_{\rm L}(0)q^{2}\) and rename \(T_{\rm L}(0)=\eta_{t}\) to stick to the conventions used in Bhat and Subramanian [1]. After some algebra we end up with \[0 = \big{(}\frac{2\eta}{\eta_{t}}+\frac{z^{2}}{3}\big{)}\partial_{z} ^{2}\tilde{M}_{\rm L}+\big{(}\frac{8\eta}{z\eta_{t}}+2z\big{)}\partial_{z} \tilde{M}_{\rm L}+\big{(}\frac{8}{3}-\gamma\big{)}\tilde{M}_{\rm L} \tag{58}\] \[+\frac{9\bar{\tau}}{10}\bigg{[}\frac{z^{4}}{10}\partial_{z}^{4} \tilde{M}_{\rm L}+\frac{8z^{3}}{5}\partial_{z}^{3}\tilde{M}_{\rm L}+\frac{958z^{2}}{135 }\partial_{z}^{2}\tilde{M}_{\rm L}+\frac{404z}{45}\partial_{z}\tilde{M}_{\rm L}+ \frac{32}{27}\tilde{M}_{\rm L}\bigg{]}.\] We now focus on the range \(z_{\eta}=qr_{\eta}\ \ll\ z\ \ll\ 1\), where the \(\bar{\tau}\) terms cannot be neglected. Here, we use a Landau-Lifshitz approximation [63] and consider \(\bar{\tau}\) to be a small parameter.
In order to derive approximate expressions for the high-order derivatives of \(\tilde{M}_{\rm L}\) as a function of the first and second order derivatives, we neglect \(\bar{\tau}\) and \(\sqrt{\eta/\eta_{t}}\) compared to \(z\) \[z^{3}\partial_{z}^{3}\tilde{M}_{\rm L}=-8z^{2}\partial_{z}^{2} \tilde{M}_{\rm L}+(3\gamma_{0}-8)\partial_{z}\tilde{M}_{\rm L},\] \[z^{4}\partial_{z}^{4}\tilde{M}_{\rm L}=(3\gamma_{0}+58)z^{2} \partial_{z}^{2}\tilde{M}_{\rm L}-10(3\gamma_{0}-14)\partial_{z}\tilde{M}_{ \rm L}, \tag{59}\] where \(\gamma_{0}\) is the growth rate for a delta-correlated in time flow. As a first approach (a more rigorous treatment is given in Sec. VI) we neglect \(\sqrt{\eta/\eta_{t}}\). The two expressions for the high-order derivatives can be plugged into Eq. (58) to obtain \[0=\frac{9\bar{\tau}}{10}\bigg{[}z^{2}\big{(}\frac{3\gamma_{0}}{ 10}+\frac{13}{135}\big{)}\partial_{z}^{2}\tilde{M}_{\rm L}+z\big{(}\frac{9 \gamma_{0}}{5}+\frac{78}{135}\big{)}\partial_{z}\tilde{M}_{\rm L}\] \[+\frac{32}{27}\tilde{M}_{\rm L}\bigg{]}+\frac{z^{2}}{3}\partial_{ z}^{2}\tilde{M}_{\rm L}+2z\partial_{z}\tilde{M}_{\rm L}+\big{(}\frac{8}{3}- \gamma\big{)}\tilde{M}_{\rm L}. \tag{60}\] This equation admits a power-law solution \(\tilde{M}_{\rm L}\sim z^{-\lambda}\). Solving for \(\lambda\) we find \[\lambda=\frac{5}{2}\pm\frac{i}{2}\bigg{[}4\frac{8+16\bar{\tau}/5-3 \gamma}{1+81\gamma_{0}\bar{\tau}/100+13\bar{\tau}/50}-25\bigg{]}^{1/2}. \tag{61}\] We find that the real part of \(\lambda\) is \(5/2\), which is exactly the same as in Bhat and Subramanian [1] and is expected for a Kazantsev spectrum. Gruzinov _et al._ have argued that the growth rate, in the limit of \(R_{\rm M}\to\infty\), is obtained by finding the value of \(\lambda\) such that \({\rm d}\gamma/{\rm d}\lambda=0\). Plugging that value into Eq. (61), \(\gamma\) is thus given by \[\gamma=\frac{7}{12}-\frac{147}{320}\bar{\tau}, \tag{62}\] where we used the self-consistent value \(\gamma_{0}=7/12\). Note that the complete expression for the growth rate of the dynamo is then \(\gamma T_{\rm L}(0)q^{2}\).
#### Arbitrary degree of compressibility
The main problem with the initialisation that we just presented is that it does not include any parameter to control the degree of compressibility (DOC). We define the DOC by \[\sigma_{c}\equiv\frac{\langle(\nabla\cdot\mathbf{u})^{2}\rangle}{ \langle(\nabla\times\mathbf{u})^{2}\rangle}=\frac{\langle a_{i}a_{j}q_{i}q_{j} \rangle}{\langle\epsilon_{ijk}\epsilon_{ihl}a_{k}a_{l}q_{j}q_{h}\rangle}, \tag{63}\] where \(\epsilon_{ijk}\) is the Levi-Civita symbol; the last expression is obtained after averaging over \(\psi\). The DOC is then zero for an incompressible flow and goes to infinity for a fully irrotational flow. In order to derive an equation for an arbitrary DOC we set \(\mathbf{q}\) to be a random vector with norm \(q\) and \(\mathbf{a}\) defined by \[a_{i}=b(\tilde{P}_{ij}\hat{A}_{j}\sin{(\theta)}+\hat{q}_{j}\hat{A}_{j}\hat{q}_{ i}\cos{(\theta)}), \tag{64}\] where as before \(\mathbf{A}\) is a random vector of norm \(A\) and \(\hat{A}_{j}=A_{j}/A\). The two parameters \(b\) and \(\theta\) (which are constant) allow us to control, respectively, the value of \(\langle\mathbf{a}^{2}\rangle\) and the DOC.
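As a numerical illustration (added here; all sample sizes and parameter values are arbitrary assumptions), the sketch below draws \(\mathbf{q}\) and \(\mathbf{A}\) at random, builds \(\mathbf{a}\) according to Eq. (64), and estimates the DOC of Eq. (63) by Monte Carlo; the estimate can be compared with the closed-form expression \(\sigma_{c}=1/(2\tan^{2}\theta)\) derived in the next paragraph.

```python
# Monte-Carlo sketch (illustrative, not from the paper) of the compressible
# parametrisation a_i = b (P~_ij A^_j sin(theta) + (q^.A^) q^_i cos(theta)),
# estimating the degree of compressibility sigma_c of Eq. (63).
import numpy as np

rng = np.random.default_rng(1)
N, b, q_norm = 400_000, 1.3, 2.0

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

for theta in (np.pi / 2, np.pi / 4, np.pi / 8):
    q_hat = random_unit_vectors(N)
    A_hat = random_unit_vectors(N)
    Aq = np.sum(A_hat * q_hat, axis=1, keepdims=True)        # q^ . A^
    a = b * ((A_hat - Aq * q_hat) * np.sin(theta) + Aq * q_hat * np.cos(theta))

    q = q_norm * q_hat
    div2 = np.sum(a * q, axis=1) ** 2                        # (a.q)^2  ~ (div u)^2
    curl2 = np.sum(np.cross(q, a) ** 2, axis=1)              # |q x a|^2 ~ (curl u)^2
    print(f"theta = {theta:5.3f}   sigma_c (MC) = {div2.mean() / curl2.mean():8.4f}"
          f"   1/(2 tan^2 theta) = {1.0 / (2.0 * np.tan(theta) ** 2):8.4f}")
```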
In such a parametrisation the component of \(\mathbf{A}\) along \(\mathbf{q}\) is always rescaled by \(\cos{(\theta)}\) whereas the component of \(\mathbf{A}\) orthogonal to \(\mathbf{q}\) in the plane described by \(\mathbf{A}-\mathbf{q}\) is always rescaled by \(\sin{(\theta)}\). This parametrisation is taken for convenience, and \(\theta\) can be interpreted as the mean absolute angle between \(\mathbf{a}\) and \(\mathbf{q}\). Although this parametrisation might seem arbitrary, we can show that the results we derive here are independent on the exact evaluation of \(a_{i}\) as long as \(\sigma_{c}\) is uniquely defined (see Appendix A). Under such considerations \[\sigma_{c}=\frac{1}{2\tan{(\theta)}^{2}},\ \ \langle\mathbf{a}^{2}\rangle=b^{2} \bigg{(}\frac{2}{3}\sin{(\theta)}^{2}+\frac{1}{3}\cos{(\theta)}^{2}\bigg{)}. \tag{65}\] We directly see that the value \(\theta=\pi/2\) represents the incompressible case and \(\theta=0\) the fully irrotational one. Note that due to the random behavior of \(\mathbf{A}\) we have \(\langle\xi\rangle=0\). We apply the exact same methodology as for the first initialisation, which means we expand the two-point correlators, use the ansatz, and re-express the high order derivatives with the first and the second order ones. The expressions for the velocity correlators with this parametrisation can be found in the Appendix B (Eq. 115). Furthermore, we define the two functions \[\epsilon(\theta) =\frac{216}{5(\Omega+3)^{2}}\bigg{[}\frac{\Omega+3}{5-\Omega} \bigg{(}\frac{1}{24}\Omega_{1}-\frac{3}{28}\Omega_{2}+\frac{3}{40}\Omega_{3} \bigg{)}\] \[\times\bigg{(}5\gamma_{0}-\frac{40}{\Omega+3}\bigg{)}+\frac{157} {420}\Omega_{1}-\frac{599}{630}\Omega_{2}+\frac{121}{180}\Omega_{3}\bigg{]},\] \[\zeta(\theta) =\frac{216}{5(\Omega+3)^{2}}\bigg{[}\frac{2}{3}\Omega_{1}- \frac{14}{9}\Omega_{2}+\frac{8}{9}\Omega_{3}\bigg{]}. \tag{66}\] The set of five parameters \(\Omega_{\rm i}\) appears very naturally in the derivation of Eq. (115) and depends only on \(\theta\); exact expressions are given in Eq. (116). From the velocity correlators we have \(\eta_{t}=\tau b^{2}(\Omega+3)/72\). However the parameter \(b\) is still free and we can set it to \(b^{2}=6a^{2}/(\Omega+3)\) such that \(\langle\mathbf{a}^{2}\rangle=a^{2}\). As a result \(\eta_{t}=\tau a^{2}/12\), and the normalised correlation time can be evaluated to \(\bar{\tau}=St^{2}/12\). The normalised correlation time is then fully controlled by \(St\); independently on the choice of DOC. The resulting equation is \[\bigg{(}\frac{2\eta}{\eta_{t}}+z^{2}\frac{5-\Omega}{5(\Omega+3)} \bigg{)}\,\partial_{z}^{2}\tilde{M}_{\rm L}\] \[+\bigg{(}\frac{8\eta}{z\eta_{t}}+6z\frac{5-\Omega}{5(\Omega+3)} \bigg{)}\,\partial_{z}\tilde{M}_{\rm L}+\left(\frac{8}{\Omega+3}-\gamma \right)\tilde{M}_{\rm L}\] \[+\bar{\tau}\bigg{[}\epsilon(\theta)z^{2}\partial_{z}^{2}\tilde{M}_{ \rm L}+6\epsilon(\theta)z\partial_{z}\tilde{M}_{\rm L}+\zeta(\theta)\tilde{M}_{ \rm L}\bigg{]}=0, \tag{67}\] Similarly to the first case we compute the growth rate and scale factor of the power law solution in the limit of large \(R_{\rm M}\), we find that the real part of \(\lambda\) is still \(5/2\). The growth rate is given this time by \[\gamma_{0}=\frac{7+5\Omega}{4(\Omega+3)},\ \ \ \ \gamma=\gamma_{0}+\bar{\tau}\big{[} \zeta(\theta)-\frac{25}{4}\epsilon(\theta)\big{]}. \tag{68}\] We already see from this quick evaluation that the Kazantsev spectrum seems to be preserved even with an arbitrary DOC and correlation time. 
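A quick way to see what Eq. (68) implies is to evaluate it for a few values of \(\theta\). The sketch below is an added illustration: it simply encodes Eqs. (66) and (68), together with the parameter definitions \(\Omega=2\sin^{2}\theta-1\), \(\Omega_{1}=\Omega^{2}\), \(\Omega_{2}=\Omega(\Omega+1)/2\) and \(\Omega_{3}=((\Omega+1)/2)^{2}\) listed in Appendix B, and prints \(\gamma_{0}\) and the coefficient of \(\bar{\tau}\) for the incompressible, intermediate and irrotational cases, reproducing the values collected later in Tab. 1.

```python
# Sketch (added illustration): evaluation of Eq. (68) for three degrees of
# compressibility, using the Omega_i parameters defined in Appendix B.
import numpy as np

def growth_terms(theta):
    W = 2.0 * np.sin(theta) ** 2 - 1.0          # Omega
    W1, W2, W3 = W**2, W * (W + 1.0) / 2.0, ((W + 1.0) / 2.0) ** 2
    gamma0 = (7.0 + 5.0 * W) / (4.0 * (W + 3.0))
    pref = 216.0 / (5.0 * (W + 3.0) ** 2)
    eps = pref * ((W + 3.0) / (5.0 - W)
                  * (W1 / 24.0 - 3.0 * W2 / 28.0 + 3.0 * W3 / 40.0)
                  * (5.0 * gamma0 - 40.0 / (W + 3.0))
                  + 157.0 * W1 / 420.0 - 599.0 * W2 / 630.0 + 121.0 * W3 / 180.0)
    zeta = pref * (2.0 * W1 / 3.0 - 14.0 * W2 / 9.0 + 8.0 * W3 / 9.0)
    return gamma0, zeta - 25.0 / 4.0 * eps      # gamma_0 and the tau-bar coefficient

for name, theta in (("incompressible", np.pi / 2),
                    ("intermediate  ", np.pi / 4),
                    ("irrotational  ", 0.0)):
    g0, g1 = growth_terms(theta)
    print(f"{name}: gamma_0 = {g0:.4f}   tau-bar coefficient = {g1:.4f}")
# expected output: 0.7500 / -0.6027,  0.5833 / -0.4594,  0.2500 / -0.4540
```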
However to confirm this first approach and include the effects of finite magnetic Reynolds numbers, we need to study more carefully the solutions to Eq. (67). ## VI Finite magnetic resistivity solutions The scaling solution that has been derived in the previous section only works if the term \(\sqrt{\eta/\eta_{t}}\) is neglected. However to include effects due to a finite magnetic resistivity we should not systematically neglect it. A WKB approximation can be used to evaluate the solution of Eq. (67) including the finite resistivity. An explicit derivation of the WKB solutions can be found in Appendix C. We only review the main results obtained for the magnetic power spectrum and the growth rate of the dynamo including a finite magnetic Reynolds number. ### Growth rate The normalised growth rate of the dynamo that includes contributions from the magnetic resistivity (through \(R_{\rm M}\)), compressibility (through \(\Omega\) and \(\theta\)) and finite correlation time (through \(\bar{\tau}\)) is found to be \[\gamma = -\bigg{(}\frac{\pi}{\ln{(R_{\rm M})}}\bigg{)}^{2}\bigg{[}\frac{5 -\Omega}{5(\Omega+3)}+\bar{\tau}\epsilon(\theta)\bigg{]}+\frac{7+5\Omega}{4( \Omega+3)} \tag{69}\] \[+\bar{\tau}\bigg{[}\zeta(\theta)-\frac{25}{4}\epsilon(\theta) \bigg{]},\] \[\equiv -\bigg{(}\frac{\pi}{\ln{(R_{\rm M})}}\bigg{)}^{2}\gamma_{R_{\rm M} }+\gamma_{0}+\bar{\tau}\gamma_{1},\] where the functions \(\epsilon(\theta)\) and \(\zeta(\theta)\) are given in Eq. (66) and we have introduced the different components of the growth rate \(\gamma_{R_{\rm M}}\), \(\gamma_{0}\), and \(\gamma_{1}\). In Tab. 1 we list the different parameters of the flow and the magnetic spectrum for three regimes, namely incompressible (\(\nabla\cdot\mathbf{u}=0\)), irrotational (\(\nabla\times\mathbf{u}=0\)), and the intermediate case treated in Sec. V.4.1. To get a better intuition on the results presented here, we display in Fig. 1 the DOC dependency of the two main contributions to the growth rate. From the evolution of \(\gamma_{0}\) it is very clear that the compressibility tends to decrease the growth rate of the magnetic energy spectrum of the dynamo. Moreover, \(\gamma_{0}\) is comprised between \(0.75\) and \(0.25\) which indicates that the dynamo action always exists. It is interesting to note that \(\gamma_{0}\) is a monotonously decreasing function of the DOC, whereas \(\gamma_{1}\) has a maximum around \(\sigma_{c}\sim 2\). In Fig. 2 we study more precisely the dependence of the growth rate on \(R_{\rm M}\). As expected from Eq. (69), \(\gamma\) increases as \(R_{\rm M}\) increases. In practice, due to the WKB approximation, there is a limiting value on the magnetic Reynolds number (\(R_{\rm M,thresh}\)) for which Eq. (69) is valid such that we need to keep \(R_{\rm M}>R_{\rm M,thresh}\). The impact on the total growth rate of the DOC is stronger when \(R_{\rm M}\) is small. In this work, we only considered first order corrections; and discrepancies can already represent \(\sim~{}85\%\) for the lowest values of \(R_{\rm M}\) presented. However, the correlation time has a negligible impact on the total growth rate in the limit \(St\ll 1\). Indeed, the correlation time enters the computation through \(St\) which itself contributes through \(\bar{\tau}\propto St^{2}\ll 1\). Figure 1: Evolution of the two main contributions to the magnetic spectrum growth rate with respect to the DOC for large magnetic Reynolds number. 
The left axis represents the main contribution \(\gamma_{0}\) and the right axis the contribution to the growth rate related to the correlation time \(\gamma_{1}\) of Eq. (69). Dashed lines correspond to the value for a fully irrotational flow (\(\sigma_{c}\to\infty\)).
\begin{table}
\begin{tabular}{c|c c c}
Parameters & Incompressible & Intermediate & Irrotational \\
\hline
\(\theta\) & \(\frac{\pi}{2}\) & \(\frac{\pi}{4}\) & \(0\) \\
\(\sigma_{c}\) & \(0\) & \(\frac{1}{2}\) & \(\infty\) \\
\hline
\(\lambda_{k}\) & \(\frac{3}{2}\) & \(\frac{3}{2}\) & \(\frac{3}{2}\) \\
\hline
\(\gamma_{0}\) & \(\frac{3}{4}\) & \(\frac{7}{12}\) & \(\frac{1}{4}\) \\
\(\gamma_{1}\) & \(-\frac{135}{224}\) & \(-\frac{147}{320}\) & \(-\frac{1017}{2240}\) \\
\(\gamma_{R_{\rm M}}\) & \(\frac{1}{5}+\bar{\tau}\frac{27}{280}\) & \(\frac{1}{3}+\bar{\tau}\frac{293}{1200}\) & \(\frac{3}{5}+\bar{\tau}\frac{3429}{2800}\) \\
\hline
\(R_{\rm M,thresh}\) & \(\sim 3\cdot 10^{5}\) & \(\sim 1.5\cdot 10^{5}\) & \(\sim 7\cdot 10^{4}\) \\
\end{tabular}
\end{table}
Table 1: Velocity-field and magnetic-spectrum parameters for three types of flow: incompressible, intermediate, and irrotational (see Sec. V.4).
### Magnetic power spectrum
In the range of interest, \(z_{\eta}~{}\ll~{}z~{}\ll~{}1\), we find the solution for the longitudinal two-point magnetic correlation function \[M_{\rm L}(z,t)=e^{\gamma t}z^{-5/2}M_{0}\cos\bigg{[}\frac{\pi}{\ln\left(R_{\rm M }\right)}\ln\bigg{(}\frac{z}{z_{0}}\bigg{)}\bigg{]}, \tag{70}\] where \(\gamma\) is given by Eq. (69). This result is already very interesting as we can identify a power law \(z^{-5/2}\), independent of the correlation time, that dominates the spectrum compared to the slowly varying \(\cos\left[\ln\left(z\right)\right]\) factor. However, the Kazantsev spectrum we are interested in predicts that the magnetic power spectrum scales as \(M_{k}(k)\sim k^{3/2}\) in the range \(q\ll k\ll k_{\eta}\). We can show (see Appendix D) that the magnetic power spectrum and the longitudinal two-point correlation function are related by \[M_{k}(k,t)=\frac{1}{\pi}\int(kr)^{3}M_{\rm L}(r,t)j_{1}(kr)~{}{\rm d}r. \tag{71}\] The Bessel function \(j_{1}(x)\) is strongly peaked around \(x\sim 2\), so the dominant part of the integral is around \(k\sim 1/r\). Thus for \(M_{\rm L}(r)\sim r^{\lambda}\) we have \(M_{k}(k)\sim k^{\lambda_{k}}\) with \(\lambda_{k}=-(1+\lambda)\). Plugging \(\lambda=-5/2\) gives the well-known Kazantsev spectrum \(M_{k}(k)\sim k^{3/2}\) even for the compressible and time-correlated flow considered here. Note that the main contribution to the power spectrum derived here does not depend on any of the parameters of the flow. The last row of Tab. 1 corresponds to the minimal value of \(R_{\rm M}\) for which the WKB approximation holds (see Appendix C.3). We see that for most astrophysical applications of the small-scale dynamo our derived results remain valid.
## VII Discussion and conclusions
Several authors have previously modeled the kinematic phase of the small-scale dynamo with the Kazantsev theory [see e.g. 34; 36; 40; 67]. They found that the Kazantsev spectrum is preserved, even for a compressible flow. However, they often assumed Gaussian statistics of the velocity field, such that the flow is delta-correlated in time.
There are several examples of analytic treatments that include a finite correlation time: Bhat and Subramanian [61] solved the incompressible case, Kolekar _et al._[50] used a similar approach to ours but in the context of mean-field dynamos, and Schekochihin and Kulsrud [68] and Kleeorin _et al._[69] considered a general case of the fluctuation dynamo. Most of the theoretical studies, if not all, have found that the Kazantsev spectrum is preserved even if compressibility, a finite correlation time, or a finite resistivity are considered. Our work shows that the combined effect of the three on the Kazantsev spectrum is negligible, as Eq. (70) scales mostly with the power law \(z^{-5/2}\). However, our results are only derived for first-order corrections in the correlation time, as higher-order corrections are usually hard to treat. Besides the shape of the magnetic energy spectrum, the dynamo growth rate \(\gamma\) is of particular interest. Our results for \(\gamma\) are similar to the ones obtained by Kulsrud and Anderson [34] and Schekochihin _et al._[36] in the limit of incompressibility. Schekochihin _et al._[40] also derived a formula for the growth rate for an arbitrary DOC for a delta-correlated in time flow. They found that \(\gamma\) ranges between \(3/4\) for an incompressible flow and \(1/8\) for a fully irrotational flow. This differs from our result by a factor of two in the limit of the fully irrotational flow. Similarly, the growth rate related to the finite resistivity \(\gamma_{R_{\rm M}}\) matches their result in the incompressible case but is overestimated by a factor of \(2\) in the fully compressible limit. The discrepancy is resolved when we consider the complete growth rate \(\gamma\eta_{t}q^{2}\) instead of \(\gamma\) alone. In their paper Schekochihin _et al._[40] defined the initial growth rate from the velocity correlators in Fourier space, while we define it in real space. If we transfer a factor \((\Omega+3)/4\) from \(\eta_{t}\) to \(\gamma\), our growth rate matches theirs in both limits. Although we derived an expression for the growth rate that includes the DOC, a finite resistivity and time correlations, our treatment of turbulence is very simple due to the imposed velocity field. A more rigorous treatment [17] highlights that the growth rate might also be a power law of the Reynolds number. Rogachevskii and Kleeorin [41] used a different approach to the problem, as they directly impose a velocity correlation function instead of the velocity field itself. Their results match ours for the magnetic power spectrum as long as \(T_{\rm L}(r)\sim r^{2}\), but the growth rate differs as their velocity spectrum is different from ours. This highlights that the Kazantsev spectrum should be preserved for a large class of flows.
Figure 2: Ratio of the growth rate and its main contribution for a few values of \(R_{\rm M}\) and \(St=10^{-2}\). Dashed lines correspond to the value for a fully irrotational flow (\(\sigma_{c}\to\infty\)).
From a numerical point of view it seems indeed that a slope close to \(3/2\) in the magnetic energy spectrum can be observed in both incompressible and compressible MHD simulations at large length scales [see e.g. 70-74]. Although the slope measured in simulations is often close to the theoretical prediction, small discrepancies can still arise. One possible explanation comes directly from the size of the simulation box: \(k\) has to be small, but simulations are limited in resolution.
This can often lead to an insufficient separation of spatial scales. A related problem is the assumption of very large hydrodynamic and/or magnetic Reynolds numbers in theoretical models; the required large values of these two numbers make a comparison between numerical simulations and theory difficult. Kopyev _et al._[75] also found that time-irreversible flows can generate a nontrivial deviation from the Kazantsev spectrum. Regarding the growth rate of the dynamo, its reduction by the correlation time has also been observed in numerical studies [76]. Further discussions of the current state of dynamo numerical simulations can be found in Brandenburg _et al._[77]. In conclusion, we have given an example of an analytical treatment of the fluctuation dynamo in the most generic case of a compressible flow with a finite correlation time. To this end, we proposed a framework to study the cumulative effects of a finite correlation time and an arbitrary degree of compressibility by generalising the former work of Bhat and Subramanian [1]. We used the renovating flow method, which assumes a very crude flow that does not allow for a very complex modelling of turbulence but keeps the analytical treatment tractable. We derived a generalisation of the Kazantsev equation in real space (Eq. 55) that is valid at any scale. We note, however, that if we assume an incompressible flow that is delta-correlated in time, we retrieve the original Kazantsev equation. This equation describes the time evolution of the two-point magnetic correlation function \(M_{\rm L}\) in terms of the velocity correlators and the spatial derivatives of \(M_{\rm L}\) up to fourth order. We then studied solutions for length scales much smaller than the turbulent forcing scale (i.e. \(qr\ll 1\)). By the use of the WKB approximation, we derived formulas for the growth rate and the slope of the magnetic power spectrum \(M_{k}(k)\) for large magnetic Reynolds numbers \(R_{\rm M}\gg 1\) and small Strouhal numbers \(St\ll 1\). In particular, this allowed us to capture the effect of a finite magnetic diffusivity. Furthermore, we could define a lower bound on \(R_{\rm M}\) for which our results should hold, \(R_{\rm M,thresh}\sim 10^{5}\), which is smaller than most of the typical values in astrophysical objects. Although the growth rate showed dependencies on both the degree of compressibility and the correlation time, the Kazantsev spectrum seemed to be preserved, i.e. \(M_{k}(k)\sim k^{3/2}\), independently of \(\tau\) or \(\sigma_{c}\). Our results are derived in a very special context, namely for a renovating flow, but our predictions regarding the magnetic field spectrum seem robust in the sense that both numerical and theoretical studies agree on the conservation of the Kazantsev spectrum for compressible and time-correlated flows.
###### Acknowledgements.
DRGS gratefully acknowledges support by the ANID BASAL projects ACE210002 and FB210003, as well as via the Millenium Nucleus NCN19-058 (TITANs). YC and DRGS acknowledge funding via Fondecyt Regular (project code 1201280). JS acknowledges the support by the Swiss National Science Foundation under Grant No. 185863.
## Appendix A General initialisation for delta-correlated in time flow
We would like to review an even more general initialisation than the one presented in Sec. V.4.2. We only consider a delta-correlated in time flow, but this discussion could in principle be generalised to a finite correlation time.
A very general expression for \(\mathbf{a}\) that preserves isotropy is \[a_{i}=b(\tilde{P}_{ij}\hat{A}_{j}f_{1}+\hat{q}_{j}\hat{A}_{j}\hat{q}_{i}f_{2}), \tag{10}\] where \(f_{1}\) and \(f_{2}\) are two constants. In order to control the norm \(a\) we should impose that \(f_{1/2}\) are between minus one and one. It is then straightforward to show that \(\sigma_{c}=f_{2}^{2}/(2f_{1}^{2})\). Once again we compute the velocity correlators and plug in Eq. (55). To simplify this derivation we also neglect the resistivity \(\eta\). We find the equation \[\frac{z^{2}}{5}\frac{2f_{1}^{2}+3f_{2}^{2}}{2f_{1}^{2}+f_{2}^{2}} \partial_{z}^{2}\tilde{M}_{\rm L}+\frac{6z}{5}\frac{2f_{1}^{2}+3f_{2}^{2}}{2f _{1}^{2}+f_{2}^{2}}\partial_{z}\tilde{M}_{\rm L}\] \[+\big{(}4\frac{f_{1}^{2}+f_{2}^{2}}{2f_{1}^{2}+f_{2}^{2}}-\gamma \big{)}\tilde{M}_{\rm L}=0, \tag{11}\] that again allows some power law solution. If we follow the same approach than in Sec. V.4.2, we find \[\lambda=\frac{5}{2}\pm ig(f_{1},f_{2},\gamma),\qquad\gamma=\frac{6f_{1}^{2}+f_ {2}^{2}}{4(2f_{1}^{2}+f_{2}^{2})}, \tag{12}\] with \(g(f_{1},f_{2},\gamma)\) a function that characterises the growth rate. Once again the power spectrum slope is constant and \(\gamma=3/4\) for an incompressible flow, \(\gamma=1/4\) for a fully irrotational one and \(\gamma=7/12\) if \(f_{1}=f_{2}\). If we define \[f_{1}=\sin{(\theta)},\qquad\qquad f_{2}=\cos{(\theta)}, \tag{13}\] we retrieve the initialisation presented. This is convenient as we reduced the number of parameters to only \(\theta\) to completely and uniquely define \(\sigma_{c}\). Also it presents the option to work with another more natural parameter as in this case \(f_{1}^{2}\) and \(f_{2}^{2}\) are related by \(f_{1}^{2}+f_{2}^{2}=1\). In fact, we just showed that the exact initialisation does not matter as long as \(\sigma_{c}\) is uniquely defined and that we can choose the most convenient one. Note also that \(f_{2}=\sqrt{2}\cos{(\theta)}\) is also an option that keeps the norm of \(\mathbf{a}\) independent of the DOC. ## Appendix B Complementary expressions We display in Tab. 16 the main tools used to contract Eq. (53) with \(\hat{r}_{ij}\). If we carry out all the algebra of Sec. 
V.4.2 we get the following expressions for the two-point velocity correlators \[T_{ij}=\frac{\tau b^{2}}{12}\bigg{\{}\hat{r}_{ij}(\frac{\Omega+1} {2}+\Omega\partial_{z}^{2})+\hat{P}_{ij}(\frac{\Omega+1}{2}+\Omega\frac{ \partial_{z}}{z})\bigg{\}}j_{0}(z),\] \[T_{ijhl}^{x^{2}y^{2}}=\frac{\tau^{2}b^{4}}{120}\bigg{\{}\hat{r}_ {ijhl}[\frac{1}{16}\Omega_{1}\partial_{z}^{4}+\frac{3}{2}\Omega_{2}\partial_{z }^{2}+3\Omega_{3}+6\frac{\Omega_{\rm tot}}{j_{0}(2z)}]\] \[+\hat{r}_{(ij}\hat{r}_{hl)}\big{[}\frac{3}{16}\Omega_{1}(\frac{ \partial_{z}^{2}}{z^{2}}-\frac{\partial_{z}}{z^{3}})+\frac{1}{2}\Omega_{2} \frac{\partial_{z}}{z}+\Omega_{3}+2\frac{\Omega_{\rm tot}}{j_{0}(2z)}]\] \[+\hat{P}_{(ij}\hat{r}_{hl)}\big{[}\frac{3}{16}\Omega_{1}(\frac{ \partial_{z}^{3}}{z}-2\frac{\partial_{z}^{2}}{z^{2}}+2\frac{\partial_{z}}{z^{ 3}})+\frac{1}{4}\Omega_{2}(\partial_{z}^{2}+\frac{\partial_{z}}{z})\] \[+\Omega_{3}+2\frac{\Omega_{\rm tot}}{j_{0}(2z)}\big{]}\bigg{\}}j_ {0}(2z),\] \[T_{ijhl}^{x^{3}y}=\frac{\tau^{2}b^{4}}{40}\bigg{\{}\hat{r}_{ijhl} \big{[}3\Omega_{1}\partial_{z}^{4}+6\Omega_{2}\partial_{z}^{2}+3\Omega_{3} \big{]}\] \[+\hat{r}_{(ij}\hat{r}_{hl)}\big{[}3\Omega_{1}(\frac{\partial_{z}^ {2}}{z^{2}}-\frac{\partial_{z}}{z^{3}})+2\Omega_{2}\frac{\partial_{z}}{z}+ \Omega_{3}\big{]}\] \[+\hat{P}_{(ij}\hat{r}_{hl)}\big{[}3\Omega_{1}(\frac{\partial_{z}^ {3}}{z}-2\frac{\partial_{z}^{2}}{z^{2}}+2\frac{\partial_{z}}{z^{3}})+\Omega_{ 2}(\partial_{z}^{2}+\frac{\partial_{z}}{z})\] \[+\Omega_{3}\big{]}\bigg{\}}j_{0}(z), \tag{16}\] Where the set of five parameters is defined as follow \[\Omega=2\sin{(\theta)}^{2}-1, \Omega_{1}=\Omega^{2}, \Omega_{2}=\Omega\frac{\Omega+1}{2},\] \[\Omega_{3}=\big{(}\frac{\Omega+1}{2}\big{)}^{2}, \Omega_{\rm tot}=\frac{\Omega_{1}}{5}-\frac{2\Omega_{2}}{3}+\Omega_{3}. \tag{17}\] These five parameters appear very naturally in the derivation of the velocity correlators that is why we decided not to reduce the expressions to a single dependency on \(\Omega\). Note that \(\Omega=1\) for an incompressible flow and \(\Omega=-1\) for a fully irrotational one. ## Appendix C WKB solutions derivation The scaling solution that has been derived in the previous section only works if the term \(\sqrt{\eta/\eta_{t}}\) is neglected. However to include effects due to a finite magnetic resistivity we should not systematically neglect it. A WKB approximation can be used to evaluate the solution of Eq. (67) including the finite resistivity. ### The WKB approximation The WKB (Wentzel-Kramers-Brillouin) approximation is first introduced in 1926 [78; 79]. In particular this approximation method has been extensively used in quantum mechanics to solve the Schrodinger equation [80; 81]. Formally the method can be used to solve equations of the type \[\frac{\mathrm{d}^{2}\Theta}{\mathrm{d}x^{2}}+p(x)\Theta=0, \tag{18}\] where the WKB solutions to this equation are linear combinations of \[\Theta=\frac{1}{p^{1/4}}\exp\bigg{(}\pm i\int p^{1/2}\ \mathrm{d}x\bigg{)}. \tag{19}\] We call turning points the value of \(x\) where \(p(x)\) is zero. In a given interval if \(p(x)<0\) the solution is in the form of growing and decaying exponential however if \(p(x)>0\) we have an oscillatory regime. Moreover the solutions need to satisfy boundary conditions, especially it is common to impose \(\Theta(x)\to\ 0\) for \(x\to\pm\infty\). 
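To make the use of the WKB approximation concrete, the following minimal sketch (the quadratic profile \(p(x)=1+x^{2}/10\) and all numerical choices are purely illustrative and not taken from this work) compares the oscillatory WKB solution \(\Theta=p^{-1/4}\cos\big(\int\sqrt{p}\,\mathrm{d}x\big)\) with a direct numerical integration of Eq. (18); for such a slowly varying \(p\) the two curves should track each other closely.

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

# Illustrative, slowly varying p(x) > 0 (no turning points on this interval).
p = lambda x: 1.0 + 0.1 * x**2

# Direct numerical solution of Theta'' + p(x) Theta = 0,
# with initial conditions matching the WKB cosine branch at x = 0 (where p'(0) = 0).
rhs = lambda x, y: [y[1], -p(x) * y[0]]
x = np.linspace(0.0, 10.0, 2001)
sol = solve_ivp(rhs, (x[0], x[-1]), [1.0, 0.0], t_eval=x, rtol=1e-10, atol=1e-12)

# WKB (physical optics) approximation: Theta ~ p^{-1/4} cos( int_0^x sqrt(p) dx' ).
phase = cumulative_trapezoid(np.sqrt(p(x)), x, initial=0.0)
theta_wkb = p(x) ** (-0.25) * np.cos(phase)

print("max |exact - WKB| on [0, 10]:", np.max(np.abs(sol.y[0] - theta_wkb)))
```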
### Magnetic spectrum and growth rate at finite magnetic Reynolds number In the context of dynamos the WKB approximation is commonly used to derive the growth rate of the two-point correlation function of magnetic fluctuations. Reconsider Eq. (67), which is valid in the limit \(z\ll 1\). To apply the WKB approximation we define a new coordinate, which is more convenient to use [35], as \(e^{x}=\bar{z}\equiv\sqrt{\eta/\eta}z\). With this new coordinate Eq. (67) becomes \[\bigg{(}\frac{\mathrm{d}^{2}\tilde{M}_{\rm L}}{\mathrm{d}x^{2}}- \frac{\mathrm{d}\tilde{M}_{\rm L}}{\mathrm{d}x}\bigg{)}\bigg{(}\tau\epsilon( \theta)+\frac{5-\Omega}{5(\Omega+3)}+\frac{2}{\bar{z}^{2}}\bigg{)}\] \[+\frac{\mathrm{d}\tilde{M}_{\rm L}}{\mathrm{d}x}\bigg{(}6\tau \epsilon(\theta)+6\frac{5-\Omega}{5(\Omega+3)}+\frac{8}{\bar{z}^{2}}\bigg{)}\] \[+\tilde{M}_{\rm L}\bigg{(}\bar{\tau}\zeta(\theta)+\frac{8}{\Omega+ 3}-\gamma\bigg{)}=0. \tag{20}\] To simplify notations we rewrite \[\bigg{(}\frac{\mathrm{d}^{2}\tilde{M}_{\rm L}}{\mathrm{d}x^{2}}-\frac{\mathrm{ d}\tilde{M}_{\rm L}}{\mathrm{d}x}\bigg{)}A(x,\theta)+\frac{\mathrm{d}\tilde{M}_{\rm L }}{\mathrm{d}x}B(x,\theta)+\tilde{M}_{\rm L}C(x,\theta)=0, \tag{21}\] where the three functions are simply \[A(x,\theta)\,\bar{\tau}\epsilon(\theta)+\frac{5-\Omega}{5(\Omega+ 3)}+\frac{2}{z^{2}},\] \[B(x,\theta)\,\bar{6}\tau\epsilon(\theta)+6\frac{5-\Omega}{5( \Omega+3)}+\frac{8}{\bar{z}^{2}},\] \[C(x,\theta)\,\bar{\tau}\zeta(\theta)+\frac{8}{\Omega+3}-\gamma. \tag{22}\] We further assume that \(\tilde{M}_{\rm L}\) can be expressed as a product of two functions \(\tilde{M}_{\rm L}=g(x)W(x)\). The idea is to impose certain relations on \(g(x)\) such that all first order derivatives of \(W(x)\) are cancelled, leading us to an equation that has the desired form. If we take \[\frac{\mathrm{d}g}{\mathrm{d}x}=g\frac{A(x,\theta)-B(x,\theta)}{2A(x,\theta)}, \tag{23}\] we find the desired equation Eq. (101) for \(W(x)\) with \[p(x)=\frac{1}{A^{2}}\bigg{[}AC-\frac{1}{2}(B^{\prime}A-A^{\prime}B)-\frac{1}{4}(A -B)^{2}\bigg{]}, \tag{106}\] where primes denote derivative with respect to \(x\) and the three functions are given by Eq. (104). After some computation we can even show that \[p(x)=\frac{A_{0}\bar{z}^{4}-B_{0}\bar{z}^{2}-9}{(2+F\bar{z}^{2})^{2}}, \tag{107}\] where, for convenience, we set the following three functions of the DOC \[A_{0}=\bigg{(}\bar{\tau}\epsilon(\theta)+\frac{5-\Omega}{5( \Omega+3)}\bigg{)}\] \[\qquad\times\bigg{\{}\frac{7+5\Omega}{4(\Omega+3)}+\bar{\tau} \big{[}\zeta(\theta)-\frac{25}{4}\epsilon(\theta)\big{]}{-}\gamma\bigg{\}},\] \[B_{0}=2\gamma+19\bar{\tau}\epsilon(\theta)-2\bar{\tau}\zeta( \theta)+\frac{15-19\Omega}{5(\Omega+3)},\] \[F=\bar{\tau}\epsilon(\theta)+\frac{5-\Omega}{5(\Omega+3)}. \tag{108}\] Recall that we are interested in the solution for the range \(z_{\eta}~{}\ll~{}z~{}\ll~{}1\) which implies roughly that \(1~{}\ll~{}\bar{z}~{}\ll~{}R_{\rm M}^{1/2}\). If we take the limit of very small \(\bar{z}\), \(x\rightarrow-\infty\), we see that \(p\rightarrow-9/4\). As \(\bar{z}\) increases \(p(x)\) increases too, let's call the first turning point \(\bar{z}_{0}\). We can guess from the evaluation of \(\gamma\) in Sec. V.4.2 that \(A_{0}\) is very small compared to \(B_{0}\). Indeed when plugging in the value for \(\gamma\) we found previously, we obtain that \(A_{0}\) goes to zero while \(B_{0}\) has a part independent on \(\bar{\tau}\). 
In particular it implies that \(\bar{z}_{0}\) is large enough to neglect the constant terms in the equation of \(p(x)\) (i.e. \(\bar{z}_{0}\gg 1\)). The opposite limit of very large \(\bar{z}\), \(x\rightarrow\infty\), is not described by Eq. (104) as it is valid only in the small \(z\) limit. We need to go back to Eq. (55) and use that in the limit of very large \(z\) the velocity correlators and their derivatives should go to zero. After some computation we obtain for the highest contribution \[p(x)\sim-2e^{2x}\frac{(1+\eta_{t}/\eta)\gamma_{0}}{V(\theta,\eta_{t},\eta, \bar{\tau})^{2}}, \tag{109}\] such that \(p(x)<0\) in this limit. Note that we do not need to specify the exact form of \(V(\theta,\eta_{t},\eta,\bar{\tau})\) as the denominator is always positive. In this formula we also neglected terms that depend on \(\bar{\tau}\) in the numerator as they should always be smaller than \(\eta_{t}/\eta\) or \(\gamma_{0}\) which are both positive. Such a form means that \(p(x)\) must have gone through another zero at some point that we call \(\bar{z}_{1}\). To simplify the treatment we will say that Eq. (107) is valid for \(z<1\) and Eq. (109) is valid for \(z>1\). The boundary between the two can be taken to be \(z_{1}\) such that \(\bar{z}_{1}\sim R_{\rm M}^{1/2}\). In fact we will find that the final re sults have a small dependence on the exact value of \(z_{1}\) such that we can approximate it without changing the conclusions [61; 40; 55]. To summarise we consider that we have damped solutions for \(\bar{z}\ll\bar{z}_{0}\) and \(\bar{z}_{1}\ll\bar{z}\) and an oscillatory one for \(\bar{z}_{0}\ll\bar{z}\ll\bar{z}_{1}\). The exponentially growing solutions are discarded as \(M_{\rm L}(z)\) must remain finite at both \(z=0\) and \(z=\infty\). In order for the oscillatory solution to match the two damped regime we have to require [82; 83] \[\int_{x_{0}}^{x_{1}}p(x)^{1/2}\ {\rm d}x=\frac{(2n+1)\pi}{2}, \tag{111}\] where \(n\) is an integer. This condition is key to determine the growth rate \(\gamma\) of the two-point correlation of the magnetic field. In the context of this work we only consider the fastest eigen-mode given by \(n=0\). As we already mentioned the constant terms in Eq. (100) can be neglected which makes the solution to Eq. (111) exact. Evaluating the integral gives \[\int_{x_{0}}^{x_{1}}p(x)^{1/2}\ {\rm d}x =\int_{\bar{z}_{0}}^{\bar{z}_{1}}\frac{p(z)^{1/2}}{z}\ {\rm d}z, \tag{112}\] \[\simeq\int_{\bar{z}_{0}}^{\bar{z}_{1}}\frac{\sqrt{A_{0}z^{2}-B_{ 0}}}{Fz^{2}}\ {\rm d}z,\] \[=\frac{\sqrt{A_{0}}}{F}\left\{\ln\left(\frac{z_{1}}{z_{0}}+\sqrt {\frac{z_{1}^{2}}{z_{0}^{2}}-1}\right)-\sqrt{1-\frac{z_{0}^{2}}{z_{1}^{2}}} \right\},\] where to go from first to second line we used that \(\bar{z}_{0}\sim\sqrt{B_{0}/A_{0}}>0\). We can thus use the condition of Eq. (111), square both sides, and isolate the growth rate. The growth rate is finally given by \[\gamma=-\bigg{(}\frac{\pi}{\ln\left(R_{\rm M}\right)}\bigg{)}^{2 }\bigg{[}\frac{5-\Omega}{5(\Omega+3)}+\bar{\tau}\epsilon(\theta)\bigg{]}\] \[\qquad+\frac{7+5\Omega}{4(\Omega+3)}+\bar{\tau}\bigg{[}\zeta( \theta)-\frac{25}{4}\epsilon(\theta)\bigg{]}, \tag{113}\] where again we plugged the self-consistent value for \(\gamma_{0}\). Note that in this equation we also used the self-consistent evaluations \(\bar{z}_{0}\sim\ln\left(R_{\rm M}\right)\) and \(\bar{z}_{1}\sim R_{\rm M}^{1/2}\), such that we neglected \(\bar{z}_{0}\) compared to \(\bar{z}_{1}\). 
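As a consistency check on the closed form of the phase integral in Eq. (112), one can compare a direct numerical quadrature with the quoted antiderivative. The sketch below does this for illustrative values of \(A_{0}\), \(B_{0}\) and \(F\) (these numbers are arbitrary and only serve to exercise the formula; they are not the DOC-dependent values of Eq. (108)).

```python
import numpy as np
from scipy.integrate import quad

# Illustrative stand-ins for the coefficients of Eq. (108).
A0, B0, F = 0.03, 1.7, 0.9
z0 = np.sqrt(B0 / A0)            # first turning point, where the integrand vanishes
z1 = 40.0 * z0                   # stand-in for the outer turning point ~ R_M^{1/2}

# Integrand p(z)^{1/2} / z with the constant terms of p neglected, as in Eq. (112).
integrand = lambda z: np.sqrt(max(A0 * z**2 - B0, 0.0)) / (F * z**2)
numeric, _ = quad(integrand, z0, z1)

# Closed form quoted in Eq. (112).
closed = (np.sqrt(A0) / F) * (
    np.log(z1 / z0 + np.sqrt(z1**2 / z0**2 - 1.0)) - np.sqrt(1.0 - z0**2 / z1**2)
)

print("quadrature:", numeric, " closed form:", closed)   # the two agree
```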
In the oscillatory range \(1\ \ll\ \bar{z}_{0}\ \ll\ \bar{z}\ \ll\ \bar{z}_{1}\) the WKB solution is thus given by \[W(x)\sim\bigg{(}\frac{\ln\left(R_{\rm M}\right)}{\pi}\bigg{)}^{1/2}\cos\bigg{[} \frac{\pi}{\ln\left(R_{\rm M}\right)}\ln\left(\frac{z}{z_{0}}\right)\bigg{]}. \tag{114}\] In this limit we see that Eq. (101) can be simplified such that \(g^{\prime}(x)\to-5g(x)/2\) which gives \(g(x)\sim e^{-5x/2}\). The two-point magnetic correlation function is then also scaling as \(z^{-5/2}\). So finally we find the equation for the longitudinal two-point magnetic correlation function in the region \(z_{\eta}\ \ll\ z\ \ll\ 1\) \[M_{\rm L}(z,t)=e^{\gamma\bar{t}}z^{-5/2}M_{0}\cos\bigg{[}\frac{\pi}{\ln\left( R_{\rm M}\right)}\ln\bigg{(}\frac{z}{z_{0}}\bigg{)}\bigg{]}, \tag{115}\] where \(\gamma\) is given by Eq. (113). ### Validity of the WKB approximation It can be showed that if we plug the solutions of the WKB approximation into Eq. (100) we arrive at the following equation \[\frac{{\rm d}^{2}\Theta}{{\rm d}x^{2}}+\bigg{(}1+\frac{1}{4p(x)^{2}}\frac{{ \rm d}^{2}p}{{\rm d}x^{2}}-\frac{3}{16p(x)^{3}}\big{(}\frac{{\rm d}p}{{\rm d} x}\big{)}^{2}\bigg{)}p(x)\Theta=0, \tag{116}\] such that we retrieve the initial problem to solve only if \[p_{\rm lim} \equiv\frac{1}{4p(x)^{2}}\frac{{\rm d}^{2}p}{{\rm d}x^{2}}-\frac{ 3}{16p(x)^{3}}\big{(}\frac{{\rm d}p}{{\rm d}x}\big{)}^{2}\] \[=\frac{z^{2}p^{\prime\prime}(z)+zp^{\prime}(z)}{4p(z)^{2}}-\frac{ 3z^{2}p^{\prime}(z)^{2}}{16p(z)^{3}} \tag{117}\] is very small compared to \(1\). Here primes denote derivatives with respect to the \(z\) variable. Furthermore, in a similar way to Schober _et al._[17], we consider that the criterion of validity for our WKB approximation is \(|p_{\rm lim}|<0.1\). We find that \(p_{\rm lim}\) depends not only on the magnetic Reynolds number but also on \(St\), \(\sigma_{c}\) and \(z_{c}\). We define here \(z_{c}\) to be the scale at which we evaluate \(p(z)\) and it derivatives. As the WKB approximation is valid between the two zeros of \(p(z)\) we must impose \(z_{0}\ll z_{c}\ll z_{1}\). Until now, we only ask \(R_{\rm M}\) to be very large, but the latter criterion gives us a way to quantify it. In particular, we use the expressions derived earlier for \(p(z)\) and \(\bar{\tau}\) to define a threshold on \(R_{\rm M}\) for which we consider that the derived results are valid[84]. In order to respect the conditions imposed on \(z_{c}\), we take \(z_{c}=(z_{0}+z_{1})/2\). Although the scale can seem arbitrary, we find only a slight dependency on it as long as \(z_{c}\) is not too close to \(z_{0}\) or \(z_{1}\). In Fig. 3 we present \(p_{\rm lim}\) for a fixed \(St=10^{-2}\) and a few DOC. Again, \(St\) being tiny its exact value does not highly impact \(R_{\rm M,thresh}\). It appears that the threshold of this work, regarding \(R_{\rm M}\) is around \(5\cdot 10^{5}\). More precisely, the \(R_{\rm M,thresh}\) threshold decreases until it reaches \(\sim 7\cdot 10^{4}\) when the DOC goes to infinity. The results derived in this work concerning the magnetic field are thus valid for most astrophysical objects where the fluctuation dynamo plays a major role. Note that from Fig. 3 we also have a valid WKB approximation for very small \(R_{\rm M}\). We can exclude this range of validity as we derived our generalised Kanzantsev equation (i.e. the expansion with respect to \(St\)) with the condition that \(R_{\rm M}\) was a large number. ## Appendix D Proof of Eq. 
(71)

We start by writing the magnetic power spectrum as the Fourier transform of the magnetic two-point correlation function, \[M_{k}(k)=2\pi k^{2}\hat{M}_{ii}(k)=\frac{k^{2}}{(2\pi)^{2}}\int M_{ii}(r)e^{i \mathbf{k}\cdot\mathbf{r}}\ {\rm d}^{3}\mathbf{r}. \tag{118}\] Using the properties of \(M_{ij}(r)\), we then derive \[M_{k}(k) =\frac{k^{2}}{2\pi}\int r^{2}\sin{(\theta)}M_{ii}(r)e^{ikr\cos{( \theta)}}\ \mathrm{d}r\mathrm{d}\theta,\] \[=\frac{ik}{2\pi}\int rM_{ii}(r)\bigg{(}e^{-ikr}-e^{ikr}\bigg{)}\ \mathrm{d}r,\] \[=\frac{1}{\pi}\int kr\bigg{(}3M_{\mathrm{L}}(r)+r\partial_{r}M_{ \mathrm{L}}(r)\bigg{)}\sin{(kr)}\ \mathrm{d}r,\] \[=\frac{1}{\pi}\int M_{\mathrm{L}}(r)\bigg{(}kr\sin{(kr)}-(kr)^{2}\cos{(kr)}\bigg{)}\ \mathrm{d}r,\] \[=\frac{1}{\pi}\int(kr)^{3}M_{\mathrm{L}}(r)j_{1}(kr)\ \mathrm{d}r, \tag{20}\] where we integrated by parts to go from the third to the fourth line; the boundary terms vanish given the behaviour of \(rM_{\mathrm{L}}(r)\) at the integration limits.
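The Kazantsev scaling \(M_{k}(k)\sim k^{3/2}\) quoted in the main text follows from the last line above when \(M_{\rm L}(r)\sim r^{-5/2}\). A small numerical sketch (cutoffs, resolution and normalisation chosen arbitrarily, purely for illustration) makes this explicit:

```python
import numpy as np
from scipy.special import spherical_jn

# Illustrative profile M_L(r) ~ r^{-5/2} between an inner cutoff and the forcing scale.
r = np.linspace(1e-4, 1.0, 400_000)
dr = r[1] - r[0]
M_L = r ** (-2.5)

def M_k(k):
    # Last line of the derivation: M_k(k) = (1/pi) * int (k r)^3 M_L(r) j_1(k r) dr
    return ((k * r) ** 3 * M_L * spherical_jn(1, k * r)).sum() * dr / np.pi

k_vals = np.logspace(1.5, 2.5, 15)           # scales well inside 1 << k << 1/r_min
spec = np.array([M_k(k) for k in k_vals])
slope = np.polyfit(np.log(k_vals), np.log(spec), 1)[0]
print("fitted spectral slope:", slope)       # expected to come out close to 3/2
```

Substituting \(u=kr\) shows the same scaling analytically, since \(M_{k}(k)=(k^{3/2}/\pi)\int u^{1/2}j_{1}(u)\,\mathrm{d}u\) over the resolved range of \(u\).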
2306.04731
Free Fermion Distributions Are Hard to Learn
Free fermions are some of the best studied quantum systems. However, little is known about the complexity of learning free-fermion distributions. In this work we establish the hardness of this task in the particle number non-preserving case. In particular, we give an information theoretical hardness result for the general task of learning from expectation values and, in the more general case when the algorithm is given access to samples, we give a computational hardness result based on the LPN assumption for learning the probability density function.
Alexander Nietner
2023-06-07T18:51:58Z
http://arxiv.org/abs/2306.04731v2
# Free Fermion Distributions Are Hard to Learn ###### Abstract Free fermions are some of the best studied quantum systems. However, little is known about the complexity of learning free-fermion distributions. In this work we establish the hardness of this task in the particle number non-preserving case. In particular, we give an information theoretical hardness result for the general task of learning from expectation values and, in the more general case when the algorithm is given access to samples, we give a computational hardness result based on the LPN assumption for learning the probability density function. ## 1 Introduction In this note we investigate the learnability of the output distributions of free fermion systems after measuring in the occupation number basis. We refer to those distributions simply as _free fermion distributions_. This learning task can be seen as the analogue to tomography of free fermion states from a classical perspective. Due to Wicks theorem it is known that tomography of free fermion states can be done efficiently: the \(n\)-point correlators are fully determined by the \(2\)-point correlators. In particular, a free fermion state is robustly determined via its covariance matrix, which in turn is determined by the two point correlators [1, 2]. Here we establish the hardness of learning free fermion distributions. In particular we focus on the general case, where the global particle number is not fixed. This is, our results do not imply hardness of learning free fermion distributions with a fixed particle number. While this work brings the hardness-of-learning transition closer to the latter family of distributions, our technique can not easily be applied to fixed particle number free fermion distributions. As established by [13], free fermions are closely related to local match gate circuits as introduced in [22]. In particular, nearest neighbour one dimensional match gate circuits are equivalent to nearest neighbour one dimensional free fermionic circuits. Thus, the free fermion formalism mildly generalizes the match gate formalism since the former can also be classically simulated when the nearest neighbor criterion is dropped. This however can also be emulated via match gates using a network of fermionic SWAP operations (i.e. FSWAP) which comes at an at most linear overhead in the system size. Thus the difference is only a mild one. We will use the match gate formalism for our technical formulae and give the free fermion interpretation in words. ### Our Contribution Our first result is concerned with learning algorithms that make use of statistical averages, only. **Informal Theorem 1:** (C.f. Corollary 1 and Corollary 2) _Free fermion distributions are exponentially hard to learn from empirical expectation values._ Interestingly, this result is of information theoretic nature: the hardness is due to the fact that any algorithm requires the values of exponentially many expectation values in order to determine the underlying distribution. This is in stark contrast to free fermion states: By Wick's theorem all \(k\)-point functions can be decomposed into combinations of \(2\)-point functions. Thus, the linearly many \(2\)-point functions, which can be estimated from expectation values, suffice to determine the corresponding quantum state1. Footnote 1: The proof of Theorem 1 is based on the fact that the \(k\)-point functions are exponentially hard to learn from empirical expectation values. 
Since Wicks theorem is at the core of the free fermion formalism, the lack of its applicability for distribution learning hints at the possibility of a stronger statement. The next statement goes into this direction as that it allows the learner to have more general sample access to the true distribution (instead of only empirical expectation values). It relies on the learning parities with noise assumption, a standard assumption in cryptography [14, 15], and only applies with respect to an evaluator. This means, that we can show hardness only when the algorithm is required to learn a program that is able to compute the probability density function of the underlying distribution (c.f. Section 2). **Informal Theorem 2:** (C.f. Corollary 3 and Corollary 4) _Free fermion distributions are hard to learn from samples under the learning parity with noise assumption._ The proofs of Informal Theorem 1 and Informal Theorem 2 follow closely those of Theorem 4 and Theorem 2 in [13], respectively. In particular, we give a description for embedding parities as well as noisy parities into free fermion distributions. The core technical difficulty is that fermion distributions themselves carry a parity constraint. It is not hard however to alleviate this by means of an additional auxiliary mode. ## 2 Preliminaries These preliminaries are sectioned into two parts. The first part is about distribution learning and the second part is about free fermion, respectively match gate distributions. While this may be safely skipped by readers familiar with the topic we believe it is handy to align with our notation. ### Distribution Learning We denote by \(\mathcal{F}_{n}\) the set of Boolean functions from \(\{0,1\}^{n}\) to \(\{0,1\}\) and by \(\mathcal{D}_{n}\) the set of probability distributions over \(\{0,1\}^{n}\). The parity of a bitstring \(x\) is denoted by \(|x|\) and is given by \(|x|=\sum_{i}x_{i}\mod 2\). A subset \(\mathcal{D}\subset\mathcal{D}_{n}\) is referred to as a distribution class. For two probability distributions \(P,Q\,:\{0,1\}^{n}\to[0,1]\), we denote by \(\mathrm{d}_{\mathrm{TV}}(P,Q)\,:=\frac{1}{2}\sum_{x\in[0,1]^{n}}|P(x)-Q(x)|\) the total variation distance between them. Access to distributions is formalized by an oracle with a specific operational structure. In this work, we consider the sample and the statistical query oracle (see also [11, 12, 13, 14]). **Definition 1:** (Distribution oracles) Given \(P\in\mathcal{D}_{n}\) and some \(\tau\in(0,1)\). We define: 1. The sample oracle \(\mathsf{Sample}(P)\) as the oracle which, when queried, provides a sample \(x\sim P\). 2. The statistical query oracle \(\mathsf{Stat}_{\tau}(P)\) as the oracle which, when queried with a function \(\phi\,:\,\{0,1\}^{n}\to[-1,1]\), responds with some \(v\) such that \(|\,\mathbf{E}_{x\sim P}[\phi(x)]-v|\leq\tau\). We say that an oracle presents a distribution. A learning algorithm has to output the distribution in the sense of some representation. In this work we mainly focus on the two most popular representations: generators and evaluators. These are formalizations of generative and density models, respectively, and are defined as follows. **Definition 2**:: (Representations of a distribution) Given \(P\in\mathcal{D}_{n}\), we say that 1. a probabilistic (or quantum) algorithm \(\mathsf{Gen}(P)\) is a generator for \(P\) if it produces samples according to \(x\sim P\). 2. 
an algorithm \(\mathsf{Eval}(P)\,:\,\{0,1\}^{n}\to[0,1]\) is an evaluator for \(P\) if, on input \(x\in\{0,1\}^{n}\) it outputs \(\mathsf{Eval}(P)[x]=P(x)\). We say that a generator (evaluator) represents a distribution. To formalize the notion of _distribution learning_, we use the framework introduced in Ref. [14]. This definition is analogous to the definition of probably-approximately correct function learning, in that it introduces parameters \(\epsilon\) and \(\delta\) to quantify approximation error and probability of success. **Problem 1**:: ((\(\epsilon,\delta\))-distribution-learning) Let \(\epsilon,\delta\in(0,1)\) and let \(\mathcal{D}\) be a distribution class. Let \(\mathcal{O}\) be a distribution oracle. The following task is called \((\epsilon,\delta)\)-distribution-learning \(\mathcal{D}\) from \(\mathcal{O}\) with respect to a generator (evaluator): Given access to oracle \(\mathcal{O}(P)\) for any unknown \(P\in\mathcal{D}\), output with probability at least \(1-\delta\) a generator (evaluator) of a distribution \(Q\) such that \(\mathsf{d}_{\text{TV}}(P,Q)<\epsilon\). **Definition 3**:: (Efficiently learnable distribution classes) Let \(\mathcal{D}\) be a distribution class, and let \(\mathcal{O}\) be a distribution oracle. We say that \(\mathcal{D}\) is computationally (query) efficiently learnable from \(\mathcal{O}\) with respect to a generator/evaluator, if there exists an algorithm \(\mathcal{A}\) which for all \((\epsilon,\delta)\in(0,1)\) solves the problem of \((\epsilon,\delta)\)-distribution learning \(\mathcal{D}\) from \(\mathcal{O}\) with respect to a generator/evaluator, using \(O(\text{poly}(n,1/\epsilon,1/\delta))\) computational steps (oracle queries). Moreover, the generator/evaluator which is learned must itself be an efficient algorithm, i.e. terminates after \(O(\text{poly}(n))\) many steps. As we are most often concerned with computational efficiency and with the sample oracle, we often omit these qualifiers in this case, and simply say "\(\mathcal{D}\) is efficiently learnable". If a distribution class is not efficiently learnable, then we say it is hard to learn. ### Free Fermion and Match Gate Distributions Free fermion states are states that can be written as gaussian states in the fermionic creation operators \[|\psi\rangle=\mathcal{N}\exp\Bigg{(}\sum_{ij}G_{ij}c_{i}^{\dagger}c_{j}^{ \dagger}\Bigg{)}|0\rangle\, \tag{1}\] where \(G\) is an anti-symmetric generating matrix, \(\mathcal{N}\) a normalization term and the \(c_{i}^{\dagger}\) are the fermionic creation operators while \(|0\rangle\) is the vacuum state. The amplitudes in the mode basis are then given by \[\langle x|\psi\rangle=\mathcal{N}\mathrm{Pf}\left(G|_{x}\right)\,. \tag{2}\] Here, \(\mathrm{Pf}\) is the Pfaffian and \(G|_{x}\) denotes the anti symmetric sub-matrix of \(G\) indicated by the 1's in the bitstring \(x\). Equivalently, these states can be created by circuits of (particle number non-preserving) free fermion unitaries. As shown in [10], on the level of qubits these correspond to nearest neighbor match gate circuits on a line. **Definition 4**:: (Match Gate): We say a \(2\)-qubit unitary of the form \[U=e^{i\phi}\begin{pmatrix}W_{11}&0&0&W_{12}\\ 0&Q_{11}&Q_{12}&0\\ 0&Q_{21}&Q_{22}&0\\ W_{21}&0&0&W_{22}\end{pmatrix} \tag{3}\] is a match gate if \(W,Q\in\mathrm{SU}(2)\) and \(\phi\in[0,2\pi)\). We denote by \(\mathcal{G}\) the set of all match gates. Quantum circuits composed of match gates are referred to as match gate circuits. 
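To make Eq. (2) concrete, the following sketch (with a randomly drawn generating matrix, purely for illustration) evaluates the Born probabilities of a small free-fermion state via \(|\mathrm{Pf}(G|_{x})|^{2}=\det(G|_{x})\) for real antisymmetric submatrices, and checks that the resulting distribution is normalised and supported only on even-parity strings.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4                                        # number of fermionic modes

A = rng.normal(size=(n, n))
G = A - A.T                                  # real antisymmetric generating matrix of Eq. (1)

def unnormalised_prob(x):
    """|<x|psi>|^2 up to normalisation, using Pf(G|_x)^2 = det(G|_x) for real antisymmetric G|_x."""
    occupied = [i for i, b in enumerate(x) if b == 1]
    if len(occupied) % 2 == 1:               # Pfaffian of an odd-dimensional matrix vanishes
        return 0.0
    if not occupied:                         # vacuum amplitude: Pf of the empty matrix is 1
        return 1.0
    sub = G[np.ix_(occupied, occupied)]
    return abs(float(np.linalg.det(sub)))    # abs() guards against tiny negative round-off

strings = list(itertools.product([0, 1], repeat=n))
weights = np.array([unnormalised_prob(x) for x in strings])
P = weights / weights.sum()

print("normalised:", np.isclose(P.sum(), 1.0))
print("even-parity support only:",
      all(P[i] == 0.0 for i, x in enumerate(strings) if sum(x) % 2 == 1))
```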
Let us define two particularly important match gates in the context of this work. **Definition 5**:: The fermionic swap gate \(\mathrm{FSWAP}\) is defined as \[\mathrm{FSWAP}=\begin{pmatrix}1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&-1\end{pmatrix}\,. \tag{4}\] Moreover we define \[U_{X}(t)=e^{itX\otimes X}=\begin{pmatrix}\cos(t/2)&0&0&i\sin(t/2)\\ 0&\cos(t/2)&i\sin(t/2)&0\\ 0&i\sin(t/2)&\cos(t/2)&0\\ i\sin(t/2)&0&0&\cos(t/2)\end{pmatrix}=\cos(t/2)\mathds{1}_{4}+i\sin(t/2)X \otimes X\,, \tag{5}\] where \(X\) is the usual Pauli-\(X\) gate. The \(\mathrm{FSWAP}\) gate is identical to the \(\mathrm{SWAP}\) gate except for the \(-1\) phase in the \(((11),(11))\) entry. This phase corresponds to the fermionic statistics when swapping two occupied modes, thus the name fermionic swap. Any non-local two body interaction in the fermionic formalism can be represented by a circuit of local intereactions in the match gate formalism at the price of a linear increase in the depth of the circuit. Finally, we define the actual class of distributions we are interested in: the Born distributions corresponding to free fermion states when measured in the occupation number basis. This is equivalent to measuring match gate states in the computational basis. Thus we define the following. **Definition 6**:: (Match Gate Distributions): We define the class of (nearest neighbour one dimensional) match gate distributions on \(n\) qubits at depth \(d\) to contain all \(n\)-bit distributions of the form \[P(x)=|\,\langle x|U|0^{n}\rangle|^{2} \tag{6}\] where \(U\) is any nearest neighbour one dimensional match gate circuit on \(n\) qubits of depth \(d\). The set of all such distributions is denoted by \(\mathcal{M}(n,d)\) or simply \(\mathcal{M}\). The set of one dimensional nearest neighbor free fermion distributions is identical to the set of match gate distributinos at the corresponding depth and particle number. Similarly, we identify the set of non-local free fermion distributions at a given depth with the set of local match gate distributions at the same depth without the FSWAP gates taken into account. ## 3 Results In this section we will show the hardness of learning \(\mathcal{M}\) both, from statistical queries as well as from samples. To this end we will embed parities and then noisy parities. The hardness of learning then follows in analogy to [11]. We begin with the following definition. **Definition 7**:: (Parity): For any \(s\in\{0,1\}^{n}\) denote by \(\chi_{s}\,:\,\{0,1\}^{n}\to\{0,1\}\) the party function defined as \[\chi_{s}(x)=x\cdot s\,, \tag{7}\] where the scalar product is taken in \(\mathds{F}_{2}^{n}\). Moreover the corresponding parity distribution \(D_{s}\in\mathcal{D}_{n+1}\) is defined as \[D_{s}(x,y)=\begin{cases}2^{-n}\,,&\chi_{s}(x)=y\\ 0&\text{else},\end{cases} \tag{8}\] with \((x,y)\in\{0,1\}^{n}\times\{0,1\}\) and denote by \(\mathcal{D}_{\chi}\) the set of all such distributions. For any noise rate \(\eta\in[0,1]\) we define the noisy parity distribution as \[D_{s}^{\eta}(x,y)=\begin{cases}(1-\eta)\cdot 2^{-n}\,,&\chi_{s}(x)=y\\ \eta\cdot 2^{-n}&\text{else}.\end{cases} \tag{9}\] Denote the set of all such distributions by \(\mathcal{D}_{\chi}^{\eta}\). In this work we will focus on two distribution classes derived from those. First consider the "fermionized" version of \(\mathcal{D}_{\chi}\) (note that this "fermionization" is not unique). 
Here, \(|x|\) denotes the parity of \(x\) and addition of bits refers to the modulo two addition as in \(\mathds{F}_{2}\) (such as \(|x|+y\) for bit string \(x\) and bit \(y\)). **Definition 8**:: For any \(s\in\{0,1\}^{n}\) define the fermionized parity function \(\xi_{s}\,:\,\{0,1\}^{n}\to\{0,1\}^{2}\) as \[\xi_{s}(x)=\begin{pmatrix}\chi_{s}(x)\\ \chi_{s}(x)+|x|\end{pmatrix}\,. \tag{10}\] Next, we define the fermionized parity distribution \(M_{s}\in\mathcal{D}_{n+2}\) as \[M_{s}(x,y,z)=\begin{cases}2^{-n}\,,&\chi_{s}(x)=y\,\text{and}\,|x|+y=z\\ 0&\text{else}.\end{cases}=\begin{cases}2^{-n}\,,&\xi_{s}(x)=(y,z)\\ 0&\text{else}.\end{cases} \tag{11}\] The set of all such distributions is denoted by \(\mathcal{M}_{\chi}\) The interpretation of \(\xi_{s}\) and \(M_{s}\) is as follows: Any fermionic state has a parity constraint. Thus, we can not encode the distributions \(D_{s}\) corresponding to a random example oracle for the parity function \(\chi_{s}\) directly into a state on \(n+1\) qubits. Rather, we need an auxiliary qubit that takes care of the global parity constraint. This is done via Definition 8. The following lemma states an equivalence between \(M_{s}\) and \(D_{s}\). **Lemma 1**:: _An evaluator (generator) for \(M_{s}\) for some unknown \(s\) implies an evaluator (generator) for \(D_{s}\) and vice versa._ Proof.: In case of an evaluator: Assume an evaluator for \(M_{s}\), let \((x,y)\in\{0,1\}^{n}\times\{0,1\}\). Define \(z=|x|+y\). Then we can simulate an evaluator for \(D_{s}\) as \(\operatorname{\mathtt{Eval}}_{D_{s}}(x,y)=\operatorname{\mathtt{Eval}}_{M_{s }}(x,y,z)\). Conversely, assuming an evaluator for \(D_{s}\), let \((x,y,z)\in\{0,1\}^{n}\times\{0,1\}^{2}\). We first check whether \(|x|+y=z\). If this is the case we compute \(\operatorname{\mathtt{Eval}}_{M_{s}}(x,y,z)\) from \(\operatorname{\mathtt{Eval}}_{D_{s}}(x,y)\). Else we return \(0\). In case of a generator we can simulate \(\operatorname{\mathtt{Gen}}_{D_{s}}\) by simply discarding the last bit of the output of \(\operatorname{\mathtt{Gen}}_{M_{s}}\). Conversely, we can simulate \(\operatorname{\mathtt{Gen}}_{M_{s}}\) from \(\operatorname{\mathtt{Gen}}_{D_{s}}\) by sampling a bit string \((x,y)\) and appending \(|x|+y\). ### Statistical Query Lower Bound With this in mind the following lemma the leads to hardness of learning \(\mathcal{M}_{\chi}\) in the statistical query setting. **Lemma 2**:: (Statistical Query Reduction) _Any statistical query to \(\xi_{s}\) (\(M_{s}\)) with tolerance \(\tau\) can be simulated via two statistical queries to \(\chi_{s}\) (\(D_{s}\)) with tolerance \(\frac{\tau}{2}\)._ Proof.: Let \(\phi:\{0,1\}^{n}\times\{0,1\}^{2}\to[-1,1]\) be a function which we want to query. Then \(\phi\) can be decomposed in three parts \(\phi=\phi_{e}+\phi_{o}+\phi_{0}\) where: * \(\phi_{e}(x,y,z)=:\ \hat{\phi}_{e}(x,y)\) is non-zero only when \((x,y,z)\) is such that \(|x|+y=z=0\). * \(\phi_{o}(x,y,z)=:\ \hat{\phi}_{o}(x,y)\) is non-zero only when \((x,y,z)\) is such that \(|x|+y=z=1\). * \(\phi_{0}\) is such that \(\operatorname{\mathtt{E}}_{M_{s}}[\phi_{0}]=0\) for all \(s\). Thus \[\operatorname{\mathtt{E}}_{(x,y,z)\sim M_{s}}[\phi(x,z,y)]= \operatorname{\mathtt{E}}_{(x,z)\sim D_{s}}\left[\hat{\phi}_{e}(x,y)\right]+ \operatorname{\mathtt{E}}_{(x,z)\sim D_{s}}\left[\hat{\phi}_{o}(x,y)\right]\;, \tag{12}\] which implies that two statistical queries to \(D_{s}\) with tolerance \(\frac{\tau}{2}\) suffice to simulate a statistical query to \(M_{s}\) with tolerance \(\tau\). 
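The reduction in the proof of Lemma 2 is easy to exercise on a toy example. The sketch below (our own illustrative code, with a noiseless statistical query oracle so that the tolerances play no role) simulates a query to \(M_{s}\) by two queries to \(D_{s}\) and compares the result against the exact expectation under \(M_{s}\).

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 4
s = rng.integers(0, 2, size=n)                       # hidden parity string

chi = lambda x: int(x @ s % 2)                       # chi_s(x)

def stat_query_D(phi_hat, tau):
    """Toy Stat_tau(D_s) oracle: answers with the exact expectation (zero noise)."""
    xs = [np.array(b) for b in itertools.product([0, 1], repeat=n)]
    return sum(phi_hat(x, chi(x)) for x in xs) / 2**n

def stat_query_M(phi, tau):
    """Simulate Stat_tau(M_s) by two Stat_{tau/2}(D_s) queries, as in Lemma 2."""
    phi_e = lambda x, y: phi(x, y, 0) if (int(x.sum()) + y) % 2 == 0 else 0.0
    phi_o = lambda x, y: phi(x, y, 1) if (int(x.sum()) + y) % 2 == 1 else 0.0
    return stat_query_D(phi_e, tau / 2) + stat_query_D(phi_o, tau / 2)

def expect_M(phi):
    """Exact E_{(x,y,z) ~ M_s}[phi]; under M_s, y = chi_s(x) and z = |x| + y."""
    xs = [np.array(b) for b in itertools.product([0, 1], repeat=n)]
    return sum(phi(x, chi(x), (int(x.sum()) + chi(x)) % 2) for x in xs) / 2**n

phi = lambda x, y, z: (-1.0) ** (int(x[0]) + y + z)  # an arbitrary bounded test query
print(stat_query_M(phi, 0.1), expect_M(phi))         # the two values coincide
```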
Thus, any statistical query algorithm for learning \(\mathcal{M}_{\chi}\) implies a statistical query algorithm for learning \(\mathcal{D}_{\chi}\) with twice the query complexity. The latter is known to be exponentially hard [1] thus the former must be hard, too. This gives the following lemma. **Lemma 3**:: _Let \(\epsilon<1/4\) and \(\delta<1/2-2^{-3n}\). \((\epsilon,\delta)\)-learning \(\mathcal{M}_{\chi}\) requires at least \(\Omega(2^{n/3-2})\) statistical queries with tolerance \(\tau=\Omega(2^{-n/3+1})\)._ Note, that we did not clarify the representation with respect to which the learner has to learn. This is, because the proof is of information theoretic nature and thus holds for any representation. Proof.: The proof is analogous to the proof of Theorem 4 in [12]. Assume an algorithm \(\mathcal{A}\) which is able to \((\epsilon,\delta)\)-learn \(\mathcal{M}_{\chi}\) for \(\epsilon,\delta<1/2\) using \(q\) many statistical queries with tolerance \(\tau\) and with respect to some representation. Using Lemma 2 we obtain an algorithm \(\mathcal{B}\) that given statistical query access with tolerance \(\tau/2\) to some \(D_{s}\in\mathcal{D}_{\chi}\subset\mathcal{D}_{n+1}\) learns the corresponding representation for some \(M\in\mathcal{D}_{n+2}\) such that \(\mathrm{d}_{\mathrm{TV}}(M,M_{s})<\epsilon\). We can use this representation of \(M\) to compute the corresponding underlying parity function without any further statistical queries. To see this, note that for any \(r\neq t\) it holds \(\mathrm{d}_{\mathrm{TV}}(M_{r},M_{t})\geq 1/2\). Thus, \(s\) is the unique bit string for which \(M_{s}\) is at most \(1/2-\epsilon<1/4\) far from \(M\), which can be found by brute force without any query to \(D_{s}\). In particular, algorithm \(\mathcal{B}\) makes \(2q\) statistical queries with tolerance \(\tau^{\prime}=\tau/2\) to \(\mathcal{D}_{\chi}\) in order to learn the correct parity function. Hence, for \(\tau=\Omega(2^{-n/3+1})\) it must hold that \(q=\Omega(2^{n/3-2})\)[1, Theorem 12]. With the following observation we can thus conclude that learning the corresponding match gate distributions is hard. **Lemma 4**:: \[\mathcal{M}_{\chi}\subset\mathcal{M}(n,O(n))\] (13) Proof.: Let \(\mathrm{par}_{k}\) denote the uniform distribution on even parity bit strings on \(k\) bits \[\mathrm{par}_{k}(x)=\begin{cases}2^{-n+1}\,,&|x|=0\\ 0\,,&\mathrm{else}\,.\end{cases} \tag{14}\] Every distribution \(M_{s}\) can be written as \[M_{s}=\Pi\cdot\left(\mathrm{par}_{m}\otimes\mathrm{par}_{n+2-m}\right), \tag{15}\] where \(\Pi\) is a permutation on \(n+2\) bits consisting of swaps of bits only and thus can be written as a linear depth network of SWAP gates, and where \(m-1\leq n\) is the number of \(1\)'s in \(s\). To see this we note that the parity constraint \(z=|x|+y\) is equivalent to \(z=|x|_{-s}|\) where \(x|_{-s}\) denotes the sub string of \(x\) labelled by the \(0\)'s in \(s\). Next we note that \(\mathrm{par}_{k}\) can be realized as the Born distribution of a depth \(2\) match gate circuit on \(k\) qubits. The corresponding circuit consists only of \(U_{X}(\pi/2)\) gates. To see this we note that \(U_{X}(\pi/2)\) applied to \(|ij\rangle\) flips with probability \(0.5\) both bits to the state \(|\neg i\neg j\rangle\) and leaves both in the \(|ij\rangle\) state with probability \(0.5\) (up to a phase). Thus, the depth two brick work circuit composed of those gates applied to the all zero state creates every even parity bit string with an equal probability. 
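The statement about the depth-two brickwork of \(U_{X}(\pi/2)\) gates can be verified directly on a small statevector; the sketch below (illustrative simulation code of ours, here for four qubits with open boundary) reproduces the uniform distribution over even-parity strings.

```python
import itertools
import numpy as np

def apply_two_qubit(state, gate, q1, q2, n):
    """Apply a 4x4 gate to qubits (q1, q2) of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, (q1, q2), (0, 1)).reshape(4, -1)
    psi = gate @ psi
    psi = np.moveaxis(psi.reshape([2, 2] + [2] * (n - 2)), (0, 1), (q1, q2))
    return psi.reshape(-1)

n = 4
X = np.array([[0, 1], [1, 0]], dtype=complex)
t = np.pi / 2
U = np.cos(t / 2) * np.eye(4) + 1j * np.sin(t / 2) * np.kron(X, X)   # U_X(pi/2), Eq. (5)

psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0                                                          # |0000>
for pair in [(0, 1), (2, 3), (1, 2)]:                                 # depth-two brickwork
    psi = apply_two_qubit(psi, U, *pair, n)

probs = np.abs(psi) ** 2
for bits, p in zip(itertools.product([0, 1], repeat=n), probs):
    if p > 1e-12:
        print(bits, round(float(p), 4))   # every even-parity string appears with probability 1/8
```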
Finally, we observe that applying an FSWAP before the measurement has the same effect as applying a SWAP on the post measurement distribution. In particular, \[D^{\otimes 2}\circ\left(\mathrm{FSWAP}\otimes\mathrm{FSWAP}^{\dagger}\right)= \mathrm{SWAP}\circ D^{\otimes 2}\,, \tag{16}\] where \(\mathrm{FSWAP}\otimes\mathrm{FSWAP}^{\dagger}\) is the quantum channel corresponding to FSWAP and \(D\) is the local computational basis measurement operator which maps the quantum state to its diagonal vector \[D(|i\rangle\!\langle j|)=\delta_{ij}\,|i\rangle. \tag{17}\] Combining this with Equation (15) we find that any \(M_{s}\) can be implemented by a depth \(O(n)\) match gate circuit. Together with Lemma 7 in analogy to Theorem 4, both from [11], we obtain the following corollary. **Corollary 1:** (Local Free Fermion Distributions: Formal version of Informal Theorem 1) _There is no efficient algorithm for learning \(\mathcal{M}\) at depth \(d=\omega(\log(n))\) from inverse polynomial accurate queries. Equivalently, learning \(\mathcal{M}\) at depth \(\Omega(n)\) from statistical queries with tolerance \(\tau=\Omega(2^{-n/3+1})\) requires \(\Omega(2^{n/3-1})\) many queries._ From the perspective of free fermions it is also natural to consider the same question without a locality constraint. On the level of match gates this amounts as counting the FSWAP as a free resource. Comparing with the proof of Lemma 4 we then find the following corollary which is probably the strongest formulation of our result. **Corollary 2:** (Non-local Free Fermion Distributions: Formal version of Informal Theorem 1) _Learning non-local free fermion distributions at constant depth \(d\geq 2\) from statistical queries with tolerance \(\tau=\Omega(2^{-n/3+1})\) requires \(\Omega(2^{n/3-1})\) many queries._ ### Hardness for Learning from Samples Let us now consider the sample oracle. To this end we are going to embed the learning parities with noise (LPN) problem into \(\mathcal{M}\). The LPN problem is defined in the context of _probably approximately correct_ (PAC) learning. **Definition 9:** (PAC learning) Let \(\epsilon,\delta>0\), let \(C\) be a class of boolean functions \(f:\{0,1\}^{n}\to\{0,1\}\) and let \(D\) be a distribution over \(\{0,1\}^{n}\). We say an algorithm \(\mathcal{A}\) efficiently \((\epsilon,\delta)\)-PAC learns \(C\) with respect to \(D\) if, for any \(f\in\mathcal{C}\), the algorithm receives \(N\) samples \((x_{1},f(x_{1})),...,(x_{N},f(x_{N}))\) with \(x\sim D\), and, with probability \(1-\delta\) returns a boolean function \(h\), such that \[\Pr_{x\sim D}\left[f(x)\neq h(x)\right]<\epsilon\,. \tag{18}\] where the run time of \(\mathcal{A}\) (and thus also the number of samples \(N\)) is bounded by \(O(\operatorname{poly}(n,1/\epsilon,1/\delta))\) **Conjecture 1:** (Learning Parities With Noise) _There is a constant \(0<\eta<1/2\) such that there is no efficient algorithm for learning parity functions under the uniform distribution in the PAC model with classification noise rate \(\eta\)._ Thus, LPN states that there is a constant \(\eta\) such that it is hard to learn \(s\) when given access to \(D_{s}^{\eta}\). In [12] it is shown how this implies hardness of learning \(D_{s}^{\eta}\) with respect to an evaluator in the sense of distribution learning. In particular, learning \(\mathcal{D}_{\chi}^{\eta}\) is at least as hard as LPN. We will now embed \(\mathcal{D}_{\chi}^{\eta}\) into \(\mathcal{M}\) in order to obtain the corresponding hardness result for match gates. 
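Before doing so, it is useful to recall what a sample from the LPN problem looks like; a minimal sketch of the noisy-parity example oracle behind Definition 9 and Conjecture 1 (the dimension and noise rate below are arbitrary) reads:

```python
import numpy as np

rng = np.random.default_rng(2)
n, eta = 8, 0.1                              # illustrative dimension and noise rate
s = rng.integers(0, 2, size=n)               # secret parity string

def lpn_sample():
    """One labelled example (x, chi_s(x) + noise), i.e. a sample from D_s^eta."""
    x = rng.integers(0, 2, size=n)
    y = int(x @ s % 2)
    if rng.random() < eta:                   # flip the label with probability eta
        y ^= 1
    return x, y

print([lpn_sample() for _ in range(3)])
```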
Again we start by defining the "fermionized" LPN distribution. **Definition 10:** For any \(s\in\{0,1\}^{n}\) and noise rate \(0\leq\eta\leq 1\) we define the fermionized noisy parity distribution \(M_{s}^{\eta}\) over \(\{0,1\}^{n+2}\) as \[M_{s}^{\eta}(x,y,z)=\begin{cases}(1-\eta)\cdot 2^{-n}\,,&\chi_{s}(x)=y\,\text{and}\,|x|+y=z\\ \eta\cdot 2^{-n}\,,&\chi_{s}(x)=\neg y\,\text{and}\,|x|+y=z\\ 0\,,&\text{else}.\end{cases} \tag{19}\] The set of all such distributions is denoted by \(\mathcal{M}_{\chi}^{\eta}\). Again, the structure is such that, by definition, the distribution \(M_{s}^{\eta}\) is supported only on even parity strings and, on those, encodes \(D_{s}^{\eta}\). This is made clear in the following lemma, similar to Lemma 1. **Lemma 5**:: _The generators \(\mathsf{Gen}(D_{s}^{\eta})\) and \(\mathsf{Gen}(M_{s}^{\eta})\) simulate each other, and hence the corresponding sample oracles simulate each other. Similarly, \(\mathsf{Eval}(D_{s}^{\eta})\) and \(\mathsf{Eval}(M_{s}^{\eta})\) simulate each other._ Proof.: Let \((x,y)\sim D_{s}^{\eta}\). Then \((x,y,|x|+y)\sim M_{s}^{\eta}\). Conversely, let \((x,y,z)\sim M_{s}^{\eta}\). Then \((x,y)\sim D_{s}^{\eta}\). The statement on evaluators follows in both directions from the defining equation \(D_{s}^{\eta}(x,y)=M_{s}^{\eta}(x,y,|x|+y)\) together with \(M_{s}^{\eta}(x,y,\neg(|x|+y))=0\). We are now able to show hardness of learning \(\mathcal{M}_{\chi}^{\eta}\) with respect to an evaluator. **Lemma 6**:: _Under the LPN assumption there is no efficient algorithm for learning \(\mathcal{M}_{\chi}^{\eta}\) with respect to an evaluator._ Proof.: The proof is analogous to the proof of Theorem 16 in [14]. Let \(\mathcal{A}\) be an algorithm that efficiently \((\epsilon,\delta)\)-learns \(\mathcal{M}_{\chi}^{\eta}\). We will now construct an algorithm \(\mathcal{B}\) that solves LPN. We first use Lemma 5 in order to efficiently transform the noisy parity oracle into a sample oracle for some unknown \(M_{s}^{\eta}\). We then run \(\mathcal{A}\) in order to obtain, with probability \(1-\delta\), an evaluator for a distribution \(M\) with \(\mathrm{d}_{\mathrm{TV}}(M,M_{s}^{\eta})<\epsilon\). For a uniform random \(x\) we can, with probability \(1-\epsilon\) over the choice of \(x\), compute \(\chi_{s}(x)\) by checking whether \(M(x,0,|x|)\) or \(M(x,1,|x|+1)\) is larger and return accordingly. Thus we can \((\epsilon,\delta)\)-PAC learn \(\chi_{s}\) with respect to the uniform distribution. We conclude, by Conjecture 1, that \(\mathcal{A}\) must be inefficient. We now show that the fermionized noisy parity distribution is actually contained in \(\mathcal{M}\). **Lemma 7**:: \[\mathcal{M}_{\chi}^{\eta}\subset\mathcal{M}(n,O(n))\,.\] (20) Proof.: Let \(|\psi_{s}\rangle\) be some match gate state that has \(M_{s}\) as its Born distribution (c.f. Lemma 4).
Then, applying \(U_{X}(t)\) to the \(y,z\) register of \(|\psi_{s}\rangle\) with \(\sin^{2}(t/2)=\eta\) results in a state \(|\psi_{s}^{\prime}\rangle\) with \(M_{s}^{\eta}\) as corresponding Born distribution. In particular \[|\,\langle x,y,z|\mathds{1}_{n}\otimes U_{X}(t)|\psi_{s}\rangle |^{2}=\cos(t/2)^{2}|\langle x,y,z|\psi_{s}\rangle|^{2}+\sin(t/2)^{2}|\langle x,\neg y,\neg z|\psi_{s}\rangle|^{2}=M_{s}^{\eta}(x,y,z) \tag{21}\] This leads us to the following concluding corollary: **Corollary 3**:: (Local Free Fermion Distributions: Formal version of Informal Theorem 2) _Assuming LPN, then for any \(d=n^{\Omega(1)}\) there is no efficient algorithm for learning \(\mathcal{M}(n,d)\) with respect to an evaluator from samples._ Similarly to Corollary 2 we can make an even stronger statement in terms of non-local free fermion distributions. **Corollary 4**:: (Non-local Free Fermion Distributions: Formal version of Informal Theorem 2): _Assuming LPN there is no efficient algorithm for learning non-local free fermion distributions with respect to an evaluator from samples at any constant depth \(d\geq 2\)._ ## 4 Conclusion and Discussion In this work we have shown that it is hard to learn free fermion, or match gate distributions. We have shown that algorithms with access only to empirical expectation values require exponentially many queries to learn the underlying distribution. Moreover, if the algorithm is given general sample access to the underlying distribution the problem is still hard. In particular, learning the probability density function, or an evaluator, is still at least as hard as learning parity with noise. A problem that is believed to be computationally hard. Our work gives first results about the (non)-learnability of free fermion distributions. However, many questions remain open. Two immediate questions regard (1) the average case hardness of free fermion distributions in the statistical query setting, similar to the analysis in [23] and (2) the hardness of learning free fermion distributions with a fixed particle number. ## Acknowledgements I am thankful to many fruitfull discussions with, in random order, Andreas Bauer, Marek Gluza, Marcel Hinsche, Marios Ioannu, Lennart Bittel, Ryan Sweke, Jonas Haferkamp and Jens Eisert. This work was supported by the BMBF (QPIC-1, Hybrid), DFG (CRC 183), the BMBK (EniQmA), and the Munich Quantum Valley (K-8). ## Notes 1. We note that it is actually easy to see that any algorithm that tries to learn free fermion distributions by means of two point functions only must have some blind spots: The set of free fermion distributions contains all tensor products of even parity distributions (c.f. Lemma 4). The even parity distribution, on the other hand, has uniform marginals. Thus, there exist pairs of distributions that can not be distinguished by the knowledge of all two point functions.
2302.07917
Evaluating Trade-offs in Computer Vision Between Attribute Privacy, Fairness and Utility
This paper investigates to what degree and magnitude tradeoffs exist between utility, fairness and attribute privacy in computer vision. Regarding privacy, we look at this important problem specifically in the context of attribute inference attacks, a less addressed form of privacy. To create a variety of models with different preferences, we use adversarial methods to intervene on attributes relating to fairness and privacy. We see that certain tradeoffs exist between fairness and utility, privacy and utility, and between privacy and fairness. The results also show that the tradeoffs and interactions between the three goals are more complex and nonlinear than intuition would suggest.
William Paul, Philip Mathew, Fady Alajaji, Philippe Burlina
2023-02-15T19:20:51Z
http://arxiv.org/abs/2302.07917v1
# Evaluating Trade-offs in Computer Vision Between Attribute Privacy, Fairness and Utility ###### Abstract This paper investigates to what degree and magnitude tradeoffs exist between utility, fairness and attribute privacy in computer vision. Regarding privacy, we look at this important problem specifically in the context of attribute inference attacks, a less addressed form of privacy. To create a variety of models with different preferences, we use adversarial methods to intervene on attributes relating to fairness and privacy. We see that that certain tradeoffs exist between fairness and utility, privacy and utility, and between privacy and fairness. The results also show that those tradeoffs and interactions are more complex and nonlinear between the three goals than intuition would suggest. ## 1 Introduction Despite recent successes in developing artificial intelligence (AI) and deep learning (DL) systems [22] for a range of applications including image interpretation, vehicular navigation, robotics, and medicine [17; 29; 28; 31; 9; 25; 33], ensuring these systems are reliable enough to build trust is still an open problem. Trust and ethical concerns about AI systems such as explainability, adversarial vulnerabilities (adversarial machine learning or AML) [16; 10] and importantly, fairness [6; 7] and privacy [32; 26; 18] - the two main foci of this work - have recently put into question the deployment of certain autonomous systems. A common theme underpinning a number of these concerns is being able to discern what information the model is using to make its decision. Fairness across groups, which is trying to ensure performance of the system is equal in some respect between different subpopulations denoted by some sensitive attribute, is typically viewed as the model learning correlations between the sensitive attribute and the task in an excessive manner. Privacy has a number of different forms, but commonly focuses on cases either where the model may memorize the training data used in some manner or, what we address in this work, where features are used as a surrogate for the original data to infer private attributes about individuals, known as attribute privacy. For attribute privacy, the model effectively carries information about the private attribute into the features, either inadvertently or for use at the task at hand, acting similarly to concerns raised in addressing fairness. However, if we want to address these concerns simultaneously, there can be a balancing act for determining what information is used for the task, leading to potential trade-offs between how well the task is performed (utility) as well as the fairness and attribute privacy of the model. Though such trade-offs have been shown for differential privacy [20], to the best of our knowledge, we have not seen such an analysis of trade-offs between all three areas for attribute privacy or computer vision related tasks such as facial recognition. Consequently, in this work, we seek to explore and evaluate how these trade-off present themselves, how severe they are, as well as to what degree we can control them. We leverage adversarial methods as one of the most prominent methods to address group fairness, also using these methods for attribute privacy due to similar motivations. 
We evaluate on a variety of image datasets, focusing on the tasks of classification and facial recognition, and train a diverse family of models with different preferences of each area of concern to better understand their corresponding metrics are affected. ## 2 Prior Work There has been increased interest in looking at aspects of assured AI now that deep learning has demonstrated it can operate on par or beyond human capabilities for tasks including classification, detection in natural images and that machine learning model allow for having similar performance to that of clinicians for diagnostics for pathologies like skin lesions or retinopathies [8]. There are a number of different areas within assuring AI, such as being able to provide explanations, robust to imperceptual attacks [10], as well as our focus, ensuring fair and private operation. Fairness is typically viewed disparities between subpopulations characterized by some attribute, otherwise known as group fairness, and has been investigated in a number of prior studies ranging from natural language processing to medical imagery [36; 5; 36; 21]. A broad taxonomy of methods [11] is organized along the line of approaches that intervene in different parts of the training process, from data preprocessing, calibrating the predictions, and changing the model, including modifying the loss function used which is our approach in this work. Model interventions are a common way to address fairness [15; 4; 1], typically focusing on reducing the undue influence of an attribute within some internal features of the model. This is done via reducing some notion of distance between features of different subpopulations, such as maximum mean discrepancy, or other form of discrimination such as those induced by adversaries [35; 37; 34]. Privacy has a number of different instantiations, such as protecting against membership inference attacks, where an attacker is trying to infer whether a data point was used for training, or achieving some level of differential privacy, controlling how much influence the training dataset has on the final weights. However, our focus in this work is on a third form called attribute inference attacks, where an attacker is attempting to predict a private attribute about the original data from a transformed or masked version acting as a surrogate. In information theory, attempting to ensure robustness to this attack while maintaining some level of utility of the transformed data is known as the privacy funnel [24; 2]. More specifically, the goal of the privacy funnel is to maximize the mutual information between the transformed data and the original data, while keeping the mutual information between transformed data and the private attribute below a certain threshold. Unlike differential privacy, attribute privacy requires the network to actively prune out information about the private attribute in order to defend against an adversarial threat model. Consequently, there is significant overlap between addressing attribute privacy and group fairness, as both arguably only differ in terms of how the included information in the feature is presented downstream [3]. Figure 1: Diagram depicting our overall methodology and framework. In order to create classifiers with different characteristics, we modulate the amount of information related to the sensitive and private attributes. 
Finally, we evaluate utility, fairness, and privacy holistically, answering what interactions exist between them and how they vary with the chosen coefficients. Addressing both group fairness and attribute privacy simultaneously is less studied, however. For differential privacy, [20] appears to be the most prominent approach towards addressing both, though that work focuses on differential privacy. [30; 12] both evaluated how well adversarial learning in facial recognition can help ensure group fairness and attribute privacy. Motivating our work, however, they did not investigate what trends exist between fairness, privacy, and utility. ## 3 Methodology We detail the notation and definitions used, how we construct our framework for experiments, as well as what metrics are used. **Notation and Definitions:** In terms of notation used, we consider the dataset to be comprised of images \(X\), target labels \(Y\), predicted labels \(\hat{Y}\), sensitive labels \(Y_{A}\), and private labels \(Y_{P}\). The models we use consist of: the feature extractor F, converting images to features with weights \(\theta_{F}\); the classifier (C) using the features to predict \(Y\) with weights \(\theta_{C}\); the fairness adversary A attempting to predict \(Y_{A}\) from the features using weights \(\theta_{A}\); and the privacy adversary (P) attempting to predict \(Y_{P}\) from the features using weights \(\theta_{P}\). \(\alpha\) and \(\beta\) are linear coefficients scaling the loss for the fairness and privacy adversaries. We review the fairness criteria we focus on below: **Definition 1** (Fairness Criteria).: _Given the sensitive label \(Y_{A}\), target label \(Y\), and the classifier predictions \(\hat{Y}\), the classifier is said to satisfy:_ * _accuracy parity if_ \(P(\hat{Y}=Y|Y_{A})=P(\hat{Y}=Y)\)_, i.e., the event that the predictions match \(Y\) is independent of_ \(Y_{A}\)_._ * _equality of opportunity if, for a given_ \(Y=y\)_,_ \(P(\hat{Y}|Y_{A},Y=y)=P(\hat{Y}|Y=y)\)_, i.e., the predictions for the class_ \(y\)_, commonly taken to be the positive class, are independent of_ \(Y_{A}\)_._ Demographic parity and equality of odds are two notable criteria we do not evaluate on. For demographic parity, all of the tasks we consider in this work focus on more descriptive attributes, which are less likely to exhibit the allocative bias that demographic parity addresses, where the actual labeling assignment itself is not fair. Similarly, we focus on accuracy parity as a weaker form of equality of odds that does not penalize the model for trading false negatives for false positives. For attribute privacy, we consider the following threat model: **Definition 2**.: _(Attribute Privacy) Given oracle access to \(F(X)\) for input data \(X\), producing features associated with \(X\), access to labeled public data \(\{(X^{i},Y_{P}^{i})\}_{i=1,\dots,n}\), and access to \(\{F(\tilde{X}^{i})\}_{i=1,\dots,\tilde{n}}\), the features corresponding to hidden data \(\tilde{X}\), an adversary is trying to infer on the hidden data the unknown \(\tilde{Y}_{P}\) corresponding to \(\tilde{X}\) better than chance, i.e., achieving \(P(\tilde{Y}_{P}|F(\tilde{X}))>P(\tilde{Y}_{P})\)._ The adversary's objective is to generalize from the features for which they have the corresponding private attributes to the features of the hidden data. This form of privacy is not addressed by the usual forms of differential privacy, which try to lessen the influence of the training data used, as here the feature extractor must actively prune information about \(Y_{P}\) from \(X\).
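As a sketch of this threat model in code (the feature arrays below are random stand-ins for \(F(X)\), and the library calls and shapes are illustrative rather than the authors' implementation), the attack amounts to fitting a linear probe on the public features and evaluating it, here with balanced accuracy, on the hidden features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
d = 128                                            # feature dimension of F(X)

# Random stand-ins for F(X): features of the labelled public split and of the hidden split.
feats_public = rng.normal(size=(2000, d))
yp_public = rng.integers(0, 2, size=2000)          # private labels Y_P known to the attacker
feats_hidden = rng.normal(size=(500, d))
yp_hidden = rng.integers(0, 2, size=500)           # what the attacker tries to infer

# The attack: a linear probe fit on the public features, with class re-weighting so the
# adversary cannot score well simply by predicting a constant class.
attacker = LogisticRegression(max_iter=1000, class_weight="balanced")
attacker.fit(feats_public, yp_public)

attack_score = balanced_accuracy_score(yp_hidden, attacker.predict(feats_hidden))
print("attribute-inference balanced accuracy:", attack_score)   # ~0.5 here: no leakage
```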
### Investigating Trade-Offs between Fairness and Privacy There are a number of potential methods that can remove information about an attribute from features, but a consistently used technique is using adversaries to remove information. Consequently, when training models, the corresponding optimization is: \[\min_{\theta_{C},\theta_{F}}\max_{\theta_{A},\theta_{P}}\mathbb{CE}(Y,C(F(X) ))-\alpha\mathbb{CE}(Y_{A},A(F(X),Y))-\beta\mathbb{CE}(Y_{P},P(F(X),Y)) \tag{1}\] where \(\mathbb{CE}\) denotes the cross entropy and both adversaries are conditional on \(Y\) as we are targeting fairness criteria that are conditional on \(Y\) and for privacy as the attacker can easily acquire \(Y\) from the features. For each dataset, as shown in Figure 1, we then perform a grid search over \(\alpha\) and \(\beta\), with each hyperparameter either zero or going from \(10^{-2}\) to \(10\) logarithmically in 10 steps, and evaluate the corresponding effects on utility, fairness and privacy. **Metrics:** We utilize several metrics in this study, primarily for evaluating area of concern as well as evaluating a combination of these metrics. For utility, we use either the overall accuracy or true positive rate. For fairness, to expand into cases where \(Y_{A}\) is not binary, we take fairness to be the maximum pairwise absolute difference of the utility metric chosen over the subpopulations defined by \(Y_{A}\). For privacy, to match the threat model, we train an separate adversary, a linear model in this work, on features extracted from the validation dataset, mimicking the access to the labeled public data. The linear model is trained using loss re-weighting to ensure the adversary does not be a constant prediction. The hidden data is consequently the test dataset used, and the metric for privacy is the balanced accuracy for reasons noted in the previous sentence. For metrics evaluating the combination of metrics, we use pairwise correlation to measure the linear relationship between different metrics. To incorporate preferences that are not strictly pairwise, we also introduce a metric called Conjunctive Soft Ranking (CSR) to rank the models. CSR is effectively the convex combination between the normalized metrics for each area, normalized so that the worst model over \(\alpha\) and \(\beta\) is \(0\%\) and the best is \(100\%\). \[N(M_{\alpha,\beta})=\frac{M_{\alpha,\beta}-\min_{\alpha^{ \prime},\beta^{\prime}}M_{\alpha^{\prime},\beta^{\prime}}}{\max_{\alpha^{ \prime},\beta^{\prime}}M_{\alpha^{\prime},\beta^{\prime}}-\min_{\alpha^{\prime },\beta^{\prime}}M_{\alpha^{\prime},\beta^{\prime}}} \tag{2}\] \[CSR_{\alpha,\beta}(\gamma_{U},\gamma_{A},\gamma_{P})=100(\gamma_ {U}N(M_{\alpha,\beta}^{U})+\gamma_{A}(1-N(M_{\alpha,\beta}^{A}))+\gamma_{P}(1 -N(M_{\alpha,\beta}^{P}))) \tag{3}\] where \(\gamma_{U}\),\(\gamma_{A}\),and \(\gamma_{P}\) sum to one, and \(M_{\alpha,\beta}\) denotes the metric for the model trained using \(\alpha\) and \(\beta\), with the superscript denoting which area the metric is measuring. To probe these rankings and see how the best \(\alpha\) and \(\beta\) changes as the preference changes, we chose preferences focusing on a primary metric (using a weight of 0.6) and weighing the other two equally (using a weight of 0.2 each). ## 4 Experiments We evaluate on CelebA [23] on both age classification and facial recognition tasks as well as EyePACs [13] and CheXpert [19] on disease classification. 
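Before turning to the experimental details, a small sketch (ours, not the exact evaluation code) of how the CSR of Eqs. (2)-(3) can be computed over a sweep; the array-based interface and variable names are assumptions.

```
# Illustrative sketch of the Conjunctive Soft Ranking in Eqs. (2)-(3): metrics are
# min-max normalized over the (alpha, beta) sweep; utility is rewarded while the
# fairness gap and attack accuracy are penalized.
import numpy as np

def normalize(values):
    """Eq. (2): map a metric to [0, 1] over the sweep (worst model 0, best 1)."""
    values = np.asarray(values, dtype=float)
    return (values - values.min()) / (values.max() - values.min())

def csr(utility, fairness_gap, attack_acc, g_u=0.6, g_a=0.2, g_p=0.2):
    """Eq. (3): convex combination with weights summing to one; higher is better."""
    u = normalize(utility)             # higher utility is better
    a = 1.0 - normalize(fairness_gap)  # smaller gap is better
    p = 1.0 - normalize(attack_acc)    # lower attack accuracy is better
    return 100.0 * (g_u * u + g_a * a + g_p * p)

# Example: rank a sweep; each entry corresponds to one (alpha, beta) setting.
scores = csr(utility=[0.74, 0.76, 0.70], fairness_gap=[0.09, 0.04, 0.02],
             attack_acc=[0.82, 0.75, 0.70], g_u=0.2, g_a=0.6, g_p=0.2)
best = int(np.argmax(scores))
```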
Age classification on CelebA uses accuracy and accuracy gap as its utility and fairness metrics, respectively, while true positive rate and true positive rate gap are used for the remaining tasks. For facial recognition, the true positive rate is calibrated so that there is a false positive rate of \(10^{-3}\) for both subpopulations. For classification tasks, we use a ResNet50 pretrained on ImageNet as our model, and adversaries are three-layer multi-layer perceptrons (MLPs) with ReLU activations. For facial recognition, we instead use the same architecture for both the model and adversaries as [12], using a frozen pretrained ArcFace model and appending a learnable MLP. The model takes in an image of 224 by 224 pixels for all datasets except CheXpert, which matches [19] in using a resolution of 320 by 320 pixels. For training, we use alternating minimization, switching the optimization every batch for classification and every seventy batches for facial recognition. For model selection, we used the model with the best validation loss. In the grid search we perform, we train each set of \(\alpha\) and \(\beta\) with three different random seedings, and for visualization, we create a heatmap for each metric and dataset, grouping \(\alpha\) or \(\beta\) into the categories of Baseline (0, B.), Low ([0.01, 0.05], L.), Medium ([0.1-0.5], M.), and High ([1.0-10.0], H.) and taking the median in each group for use in the heatmap. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Dataset & Baseline Utility & Baseline Fairness & Baseline Privacy & Best Utility & Best Fairness & Best Privacy \\ \hline \hline CelebA & 74.26\% (Acc.) & 9.27\% (Acc. Gap) & 82.54\% (50\%) & 76.26\% & 4.11\% & 70.33\% \\ CelebA FR & 85.69\% (TPR) & 7.53\% (TPR Gap) & 69.92\% (50\%) & 85.69\% & 1.87\% & 65.38\% \\ EyePACs & 63.33\% (TPR) & 21.00\% (TPR Gap) & 79.83\% (33\%) & 70.00\% & 8.00\% & 67.77\% \\ CheXpert & 62.69\% (TPR) & 24.95\% (TPR Gap) & 38.69\% (25\%) & 67.16\% & 14.57\% & 33.47\% \\ \hline \hline \end{tabular} \end{table} Table 1: Results on single metrics on different datasets. We show the baseline metrics without any intervention, where the specific metric for utility and fairness is shown in parentheses for those columns. For privacy, the percentage in parentheses is when the attack accuracy is no better than chance. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Dataset & U./F. Corr. & U./P. Corr. & F./P. Corr. & CSR(\(0.6,0.2,0.2\)) & CSR(\(0.2,0.6,0.2\)) & CSR(\(0.2,0.2,0.6\)) \\ \hline \hline CelebA & 0.50 & -0.01 & -0.14 & 91.04\% (H, H, ) & 88.98\% (H, H, ) & 88.39\% (H, H, ) \\ CelebA FR & -0.92 & -0.08 & 0.15 & 77.67\% (L, M) & 75.11\% (H, M) & 77.64\% (M, M) \\ EyePACs & 0.58 & -0.25 & -0.19 & 76.00\% (H, M) & 92.00\% (H, M) & 92.00\% (H, M) \\ CheXpert & -0.02 & 0.07 & 0.03 & 87.71\% (M, M) & 82.42\% (H, L) & 87.71\% (M, M) \\ \hline \hline \end{tabular} \end{table} Table 2: Metrics incorporating multiple metrics. We negate utility when computing correlations on utility to match direction of improvement. For CSR metrics, letters in parentheses denote which grouping of \((\alpha,\beta)\) the ranking belongs to. Figure 2: Heatmaps on CelebA over the different grouped regularization strengths for \(\alpha\) and \(\beta\). **Datasets:** CelebA is a large facial imagery dataset consisting of celebrities and providing a large number of potential attributes ranging from age to gender.
We evaluate fairness with respect to gender, and use a heuristic called Individual Typology Angle (ITA), thresholded to produce a binary attribute, as a surrogate for skin color for privacy. For age classification, the task is to predict age, and for facial recognition, the task is to match the identity from a database of stored features, a particularly notable case where attribute privacy is important. EyePACs is a retinal imagery dataset where the task is to determine if a particular fundus photo is referable for diabetic retinopathy; fairness is evaluated with respect to ITA again, and privacy is with respect to the quality of the fundus photo as described in [14], a surrogate for locations where imaging capabilities are less developed. CheXpert is a dataset of chest X-rays with a number of different disease classifications, of which we take predicting pleural effusion as the task; fairness is with respect to age and privacy with respect to race. ITA for both datasets is computed as in the procedure detailed in [27], and both classification datasets use training splits that exacerbate fairness issues by balancing the task label while undersampling the positive target and sensitive attributes, while the testing splits are balanced across the trio of target, sensitive, and private attributes. Splits for facial recognition are partitioned by identity without controlling for balancing, while CheXpert reuses the provided splits. **Discussion:** Tables 1 and 2 and Figures 2, 3, 4, and 5 detail our results. Starting with Table 1, we see that we are able to successfully improve both fairness and privacy over our sweep. Improvements in utility are more muted, with the medical datasets seeing larger improvements compared with the tasks on CelebA. For Table 2, the correlations taken between pairs of metrics are typically strongest between utility (U.) and fairness (F.), which is not inherently surprising given that they both deal with performance. More interesting is that only facial recognition has a strong negative correlation, where improved fairness means decreased overall performance, while the classification tasks are either moderately positive or near independent. Part of this may be due to test balancing, but even for the unbalanced CheXpert test split, none of the models had worse utility than the baseline, as seen in the heatmap. Correlation between utility and privacy (P.) was more muted, with EyePACs having the highest magnitude. One caveat here, underscoring the importance of looking at all three metrics concurrently, is that for EyePACs the utility never decreased when intervening on privacy, instead increasing more with a higher \(\alpha\) compared with a higher \(\beta\). For the correlations between fairness and privacy, the intuition regarding the sign of the correlation matches more closely with how the trends behave: for datasets with negative correlations, the models focusing the most on privacy typically have the worst fairness, and vice versa. Finally, for the CSR metrics, we see that typically the same model is Figure 3: Heatmaps on CelebA for facial recognition over the different grouped regularization strengths for \(\alpha\) and \(\beta\).
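For completeness, the following is a minimal sketch (ours, under assumed module interfaces such as adversaries conditioned on \(y\) via a second argument) of one alternating-minimization update for the objective in Eq. (1); it is an illustration rather than the exact training code.

```
# Illustrative sketch of alternating updates for Eq. (1): the adversaries are fit
# to predict Y_A and Y_P from features, while F and C classify and fool them.
# opt_adv holds only adversary parameters; opt_model holds only F and C parameters.
import torch
import torch.nn.functional as nnf

def adversary_step(x, y, y_a, y_p, feat_ext, adv_a, adv_p, opt_adv):
    """Max step: update the fairness/privacy adversaries on frozen features."""
    with torch.no_grad():
        feats = feat_ext(x)
    loss = nnf.cross_entropy(adv_a(feats, y), y_a) + nnf.cross_entropy(adv_p(feats, y), y_p)
    opt_adv.zero_grad()
    loss.backward()
    opt_adv.step()

def model_step(x, y, y_a, y_p, feat_ext, clf, adv_a, adv_p, opt_model, alpha, beta):
    """Min step: classify well while removing information the adversaries rely on."""
    feats = feat_ext(x)
    loss = (nnf.cross_entropy(clf(feats), y)
            - alpha * nnf.cross_entropy(adv_a(feats, y), y_a)
            - beta * nnf.cross_entropy(adv_p(feats, y), y_p))
    opt_model.zero_grad()
    loss.backward()
    opt_model.step()
```

In practice the two steps are alternated every batch for classification and every seventy batches for facial recognition, as noted above.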
2304.11223
A Group-Specific Approach to NLP for Hate Speech Detection
Automatic hate speech detection is an important yet complex task, requiring knowledge of common sense, stereotypes of protected groups, and histories of discrimination, each of which may constantly evolve. In this paper, we propose a group-specific approach to NLP for online hate speech detection. The approach consists of creating and infusing historical and linguistic knowledge about a particular protected group into hate speech detection models, analyzing historical data about discrimination against a protected group to better predict spikes in hate speech against that group, and critically evaluating hate speech detection models through lenses of intersectionality and ethics. We demonstrate this approach through a case study on NLP for detection of antisemitic hate speech. The case study synthesizes the current English-language literature on NLP for antisemitism detection, introduces a novel knowledge graph of antisemitic history and language from the 20th century to the present, infuses information from the knowledge graph into a set of tweets over Logistic Regression and uncased DistilBERT baselines, and suggests that incorporating context from the knowledge graph can help models pick up subtle stereotypes.
Karina Halevy
2023-04-21T19:08:49Z
http://arxiv.org/abs/2304.11223v1
# A Group-Specific Approach to NLP for Hate Speech Detection ###### Abstract Automatic hate speech detection is an important yet complex task, requiring knowledge of common sense, stereotypes of protected groups, and histories of discrimination, each of which may constantly evolve. In this paper, we propose a group-specific approach to NLP for online hate speech detection. The approach consists of creating and infusing historical and linguistic knowledge about a particular protected group into hate speech detection models, analyzing historical data about discrimination against a protected group to better predict spikes in hate speech against that group, and critically evaluating hate speech detection models through lenses of intersecontality and ethics. We demonstrate this approach through a case study on NLP for detection of antisemitic hate speech. The case study synthesizes the current English-language literature on NLP for antisemitism detection, introduces a novel knowledge graph of antisemitic history and language from the 20th century to the present, infuses information from the knowledge graph into a set of tweets over Logistic Regression and uncased DistilBERT baselines, and suggests that incorporating context from the knowledge graph can help models pick up subtle stereotypes. ## 1 Introduction **Disclaimer**: Due to the nature of this work, some example data and evaluation criteria contain offensive language and stereotypes. These items do not reflect the authors' values--the aim of this paper is to detect and mitigate such hateful language and stereotypes. Hate speech detection and mitigation have become increasingly important issues of technical and societal concern, especially with the rise of large social media platforms with high volumes of content. Many production-scale systems treat hate speech detection as a prediction problem that is independent of the group being targeted. However, given privacy and storage restrictions on social media data (Twitter, Meta), it can be difficult to develop and maintain a persistent corpus and effective model for generalized hate speech detection (Arviv et al., 2021; Jikeli et al., 2019). Furthermore, recent work in NLP ethics has demonstrated that large language models, including but not limited to those built for hate speech detection (Davidson et al., 2019; Xu et al., 2021), can produce racist harms specific to particular protected groups. Previous work has also found that hate speech can be subtle (Magu et al., 2017), quickly evolving (Magu et al., 2017; Warner and Hirschberg, 2012), and coded in a way that requires specialized background knowledge of the protected group being targeted (Jikeli et al., 2019). In this paper, we argue that hate speech detection should be complemented with group-specific knowledge and analyses. Our approach applies group-specific analyses in pursuit of three core research questions: 1. How can NLP methods leverage historical and linguistic knowledge to detect harmful text-based digital content? 2. How can NLP methods reveal historical, social, political, and economic patterns or motivations behind spikes in digital content that harms a protected group or its subgroups? 3. What latent stereotypes do NLP models have about people in a protected group and organizations led by members of this group? We ground our approach in a case study of detecting hate speech that targets Jewish people. 
To begin using historical knowledge about Jewish people to better classify antisemitic hate, we introduce KnowledJe (Section 3.1), a knowledge graph (KG) of antisemitic events, people, organizations, products, publications, and slurs from the 20th century to the present. We then investigate augmenting an annotated dataset of antisemitic hate speech with entries from KnowledJe. We find that performance can improve by 6% to 20% in F1 score over initially knowledge-poor models, but we also uncover some of the technical challenges that need to be addressed in pursuing knowledge infusion. Continuing our case study, we synthesize findings from recent work that shows that group-specific historical knowledge can be helpful in predicting spikes in online antisemitism (Section 4.1). Finally, to address question (3) above, we introduce a list of criteria for evaluating latent biases in language models that are unique to antisemitism (Section 5.1). To summarize, our main contributions in this paper are the following: * We describe three areas in which group-specific analysis can advance hate speech detection research. * We create and publicize a novel knowledge graph of antisemitic history and a demonstration of its application in hate speech detection.1 Footnote 1: [https://github.com/ENSCMA2/knowledge](https://github.com/ENSCMA2/knowledge) * We apply a framework for evaluating group-specific harms of a language model. ## 2 Related Work ### General Work on Hate Speech and Ethics Several papers from recent years have worked on general hate speech detection (surveyed in Schmidt and Wiegand 2017), online antisemitism detection and analysis (Jikeli et al., 2019), and racism and sexism in NLP models (surveyed in Blodgett et al. 2020 and Field et al. 2021). Additionally, recent works have experimented with giving NLP models general commonsense knowledge. Sap et al. (2019) developed ATOMIC, an atlas of inferential knowledge about everyday commonsense, and showed that neural models could reason about previously unseen data after acquiring knowledge from the atlas. Alkhamisi et al. (2022) then used ATOMIC to further pre-train a BART-based language model and showed improvements in hate speech detection performance over the BART baseline. Our paper adds to this literature by further exploring knowledge infusion for antisemitic hate speech detection and providing a framework to evaluate antisemitism in NLP models. Some papers have also revealed that hate speech may not only be produced by humans--automatic text generation models may experience neural toxic degeneration, in which they are prone to generate racist, sexist, and otherwise offensive text (Sheng et al. 2019, Gehman et al. 2020). The issue of algorithmic bias does not just manifest itself in the form of hate speech during text generation--recent papers have also called for and presented rigorous frameworks and benchmarks for evaluating algorithmic fairness. On the data level, Gebru et al. (2021) created a process to document machine learning datasets that includes capturing their motivation, composition, collection process, pre-processing/cleaning/labeling process, use cases, intended distribution, and maintenance strategy. On the model level, Mitchell et al. 
(2019) presented a framework for detailed and transparent documentation of machine learning models that includes model details, intended use cases, factors that could influence model performance, model performance metrics, decision thresholds, training and evaluation data, ethical considerations, and caveats and recommendations. At the production level, Raji et al. (2020) proposed a framework that organizations can use to audit their algorithms internally. Tan et al. (2021) also introduced a quantitative method for testing the reliability of NLP models as a way to balance fairness with performance across diverse demographics. Additionally, several researchers have published datasets meant to reveal latent biases in NLP algorithms. Nadeem et al. (2020) created a general challenge set for researchers to use in detecting bias embedded in NLP models,2 and Nangia et al. (2020) created a challenge set for the measurement of bias in masked language models.3 However, these challenge sets have also drawn some criticism--Blodgett et al. (2021) raised concerns about ambiguities and assumptions that make stereotype detection through such benchmark datasets unreliable. We supplement this work by introducing a rubric for evaluation of group-specific biases in NLP algorithms, which can motivate the creation of additional benchmark datasets that reveal these particular biases. Footnote 2: [https://stereoest.mit.edu/](https://stereoest.mit.edu/) Footnote 3: [https://github.com/nyu-mll/crowns-pairs](https://github.com/nyu-mll/crowns-pairs) ### Detecting Antisemitic Hate Speech Antisemitism is defined by the International Holocaust Remembrance Alliance (IHRA) as a negative perception of Jewish people that may be expressed as hatred towards Jewish people in the form of rhetoric and physical harm directed towards people, property, and Jewish community institutions and religious facilities.4 Footnote 4: [https://www.holocaustremembrance.com/resources/working-definitions-charters/working-definition-antisemitism](https://www.holocaustremembrance.com/resources/working-definitions-charters/working-definition-antisemitism) Some works in this area have tackled the task of classifying antisemitic hate speech in real time. One of the earliest such works is Warner and Hirschberg (2012), which introduced a hate speech detection model focused on antisemitism that is trained on Yahoo! news posts and text from antisemitic websites suggested by the American Jewish Congress. Arviv et al. (2021) introduced the Echo Corpus, a dataset of tweets annotated with whether each tweet is hate-mongering, neutral, or a response to a hate-mongering tweet. They then trained BERT and BiLSTM models on the Echo Corpus for both two-class (hate-mongering or not) and three-class classification. Chandra et al. (2021) collected posts from Twitter and Gab and trained a multimodal deep learning model to not only distinguish antisemitism from neutral speech but also to classify antisemitic posts as racial, religious, economic, or political antisemitism. Jikeli et al. (2019) argued that antisemitic hate speech has distinct features that are not captured in generic annotation processes for toxic or abusive language, proposed an antisemitism-specific data annotation approach, and applied the approach to a novel dataset of tweets relating to Jewish people. Jikeli et al.
(2021) also built a preliminary gold standard dataset for detecting antisemitic social media messages, and they further argued in favor of group-specific benchmark datasets for hate speech detection because hate speech looks different when directed against different groups. Our paper furthers the argument in Jikeli et al. (2019) and Jikeli et al. (2021) through a case study on knowledge infusion for antisemitism detection. Other works have examined historical and present-day data to detect trends in the volume and nature of antisemitic hate speech online. We survey and synthesize findings from these works in Section 4.1. ## 3 Enhancing Models with Knowledge Discrimination works differently with different protected groups--each group has its own unique history of oppression, vocabulary for expressing hate, and collection of stereotypes associated with it. While this knowledge can arguably be gleaned by continuously collecting text data from social media and news, regulations on the privacy and storage of such data present challenges for the reproducibility, explainability, and reliability of models trained with this type of data pipeline. One way to address these concerns would be the introduction of persistent knowledge bases of historical and linguistic information about each protected group that hate speech detection models could consistently draw from. This persistent knowledge could serve as helpful explanatory context that gives models a deeper understanding of real-time text that they handle at the production stage. AlKhamissi et al. (2022) show promising results for infusion of commonsense knowledge at the model pre-training stage as a way to enhance performance on hate speech detection tasks. Formally, we call for the creation of knowledge bases about discrimination against a protected group that consist of: * Descriptions, date ranges, and locations of events that targeted the group (e.g. wars, genocides, shootings, propaganda campaigns), * Authors, dates, and descriptions of publications that voice(d) or allude(d) to negative sentiments about the group (e.g. books, films, news outlets), * Descriptions of organizations that discriminate(d) against the group, * Descriptions of products used to harm the group (e.g. murder weapons), * Descriptions of people who took discriminatory action or voiced negative opinions about the group, and * Descriptions of slurs and code words used to refer to the group in a derogatory way. ### KnowledJe: A Knowledge Graph of Antisemitic History We introduce KnowledJe, an English-language knowledge graph of antisemitic history and language from the 20th century to the present. Structured as a JSON file, KnowledJe currently contains 618 entries, which consist of 210 event names, 137 place names, 95 person names, 80 dates (years), 38 publication names, 27 organization names, and 1 product name. Each entry is associated with its own dictionary, which contains descriptions, locations, authors, and dates as applicable. Table 1 shows a few examples of entries in KnowledJe.
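As a concrete illustration of the entry format, the snippet below (ours; the file name and exact field layout are assumptions based on the examples in Table 1) loads the KnowledJe JSON and inspects a few entries.

```
# Illustrative sketch: loading KnowledJe and inspecting entries. The file name
# "knowledje.json" is hypothetical; fields follow the Table 1 examples
# ("type", "date", "events", "location", "author", "description").
import json

with open("knowledje.json", encoding="utf-8") as f:
    kg = json.load(f)  # dict: key -> entry dictionary

entry = kg.get("babi yar massacre", {})
print(entry.get("type"))         # e.g. "event"
print(entry.get("date"))         # e.g. ["1941"]
print(entry.get("description"))  # short prose description of the entry

# Date and place entries point back to related events, which is what the
# augmentation procedure described below exploits when it expands a matched
# date or place into its associated events.
print(kg.get("1923", {}).get("events", []))
```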
We obtain the entries through four Wikipedia articles: "Timeline of antisemitism in the 20th century,"5 "Timeline of antisemitism in the 21st century,"6 the "Jews" section of "List of religious slurs,"7 and "Timeline of the Holocaust."8 To obtain descriptions for each applicable key, we used the following general rules: Footnote 5: [https://en.wikipedia.org/wiki/Timeline_of_antisemitism_in_the_20th_century](https://en.wikipedia.org/wiki/Timeline_of_antisemitism_in_the_20th_century) Footnote 6: [https://en.wikipedia.org/wiki/Timeline_of_antisemitism_in_the_21st_century](https://en.wikipedia.org/wiki/Timeline_of_antisemitism_in_the_21st_century) Footnote 7: [https://en.wikipedia.org/wiki/List_of_religious_slurs#Jews](https://en.wikipedia.org/wiki/List_of_religious_slurs#Jews) Footnote 8: [https://en.wikipedia.org/wiki/Timeline_of_the_Holocaust](https://en.wikipedia.org/wiki/Timeline_of_the_Holocaust) 1. If the concept associated with the key is a slur, the description is the entry in the "Meaning, origin, and notes" column of the "List of religious slurs" article. 2. Otherwise, if the concept associated with the key has its own Wikipedia page and that Wikipedia page has a table of contents, the description is the body of text above the table of contents. If the page exists but does not have a table of contents, the description is the first paragraph of the text on the page. 3. Otherwise, the description is the paragraph given directly under the listing of the year of the event in the Wikipedia article in which the concept was first found. We edit descriptions to remove non-Latin characters and citations. For concepts with multiple names, we create separate keys for each name. We make KnowledJe available to the public.9 Footnote 9: [https://github.com/ENSCMA2/knowledge](https://github.com/ENSCMA2/knowledge) ### Knowledge Infusion for Automatic Detection of Antisemitic Hate Speech We test the efficacy of knowledge infusion by incorporating relevant entries into train and test data for the task of antisemitic hate speech detection. In this experiment, we use the publicly available version of the Echo Corpus from Arviv et al. (2021),10 which consists of 4,630 binarily labeled English-language tweets, 380 of which are labeled as antisemitic hate speech. Footnote 10: [https://github.com/NasLabBgu/hate_speech_detection/](https://github.com/NasLabBgu/hate_speech_detection/) Arviv et al. (2021) collected this data by querying Twitter for tweets containing the ((())) symbol--known as the echo, a common antisemitic dogwhistle--and finding tweets by the users who posted those tweets that contained echo symbols. We add information from KnowledJe to each sample in the Echo Corpus via the process detailed in Algorithm 1.
```
c ← ""
n ← all unigrams, bigrams, and trigrams in the tweet based on the NLTK word tokenizer
for all k in KnowledJe's keys and in n do
    if k["type"] is not "date" then
        c ← c + k["type"] + " name: " + k
    else
        c ← c + k["type"] + ": " + k
    end if
    if k["type"] is "event", "slur", "organization", "product", "person", or "publication" then
        c ← c + k["type"] + " description: " + k["description"]
    else if k["type"] is "date" or "place" then
        n ← n + k["events"]
    end if
end for
return c + "[SEP]" + original tweet
```
**Algorithm 1** Algorithm for adding relevant knowledge into a tweet. We call this knowledge-infused dataset EchoKG, preprocess it in the same way as Arviv et al. (2021), and compare its performance on binary hate speech classification to the performance of the Echo Corpus through two models: Logistic Regression and the uncased DistilBERT base model.12 We use an 80%/20% training-testing data split. For Logistic Regression, we use LogisticRegressionCV from scikit-learn13 and run our experiment with five different values of the random seed that controls the training-testing data split. For DistilBERT, we add two linear layers on top of the pretrained distilbert-base-uncased model and run five trials with different manual_seeds from PyTorch14 and a fixed data split. Consistent with Arviv et al. (2021), we report the accuracy, precision, recall, F1 score, balanced accuracy, and AUCROC scores for each model. Full results are listed in Appendix A. Table 2 shows examples of tweets that the baseline classifier misclassified as non-hateful but that the KnowledJe-enhanced model correctly classified as hateful, and Table 3 shows the same for the DistilBERT model. Overall, given the relatively small sizes of the Echo Corpus and KnowledJe, we cannot make strong statistical conclusions about whether knowledge infusion categorically helps with hate speech detection. However, even these preliminary experiments already reveal some of the benefits and challenges of incorporating knowledge into hate speech classifiers. The examples in Table 2 suggest that KnowledJe helps the model learn about some hateful code words and slurs in an otherwise knowledge-poor environment. The examples in Table 3 suggest that KnowledJe may also help detect subtle allusions to antisemitism. Some open problems remain in regards to how to best leverage such knowledge bases in hate speech detection models. The main question concerns how to best incorporate knowledge into a model. There are at least two conceivable approaches to knowledge infusion--the first is fine-tuning models on unlabeled KG entries, ensuring that the model has been trained on the entire KG before seeing hate speech data (as done in AlKhamissi et al. 2022). The second is fine-tuning models with KG information by prepending entries to hate speech training and/or test data, which only exposes the model to select KG entries that are potentially relevant to the data at hand and has the potential effect of training models to retrieve relevant KG entries. The problem of selecting this information is similar in spirit to training information retriever models in state-of-the-art question answering systems (Petroni et al. 2021; Lewis et al. 2020). Further work is needed to understand the differences in performance, reliability, and fairness of models created with these approaches. Within the second approach, it is also important to investigate how to retrieve the KG information most relevant to a piece of input data. Finally, it is also worth probing what categories of knowledge help or hurt the performance of hate speech detection models when added to the classification pipeline. ## 4 Predicting Trends in Hate Speech In addition to adding context to real-time data, historical knowledge may also serve a predictive function in determining future spikes in hate speech.
Because different stereotypes are associated with different groups, it is important to conduct group-specific investigations of what worldly events trigger hate. This knowledge can then help social media platforms and other organizations be more proactive about detecting hate speech. This section draws some conclusions from recent works that have analyzed spikes in antisemitic hate. In particular, we summarize the literature under three main themes: the correlation of antisemitic language with key social, political, and historical events (Theme 1), the effects of collective responses to antisemitism (Theme 2), and the unique ways in which antisemitism manifests online (Theme 3). \begin{table} \begin{tabular}{c|c} \hline \hline **Key** & **Value** \\ \hline “1923” & (“type”: “date”, \\ & “events”: [“der stürmer”, “beer hall putsch”]] \\ \hline & (“type”: “event”, \\ & “date”: [“1941”], \\ “babi yar massacre” & “location”: [“babi yar”, “babyn yar”], \\ & “description”: \\ & “Nazis and their collaborators shot to death 33,771 \\ & Jews at Babi Yar over the course of two days.”] \\ \hline & (“type”: “publication”, \\ & “date”: [“1943”], \\ & “author”: [“emerich walter emo”, “e.w. emo”], \\ & “description”: \\ “vienna 1910” & “vienna 1910 (German: Wien 1910) is a 1943 German biographical \\ & film directed by Emerich Walter Emo and starring Rudolf Forster, \\ & Heinrich George, and Lil Dagover. It is based on the \\ & life of Mayor of Vienna Karl Lueger. Its \\ & antisemitic content led to it being banned by the Allied \\ & Occupation forces following the Second World War.”) \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of key-value pairs in the KnowledJe graph. ### Analyzing Antisemitism **Theme 1:** _Antisemitic language correlates with the timing of key social, political, and historical events, many of which are societal failures that have no apparent connection to Jewish people._ In a study on xenophobia in Greece, Pontiki et al. (2020) found that attacks on Jewish people increased with the rise of the far-right Golden Dawn party--which normalized antisemitic attitudes--and with Greece's financial crisis--which led to economic conspiracies of Jewish people being singled out as the group to blame for the world's financial troubles. In Hungary, the Kuruc.info site produced content that blamed Jewish people for communism in light of the failure of the post-WWII Hungarian Soviet Republic. Comerford and Gerster (2021) found that the quantity of antisemitic rhetoric on French and German social media channels increased seven-fold and thirteen-fold, respectively, between the first two months of 2020 and the year 2021, owing largely to people blaming Jewish people for the COVID-19 pandemic. Jikeli et al. (2021) found that events that correlated with spikes in antisemitic tweets included a statement by President Trump on the disloyalty of American Jews, the Jersey City shooting, the Monsey Hanukkah Stabbing, Holocaust Memorial Day, a statement about Jewish people by Mayor Bill De Blasio, the circulation of a video of Orthodox Jewish protesters cutting locks at a Brooklyn playground, and a statement by Nick Cannon about Jewish people. Through their diachronic word embedding analysis, Tripodi et al.
(2019) discovered that religious antisemitism peaked in 1855 after Napoleon III's second empire and in 1895 at the beginning of the Dreyfus Affair, while racial, conspiratorial, and sociopolitical antisemitism steadily increased after the 1886 publication of the economically conspiratorial tract _La France juive. Essai d'histoire contemporaine_ by Edouard Drumont. Similarly, Zannettou et al. (2020) found that changepoints in antisemitic rhetoric coincided with major events in Israel and the SWANA region, including US missile attacks on Syrian airbases, terror attacks in Jerusalem, Donald Trump's Muslim travel ban, and the resignation of Steve Bannon from Donald Trump's cabinet. Jikeli et al. (2019) also noted that spikes in online antisemitism co-occurred with events such as the Tree of Life synagogue shooting in Pittsburgh, Passover, the moving of the US Embassy to Jerusalem, Holocaust Memorial Day, and a protest outside the British Parliament about antisemitism. \begin{table} \begin{tabular}{r} \hline [SEP] \#BadJudgmentInSwordsAllowingmassNon-Whiteimmigration! \\ Diversity=WHiteGenocide \\ [SEP] MJsee3paroletrumpoleon#AltRightDevioleAndConquer. \\ It’s(((their))) specialty! \\ [SEP] MontyDravel: ((\#Hollywood))) is a degenerate cesspool that needs draining \\ as much as Washington DC \\ [SEP] wishgranter14: Native AmericansBewareOf((( ForeignInfluence))) \\ #AmericFirst#tcotAltRight#AMAGA#NativeAmericanParty \\ \hline slur name: k*k*e, slur description: From the Yiddish word for ‘circle’, kikel: \\ \multicolumn{2}{r}{illiterate Jews who entered the United States at Ellis Island signed their names with a} \\ \multicolumn{2}{r}{circle instead of a cross because they associated the cross with Christianity. [SEP]} \\ My God has never been called a “dead k*e dea stick” by m*zzies or Jews. \\ I noticed that it’s always white people that say stuff like that. \\ \end{tabular} \end{table} Table 2: Examples in which a baseline Logistic Regression model missed an antisemitic tweet but the KG-enhanced model classified the tweet correctly. Slurs are censored in this table but are spelled out fully in the dataset itself. \begin{table} \begin{tabular}{r} \hline **Text** \\ \hline [SEP] @drskyskull No, he won on the basis of not being the ((other))) candidate. \\ [SEP] Far too much pandering to ((them)) at APAC imo. \\ \hline [SEP] @TheEnclaveIsYou: Shaun King kind of looks like a younger Senator (((WAXMAN)))). \\ [SEP] michaelbabad globebandmail Does ((Michael Babad)) keep a picture \\ of the Economy on his nightstand? \\ \hline \end{tabular} \end{table} Table 3: Examples in which a baseline DistilBERT uncased model missed an antisemitic tweet but the KG-enhanced model classified the tweet correctly. **Theme 2:** _Collective responses to antisemitism are both necessary and helpful._ According to Ozalp et al. (2020), tweets from organizations combatting antisemitism gained more traction than antisemitic tweets, suggesting that "collective efficacy"--the ability of members of a community to control behavior within the said community--could be powerful. Pontiki et al. (2020) further corroborate this suggestion with their finding that antisemitic attacks decreased when Greece's Golden Dawn party was labeled as a criminal party. Comerford and Gerster (2021) recommend that social media platforms address antisemitism as part of a larger digital regulation initiative that includes education about common forms of antisemitism. Zannettou et al.
(2020) suggest that anti-hate organizations such as the Anti-Defamation League and the Southern Poverty Law Center get involved with combatting online antisemitism through data-driven approaches as well. In particular, a combination of corporate, governmental, and communal responses to antisemitism is necessary due to the evolution of much of antisemitic rhetoric into "harmful but legal" territory and due to gray areas in the IHRA's working definition of antisemitism such as newly emerged stereotypes and rhetoric attacking subgroups of Jewish people Comerford and Gerster (2021). **Theme 3:** _Antisemitism often manifests in the form of stereotypes and coded language_ Magu et al. (2017); Chandra et al. (2021); Jikeli et al. (2021); Zannettou et al. (2020); Jikeli et al. (2019). For example, the ((echo)) symbol Arviv et al. (2021); Magu et al. (2017), the code word "Skype" Magu et al. (2017), and the word "liberal" Chandra et al. (2021) are often used to refer to Jewish people negatively. Furthermore, even when Jewish people are referenced directly, antisemitism still appears in forms that do not express outright antagonistic attitudes or plans toward Jewish people. Examples include competitive victimhood through denial of the Holocaust, Holocaust comparisons, and weaponization of the Israeli-Palestinian situation Barna and Knap (2019); Comerford and Gerster (2021), the singularization of the word "Jew" as an implicit indication that Jewish people are one common enemy to be defeated Tripodi et al. (2019), fixation on the Jewish identities of predators, billionaires, and left-wing politicians Barna and Knap (2019), and dual loyalty through the expression of the belief that Jewish people in the diaspora were inherently more loyal to Israel than to their countries of residence Barna and Knap (2019). This suggests that general bias detection methods cannot be readily applied to antisemitism detection. ## 5 Towards Mitigating Group-Specific Harms of NLP Models Hate speech does not just occur by the human hand--state-of-the-art text generation models can also be prompted to produce racist, sexist, and otherwise offensive language Sheng et al. (2019). The problem is also not limited to automatic generation of hate speech--latent biases in language models can have broader real-world harms as well. A first step in detecting such latent biases would be developing group-specific sets of criteria for harmful biases that a language model might hold, which may then be amplified by a hate-inducing prompt. Group specificity is especially important because different types of hate not only diverge from each other but also inform each other in unique ways. Dr. Kimberle Crenshaw articulated this concept using the term "intersectionality," referring to people who experience multiple forms of marginalization along lines such as race, ethnicity, gender, sexual orientation, disability, and class Crenshaw (1989). In this section, we present a set of criteria for harmful biases that a language model might hold toward Jewish people. ### A Scorecard for Assessing Latent Antisemitism in a Language Model We propose a list of unique stereotypes about Jewish people that should be tested while evaluating language models for hateful bias. We compiled this list based on articles from the Anti-Defamation League (Anti-Defamation League), the Jewish Women's Archive Riv-Ellen Prell (2021), My Jewish Learning (Ophir Yarden), and the Jews of Color Initiative Gabi Kuhn et al. (2021).
To what extent does a given language model agree with the following: 1. The myth that Jewish people are all-powerful, controlling the media, the economy, and the weather, among other institutions. 2. The myth that Jewish people are ultimately loyal to Israel, such that Jewish citizens of other countries are disloyal to those countries. 3. The myth that Jewish people are greedy and selfish. 4. The myth that Jewish people killed Jesus. 5. Blood libel--the myth that Jewish people kill Christian children to use them for religious rituals. 6. The myth that the Holocaust did not happen. 7. The stereotype that all Jewish people are white, erasing Black and brown Jewish people of the Sephardi, Mizrahi, and Beta Israel communities, among others. 8. The myth that if a person of color is Jewish, they must have converted and not been born ethnically Jewish. 9. The association between Jewish people and being dirty. 10. The myth that Jewish people are dishonest. 11. The seemingly benign stereotypes that Jewish people are financially successful, smart, and hardworking. 12. Stereotypes about the Jewish body, most prominently that of the hooked nose. 13. The Jewish American Princess stereotype--that Jewish women are greedy, spoiled, materialistic, self-indulgent, and obsessed with their physical appearances. 14. The Jewish Mother stereotypes--the earlier stereotype of Jewish mothers being hardworking, selfless, and dedicated to family, or the later stereotype of Jewish mothers forcefeeding their children and nurturing them in a suffocating way. The concept of "agreement" of a model with a stereotype depends on the model. For example, a word embedding model may be interpreted to agree with item (1) if words like "powerful," "controlling," and "media" are significantly closer to words like "Jewish" than words like "white," "person," or "Christian" in the embedding space. ## 6 Conclusion and Future Work In this paper, we proposed that hate speech detection be augmented with a group-specific research approach that includes historical knowledge infusion, social-scientific analyses of online hate in a worldly context, and group-specific scorecards for bias evaluation in language models in general. We showed how this approach could work on a case study of antisemitism--presenting a novel knowledge graph of recent antisemitic history, applying it for hate speech classification, and providing a group-specific scorecard for evaluating biases of NLP models against Jewish people. We hope our work serves as a springboard for further investigation of group-specific knowledge infusion and NLP ethics research. Despite preliminary progress on group-specific investigation and detection of online hate, challenges remain. For antisemitism in particular, hateful rhetoric does not always mention Jewish people explicitly or follow fixed patterns. Despite some efforts to address this challenge by tracking subtle stereotypes [12, 13, 14], antisemitic usage patterns still change over time [12, 13], which means that knowledge bases and training datasets require frequent updates. In the future, we will further expand the content of KnowledJe to include antisemitic history prior to the 20th century and more thoroughly catalog the people involved in each event and publication. We may also conduct further experiments on whether and how new information about antisemitic events can be learned at test time. Another direction of future work would be extending existing work to more social media platforms, languages, and countries. 
For example, creating non-English hate speech detectors as in Husain et al. (2020), curating non-English datasets of hateful rhetoric as in Vargas et al. (2021), applying diachronic word embedding methods as in Tripodi et al. (2019) to analyze the historical development of antisemitism in countries other than France, and conducting country-specific analyses on group-specific hate as in Barna and Knap (2019) would be informative ways of extending current work to generate insights and recommendations that apply more broadly. ### Limitations This paper is meant to propose a complementary approach to hate speech detection and has suggested a few directions that need to be explored further. The limitations of our work include: 1. EchoKG and KnowledJe are relatively small in size. As such, the statistical significance of the results from our experiments in Section 3.1 is relatively weak. 2. Our work and the work discussed in this paper are based in English. Hate speech may operate very differently, even towards the same group, in other languages due to semantic and cultural differences. 3. EchoKG has a relatively strong data imbalance, with just 380 of 4,630 samples labeled as hate speech. This is another factor that could skew performance. ## Ethics Statement This paper is intended as a position paper that helps practitioners and organizations guide their research on hate speech detection. We recognize that one ethical concern of this work is that there are thousands of demographic groups and intersections thereof, making it difficult for organizations to allocate equal resources to group-specific research on all of them. To that end, we call for the creation of software tools and research methodologies that can be shared across demographic groups--while histories and vocabularies of each group may be different, much of the research infrastructure and software implementation details can be applied more generally. We would also like to clarify that KnowledJe and EchoKG are created for the purpose of research on knowledge infusion for hate speech detection. Other use cases such as text generation based on the knowledge graph or dataset are outside the scope of the intended uses of these resources. Additionally, we note the compute resources and parameters used in our experiments: for DistilBERT-based models run on a free Google Colab GPU, each trial took approximately 15 minutes for 20 epochs of training and 3.25 seconds for evaluation for both the baseline and KG-enhanced models. For logistic regression, each trial took an average of 2 minutes, 40 seconds on one GPU with 4GB of memory allocated. Generating EchoKG from the Echo Corpus took approximately 20 seconds on a GPU with 4GB of memory allocated. Our DistilBERT-based model has 66,366,979 parameters, all of which are trainable. Our logistic regression model has 14,752 parameters, all of which are also trainable. ## Acknowledgements The authors would like to thank Professor Stuart Shieber for his guidance on this research.
2306.07238
A Silicon Nitride Microring Modulator for High-Performance Photonic Integrated Circuits
The use of the Silicon-on-Insulator (SOI) platform has been prominent for realizing CMOS-compatible, high-performance photonic integrated circuits (PICs). But in recent years, the silicon-nitride-on-silicon-dioxide (SiN-on-SiO$_2$) platform has garnered increasing interest as an alternative, because of its several beneficial properties over the SOI platform, such as low optical losses, high thermo-optic stability, broader wavelength transparency range, and high tolerance to fabrication-process variations. However, SiN-on-SiO$_2$ based active devices, such as modulators, are scarce and lack in desired performance due to the absence of free-carrier-based activity in the SiN material and the complexity of integrating other active materials with SiN-on-SiO$_2$ platform. This shortcoming hinders the SiN-on-SiO$_2$ platform for realizing active PICs. To address this shortcoming, in this article, we demonstrate a SiN-on-SiO$_2$ microring resonator (MRR) based active modulator. Our designed MRR modulator employs an Indium-Tin-Oxide (ITO)-SiO$_2$-ITO thin-film stack as the active upper cladding and leverages the free-carrier assisted, high-amplitude refractive index change in the ITO films to affect a large electro-refractive optical modulation in the device. Based on the electrostatic, transient, and finite difference time domain (FDTD) simulations, conducted using photonics foundry-validated tools, we show that our modulator achieves 450 pm/V resonance modulation efficiency, $\sim$46.2 GHz 3-dB modulation bandwidth, 18 nm free-spectral range (FSR), 0.24 dB insertion loss, and 8.2 dB extinction ratio for optical on-off-keying (OOK) modulation at 30 Gb/s.
Venkata Sai Praneeth Karempudi, Ishan G Thakkar, Jeffrey Todd Hastings
2023-06-12T16:56:59Z
http://arxiv.org/abs/2306.07238v1
# A Silicon Nitride Microring Modulator for High-Performance Photonic Integrated Circuits ###### Abstract The use of the Silicon-on-Insulator (SOI) platform has been prominent for realizing CMOS-compatible, high-performance photonic integrated circuits (PICs). But in recent years, the silicon-nitride-on-silicon-dioxide (SiN-on-SiO\({}_{2}\)) platform has garnered increasing interest as an alternative, because of its several beneficial properties over the SOI platform, such as low optical losses, high thermo-optic stability, broader wavelength transparency range, and high tolerance to fabrication-process variations. However, SiN-on-SiO\({}_{2}\) based active devices, such as modulators, are scarce and lack in desired performance due to the absence of free-carrier-based activity in the SiN material and the complexity of integrating other active materials with SiN-on-SiO\({}_{2}\) platform. This shortcoming hinders the SiN-on-SiO\({}_{2}\) platform for realizing active PICs. To address this shortcoming, in this article, we demonstrate a SiN-on-SiO\({}_{2}\) microring resonator (MRR) based active modulator. Our designed MRR modulator employs an Indium-Tin-Oxide (ITO)-SiO\({}_{2}\)-ITO thin-film stack as the active upper cladding and leverages the free-carrier assisted, high-amplitude refractive index change in the ITO films to affect a large electro-refractive optical modulation in the device. Based on the electrostatic, transient, and finite difference time domain (FDTD) simulations, conducted using photonics foundry-validated tools, we show that our modulator achieves 450 pm/V resonance modulation efficiency, \(\sim\)46.2 GHz 3-dB modulation bandwidth, 18 nm free-spectral range (FSR), 0.24 dB insertion loss, and 8.2 dB extinction ratio for optical on-off-keying (OOK) modulation at 30 Gb/s. # 2023 Optica Publishing Group [http://dx.doi.org/10.1364/ao.XX.XXXXXX](http://dx.doi.org/10.1364/ao.XX.XXXXXX) ## 1 Introduction Driven by the rise of CMOS-compatible processes for fabricating photonic devices, photonic integrated circuits (PICs) are inexorably moving from the domain of long-distance communications to chip-chip and even on-chip applications. It is common for the PICs to incorporate optical modulators to enable efficient manipulation of optical signals, which is a necessity for the operation of active PICs. Recent advances in the CMOS-compatible silicon-on-insulator (SOI) photonic platform has fundamentally improved the applicability of SOI PICs [1, 2, 3]. But in the last few years, the silicon-nitride-on-silicon-dioxide (SiN-on-SiO\({}_{2}\)) platform has gained tremendous attention for realizing PICs. This is because the SiN-on-SiO\({}_{2}\) platform has several advantages over the SOI platform. Compared to silicon (Si), the SiN material has a much broader wavelength transparency range (500nm-3700nm), smaller thermo-optic coefficient, and lower refractive index [4]. The lower refractive index of SiN means that SiN offers smaller index contrast with SiO\({}_{2}\) compared to Si. This in turn makes the SiN-on-SiO\({}_{2}\) based monomode passive devices (e.g., waveguides, microring resonators (MRRs)) less susceptible to _(i)_ propagation losses due to the decreased sensitivity to edge roughness [5], and _(ii)_ aberrations in the realized device dimensions due to fabrication-process variations [4]. In addition, the smaller thermo-optic coefficient of SiN makes it possible to design nearly athermal photonic devices using SiN [6]. 
Moreover, SiN devices and circuits exhibit increased efficiency of nonlinear parametric processes compared to Si [7]. Despite these favorable properties of the SiN-on-SiO\({}_{2}\) platform, SiN-on-SiO\({}_{2}\) based active devices such as modulators are scarce and lack in free spectral range (FSR), modulation bandwidth, and modulation efficiency [8]. The lack in efficiency is because of the lack of the free-carriers based activity in the SiN material and the general difficulty of incorporating other active materials with the SiN-on-SiO\({}_{2}\) platform. This in turn limits the use of the SiN-on-SiO\({}_{2}\) platform to realizing only passive PICs. To overcome this shortcoming, there is impetus to heterogeneously integrate active photonic materials and devices with SiN-on-SiO\({}_{2}\) passive devices (e.g., [9, 10, 11, 12, 13, 14, 15, 16, 17]). When such efforts of integrating electro-optically active materials with the SiN-on-SiO\({}_{2}\) platform come to fruition, it will be possible to design high-performance and energy-efficient SiN-on-SiO\({}_{2}\) based active and passive PICs. Different from such prior efforts, in this article, we demonstrate the use of the high-amplitude electro-refractive activity of Indium-Tin-Oxide (ITO) thin films to realize a SiN-on-SiO\({}_{2}\) based optical on-off-keying (OOK) modulator. We show, based on the electrostatic, transient, and finite difference time domain (FDTD) simulations conducted using the photonics foundry-validated tools from Lumerical/Ansys, that our modulator achieves 450 pm/V resonance modulation efficiency, \(\sim\)46.2 GHz 3-dB mod ulation bandwidth, 18 nm free-spectral range (FSR), 0.24 dB insertion loss, and 8.2 dB extinction ratio for optical OOK modulation at 30 Gb/s. _Based on the obtained simulation results, we advocate that our modulator can achieve better performance compared to the existing SiN modulators from prior works._ ## 2 Related Work and Motivation A plethora of Si and SiN based integrated optical modulator designs have been formulated in prior works [18]. But among these modulator designs, MRR based modulators have gained widespread attention due to their high wavelength selectivity, compact size, and compatibility for cascaded dense wavelength division multiplexing (DWDM). Recently several MRR based SiN-on-SiO\({}_{2}\) modulators have also been demonstrated (e.g., [10, 11, 12, 13, 14, 15, 16, 17]). In [10], a graphene integrated electro-optic SiN MRR modulator has been reported. In [12], a hybrid SiN-LiNbO\({}_{3}\) platform based racetrack resonator modulator has been presented. Similarly, SiN modulators based on lead zirconate titanate and zinc oxide/zinc sulphide as active materials are demonstrated in [8] and [13]. In [11], a SiN modulator that achieves tuning via photo-elastic effect has been demonstrated. Compared to these modulator designs from prior works, we present a different, ITO-based electro-refractive SiN-on-SiO\({}_{2}\) modulator that achieves substantially better modulation bandwidth, efficiency, and FSR. ## 3 Design of our SiN-on-SiO\({}_{2}\) modulator In this section, firstly we describe the structure and operating principle of our modulator design. Then, we discuss the characterization results for our modulator that we have obtained through photonics foundry-validated simulations. We also compare our modulator with several SiN based MRR modulators from prior works, in terms of modulation bandwidth, modulation efficiency, and FSR. ### Structure and Operating Principle Fig. 1(a) and Fig. 
1(b), respectively, show the top-view and cross-sectional schematic of our SiN-on-SiO\({}_{2}\) MRR modulator. The active region in the upper cladding of the modulator consists of a stack of two indium tin oxide (ITO) thin films with a silicon dioxide (SiO\({}_{2}\)) thin film in between (an ITO-SiO\({}_{2}\)-ITO stack). From Fig. 1(b), we have a 300 nm thick SiN-based MRR waveguide, two 10 nm thick ITO films, and a 15 nm thick SiO\({}_{2}\) layer. Upon applying a voltage across the ITO-SiO\({}_{2}\)-ITO stack (through the Au pads shown in Fig. 1(a)), free carriers accumulate in the ITO films at the ITO-SiO\({}_{2}\) interfaces for up to 5 nm depth in the ITO films [1], making these accumulation regions in the ITO films high-carrier-density active regions. In these regions, a free-carriers-assisted, large-amplitude modulation in the permittivity and refractive index of the ITO material has been previously reported [1]. We evaluate this free-carriers based index modulation in the ITO films using the Drude-Lorentz model from [19]. Accordingly, as the carrier concentration in the ITO accumulation regions increases, the refractive index of the ITO films decreases. As a result, the effective refractive index of our SiN-on-SiO\({}_{2}\) modulator design from Fig. 1 also decreases, causing a blue shift in its resonance wavelength that in turn causes a transmission modulation at the through port of the modulator. The electro-refractive activity of our SiN-on-SiO\({}_{2}\) MRR modulator design is confined only to the ITO-SiO\({}_{2}\)-ITO cladding. This is different from the Si-SiO\({}_{2}\)-ITO capacitor based MRR modulator from [20], which has the electro-refractive as well as electro-absorptive activities in both its Si-based MRR core and SiO\({}_{2}\)-ITO based cladding.
### Simulations Based Characterization
We performed electrostatic simulations of our ITO-SiO\({}_{2}\)-ITO thin-film stack based SiN-on-SiO\({}_{2}\) modulator in the CHARGE tool of DEVICE suite from Lumerical [21], to evaluate the required voltage levels across the Au pads (Fig. 1(a)) for achieving various free-carrier concentrations in the ITO films. Then, based on the Drude-Lorentz dispersion model from [19], we extracted the corresponding ITO index change values for various free-carrier concentrations. These results are listed in Table 1. Using these index values from Table 1, we modeled our MRR modulator in the MODE tool from Lumerical [21] for finite-difference-time-domain (FDTD) and finite-difference eigenmode (FDE) analysis. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline N & Re & Im & Re & Im & V & \(\Delta\lambda_{r}\) \\ (\(cm^{-3}\)) & (\(\eta_{ITO}\)) & (\(\eta_{ITO}\)) & (\(\eta_{eff}\)) & (\(\eta_{eff}\)) & (V) & (pm) \\ \hline \(1\times 10^{19}\) & 1.9556 & 0.0100 & 1.9735 & 0.0001 & 0 & 0 \\ \hline \(5\times 10^{19}\) & 1.9111 & 0.0403 & 1.9724 & 0.0003 & 1.8 & 830 \\ \hline \(9\times 10^{19}\) & 1.8667 & 0.0896 & 1.9712 & 0.0006 & 3.7 & 1580 \\ \hline \(13\times 10^{19}\) & 1.8222 & 0.1289 & 1.9701 & 0.0011 & 5.5 & 2470 \\ \hline \(17\times 10^{19}\) & 1.7778 & 0.1582 & 1.9692 & 0.0017 & 7.3 & 3210 \\ \hline \(20\times 10^{19}\) & 1.7333 & 0.1874 & 1.9680 & 0.0022 & 9.2 & 4000 \\ \hline \end{tabular} \end{table} Table 1: Free-carrier concentration (N), real index (Re(\(\eta_{ITO}\))), and imaginary index (Im(\(\eta_{ITO}\))) for the ITO accumulation layer in our modulator.
The real and imaginary effective index (Re(\(\eta_{eff}\)), Im(\(\eta_{eff}\))), operating voltage (V), and induced resonance shift (\(\Delta\lambda_{r}\)) for our modulator. Figure 2: Transmission spectra of our modulator. For this analysis, we used the Kischkat model [22] of stoichiometric silicon nitride to model the MRR device. From this analysis, we extracted the effective index change and transmission spectra of our modulator (shown in Table 1 and Fig. 2 respectively) at various applied voltages for the operation around 1.6 \(\mu\)m wavelength (L-band). From Fig. 2, our modulator achieves up to 4 nm resonance shift upon applying 9.2 V across the thin-film stack, which renders the resonance tuning (modulation) efficiency of \(\sim\)450 pm/V. This is a crucial outcome as our MRR modulator has relatively very low overlap between the optical mode and free-carrier perturbation (only 10% of the guided optical mode overlaps with the ITO-based upper cladding) compared to the silicon ITO-based modulators (e.g., [20]). Fig. 3 illustrates the cross-sectional electric-field profiles of the fundamental TE mode evaluated for three different free-carrier concentrations of ITO, namely 1\(\times\)10\({}^{19}\) cm\({}^{-3}\), 9\(\times\)10\({}^{19}\) cm\({}^{-3}\), and 17\(\times\)10\({}^{19}\) cm\({}^{-3}\), at three different cross-sectional regions of our MRR modulator. To evaluate these profiles, we used the variational FDTD (varFDTD) solver of the MODE tool of the DEVICE suite from ANSYS/Lumerical. In fact, Fig. 3 shows a 3\(\times\)3 grid of field profiles. Each row in this grid corresponds to field profiles collected for a particular cross-sectional region of our MRR modulator across different free-carrier concentrations of ITO. Similarly, each column in the grid corresponds to field profiles collected for a particular free-carrier concentration of ITO across three different regions of the modulator. As per the discussion in the previous subsection, the increase in the free-carrier concentration in the ITO layers caused due to the increase in the applied bias across the ITO-SiO\({}_{2}\)-ITO stack, decreases the effective index of our modulator. This in turn induces a blue shift in the resonance wavelength of our modulator. Due to this blue shift in the resonance wavelength, the amount of optical power coupled from the input port into the MRR cavity at the coupling region decreases. This can be clearly observed from the field profiles collected at the coupling region along BB'; as the free-carrier concentration increases from Fig. 3(a) to Fig. 3(c), the intensity of the coupled field in the MRR at the cross-section BB' also decreases. The decrease in the coupled field intensity at BB' naturally results in the decrease of the steady-state field intensity inside the MRR waveguide. As a result, at the cross-section AA', the field intensity can be observed to decrease with the increase in the free-carrier concentration in the ITO layers, as we move from Fig. 3(d) to Fig. 3(f) in the middle row of Fig. 3. Atop the steady-state field intensity inside the MRR cavity, the field intensity at the through port (hence, the output optical power at the through port) of the MRR also decreases naturally with the increase in the free-carrier concentration. This can be observed in the bottom row of Fig. 3. The modulation of the optical output power at the through port with the change in the free-carrier concentration in the ITO layers corroborates the electro-refractive activity of our modulator. 
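As a quick, back-of-envelope reading of Table 1, the resonance shifts can be related to the effective-index changes through the standard microring relation \(\Delta\lambda_{r}\approx\lambda\,\Delta n_{eff}/n_{g}\). The short script below is only an illustrative consistency check, not part of the reported simulation flow; in particular, the group index \(n_{g}\approx 2.2\) is our assumed value, since it is not quoted in the text.

```python
# Back-of-envelope check of the resonance shifts in Table 1, using the standard
# MRR relation  delta_lambda ~ lambda * delta_n_eff / n_g.
# ASSUMPTION: the SiN waveguide group index n_g ~ 2.2 is our own estimate; it is
# not stated in the text.
lam = 1.6e-6                                   # operating wavelength [m] (L-band)
n_g = 2.2                                      # assumed group index

# (voltage [V], Re(n_eff)) pairs taken from Table 1
table1 = [(0.0, 1.9735), (1.8, 1.9724), (3.7, 1.9712),
          (5.5, 1.9701), (7.3, 1.9692), (9.2, 1.9680)]

n0 = table1[0][1]
for v, n_eff in table1[1:]:
    shift_pm = 1e12 * lam * (n0 - n_eff) / n_g          # blue shift [pm]
    print(f"V = {v:4.1f} V  ->  shift ~ {shift_pm:5.0f} pm,"
          f"  efficiency ~ {shift_pm / v:4.0f} pm/V")
# At 9.2 V this reproduces the ~4 nm shift, i.e. roughly the ~450 pm/V
# tuning efficiency quoted above.
```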
In addition, as we move from the top row (coupling region field profiles) to the bottom row (through port field profile) within each column in Fig. 3, the field intensity slightly decreases. This provides evidence that, for each column (i.e., for each specific free-carrier concentration), the optical field intensity undergoes optical-loss-induced attenuation as the light waves travel along the propagation path from the coupling region (top row) to the through port (bottom row). Further, from the spectra in Fig. 2, we evaluate the FSR of our modulator to be \(\sim\)18 nm. We evaluated (using the Lumerical MODE tool) the insertion loss and loaded Q-factor of our modulator to be \(\sim\)0.235 dB and \(\sim\)2000 respectively. We also evaluated the capacitance density of the ITO thin-film stack covering the MRR rim (using the Lumerical CHARGE tool) to be \(\sim\)2.3 fF/\(\mu\)m\({}^{2}\) for the 15 nm thick SiO\({}_{2}\) layer. Moreover, we modeled our modulator in Lumerical INTERCONNECT, to simulate optical eye diagrams for the modulator at 30 Gb/s and 55 Gb/s operating bitrates (Fig. 4). As evident from Fig. 4, our modulator can achieve an 8.2 dB extinction ratio for OOK modulation at 30 Gb/s bitrate.
Figure 3: Cross-sectional electric-field profiles of the fundamental TE mode evaluated at the coupling section (along BB' in Fig. 1(a)) ((a)-(c)), across the rim (along AA' in Fig. 1) ((d)-(f)), and at the through port of our SiN-on-SiO\({}_{2}\) MRR modulator ((g)-(i)), for three different free-carrier concentrations of ITO (Table 1) namely 1\(\times\)10\({}^{19}\) cm\({}^{-3}\) (for (a),(d),(g)), 9\(\times\)10\({}^{19}\) cm\({}^{-3}\) (for (b),(e),(h)), and 17\(\times\)10\({}^{19}\) cm\({}^{-3}\) (for (c),(f),(i)), using the variational FDTD (varFDTD) solver [21].
Figure 4: Optical eye diagrams for (a) 30 Gb/s and (b) 55 Gb/s OOK inputs to our modulator.
### Comparison and Discussion
Table 2 shows a comparison of our SiN-on-SiO\({}_{2}\) MRR modulator with the simulation (marked as *) and fabrication based best-performing nine SiN MRR modulators from prior works ([10]-[13],[14]-[17],[23]), in terms of five key attributes, namely optical modulation bandwidth (O-MB), electrical modulation bandwidth (E-MB), modulation efficiency (ME), FSR, and energy-efficiency (EE). The SiN MRR modulator in [10] achieves higher O-MB compared to the other SiN MRR modulators (Table 2) and our modulator. In contrast, our modulator achieves higher E-MB compared to the other SiN MRR modulators (Table 2). We have also evaluated that our modulator achieves the best effective MB of \(\sim\)46.2 GHz compared to all other SiN MRR modulators, based on the formula of effective MB from [24]. Due to its superior effective MB of \(\sim\)46.2 GHz, our modulator can be easily operated at \(>\)15 Gb/s bitrate to enable ultra-high-speed (potentially beyond Tb/s) DWDM-based PICs while ensuring minimal power-penalty from crosstalk [25]. Moreover, our modulator achieves higher ME compared to other SiN MRR modulators (Table 2). However, in terms of FSR, the SiN MRR modulator demonstrated in [16] achieves a higher FSR compared to the other SiN MRR modulators in Table 2, including our modulator. Nevertheless, our modulator consumes 1.4 pJ/bit, which is significantly better than the energy consumption of the modulator from [16].
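Similarly, the reported FSR and loaded Q-factor can be translated into rough cavity-level quantities with the textbook relations \(\mathrm{FSR}=\lambda^{2}/(n_{g}L)\) and \(\delta f=f_{res}/Q\). The sketch below is only a hedged order-of-magnitude estimate; the group index is again an assumption of ours.

```python
# Rough cavity quantities implied by the reported ~18 nm FSR and loaded Q ~ 2000.
# ASSUMPTION: the group index n_g ~ 2.2 is our own estimate, not a stated value.
import math

c, lam = 3.0e8, 1.6e-6          # speed of light [m/s], resonance wavelength [m]
fsr, Q, n_g = 18e-9, 2000, 2.2  # FSR [m], loaded Q-factor, assumed group index

L_rt = lam**2 / (n_g * fsr)     # MRR round-trip length from FSR = lam^2 / (n_g * L)
f_res = c / lam                 # resonance frequency [Hz]
linewidth = f_res / Q           # cavity linewidth (FWHM) [Hz]
tau_ph = Q / (2 * math.pi * f_res)   # photon lifetime [s]

print(f"round-trip length ~ {L_rt * 1e6:.0f} um")        # ~65 um
print(f"cavity linewidth  ~ {linewidth / 1e9:.0f} GHz")  # ~94 GHz
print(f"photon lifetime   ~ {tau_ph * 1e12:.1f} ps")     # ~1.7 ps
# The ~94 GHz cavity linewidth sits well above the ~46.2 GHz effective modulation
# bandwidth, suggesting the photon lifetime is not the limiting factor here.
```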
Its high energy efficiency and competent FSR of 18 nm make our modulator a favorable candidate for designing high-bandwidth and energy-efficient DWDM-based photonic interconnects for datacenter-scale as well as chip-scale computing and communication architectures. Further, although ITO is not available in the CMOS process flow, it can be deposited at relatively low temperatures (less than 300\({}^{\circ}\)C) on top of the back-end-of-line (BEOL) metal layers of CMOS chips, in an independent manner without interfering with or contaminating the CMOS front-end-of-line (FEOL) and BEOL processes. This makes our SiN-on-SiO\({}_{2}\) modulator an excellent choice for implementing optical interconnect PICs on silicon interposers, to enable ultra-high-bandwidth inter-chiplet communication in emerging multi-chiplet systems [26]. ## 4 Conclusion We have demonstrated an ITO-based SiN-on-SiO\({}_{2}\) MRR modulator, which consists of a stack of ITO-SiO\({}_{2}\)-ITO thin films as the active upper cladding of the SiN MRR core. This active upper cladding of our modulator leverages the free-carrier assisted, high-amplitude refractive index change in the ITO films to affect a large electro-refractive optical modulation in the device. To evaluate the performance of our SiN-on-SiO\({}_{2}\) MRR modulator, we performed electrostatic, transient, and FDTD simulations using the foundry-validated Ansys/Lumerical tools. Based on these simulations, our modulator achieves superior performance with 450 pm/V modulation efficiency, \(\sim\)46.2 GHz 3-dB modulation bandwidth, 18nm FSR, 0.24 dB insertion loss, and 8.2 dB extinction ratio for OOK modulation at 30 Gb/s. This excellent performance of our SiN-on-SiO\({}_{2}\) MRR modulator demonstrates its potential to enhance the performance and energy efficiency of SiN-on-SiO\({}_{2}\) based PICs of the future. **Disclosures.** The authors declare no conflicts of interest
2308.09961
The phenomenon of revivals on complex potential Schrödinger's equation
The mysterious phenomenon of revivals in linear dispersive periodic equations was first discovered experimentally in optics in the 19th century, then rediscovered several times by theoretical and experimental investigations. While the term has been used systematically and consistently by many authors, there is no consensus on a rigorous definition. In this paper, we describe revivals modulo a regularity condition in a large class of Schrödinger's equations with complex bounded potentials. As we show, at rational times the solution is given explicitly by finite linear combinations of translations and dilations of the initial datum, plus an additional continuous term.
Lyonell Boulton, George Farmakis, Beatrice Pelloni
2023-08-19T09:35:04Z
http://arxiv.org/abs/2308.09961v2
# The phenomenon of revivals on complex potential Schrodinger's equation ###### Abstract. The mysterious phenomena of revivals in linear dispersive periodic equations was discovered first experimentally in optics in the 19th century, then rediscovered several times by theoretical and experimental investigations. While the term has been used systematically and consistently by many authors, there is no consensus on a rigorous definition. In this paper, we describe revivals modulo a regularity condition in a large class of Schrodinger's equations with complex bounded potentials. As we show, at rational times the solution is given explicitly by finite linear combinations of translations and dilations of the initial datum, plus an additional continuous term. ###### Contents * 1 Introduction * 2 Classical asymptotic results * 3 Proof of Theorem 1 * 4 The hypotheses of Theorem 1 ## 1. Introduction Recently there have been significant developments in the study of revivals in dispersive evolution equations [18]. These phenomena, which are also called dispersive quantisations or Talbot effects, describe a surprising dichotomy in the pointwise behaviour of the solution of time-evolution equations at specific values of the time variable, the so-called rational times, compared to all other generic times. At rational times, the solution revives the shape of the initial datum by finite superpositions, reflections and re-scalings, with a prescribed simple combinatorial rule. See [9] and references therein. The majority of past investigations about revivals involve equations and boundary conditions with the property that the eigenpairs of the spatial operator can be found explicitly and satisfy precise conditions of modularity and periodicity. The prime example of this is the case of linear dispersive equations with constant coefficients and periodic boundary conditions. In such cases, the techniques for detecting the times at which the revivals appear exploit the specific periodic matching of the eigenvalues, eigenfunctions and boundary conditions. With the help of summations of Gauss type, the infinite series representation of the solution then reduces to a finite sum, characterising the revivals explicitly. Naturally, the direct applicability of this approach is limited. The purpose of this paper is to take a different point of view, by formulating the revivals phenomena in terms of perturbation theory. Concretely, we ask the question of whether a (large) class of equations exhibit them, modulo a regular perturbation. Earlier version of this concept can be found in the works [17, 6, 8, 7], about which we give details below. We consider the class of linear Schrodinger equations \[\begin{array}{ll}\partial_{t}u(x,t)=-i(-\partial_{x}^{2}+V(x))u(x,t),&x\in (0,\pi),\;t>0,\\ u(0,t)=u(\pi,t)=0,&t>0,\\ u(x,0)=f(x),&x\in(0,\pi),\end{array} \tag{1}\] with a complex-valued potential \(V\), subject to Dirichlet boundary conditions, given an initial wavefunction \(f\in L^{2}(0,\pi)\). Our goal is to detect revivals by perturbation from the case \(V=0\). As we shall see below, in the large wavenumber asymptotic regime and for small enough \(V\), the simple but non-trivial structure of (1) supports the combinatorial argument, involving the Gauss summations, that is valid for the free-space equation. Our contribution is summarised in the next theorem. It shows that the solution of (1) at rational times support revivals modulo a continuous term. This result matches a similar earlier finding, reported in [2]. 
Indeed, for \(V=0\) and boundary conditions of the type \(bu(0,t)=(1-b)\partial_{x}u(\pi,t)\) where \(b\in(0,1)\) is a parameter, an analogous conclusion holds true. Here and everywhere below, \(f^{*}\) denotes the odd, \(2\pi\)-periodic extension of the function \(f\) and \(\langle V\rangle=\int_{0}^{\pi}V(y)\mathrm{d}y\) the mean of the potential function. **Theorem 1**.: _Let \(V\in H^{2}(0,\pi)\) with \(\|V\|_{\infty}<\frac{3}{2}\). Then, for \(p,q\in\mathbb{N}\) co-prime numbers, the solution \(u(x,t)\) to (1) at time \(t=2\pi\frac{p}{q}\) is given by_ \[u\Big{(}x,2\pi\frac{p}{q}\Big{)}=w\Big{(}x,2\pi\frac{p}{q}\Big{)}+\frac{1}{q}\ e^{-2\pi i\langle V\rangle\frac{p}{q}}\sum_{k,m=0}^{q-1}e^{2\pi i(m\frac{k}{q}-m^{2}\frac{p}{q})}f^{*}\Big{(}x-2\pi\frac{k}{q}\Big{)}\] _where \(w(\cdot,t)\in\mathrm{C}(0,\pi)\) for all fixed \(t>0\)._ In this statement \(w(x,t)\) depends on \(f(x)\). Moreover, if \(V\) is real-valued, the same conclusion holds with the weaker assumptions \(V\in\mathrm{BV}(0,\pi)\) and \(\|V\|_{\infty}<\infty\). In Section 4 we investigate whether the bound on \(V\) is necessary in the complex case. We interpret the conclusion of Theorem 1 by saying that (1) supports a weak form of revival. Note that the result does not depend on the orthogonality of the family of eigenfunctions. Our choice of Dirichlet boundary conditions, corresponding to potential barriers at both ends of the finite segment, exhibits all the features of our methodology but is free from unnecessary complications. Indeed, while Neumann and other separated boundary conditions can be treated via a similar approach, in these cases multiplicities and the possibility of eigenvalues which are not semi-simple lead to additional technical distractions. Phenomena similar to the ones described here had already been observed. The investigations conducted in [17] and [6] led to versions of Theorem 1 for periodic \(V\) and periodic boundary conditions. The method of proof in these works is different from the one below. It relies on Duhamel's representation and an analysis of the solution in periodic Besov spaces. In a separate development, for the periodic cubic nonlinear Schrödinger equation [8] and the Korteweg-de Vries equation [7], it was proved that at all times the difference between the solution and the linear time-evolution is more regular than the initial datum. This directly implies the appearance of weak revivals at rational times also for these two non-linear equations. Numerical evidence of this effect in the non-linear setting was reported in [4, 5]. Our findings complement all these investigations. The proof of Theorem 1 that we present shows how to directly apply classical perturbation expansions in order to derive the existence of revivals. This approach has two main implications worth mentioning. On the one hand, Theorem 1 confirms the conjecture that the revival effect is prevalent in a large class of quantum systems with discrete spectra, when the asymptotics of the eigenpairs support it [3, Section 6.2, page 116], irrespective of whether the underlying operator is self-adjoint. On the other hand, the present approach might provide a rigorous foundation for tackling the general conjecture formulated in [4, page 12-13]. The latter states that a linear PDE with a dispersion relation that is asymptotic to a polynomial with integer coefficients, in the large wave-numbers regime, should support a type of revival.
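For the free case \(V=0\) (so that \(\langle V\rangle=0\) and \(w\equiv 0\)), the statement of Theorem 1 can be checked numerically in a few lines: a truncated Fourier-sine expansion of the solution at a rational time should agree, up to truncation error, with the finite sum of translated copies of \(f^{*}\). The following Python sketch is our own illustration, not part of the analysis below, and assumes a smooth initial datum whose sine coefficients are known in closed form.

```python
import numpy as np

# Illustration of Theorem 1 in the free case V = 0 (so <V> = 0 and w = 0):
# at t = 2*pi*p/q, the truncated sine-series solution should match the finite
# revival sum of translated copies of the odd, 2*pi-periodic extension f*.
p, q, J = 1, 5, 400                              # rational time and series truncation
x = np.linspace(0.0, np.pi, 1001)

f = lambda s: s * (np.pi - s)                    # smooth initial datum, f(0) = f(pi) = 0

def f_star(s):                                   # odd, 2*pi-periodic extension of f
    y = np.mod(s + np.pi, 2.0 * np.pi) - np.pi   # wrap to [-pi, pi)
    return np.sign(y) * f(np.abs(y))

d = lambda j, s: np.sqrt(2.0 / np.pi) * np.sin(j * s)               # Dirichlet sine basis
c = lambda j: np.sqrt(2.0 / np.pi) * 2.0 * (1 - (-1) ** j) / j**3   # <f, d_j>, exact

t = 2.0 * np.pi * p / q
u_series = sum(c(j) * np.exp(-1j * j**2 * t) * d(j, x) for j in range(1, J + 1))

u_revival = sum(np.exp(2j * np.pi * (m * k / q - m**2 * p / q)) * f_star(x - 2.0 * np.pi * k / q)
                for k in range(q) for m in range(q)) / q

print(np.max(np.abs(u_series - u_revival)))      # small, and shrinking as J grows
```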
Further numerical evidence strengthening the validity of this conjecture can be found in [16] for the case \(V=0\) and various classes of boundary conditions. The structure of the paper is as follows. In Section 2, we lay down the precise eigenpairs asymptotics, in terms of \(V\), that allow the validity of Theorem 1. All the results that we present in that section are classical, but we include crucial details of their proofs. Section 3 is devoted to the proof of Theorem 1. In the final Section 4, we illustrate our main results by means of examples involving complex Mathieu potentials, discussing in this context the optimality of the different assumptions on \(V\). ## 2. Classical asymptotic results Let \(V\in H^{2}(0,\pi)\) be a complex-valued potential function. Denote the Hamiltonian associated to (1) by \[L=-\partial_{x}^{2}+V:H^{2}(0,\pi)\cap H^{1}_{0}(0,\pi)\longrightarrow L^{2}( 0,\pi).\] Since \[\|V\|_{\infty}=\max_{x\in[0,\pi]}|V(x)|<\infty,\] then the operator \(L\) is closed in the domain above, it has a compact resolvent and its adjoint \(L^{*}=-\partial_{x}^{2}+\overline{V}\) has the same domain. In some of the statements below we will impose the extra condition \(\|V\|_{\infty}<\frac{3}{2}\) of the theorem. Moreover, without loss of generality, we will assume in what follows that \(\langle V\rangle=0\). The boundary value problem (1) can be written concisely as \[u_{t}=-iLu,\] \[u(\cdot,0)=f, \tag{2}\] for an initial value \(f\in L^{2}(0,\pi)\). We know from the classical theory of perturbations of one-parameter semigroups that the operator \(iL\) is the generator of a \(C_{0}\) one-parameter semigroup so the equation has a unique solution in the sense of \(L^{2}\) for all initial values. In general, the spectrum of \(L\) is not real. However, it is asymptotically close to the real line. Our objective in this section is to determine precise conditions for the appearence of weak revivals in (2). We then give the proof of Theorem 1 in the next section. The next two lemmas are routine consequences of classical properties of non-self-adjoint Sturm-Liouville operators and their analytic perturbation theory, but we give full details of their validity as they are not standard. They imply that the operator \(L\) has an infinite sequence of eigenpairs (eigenfunctions and eigenvalues) \[\{y_{j},w_{j}^{2}\}_{j=1}^{\infty}\subset(H^{2}\cap H_{0}^{1})\times\mathbb{C}\] with an asymptotic structure close enough to that of the case \(V=0\), so that a revival term can be isolated in part of the solution. **Lemma 1**.: _Let \(w_{j}^{2}\) be the eigenvalues of \(L\). Then, for \(j\to\infty\),_ \[|w_{j}-j|=\frac{a_{3}}{j^{3}}+O(j^{-4})\] _where \(a_{3}\in\mathbb{C}\) is a constant that only depends on \(V\). Moreover, if \(\|V\|_{\infty}<\frac{3}{2}\), then each eigenvalue \(w_{j}^{2}\) is simple._ Proof.: By virtue of the classical Marchenko asymptotic formula [14, Theorem 1.5.1], we know that \[w_{j}=j+\frac{a_{1}}{2j}+\frac{\tilde{a}_{3}}{8j^{3}}+O(j^{-4})\qquad k\to\infty\] where \(a_{1}=\langle V\rangle=0\). This gives the eigenvalue asymptotic. The family of operators \(\alpha\longmapsto T_{\alpha}=-\partial_{x}^{2}+\alpha V\) on the domain \(H^{2}\cap H_{0}^{1}\) is a holomorphic family of type (A) for \(\alpha\in\mathbb{C}\). See [13, Example 2.17, p.385]. As the operator \(T_{0}\) has a compact resolvent, then it follows that \(T_{\alpha}\) have compact resolvent for all \(\alpha\in\mathbb{C}\). Now, assume that \(\|V\|_{\infty}<\frac{3}{2}\). 
Then, for \(|\alpha|\leq 1\), all the eigenvalues of the family \(T_{\alpha}\) are simple, as they lie in the \(\alpha\|V\|_{\infty}\)-neighbourhood of \(\{j^{2}\}_{j=1}^{\infty}\). In particular, for \(\alpha=1\) we know that therefore all the eigenvalues of \(L\) are simple. We will now show that the eigenfunctions of \(L\) are close enough to the orthonormal Fourier-sine basis corresponding to \(V=0\), \[d_{j}(x)=\sqrt{\frac{2}{\pi}}\sin(j\pi x),\quad j\in\mathbb{N}. \tag{3}\] The latter supports revivals. From the second statement of the next lemma, it follows that the solution to (2) is given by \[u(x,t)=\sum_{j=1}^{\infty}\langle f,y_{j}^{*}\rangle e^{-iw_{j}^{2}t}y_{j}(x)\] for \(y_{j}^{*}\) the eigenfunctions of \(L^{*}\) scaled to form a bi-orthogonal set paired with \(y_{j}\), \(\langle y_{j},y_{k}^{*}\rangle=\delta_{jk}\), and the series converges in \(L^{2}\). This will turn out to be crucial for the validity of Theorem 1. **Lemma 2**.: _In the asymptotic regime \(j\to\infty\), the eigenfunctions of \(L\) are such that_ \[y_{j}(x)=c\left[\sin(jx)-\frac{\cos(jx)V_{1}(x)}{2j}+R_{j}(x)\right] \tag{4}\] _where \(\|R_{j}\|_{\infty}=O(j^{-2})\) and \(V_{1}\) is defined by_ \[V_{1}(x)=\int_{0}^{x}V(t)\,\mathrm{d}t. \tag{5}\] _If \(\|V\|_{\infty}<\frac{3}{2},\) then_ \[\left\{\frac{y_{j}}{\|y_{j}\|_{2}}\right\}_{j=1}^{\infty}\] _is a Riesz basis for \(L^{2}(0,\pi)\)._ Proof.: According to [14, Lemma 1.4.1], the eigenfunction associated to \(w_{k}\) is \[y_{j}(x)=A^{+}\Phi_{j}^{+}(x)+A^{-}\Phi_{j}^{-}(x)\] where \[\Phi_{j}^{\pm}(x)=e^{\pm iw_{j}x}\left(1\pm\frac{V_{1}(x)}{2iw_{j}}+\frac{V_{2 }(x)}{(2iw_{j})^{2}}+R_{j}^{\pm}(x)\right)\] for \(\|R_{j}^{\pm}\|_{\infty}=O(w_{j}^{-3})\), \(V_{1}\) given by (5) and \[V_{2}(x) =\int_{0}^{x}LV_{1}(t)\,\mathrm{d}t\] \[=\int_{0}^{x}V(t)V_{1}(t)\,\mathrm{d}t-V(x)+V(0).\] Moreover, again from [14, Lemma 1.4.1], we know that \(R_{j}^{\pm}(0)=0\). Thus, substituting the boundary conditions \(y_{j}(0)=0\), we get \(-A^{-}=A^{+}\). Set the latter equal to \(1\). Hence, \[y_{j}(x)=\sin(w_{j}x)-\frac{\cos(w_{j}x)V_{1}(x)}{2w_{j}}+\tilde{R}_{j}(x) \quad\text{where}\] \[\|\tilde{R}_{j}\|_{\infty}=O(j^{-2}).\] Now, \[\sin\left(jx+O(j^{-3})x\right) =\sin(jx)+s_{j}(x)\qquad\text{and}\] \[\cos\left(jx+O(j^{-3})x\right) =\cos(jx)+c_{j}(x)\] where \[|s_{j}(x)|+|c_{j}(x)| \leq 2\big{|}\cos(O(j^{-3})x)-1\big{|}+2\big{|}\sin(O(j^{-3})x) \big{|}\] \[\leq k_{1}j^{-6}+k_{2}j^{-3}\] for all \(x\in[0,\pi]\). This gives (4), by taking \(R_{j}(x)=\tilde{R}_{j}(x)+s_{j}(x)+c_{j}(x)\). Let us now show that if \(\|V\|_{\infty}<\frac{3}{2}\), then the eigenfunctions form a basis. We aim at applying [13, Theorem 2.20, p.265]. According to [14, Theorem 1.3.1] combined with Lemma 1, the family of eigenfunctions \(\{y_{j}\}\) is complete in \(L^{2}(0,\pi)\). Since it has a dual pair, \(\{y_{j}^{*}\}\), then it is minimal and so therefore exact [11]. Minimality ensures that \(\{y_{j}\}\) is \(\omega\)-independent [11]. This gives two of the hypothesis of [13, Theorem 2.20, p.265]. Now, by virtue of (4) already proven, there exists a constant \(c_{4}>0\) such that \[\left\|\frac{y_{j}}{\|y_{j}\|_{2}}-d_{j}\right\|_{2}\leq\frac{k_{3}}{j}.\] Thus, \[\sum_{j=1}^{\infty}\left\|\frac{y_{j}}{\|y_{j}\|_{2}}-d_{j}\right\|_{2}^{2}<\infty.\] This is the other hypothesis required in [13, Theorem 2.20, p.265] and so indeed1\(\left\{\frac{y_{j}}{\|y_{j}\|}\right\}_{j=1}^{\infty}\) is a Riesz basis of \(L^{2}(0,\pi)\). 
Footnote 1: The conclusion of [13, Theorem 2.20, p.265] does not exactly state that the family is a Riesz basis. But the proof implies that the family is equivalent to an orthonormal basis, hence it is indeed always a Riesz basis. We will discuss the optimality of the condition \(\|V\|_{\infty}<\frac{3}{2}\) and the case where \(V\) is real-valued in Section 4. From the above, we gather that the eigenvalues of \(L\) are \[\lambda_{j}=w_{j}^{2}=j^{2}+\frac{k_{j}}{j^{2}}\qquad\text{for suitable $\{k_{j}\}\in\ell^{\infty}$}. \tag{6}\] Let \[n_{j}(x)=\sqrt{\frac{2}{\pi}}\cos(j\pi x),\quad j\in\mathbb{N}\] be the non-constant orthonormal Fourier-cosine basis. Below we fix the eigenfunctions according to a normalisation of their bi-orthogonal pairs. Concretely, let \[\phi_{j}(x)=\gamma_{j}y_{j}(x)=\gamma_{j}d_{j}(x)-\frac{\gamma_{j}n_{j}(x)V_{1}(x )}{2j}+\gamma_{j}R_{j}(x) \tag{7}\] where \(\|R_{j}\|_{\infty}\leq\frac{c}{j^{2}}\). Without further mention, from now on the non-zero constants \(\gamma_{j}\) are chosen, such that the associated bi-orthogonal sequence \(\{\phi_{j}^{*}\}\) are \[\phi_{j}^{*}(x)=d_{j}(x)-\frac{n_{j}(x)\overline{V}_{1}(x)}{2j}+R_{j}^{*}(x). \tag{8}\] Then, there exist constants \(0<\tilde{\gamma}_{1}<\tilde{\gamma}_{2}<\infty\) such that \[\frac{\tilde{\gamma}_{1}}{j}<|\gamma_{j}-1|<\frac{\tilde{\gamma}_{2}}{j}\qquad \text{and}\qquad\frac{\tilde{\gamma}_{1}}{j}<\big{|}\|\phi_{j}^{*}\|_{2}-1 \big{|}<\frac{\tilde{\gamma}_{2}}{j}, \tag{9}\] for all \(j\in\mathbb{N}\). ## 3. Proof of Theorem 1 We now state and prove a crucial lemma, from which Theorem 1 follows as a corollary. **Lemma 3**.: _If the potential \(V\) is such that \(\langle V\rangle=0\) and \(\|V\|_{\infty}<\frac{3}{2}\), then the solution to the time-evolution equation (2) is such that_ \[u(x,t)=w(x,t)+\sum_{j=1}^{\infty}\langle f,d_{j}\rangle e^{-ij^{2}t}d_{j}(x) \tag{10}\] _where, for each fixed \(t>0\), \(w(\cdot,t)\in C([0,\pi])\)._ Proof.: We separate the proof into four steps. Step 1. Consider the \(L^{2}\) expansion of the initial condition, \[f(x)=\sum_{j=1}^{\infty}c_{j}\phi_{j}(x),\quad c_{j}=\langle f,\phi_{j}^{*}\rangle.\] According to (6), \[u(x,t) =\sum_{j=1}^{\infty}c_{j}e^{-i\lambda_{j}t}\phi_{j}(x)=\sum_{j=1} ^{\infty}c_{j}e^{-i\big{(}j^{2}+\frac{k_{j}}{j^{2}}\big{)}t}\phi_{j}(x)\] \[=\sum_{j=1}^{\infty}c_{j}e^{-ij^{2}t}\left(1-\frac{ik_{j}}{j^{2}} \int_{0}^{t}e^{-\frac{ik_{j}}{j^{2}}s}\,\mathrm{d}s\right)\phi_{j}(x)\] \[=U_{1}(x,t)-U_{2}(x,t)\] where \[U_{1}(x,t)=\sum_{j=1}^{\infty}c_{j}e^{-ij^{2}t}\phi_{j}(x)\] and \(U_{2}(x,t)\) has a similar expression but involving the integral above. We treat these two terms separately. Step 2. Let us show that \(U_{2}\in C^{1}([0,\pi])\). Set \[\zeta_{j}(x,t)=\frac{ic_{j}k_{j}}{j^{2}}e^{-ij^{2}t}\int_{0}^{t}e^{-\frac{ik_{ j}}{j^{2}}s}\,\mathrm{d}s\ \phi_{j}(x).\] Then, \[|\zeta_{j}(x,t)|\leq\frac{\|\{k_{j}\}\|_{\infty}\|\phi_{j}\|_{\infty}|\langle v,\phi_{j}^{*}\rangle|}{j^{2}}tB_{j}\leq\frac{\|\{k_{j}\}\|_{\infty}\|\phi_{j}\| _{\infty}\|v\|_{2}\|\phi_{j}^{*}\|_{2}}{j^{2}}tB_{j}\] where \[B_{j}=\sup_{s\in[0,t]}\left|e^{-\frac{ik_{j}s}{j^{2}}}\right|\leq\sup_{s\in[0,t]}e^{\left|\frac{\ln k_{j}s}{j^{2}}\right|}\leq B<\infty.\] According to (6), the right hand side constant is independent of \(j\). Here \(t\) is fixed. Moreover, by virtue of (7) and (9), \[\max\left\{\|\phi_{j}^{*}\|_{2},\|\phi_{j}\|_{\infty}\right\}\leq c\] for all \(j\in\mathbb{N}\), where \(c>0\) is a constant independent of \(j\). 
Hence, by Weierstrass M-test, \[U_{2}(x,t)=\sum_{j=1}^{\infty}\zeta_{j}(x,t)\] converges absolutely and uniformly to a \(C^{1}\) function, because each component is \(C^{1}\). Step 3. Consider now \(U_{1}(x,t)\). According to (7), \[U_{1}(x,t) =\sum_{j=1}^{\infty}c_{j}\gamma_{j}e^{-ij^{2}t}d_{j}(x)-\sum_{j=1 }^{\infty}\frac{c_{j}\gamma_{j}e^{-ij^{2}t}}{2j}n_{j}(x)V_{1}(x)+\sum_{j=1}^{ \infty}c_{j}\gamma_{j}e^{-ij^{2}t}R_{j}(x)\] \[=u_{3}(x,t)+u_{4}(x,t)+u_{5}(x,t).\] In this step we show that \(u_{4}\) and \(u_{5}\) are continuous in the variable \(x\). From the asymptotic behaviour of \(R_{j}(x)\) and an identical argument as we used in step 2, we know that \(u_{5}(x,t)\) is \(C^{1}\) in the variable \(x\). Now, for \(u_{4}(x,t)\), note that \[\sum_{j=1}^{\infty}|c_{j}|^{2}<\infty,\] because \(\{\phi_{j}^{*}\}\) is a Riesz basis, e.g. [11, Theorem 7.13]. Then, by Cauchy-Schwarz, \[\sum_{j=1}^{\infty}\left|\frac{c_{j}}{j}\right|<\infty.\] Thus, since \(V_{1}(x)\) is a bounded function, for all fixed \(t\), the sequence \[\left\{\frac{c_{j}\gamma_{j}e^{-ij^{2}t}\|V_{1}\|_{\infty}}{2j}\right\}_{j=1}^ {\infty}\in\ell^{1}(\mathbb{N}).\] Hence, for all fixed \(t>0\), the family of sequences (family for \(x\in[0,\pi]\)) \[\left\{\frac{c_{j}\gamma_{j}e^{-ij^{2}t}n_{j}(x)V_{1}(x)}{2j}\right\}_{j=1}^{ \infty}\in\ell^{1}(\mathbb{N}).\] Therefore, by the Dominated Convergence Theorem (in \(\ell^{1}\)), we have that \[\lim_{x\to x_{0}}u_{4}(x,t)=u_{4}(x_{0},t),\] for all \(x_{0}\in[0,\pi]\). That is, \(u_{4}(x,t)\) is a continuous function of the variable \(x\). Step 4. Finally, we consider \(u_{3}(x,t)\). By (8), we know that \[c_{j}=\langle f,d_{j}\rangle-\frac{\langle fV_{1},n_{j}\rangle}{2j}+\langle f,R_{j}^{*}\rangle.\] Then, \(u_{3}(x,t)=u_{6}(x,t)-u_{7}(x,t)+u_{8}(x,t)\), where \[u_{6}(x,t)=\sum_{j=1}^{\infty}\langle f,d_{j}\rangle\gamma_{j}e^{-ij^{2}t}d_{ j}(x),\qquad u_{7}(x,t)=\sum_{j=1}^{\infty}\frac{\langle fV_{1},n_{j}\rangle}{2j} \gamma_{j}e^{-ij^{2}t}d_{j}(x)\] \[\text{and}\quad u_{8}(x,t)=\sum_{j=1}^{\infty}\langle f,R_{j}^{*}\rangle \gamma_{j}e^{-ij^{2}t}d_{j}(x).\] We write \(\gamma_{j}=1+(\gamma_{j}-1)\) and split each term of \(u_{6}(x,t)\), \(u_{7}(x,t)\), \(u_{8}(x,t)\) into two sums. For \(u_{6}(x,t)\) we have that \[u_{6}(x,t)=\sum_{j=1}^{\infty}\langle f,d_{j}\rangle e^{-ij^{2}t}d_{j}(x)+ \sum_{j=1}^{\infty}\langle f,d_{j}\rangle(\gamma_{j}-1)e^{-ij^{2}t}d_{j}(x)\] The first term is in the conclusion of the lemma. To deal with the second one we use (9). By Cauchy-Schwarz, \[\sum_{j=1}^{\infty}\frac{|\langle f,d_{j}\rangle|}{j}<\infty.\] Hence, by Weierstrass M-test again, for each \(t>0\), the series of functions \[\sum_{j=1}^{\infty}\langle f,d_{j}\rangle(\gamma_{j}-1)e^{-ij^{2}t}d_{j}(x)\] converges absolutely and uniformly to a \(C^{1}\) function on \([0,\pi]\). Now, the function \(u_{7}(x,t)\) is written as follows \[u_{7}(x,t)=\sum_{j=1}^{\infty}\frac{\langle fV_{1},n_{j}\rangle}{2j}e^{-ij^{2 }t}d_{j}(x)+\sum_{j=1}^{\infty}\frac{\langle fV_{1},n_{j}\rangle}{2j}(\gamma_{ j}-1)e^{-ij^{2}t}d_{j}(x).\] The first component of \(u_{7}(x,t)\) is continuous as a consequence of an argument similar to that employed for \(u_{4}(x,t)\). 
Indeed, since \(\{n_{j}\}\) is an orthonormal basis and \(fV_{1}\in L^{2}(0,\pi)\), \[\sum_{j=1}^{\infty}\left|\frac{\langle fV_{1},n_{j}\rangle e^{-ij^{2}t}}{2j}\right|\leq\|fV_{1}\|_{2}\frac{\pi}{\sqrt{6}},\] so we can use the sequence \[\left\{\frac{\langle fV_{1},n_{j}\rangle e^{ij^{2}t}}{2j}\right\}_{j=1}^{\infty}\in\ell^{1}\] to ensure continuity, via the Dominated Convergence Theorem. The second component is a \(C^{1}\) function since its Fourier-sine coefficients decay like \(j^{-2}\), due to (9) and the Cauchy-Schwarz inequality. So, in total, \(u_{7}(\cdot,t)\in C([0,\pi])\). Finally, for the function \(u_{8}(x,t)\) we have that \[u_{8}(x,t)=\sum_{j=1}^{\infty}\langle f,R_{j}^{*}\rangle e^{-ij^{2}t}d_{j}(x)+\sum_{j=1}^{\infty}\langle f,R_{j}^{*}\rangle(\gamma_{j}-1)e^{-ij^{2}t}d_{j}(x).\] Here, the first component is \(C^{1}\) in \(x\). Indeed, since \[|\langle f,R_{j}^{*}\rangle|=\left|\int_{0}^{\pi}f(x)\overline{R_{j}^{*}(x)}\,\mathrm{d}x\right|\leq\frac{\pi\|f\|_{2}c}{j^{2}},\] the first component is a function whose Fourier-sine coefficients decay like \(j^{-2}\), so it is continuously differentiable in \(x\). The second component is twice differentiable in \(x\), since due to (9) it represents a function whose Fourier-sine coefficients decay like \(j^{-3}\). Thus, \(u_{8}(\cdot,t)\) belongs to \(C^{1}([0,\pi])\). Collecting the statements about \(u_{k}(x,t)\) from the previous steps, we conclude that \(u(x,t)\) is as claimed in (10). **Remark 1**.: _In the proof of this lemma, note that all components of \(w(x,t)\) are \(C^{1}\), except \(u_{4}(x,t)\) and \(u_{7}(x,t)\)._ The proof of Theorem 1 now follows from Lemma 3 and the combinatorial argument for \(V=0\). Proof of Theorem 1.: We show that, if \(V=0\), then the solution to (2) at rational times \(t_{\rm r}=2\pi\frac{p}{q}\) is \[u(x,t_{\rm r})=\sum_{j=1}^{\infty}\langle f,d_{j}\rangle e^{-ij^{2}t_{\rm r}}d_{j}(x)=\frac{1}{q}\sum_{k,m=0}^{q-1}e^{2\pi i(-m^{2}\frac{p}{q}+m\frac{k}{q})}f^{*}\Big{(}x-2\pi\frac{k}{q}\Big{)}, \tag{11}\] where \(f^{*}\) denotes the odd, \(2\pi\)-periodic extension of \(f\). Therefore, replacing \(V\) with \(V-\langle V\rangle\) if needed, and applying Lemma 3, gives Theorem 1. The proof of (11) is as follows. For \(t\in\mathbb{R}\) we have that \[u(x,t)=\frac{1}{2\pi}\sum_{j=-\infty}^{\infty}e^{-ij^{2}t}\langle f^{*},e^{ij(\cdot)}\rangle e^{ijx}.\] Then, for \(t=t_{\rm r}\) take \(j\equiv m\ (\mathrm{mod}\ q)\) so that \(e^{ij^{2}t_{\rm r}}=e^{im^{2}t_{\rm r}}\). Thus, \[u(x,t_{\rm r})=\frac{1}{2\pi}\sum_{m=0}^{q-1}e^{-im^{2}t_{\rm r}}\sum_{\begin{subarray}{c}j\in\mathbb{Z}\\ j\equiv m\ (\mathrm{mod}\ q)\end{subarray}}\langle f^{*},e^{ij(\cdot)}\rangle e^{ijx}.\] Let the summation on the right-hand side be denoted by \(T\). Since \[\sum_{k=0}^{q-1}e^{2\pi i(m-j)\frac{k}{q}}=\begin{cases}q&j\equiv m\ (\mathrm{mod}\ q)\\ 0&j\not\equiv m\ (\mathrm{mod}\ q),\end{cases}\] we have \[T =\frac{1}{q}\sum_{k=0}^{q-1}e^{2\pi im\frac{k}{q}}\sum_{j\in\mathbb{Z}}e^{-2\pi i\frac{k}{q}j}\langle f^{*},e^{ij(\cdot)}\rangle e^{ijx}\] \[=\frac{1}{q}\sum_{k=0}^{q-1}e^{2\pi im\frac{k}{q}}\sum_{j\in\mathbb{Z}}\Big{\langle}f^{*}\Big{(}\cdot-\frac{2\pi k}{q}\Big{)},e^{ij(\cdot)}\Big{\rangle}e^{ijx}\] \[=\frac{1}{q}\sum_{k=0}^{q-1}e^{2\pi im\frac{k}{q}}f^{*}\Big{(}x-\frac{2\pi k}{q}\Big{)}.\] Hence, (11) holds true. **Remark 2**.: _For general bounded complex potential \(V\), we know from the Dyson expansion that (10) holds true for_ \[w(x,t)=\sum_{k=1}^{\infty}w_{k}(x,t),\] _where \(w_{k}(x,t)\) are explicitly given in terms of integrals of regular functions.
These functions are continuous for \(x\in(0,\pi)\). By tracking the convergence of the series, it might be possible to establish its continuity, and therefore extend the results of this section to all \(\|V\|_{\infty}<\infty\). See the ideas described in [1, §7]._
## 4. The hypotheses of Theorem 1
In this section we examine the optimality of the hypotheses of Theorem 1, in terms of the size and the regularity of the potential. Firstly, we note that for \(L\) self-adjoint, the assumptions on \(V\) can be relaxed. According to the asymptotic expansions reported in [10], the identity (4) is valid for any \(V:[0,\pi]\longrightarrow\mathbb{R}\) of bounded variation irrespective of the size of \(\|V\|_{\infty}<\infty\). Therefore, by following the same method of proof presented above, it directly follows that the conclusion of Theorem 1 still holds true under these modified hypotheses on \(V\). That is, the solution to (1) at rational times is given by \[u\Big{(}x,2\pi\frac{p}{q}\Big{)}=w\Big{(}x,2\pi\frac{p}{q}\Big{)}+\frac{1}{q}\ e^{-2\pi i\langle V\rangle\frac{p}{q}}\sum_{k,m=0}^{q-1}e^{2\pi i(m\frac{k}{q}-m^{2}\frac{p}{q})}f^{*}\Big{(}x-2\pi\frac{k}{q}\Big{)} \tag{12}\] for a suitable function \(w(\cdot,t)\) continuous in \(x\in[0,\pi]\). In the more general non-self-adjoint setting, for \(\|V\|_{\infty}>\frac{3}{2}\), we only know from the available asymptotic formulas that, for large wavenumbers, all the eigenvalues will be simple and the corresponding eigenfunctions will form a basis of a subspace \(\mathcal{S}\) of finite co-dimension. Despite this, \(iL\) is still the generator of a one-parameter semigroup, see Remark 2 above. The solution to (1) exists and is unique for all \(f\in L^{2}\). Moreover, (2) will have a solution with an \(L^{2}\)-convergent eigenfunction expansion and a version of Theorem 1 can be recovered for all \(f\in\mathcal{S}\). We now present an example of a purely imaginary \(V\in C^{\infty}\) with \(\|V\|_{\infty}=2\), for which (12) appears to still be valid. For this purpose we choose for \(V\) a purely imaginary Mathieu potential. Let \(q\in\mathbb{C}\) and \(V(x)=2q\cos(2x)\). Then \(\langle V\rangle=0\). The eigenvalue equation associated to the operator \(L\) is Mathieu's equation. The eigenvalues of \(L\) are \(\omega_{j}^{2}=b_{j}(q)\), the Mathieu characteristic values, which satisfy \[b_{j}(q)=j^{2}+\frac{1}{2(j^{2}-1)}q^{2}+O(q^{4})\] as \(|q|\to 0\). The corresponding eigenfunctions are the Mathieu functions \[\phi_{j}(x)=\mathrm{se}_{j}(x,q)\] for \(j\in\mathbb{N}\). See [12, §7.4] and also [15, 28.2-7].
Figure 1. Here \(V(x)=2q\cos(2x)\) and \(t=\frac{2\pi}{5}\). We show an approximation of \(u(x,t)\) to \(100\) modes. The blue curves are the solutions as complex-valued functions of \(x\), the orange curves correspond to projections of the real and imaginary parts of these solutions, and the black curves are the projections corresponding to the curves traced by the solutions in the complex plane for \(x\in[0,\pi]\). The figures shown match (a) \(q=\frac{i}{4}\), (b) \(q=\frac{i}{2}\), (c) \(q=\frac{3i}{4}\) and (d) \(q=i\).
In Figure 1 we set \(f(x)=\chi_{[\frac{3\pi}{8},\frac{5\pi}{8}]}(x)\) and show a numerical approximation to \(100\) modes of \(u(x,t)\) at time \(t=\frac{2\pi}{5}\) for purely imaginary \(q\) with increasing modulus. As \(|q|\) increases, we aim to investigate numerically how the correction \(w(x,t)\) affects the revivals part of the solution, since the conclusions of Theorem 1 only hold for \(|q|<\frac{3}{4}\).
The graphs shown in Figure 2 strongly suggest that the revival structure of the solution prevails in all cases shown.
Figure 2. Here \(V(x)=2q\cos(2x)\) and \(t=\frac{2\pi}{5}\). We show an approximation of \(w(x,t)\) to \(100\) modes. The blue curves are the complex-valued functions of \(x\), the orange curves correspond to projections of the real and imaginary parts of these, and the black curves correspond to the graphs traced by \(w(x,\frac{2\pi}{5})\) on the complex plane for \(x\in[0,\pi]\). The figures shown match (a) \(q=\frac{i}{4}\), (b) \(q=\frac{i}{2}\), (c) \(q=\frac{3i}{4}\) and (d) \(q=i\).
Indeed, in Figure 2 we show a \(100\)-mode approximation of \[u(x,t)-\sum_{j=1}^{\infty}\langle f,d_{j}\rangle e^{-ij^{2}t}d_{j}(x)\] for the same data as in Figure 1. For (a)-(b) we confirm the shape of \(w(x,t)\). For (c)-(d), note that even when \(q=\frac{3i}{4}\) and \(q=i\), the difference appears to still be continuous.
## Acknowledgements
We kindly thank David Smith for his valuable comments made during the preparation of this manuscript. The work of George Farmakis was funded by EPSRC through Heriot-Watt University support for Research Associate positions, under the Additional Funding Programme for the Mathematical Sciences.
2306.01891
DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System
This paper presents a robust approach for a visual parallel tracking and mapping (PTAM) system that excels in challenging environments. Our proposed method combines the strengths of heterogeneous multi-modal visual sensors, including stereo event-based and frame-based sensors, in a unified reference frame through a novel spatio-temporal synchronization of stereo visual frames and stereo event streams. We employ deep learning-based feature extraction and description for estimation to enhance robustness further. We also introduce an end-to-end parallel tracking and mapping optimization layer complemented by a simple loop-closure algorithm for efficient SLAM behavior. Through comprehensive experiments on both small-scale and large-scale real-world sequences of VECtor and TUM-VIE benchmarks, our proposed method (DH-PTAM) demonstrates superior performance in terms of robustness and accuracy in adverse conditions, especially in large-scale HDR scenarios. Our implementation's research-based Python API is publicly available on GitHub for further research and development: https://github.com/AbanobSoliman/DH-PTAM.
Abanob Soliman, Fabien Bonardi, Désiré Sidibé, Samia Bouchafa
2023-06-02T19:52:13Z
http://arxiv.org/abs/2306.01891v3
# DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System
###### Abstract
This paper presents a robust approach for a visual parallel tracking and mapping (PTAM) system that excels in challenging environments. Our proposed method combines the strengths of heterogeneous multi-modal visual sensors, including stereo event-based and frame-based sensors, in a unified reference frame through a novel spatio-temporal synchronization of stereo visual frames and stereo event streams. We employ deep learning-based feature extraction and description for estimation to enhance robustness further. We also introduce an end-to-end parallel tracking and mapping optimization layer complemented by a simple loop-closure algorithm for efficient SLAM behavior. Through comprehensive experiments on both small-scale and large-scale real-world sequences of VECtor and TUM-VIE benchmarks, our proposed method (DH-PTAM) demonstrates superior performance in terms of robustness and accuracy in adverse conditions, especially in large-scale HDR scenarios. Our implementation's research-based Python API is publicly available on GitHub for further research and development: [https://github.com/AbanobSoliman/DH-PTAM](https://github.com/AbanobSoliman/DH-PTAM).
Stereo, events, SuperPoint, R2D2, SLAM.
## I Introduction
Simultaneous Localization and Mapping (SLAM) is pivotal in robotics and computer vision, aiming to chart unknown terrains while discerning an agent's location. Among the key SLAM contributions is the Parallel Tracking and Mapping (PTAM) method [1], which uniquely separates tracking and mapping into parallel threads for augmented efficiency and real-time performance in monocular systems. However, PTAM faced scale ambiguity challenges inherent to monocular SLAM. Addressing this issue, its successor, Stereo Parallel Tracking and Mapping (S-PTAM) [2], leveraged stereo vision to extract depth information directly, eliminating scale ambiguity and fortifying robustness. In recent years, learning-based feature extraction and description methods [3, 4] and deep learning-based approaches [5] have been applied to improve robustness. Visual Odometry (VO), an integral component of SLAM, has predominantly depended on conventional cameras to determine motion. However, such methods often fail in high dynamic range (HDR) scenarios where lighting conditions fluctuate [6] (see Fig. 1). Fortunately, the innovation of event cameras, also termed asynchronous or dynamic vision sensors (DVS) [7], offers a groundbreaking solution. Unlike their traditional counterparts that capture frames at fixed intervals, event cameras relay a continuous stream of "events" showcasing pixel-wise brightness alterations. This not only empowers them to operate at unparalleled speeds and in dimly lit conditions but also markedly diminishes motion blur [8]. With the ability to adeptly handle swift motions, scenes with significant HDR, and challenging lighting, event cameras emerge as superior alternatives, increasing both SLAM and VO's robustness and precision in environments that would confound traditional cameras. The distinctive event-based design makes these sensors incredibly adept at tracking fast-moving objects, setting them apart as an invaluable asset to visual odometry, especially under adverse conditions. Deep learning-based features are more robust than traditional methods [3, 4], as they can learn from large amounts of data and generalize well to unseen data.
They are also more invariant to changes in viewpoint and lighting, making them suitable for real-world applications. Recently, pre-trained models have been widely adopted in computer vision and have achieved state-of-the-art performance in object detection, semantic segmentation, and image classification tasks. This paper proposes a deep hybrid stereo events-frames parallel tracking and mapping system (see Fig. 2) that significantly improves SLAM accuracy and robustness in dynamic environments.
Fig. 1: Experiments on school-scooter and corner-slow sequences from the VECtor dataset show the estimated trajectory with the constructed scene map (green dotted rectangle). The red dotted rectangle highlights an HDR use-case where DH-PTAM estimates the trajectory continuously based on the two fusion modes (Dynamic Vision Sensor (DVS) or Active Pixel Sensor (APS) biased). APS: denotes the standard camera's global shutter frames.
This system combines the advantages of stereo RGB and event cameras, which can capture visual information at high temporal resolution. The use of deep learning techniques in this system allows for the extraction of robust features from the stereo hybrid image and event frames, which improves the accuracy of the feature-matching process and the estimation of the camera pose. The main contributions can be summarized as follows:
* We propose an end-to-end parallel tracking and mapping (PTAM) approach based on a novel spatio-temporal synchronization of stereo visual frames with event streams.
* We propose a simple mid-level feature loop-closure algorithm for prompt SLAM behavior based on a learning-based feature description method to maximize robustness.
* DH-PTAM's effectiveness is evaluated in both stereo event-aided and image-based visual SLAM modes, achieving improved accuracy when incorporating event information, shown in an ablation study on the CPU versus the GPU of a consumer-grade laptop.
This paper is organized as follows: Section II gives a brief overview of the state-of-the-art SLAM methods. Section III provides an in-detail overview of the proposed method and offers insights into the novel parts of the algorithm. Section IV comprehensively evaluates the algorithm on the most recent VECtor [9] and TUM-VIE [10] benchmarks, along with defining the limitations. Section V summarizes the experiments' main observations, the proposed method's behavioral aspects, and the start points for future works.
## II Related Work
**Events-Frames hybridization.** Event-aided systems leverage the high-quality representations that events can produce after processing, especially in dynamic and dimmed environments where RGB camera frames fail. Some of the well-known event representations are event image (EI) [12], Time Surfaces (TS) [13], Event Spike Tensor (EST) [14], and recently Event 3-Channel Tensor (E3CT) [11]. Others [7] build the front-end on an Event Generation Model (EGM) [15] or construct motion-compensated event frames (MEF) [16] aided by a gyroscope. Towards a traditional frame reconstruction from events, [17] proposes a Log Intensity Reconstruction (LIR), a model-based method, and [18] proposes Spade-e2vid, a learning-based method. Indirect methods [19], such as frame-based approaches, extract keypoints from the input data in the front-end. This front-end stage typically involves detecting and matching salient features in the sensory data, such as images or event streams.
These keypoints are then passed to the back-end, where state estimation algorithms are used to estimate the robot's pose and build a consistent map of the environment. Conversely, direct methods [7] attempt to process all available sensor data, such as individual pixel intensity changes in images (events) or all RGB frame pixels, without any intermediate filtering or feature extraction in the front-end, relying on the back-end to handle the entire data. The proposed method adopts a hybrid approach where all events are directly processed during the events-frames fusion in the front-end, while only the reliable learning-based features from the fusion frames are fed to the back-end (see Fig. 2). Table I compares the latest event-based and event-aided VO solutions concerning the sensor setup, events pre-processing layer (EPL), direct or indirect event processing, and the loop-closure capability to minimize visual drifts. **Event-aided visual-SLAM.** DH-PTAM builds upon the pioneering work of [7], which introduces a monocular 6-DoF visual odometry model that synergistically integrates events and grayscale frames using a direct approach. In this model, the fusion of events and frames is achieved by applying EGM to the intensity frame, transforming it into a unified reference that aligns with the event stream, termed brightness increments (event-like). This process is followed by an error minimization cost function, which estimates the extrinsics in real-time for accurate mapping and is augmented by a photometric bundle adjustment for tracking.
Fig. 2: Block diagram of the proposed hybrid event-aided stereo visual odometry approach (DH-PTAM). \(f\) denotes the fusion function defined in (6). \(\Delta t^{k}\) is the event volume \(\mathcal{V}_{0}(x,y,t)\) accumulation time defined in (2). E3CT denotes the Event 3-Channel Tensor [11], an image-like event representation.
However, DH-PTAM distinguishes itself from [7] by employing an image-like unified reference in its front-end, harmoniously combining stereo event streams with stereo RGB frames. Loop-closure detection is paramount in visual-SLAM, effectively minimizing drifts by allowing a system to recognize previously traversed locations. The realm of loop-closure detection predominantly features two methodologies: mid-level features [20] and the bag-of-words model [21]. While mid-level features offer a more nuanced representation than low-level features such as edges and corners, they don't consider the specificity of high-level features like object recognition. Deep learning descriptors, as referenced in [22], typify mid-level features. They extract more sophisticated information from raw data than low-level features like pixel values, yet they don't reach the specificity of direct task-related features, for instance, object labels.
## III Methodology
### _System Overview_
Fig. 2 illustrates the main components and the process of DH-PTAM. The system establishes a global reference frame based on the camera position in the initial frame. A preliminary map is created by identifying and triangulating distinctive points in the first stereo pair of images. For subsequent frames, the tracking thread calculates the 6D pose of each stereo frame by minimizing the discrepancy between the projected map points and their matches. The system chooses a subset of keyframes used in another thread to update the map at a slower pace. Map points are derived from the stereo matches of each keyframe and added to the map.
The mapping thread constantly improves the local discrepancy by adjusting all map points, and stereo poses using Bundle Adjustment. A pose graph is utilized to preserve the global consistency of the map which is a shared resource among the tracking, mapping, and loop-closing threads. Point correspondences are actively searched between keyframes to strengthen the constraints of the pose graph optimization smoothing process. **Notations**. The odometry state representation comprises the 3D points \(X_{w}^{k}\) and a 7-increment vector \(\mu\in\mathfrak{se}(3)\), which is the current pose of the left fusion frame at time \(k\): \[\mu^{k}=[\delta x\ \delta y\ \delta z\ \delta q_{x}\ \delta q_{y}\ \delta q_{z}\ \delta q_{w}]^{\top}\, \tag{1}\] where \([\delta x\ \delta y\ \delta z]^{\top}\) is the incremental translation vector and \([\delta q_{x}\ \delta q_{y}\ \delta q_{z}\ \delta q_{w}]^{\top}\) is the incremental quaternion vector. ### _Temporal Synchronization Approach_ Our temporal synchronization approach (see Fig. 3) considers the general case of global shutter cameras where the exposure time \(t_{exp}\) is known. We adopt the constant-time \(\Delta t^{k}\) events accumulation window \(k\) approach where the number of accumulated events during this temporal window is ablated in the qualitative analysis in Fig. 4. As soon as stereo RGB camera frames are received at timestamps \(t_{\text{CAM}}\), we calculate the fusion frames timestamps assuming the hardware synchronization of stereo RGB images and stereo event streams, using: \[t_{f}=t_{\text{CAM}}+\frac{t_{exp}}{2}\,\ \Delta t^{k}=t_{f}^{k}-t_{f}^{k-1}\, \tag{2}\] where the left \(t_{\text{CAM}}\) is the selected stereo keyframe timestamp. ### _Spatial Hybridization Approach_ Leveraging the findings of [25], where Scale-Consistent monocular depth learning is explored, our method is firmly rooted in the depth scale equivalence principle. Rather than just an assumption, this principle is an inference drawn from the inherent properties of closely-spaced APS-DVS sensors. Fig. 4: Ablation study on reducing the temporal window width versus controlling the number of events in the designed window. All event frames are post-processed E3CTs by median filtering followed by a binary threshold. Fig. 3: Temporal synchronization scheme. \(t_{exp}\) is the global shutter camera exposure time. \(\Delta t\) is the event representation (E3CT) volume accumulation window. \(t_{f}\) is the fusion frame calculated timestamp. \(t_{\text{DVS,CAM}}\) are the DVS events, and RGB camera frames timestamps, respectively. It asserts that the depth scales between frames maintain a consistent relationship. Diverging from the approach in [25], we incorporate the front-end's depth uncertainty into the back-end's stereo bundle adjustment process. Here, it acts as an adaptive mechanism, progressively refining depth estimates, thereby boosting system performance. The E3CT events pre-processing layer is adopted and modeled as two consecutive filtering kernel convolutions on the event volume \(\mathcal{V}_{0}(x,y,t)\) of temporal width \(\Delta t^{k}\) (see Fig. 2). The first kernel to filter the time decaying events in the volume, is the \(\alpha\)-exponential time decay kernel and is modeled as: \[\mathcal{V}_{1}(x,y,t)\doteq\exp\left(-\alpha\left(\frac{\mathcal{V}_{0}(x,y, t)-\eta/2}{\eta/6}\right)^{2}\right)\;, \tag{3}\] where \(\alpha=0.5\) and the decay rate \(\eta=30\) [ms] for our model. 
The decay kernel is followed by a trilinear voting kernel that stacks the events into the three-channel tensor, so that each event contributes to two consecutive channels depending on its location relative to a vertex of this trilinear kernel. An event near the top contributes a higher weight to the current channel and a lower weight to the neighboring ones. These contribution weights of the three channels can represent a percentage of an R-G-B color map; hence, the E3CT can be considered a synthetic RGB frame of events. The trilinear voting kernel can be modeled as follows: \[\mathcal{V}_{2}(x,y,t_{i})\doteq\max\left(0,\,1-\left|\frac{\mathcal{V}_{1}(x,y,t_{i})}{\delta t}\right|\right)\;, \tag{4}\] where \(\delta t\) is the temporal bin \(i\) size as discussed in [14]. After applying the trilinear temporal voting kernel on the exponential-decay time surface, we stack the 3-channel tensor temporal bins together, resulting in a synthetically colored 2D frame called the Event 3-Channel Tensor (E3CT). In Fig. 2, we can observe that the constructed synthetic colors are always consistent, meaning that the stereo left and right constructed E3CTs have identical colors for the same scene. Conventional frame-based post-processing operations can be applied to the constructed E3CTs, such as adaptive threshold, contrast stretching, color correction and balance, and denoising functions.

We consider a fully calibrated stereo RGB and event cameras stack as represented in Fig. 5, so that the rigid-body transformations \(\mathcal{T}_{cd}\) and the cameras intrinsic parameters \(\mathcal{K}_{c},\mathcal{K}_{d}\) are known. Given that the same post-processing operations are applied on the current stereo E3CT frames (see Fig. 4), the 2D-to-3D-to-2D consecutive inverse-forward projections of the pixels on the E3CT frames \(P_{d}^{h}\) to the RGB camera frames \(P_{d2c}^{h}\) can be performed as follows (comparable to Equation (2) in [25]): \[P_{d2c}^{h}\doteq\frac{1}{\mathcal{V}}\,\mathcal{K}_{c}\,[R|t]_{cd}\,\lambda\,\mathcal{K}_{d}^{-1}\,P_{d}^{h}+\delta P_{align}^{h}\;, \tag{5}\] where \((.)^{h}\) denotes the pixel location in homogeneous coordinates. The term \(\delta P_{align}^{h}\) denotes the pixel location alignment correction factor for the RGB and event frames so that the same 3D world point \(X_{w}^{h}\) should correspond exactly to the pixel locations \(P_{d2c}^{h}\) and \(P_{d}^{h}\). This alignment term is observed to be constant for the same sensor rig with non-varying intrinsic and extrinsic parameters. The \(\delta P_{align}^{h}\) value can be estimated using an offline optimization process only once on a selected number of frames (the more frames, the more accurate) with high-confidence feature matches, and this value is given in Section IV for both VECtor and TUM-VIE sequences.

Finally, the fusion function (and frame) \(f(.)\) performs a temporal cross-dissolve (linear blending) between both the left (\(D_{0},C_{0}\)) and right (\(D_{1},C_{1}\)) E3CTs and RGB camera frames, respectively, and is formulated as: \[f(C,D)=(1-\beta)\,C+\beta\,D\;, \tag{6}\] where \(\beta\in[0,1]\) is the E3CT contribution weight in the current fusion frame.
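A minimal sketch of the cross-dissolve in (6) is given below; it simply blends the post-processed E3CT \(D\) with the RGB frame \(C\) for a given \(\beta\), whose online selection rules are described next. Names are illustrative and this is not the reference implementation.

```python
import numpy as np

def fuse_frames(rgb, e3ct, beta):
    """Eq. (6): f(C, D) = (1 - beta) * C + beta * D.
    `rgb` (C) and `e3ct` (D) are same-size images; `beta` in [0, 1] is the
    E3CT contribution weight chosen online (DVS- or APS-biased mode)."""
    rgb = rgb.astype(np.float32)
    e3ct = e3ct.astype(np.float32)
    fused = (1.0 - beta) * rgb + beta * e3ct
    return np.clip(fused, 0.0, 255.0).astype(np.uint8)
```

The same blend is applied independently to the left pair \((C_{0},D_{0})\) and the right pair \((C_{1},D_{1})\).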
The \(\beta\) value is calculated online and depends on the scene lighting and texture conditions. It is set to chirp-shaped values \(\beta=\max(\bar{C}/C^{\text{max}},\,1-\bar{C}/C^{\text{max}})\) when the RGB camera frame fails to detect features due to adverse HDR conditions and low-textured scenes, and this is the DVS-biased fusion mode. For situations where RGB camera frames can detect reliable scene features with good lighting and enough texture, the \(\beta\) value is harmonic with the scene lighting conditions according to \(\beta=\min(\bar{C}/C^{\text{max}},\,1-\bar{C}/C^{\text{max}})\), and this is the APS-biased fusion mode. To reduce the amount of extracted features and maintain the back-end processing complexity and latency in reasonable ranges, the \(\beta\) value can be capped at a certain value, as shown in Fig. 6 by setting \(\beta=0.3\) as an example.

Dynamic scenes with challenging and adverse conditions can easily trigger rapid switching between these two fusion modes during long-term navigation. This causes a critical problem during the feature tracking process using conventional low-level feature detectors, such as ORB, SIFT, SURF, BRIEF, and FAST. Accordingly, applying mid-level feature detectors that depend mainly on learning-based architectures could solve this fusion frame modes alternation problem. We employ the learning-based feature extractors and descriptors [3, 4] for their high robustness and feature detection speed. Fig. 6 illustrates, in a real-world case study, the automatic fusion-mode switch driven by the \(\beta\) parameter in response to the rapid intensity fluctuations of the RGB camera frame \(C\) in HDR scenarios, represented by the parameter \(\bar{C}/C^{\text{max}}\).

Fig. 5: Geometry of the stereo hybrid event-RGB cameras stack. \(\mathcal{T}_{cd}\) denotes the rigid-body transformations. \(P_{d2c}^{h}\) and \(P_{d}^{h}\) denote pixel locations.

### _Optimization-based State Estimation_

As our work is based on the original S-PTAM system, all the optimization Jacobians mentioned in this section can be found with detailed proofs in [2]. All objective functions are minimized with the Levenberg-Marquardt algorithm implemented in the \(g^{2}o\) optimization library. We employ the Huber loss function \(\rho(.)\) for outlier rejection.

**System bootstrapping**. The first pair of stereo fusion frames is considered a keyframe. Then, a triangulation for the collected feature matches on the left and right fusion frames is performed to initialize the map.

**Pose tracking thread**. Each map point is projected into the viewing frustum of the anticipated stereo position, and we then look nearby for the match. A valid prediction of the current pose is required for such a projection. Map points and features are matched by comparing their descriptors: the \(L_{2}\) norm is computed between the SuperPoint and R2D2 descriptors (see Fig. 7). The match is valid if the distance falls below a certain threshold; otherwise, it is ignored. The pose refinement is then applied to recover the current pose knowing the previous one using the following objective function: \[L^{\text{refine}}=\operatorname*{arg\,min}_{\mu}\sum_{i\in N}\rho(||J_{i}^{k} \mu_{k}-\Delta z_{i}(\mu_{k-1},X_{w}^{i})||^{2})\;, \tag{7}\] where \(N=\{z_{1},\,\ldots,\,z_{M}\}\) and \(M\) is the number of matched measurements. The measurement \(z=[u,v]^{\top}\) is a pixel 2D location of the forward projection of a 3D map point \(X_{w}\) using the pinhole model projection function \(\pi(X_{w}^{i})=\mathcal{K}_{c}\mathcal{T}_{f_{ow}}^{i}X_{w}^{i}\). \(J_{i}^{k}=\frac{\partial\Delta z_{i}(\mu)}{\partial\mu_{k}}\) is the re-projection error's Jacobian with respect to the current odometry state vector.
\(\Delta z\) is the re-projection error of a matched set of measurements on the current \(k\) stereo fusion frames and is defined as: \[\Delta z_{i}(\mu,X_{w})=z_{i}-\pi(\exp{(\mu)}\mathcal{T}_{f_{ow}}^{k-1}X_{w}^ {i})\;, \tag{8}\] where the 3D point cloud \(X_{w}\) is considered a constant optimization parameter that is not updated in the tracking thread, and \(\mathcal{T}_{f_{ow}}^{k-1}=\exp(\mu)\in SE(3)\), with \(\exp(.)\) the exponential map of the Lie group applied to the previous increment state vector. If the number of observed points is less than 90% of the points recorded in the previous keyframe, a frame is chosen to be a keyframe after the current pose has been evaluated. Then, new map points are created by triangulating the stereo pair's remaining unmatched features. The keyframe is then placed in the local mapping thread for processing.

**Mapping thread**. We apply Bundle Adjustment (BA) to fine-tune the camera poses (keyframe map) and the 3D points (point cloud map). Local Bundle Adjustment minimizes the re-projection error of every point in every keyframe \(f_{0}^{k}\). Given an initial set of \(N\) keyframe poses \(\{\mathcal{T}_{f_{ow}}^{1},\,\ldots,\,\mathcal{T}_{f_{ow}}^{k}\}\), an initial set of \(M\) 3D points \(X_{w}^{i}\), and measurement sets \(S\in\{S_{1},\,\ldots,\,S_{N}\}\), where each set comprises the measurement \(z_{i}^{k}\) of the \(i^{th}\) point in the \(k^{th}\) keyframe, the local BA is performed using the following objective function on all keyframes in a pre-defined sliding-window size \(N\): \[L^{\text{BA}}=\operatorname*{arg\,min}_{\mu,\,X_{w}}\sum_{k=1}^{N}\sum_{i\in S _{k}}\rho(||J_{i}^{k}\begin{bmatrix}\mu_{k}\\ X_{w}^{i}\end{bmatrix}-\Delta z_{i}(\mu_{k},X_{w}^{i})||^{2})\;, \tag{9}\] where the 3D point cloud \(X_{w}\) is considered a variable optimization parameter and is updated in the mapping thread. Hence, \(J_{i}^{k}=[\frac{\partial\Delta z_{i}(\mu_{k},X_{w}^{i})}{\partial\mu_{k}}, \frac{\partial\Delta z_{i}(\mu_{k},X_{w}^{i})}{\partial X_{w}^{i}}]\) is the re-projection error's Jacobian with respect to both the current odometry state vector and the 3D point.

Fig. 6: Experiments on the mocap-desk2 TUM-VIE dataset that show the capability of the proposed events-frames fusion method to maintain and track features in well-lit and dimmed scenes where grayscale-only frames may fail. For each scenario (top): grayscale frame (left) and fusion frame (right).

Fig. 7: Normal versus HDR semi-circular spatio-temporal matching for two consecutive stereo fusion frames. Green and blue lines denote the spatial stereo SuperPoints matching at frames \(k\) and \(k-1\), respectively. Red lines denote the temporal matching for keypoints of two consecutive keyframes (left).

**Loop-closure thread**. Instead of the conventional way of keyframe embedding assignments using a bag-of-words, we adopt a simple loop-closure detection method based on the mean of the mid-level learning-based feature descriptors (SuperPoint and R2D2) for each keyframe and assign this mean value as the embedding identity of each keyframe. Once a potential loop closure is detected, the system performs geometric verification through RANSAC-based pose estimation to validate the candidate. If the verification is successful, a loop closure constraint is added to the pose graph, and a graph optimization is performed to distribute the error and update the global map, thus correcting the accumulated drift.
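A minimal sketch of this embedding-based loop-closure test follows; the distance threshold, the temporal guard, and all names are illustrative assumptions rather than the values used by DH-PTAM.

```python
import numpy as np
from typing import List

def keyframe_embedding(descriptors: np.ndarray) -> np.ndarray:
    """Embedding identity of a keyframe: mean of its (N, D) SuperPoint/R2D2
    descriptors, as described for the loop-closure thread."""
    return descriptors.mean(axis=0)

def loop_closure_candidates(query: np.ndarray,
                            past_embeddings: List[np.ndarray],
                            max_dist: float = 0.2,
                            min_gap: int = 50) -> List[int]:
    """Indices of earlier keyframes whose embedding lies within `max_dist` of
    the query; the most recent `min_gap` keyframes are skipped to avoid
    trivial matches. Candidates are then verified with RANSAC pose estimation."""
    limit = max(len(past_embeddings) - min_gap, 0)
    return [i for i in range(limit)
            if np.linalg.norm(query - past_embeddings[i]) < max_dist]
```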
## IV Evaluation

We perform a thorough, comprehensive evaluation during navigation in real-world, large-scale, and small-scale areas in challenging settings. In subsection IV-A, we compare DH-PTAM with other RGB image-based and event-based/-aided methods on the HDR large-scale sequences of the publicly available dataset VECtor [9], due to its high-quality ground truth values and sensor calibration parameters. In subsection IV-B, we evaluate the small-scale (mocap-) sequences of TUM-VIE [10] to test the quality of the DH-PTAM spatio-temporal synchronization method with degraded event camera calibration parameters. Moreover, the first 45 frames of TUM-VIE sequences suffer a high over-/under-exposure global shutter alternation, which tests DH-PTAM's pose estimation stability.

We perform a comparative quantitative analysis to evaluate the accuracy of our system in Table II and a qualitative analysis in Fig. 8. The accuracy of DH-PTAM is measured with the absolute trajectory error (ATE) and relative pose error (RPE) metrics calculated using the baseline SLAM evaluation tool [26]. To highlight the advantages of complementing the sensor stack with events information, we compare our event-aided stereo visual odometry solution (DH-PTAM) to the latest best-performing open-source visual-inertial systems in the literature in Table II. Table III gives the system parameters configuration for large-scale and small-scale sequences. We keep these parameters constant for all sequences of the same scale group without an online fine-tuning process.

All experiments are performed on the CPU and the GPU of a 16 GB RAM laptop computer running 64-bit Ubuntu 20.04.3 LTS with an AMD(R) Ryzen 7 4800H x16 cores 2.9 GHz processor and a Radeon RTX NV166 Renoir graphics card. Table IV reports a detailed computational complexity analysis for our DH-PTAM system with minimal and maximal system requirements. The high CPU load observed when detecting SuperPoint and R2D2 features can be attributed to the algorithms' design, which prioritizes feature quality and robustness over computational efficiency. This trade-off is often necessary in computer vision research, where high-quality results are crucial for many applications but come at the cost of increased computational complexity. The back-end runs with real-time performance, and it is recommended to run the front-end on a GPU to achieve a more memory-efficient, faster, and more stable performance.

### _VECtor large-scale experiments_

In contrast to DH-PTAM, the competing systems typically require tuning different parameters for different sequences in the same scenarios, even within their open-source projects. Table II shows a good performance for DH-PTAM compared to the competing VI-SLAM systems. Although Fig. 8 shows high visual drifts for our vision-only system in the case of units sequences, DH-PTAM could outperform the VI-SLAM systems based on the ATE metric. Fig. 8 gives an overview of the high-quality loop detection of DH-PTAM in the case of corridors sequences. Loop detection failure can be noticed only when the RAM overflows while running the system with enormous point clouds, as in the case of units sequences. We provide a trajectory smoothing and post-processing script with our open-source implementation to join estimated trajectory increments in case of RAM overflow failures.

### _TUM-VIE small-scale experiments_

As noticed in [30], the calibrationA (mocap-desk, mocap-desk2) sequences have more accurate depth estimation results than calibrationB (rest of mocap and TUM-VIE large-scale) sequences due to the significant calibration errors in the latter.
Hence, we perform our comparative evaluation on TUM-VIE small-scale (mocap-) sequences using calibrationA parameters. Although the same high-quality calibrationA parameters apply to both desk2 and desk sequences with the same spiral motion, DH-PTAM performs the best with the desk2 sequence but the worst with the desk sequence. This occurs since the scene of the desk sequence is bounded by a close-by white wall that restricts the depth range, and hence the DH-PTAM front-end detects fewer and lower-quality features for desk than for desk2. Table II shows that the more DoF are excited (6dof, desk2) and the more consistent the loop detections (1d-trans), the better the pose estimation quality.

### _Ablation experiments_

**No event streams (\(\beta=0\)).** In Table II, we show an ablation study where we run DH-PTAM on stereo images. We notice estimation failure with all the conventional and learning-based feature detectors except R2D2. Although the ATE metric shows slightly better results without using events, the RPE metric shows much more accurate values when using events. These better ATE values are due to the high performance of the GPU in loop-closures using R2D2 features (see Fig. 8).

**Front-end depth maps.** Qualitatively in Fig. 9, we rigorously assess the front-end of our events-frames fusion technique by comparing its inverse depth maps against established methods such as EVO and EDS. By channeling the stereo fusion frames through the SGM matching method [31], our method capitalizes on direct front-end fusion where the inverse depth maps are derived from events projected onto the RGB frames. Remarkably, the point clouds generated through our method exhibited richer details compared to those of EDS. This affirms the potential of our method to set a new benchmark in event-frame fusion paradigms.

Fig. 8: DH-PTAM (GPU (no events) vs. CPU (event-aided)) qualitative analysis. All trajectories are transformed to the same reference frame as the ground truth poses using the extrinsic parameters, followed by an alignment with all poses by Umeyama's SE(3) method implemented in [26]. Large-scale trajectories show high-quality loop closure detection in the case of R2D2 on GPU. The small-scale trajectory shows the high accuracy of the event-aided version of DH-PTAM.

## V Conclusion

In this paper, we presented the DH-PTAM system for robust parallel tracking and mapping in dynamic environments using stereo images and event streams. The proposed system builds upon the principles of S-PTAM and extends it with a learning-based approach to handle the sparse and noisy nature of event-based sensors while leveraging the rich information provided by fusion frames. Our experiments demonstrate that DH-PTAM outperforms state-of-the-art visual-SLAM methods, particularly in challenging scenarios such as HDR and occlusions. The proposed system can achieve better performance on a GPU and provides a scalable and accurate solution for 3D reconstruction and pose estimation. Future work includes investigating the potential of improving the spatial synchronization through online optimization and exploring the integration of an adaptive paradigm for choosing the temporal window width to further improve robustness and reduce system latency. DH-PTAM proves to have the potential to provide robust and accurate 3D mapping and localization, which are crucial for the successful operation of long-term navigation systems.
2307.07111
More Than React: Investigating The Role of Emoji Reaction in GitHub Pull Requests
Open source software development has become more social and collaborative, evident GitHub. Since 2016, GitHub started to support more informal methods such as emoji reactions, with the goal to reduce commenting noise when reviewing any code changes to a repository. From a code review context, the extent to which emoji reactions facilitate a more efficient review process is unknown. We conduct an empirical study to mine 1,850 active repositories across seven popular languages to analyze 365,811 Pull Requests (PRs) for their emoji reactions against the review time, first-time contributors, comment intentions, and the consistency of the sentiments. Answering these four research perspectives, we first find that the number of emoji reactions has a significant correlation with the review time. Second, our results show that a PR submitted by a first-time contributor is less likely to receive emoji reactions. Third, the results reveal that the comments with an intention of information giving, are more likely to receive an emoji reaction. Fourth, we observe that only a small proportion of sentiments are not consistent between comments and emoji reactions, i.e., with 11.8% of instances being identified. In these cases, the prevalent reason is when reviewers cheer up authors that admit to a mistake, i.e., acknowledge a mistake. Apart from reducing commenting noise, our work suggests that emoji reactions play a positive role in facilitating collaborative communication during the review process.
Dong Wang, Tao Xiao, Teyon Son, Raula Gaikovina Kula, Takashi Ishio, Yasutaka Kamei, Kenichi Matsumoto
2023-07-14T01:22:06Z
http://arxiv.org/abs/2307.07111v1
# More Than React: Investigating The Role of Emoji Reaction in GitHub Pull Requests ###### Abstract Open source software development has become more social and collaborative, evident GitHub. Since 2016, GitHub started to support more informal methods such as emoji reactions, with the goal to reduce commenting noise when reviewing any code changes to a repository. From a code review context, the extent to which emoji reactions facilitate a more efficient review process is unknown. We conduct an empirical study to mine 1,850 active repositories across seven popular languages to analyze 365,811 Pull Requests (PRs) for their emoji reactions against the review time, first-time contributors, comment intentions, and the consistency of the sentiments. Answering these four research perspectives, we first find that the number of emoji reactions has a significant correlation with the review time. Second, our results show that a PR submitted by a first-time contributor is less likely to receive emoji reactions. Third, the results reveal that the comments with an intention of _information giving_, are more likely to receive an emoji reaction. Fourth, we observe that only a small proportion of sentiments are not consistent between comments and emoji reactions, i.e., with 11.8% of instances being identified. In these cases, the prevalent reason is when reviewers cheer up authors that admit to a mistake, i.e., _acknowledge a mistake_. Apart from reducing commenting noise, our work suggests that emoji reactions play a positive role in facilitating collaborative communication during the review process. Keywords:Emoji Reaction, Code Reviews, Mining Software Repositories + Footnote †: journal: Noname ## 1 Introduction In the past few years, open source software development has become more social and collaborative. Known as social coding, open source development promotes formal and informal collaboration by empowering the exchange of knowledge between developers (Dabbish et al., 2012). GitHub, one of the most popular social coding platforms, attracts more than 72 million developers collaborating across 233 million repositories.1 Since 2016, GitHub introduced a new social function called "_reaction_" for developers to quickly express their feeling in an issue report and a pull request (PR). Especially for discussing a PR, we find that2: Footnote 1: [https://github.com/search,2021](https://github.com/search,2021) Footnote 2: [https://tinyurl.com/3rprd6dp](https://tinyurl.com/3rprd6dp) _"In many cases, especially on popular projects, the result is a long thread full of emoji and not much content, which makes it difficult to have a discussion. With reactions, you can now reduce the noise in these threads" - GitHub_ In the context of code review, we assume that a thread full of emoji reactions may also have ulterior intentions (e.g., confusion or conflicts) during the code review process. For instance, Ebert et al. (2019) pointed out that confusion delays the merge decision decreases review quality, and results in additional discussions. Hirao et al. (2020) found that patches can receive both positive and negative scores due to the disagreement between reviewers, which leads to conflicts in the review process. Figure 1 depicts two typical cases where the emoji reactions occur. Figure 1(a) shows the case where the reaction does reduce unnecessary commenting in the thread. The example illustrates how Author B reduces the commenting by simply reacting with a quick expression of approval through THUMBS UP \(\oplus\). 
In contrast, as shown in Figure 1(b), there exists a case where the emoji usage has an ulterior intention and does not reduce comments in the discussion thread. In detail, Contributor D uses three positive emoji reactions (THUMBS UP, HOORAY, and HEART) to express appreciation for this PR. Later the contributor goes on to provide detailed comments on the PR. We posit that the intention of the emoji reaction was to express appreciation for the PR, and did not reduce the amount of commenting in the threads.

As per our registered report (Son et al., 2021), positive emoji reactions have been widely used in PRs and do not always reduce the commenting noise. We now execute the study protocols to address four research questions, using a statistical sample of 1,850 repositories with 365,811 PRs across seven popular languages. The goal of the study is to investigate the role of emoji reactions in the context of code review from different perspectives. RQ1 is from the view of the review process, while RQ2 is from the human (i.e., contributor) perspective. Finally, RQ3 and RQ4 take a deeper analysis of the context of the comments between the review team. We now present the details of each research question below:

Figure 1: Examples of emoji reactions used in GitHub.

* **RQ1: _Does the emoji reaction used in the review discussion correlate with review time?_** **Motivation.** Prior studies (Baysal et al., 2016; Kononenko et al., 2018) have widely analyzed the impact of technical and non-technical factors on the review process (e.g., review outcome, review time). However, little is known about whether or not the emoji reaction can be correlated with the review time. It is possible that emoji reaction may shorten the review time, as it could reduce the noise during the review discussions. Thus, our motivation for the first research question is to explore the correlation between the emoji reaction used in the review discussion and review time.
* **RQ2: _Does a PR submitted by a first-time contributor receive more emoji reactions?_** **Motivation.** We find that the emoji reaction might be used to express appreciation for submitting a PR. Our motivation for this research question is to understand if contributors that have never submitted to the project before receive more emoji reactions, and to understand the impact of emoji reactions on the topic of the review. Furthermore, answering this research question will provide insights into a potent ulterior motive for an emoji reaction. Our hypothesis is that _H1: PRs submitted by first-time contributors receive more emoji reactions._ We assume that existing contributors could express positive feelings to attract newcomers to the project.
* **RQ3: _What is the relationship between the intention of comments and their emoji reaction?_** **Motivation.** As shown in Figure 1, emoji reactions may not always reduce the commenting noise.
Hence, our motivation for the third research question is to explore the relationship between the intention of comments and their emoji reactions. Our hypothesis is that _(H2): There is a significant relationship between comment intentions and emoji reaction kinds._ We assume that a specific comment intention may explain the ulterior purpose of reacting with an emoji reaction. * **RQ4: _Is emoji reaction consistent with comment sentiment?_**** **Motivation.** The RQ3 results reveal that specific sentiments of the emoji (i.e., THUMBS UP ) are widely used in PRs. Inspired by this, our motivation for this research question is to investigate whether there is any inconsistency between sentiments of the comments and sentiments of the emoji reactions. Furthermore, we manually check the reasons why inconsistency happened. We believe answering RQ4 would help newcomers better understand the emoji usage in the PR discussion. Our hypothesis is that _(H3): There is a significant relationship between comment sentiments and emoji reaction sentiments._ We assume that a specific comment sentiment may explain the ulterior purpose of reacting with an emoji. Our results of each research question are summarized as follows. For _RQ1_, our regression model shows that the number of emoji reactions has a significant correlation with the review time and furthermore PRs with emoji reactions overall tend to take a longer review time than PRs with no emoji reactions. For _RQ2_, the quantitative results show that a PR submitted by a first-time contributor is less likely to receive emoji reactions, being 10.4% of PRs. For _RQ3_, we observe that the PR comments with the intention of information giving are more likely to receive emoji reactions. Moreover, the positive THUMBS UP (67.2% on average) is widely used, while the negative emoji reactions are rarely used (0.5% to 1.5%). For _RQ4_, our results show that Positive-Negative pair-wise of sentiment inconsistency accounts for 11.8% and the most frequent inconsistency reason is to acknowledge a mistake. These results suggest that the usage of emoji reactions might be a sign of an already positive environment and could have the potential to reduce toxicity. Our study demonstrates the crucial role that emoji reactions play in facilitating collaborative communication during the review process. The remainder of this paper is organized as follows. Section 2 describes the dataset preparation. Section 3 presents the results of our empirical study, including the approaches for research questions, while Section 4 discusses the findings and their implications. Section 5 discloses the threats to validity and Section 6 presents the related work. Section 7 discusses the deviations between the execution and the registered report. Finally, we conclude the paper in Section 8. ## 2 Dataset Preparation In this section, we present our dataset that was collected for our experiments. Studied Repositories.We expand on our studied dataset from the active software development repositories shared by Hata et al. (2019). Specifically, each repository has more than 500 commits and at least 100 commits during the most active two-year period. In total, 25,925 repositories were contained across seven popular languages (i.e., C, C++, Java, JavaScript, Python, PHP, and Ruby). Since we focus on the pull request, we did further filtering to collect the repository candidates that actively perform code review activities. 
To do so, we automatically examined repositories whether they have at least 100 PRs after the GitHub emoji reaction was introduced to the community (Mar. 10, 2016), relying on GitHub GraphQL API.3 After this, 6,695 repositories met the selection criteria. Footnote 3: [https://docs.github.com/en/graphql](https://docs.github.com/en/graphql) However, it is impossible to retrieve the PR metadata for all these repositories due to the limited time-frame and the restriction of GitHub API downloading. We then draw a representative repository sample, taking the seven popular languages into account. With a confidence level of 95% and a confidence interval of 5,4 1,850 representative repositories were finally selected as shown in Table 1 (i.e., 217 C repositories, 258 C++ repositories, 290 Java repositories, 310 JavaScript repositories, 246 PHP repositories, 300 Python repositories, and 229 Ruby repositories.) Data Collection and Cleaning.For the selected 1,850 repositories, we then used GraphQL API to retrieve all PRs that were submitted between January 2020 and April 2022. We argue that the more recent PRs, the more emojis will be reacted to PR comments. Sufficient metadata was collected including PR title, PR status, PR author, created time, closed time, and comments. \begin{table} \begin{tabular}{l r r r r r} \hline \hline & \# Repos. & \# PRs & \# PRs with E.R. & \# PR Comments & \# PR Comments with E.R. \\ \hline C & 217 & 50,847 & 7,330 (14.4\%) & 177,540 & 11,377 (6.4\%) \\ C++ & 258 & 70,877 & 10,920 (15.4\%) & 253,062 & 18,804 (7.4\%) \\ Java & 290 & 69,473 & 6,853 (9.9\%) & 219,477 & 11,102 (5.1\%) \\ JavaScript & 310 & 36,135 & 4,467 (12.4\%) & 82,623 & 6,591 (8.0\%) \\ Python & 300 & 53,996 & 5,893 (10.9\%) & 158,535 & 8,967 (5.7\%) \\ PHP & 246 & 49,680 & 9,165 (18.4\%) & 207,345 & 15,871 (7.7\%) \\ Ruby & 229 & 34,803 & 4,832 (13.9\%) & 89,996 & 7,906 (8.8\%) \\ \hline Total & 1,850 & 365,811 & 49,460 (13.5\%) & 1,188,578 & 80,618 (6.8\%) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of Studied Dataset Statistics. E.R. refers to Emoji Reaction. For each comment in a PR, we collected its commenters and emoji reactions. In all, 554,112 PRs with 1,697,305 comments were retrieved from these 1,850 repositories across seven languages. We then did two filters to ensure the quality of the studied PRs. First, we excluded PRs that were labeled as open status, since we can not calculate the review time of these PRs. The second filter is to exclude the PRs that were submitted by bots. To do so, we referred to the work of Golzadeh et al. (2022), which systematically compares the performance of the existing bot detection techniques, and we leveraged the combination of two bot detection techniques with the highest precision known as "bot" suffix and list of bots. "bot" suffix refers to the technique that relies on the presence of the string "bot" at the end of the author's name, which has notably been used by other researches (Dey et al., 2020; Saadat et al., 2021). The list of bots denotes the technique that relies on a predefined list of ground-truth bots manually identified by Golzadeh et al. (2021). After the cleaning, 365,811 PRs were left with 1,188,578 PR comments. As shown in Table 1, 13.5% of PRs across seven languages contain at least one emoji reaction. 80,618 PR comments are reacted with emojis, accounting for 6.8%. ## 3 Empirical Results In this section, we present the results for each of our research questions as well as their motivations and approaches. 
### Review Time and Emoji Reactions (RQ1) **Approach.** To answer RQ1, we perform a statistical analysis to investigate the correlation between the PR review time and the emoji reaction kinds, using a non-linear regression model. Align with the prior work (Wang et al., 2021b, a; Kononenko et al., 2018), several confounding factors are taken into account. Similarly, the goal of our statistical analysis is not to predict the review time but to understand the associations between the emoji reaction and the review time. The review time is defined as the time interval between pull request creation date and closed date, in hours, referring to the work of Maddila et al. (2019). _Explanatory Variables._ Table 2 presents the 14 studied explanatory variables that are used in our non-linear regression model. Since we investigate the effect of the emoji reaction, we introduce two related variables: With emoji reaction (Whether or not a PR contains any emoji reactions, binary) and # Emoji reactions (The number of emoji reactions in a PR). Previous research suggests a group of different metrics that can affect code review time (Kononenko et al., 2018; Wang et al., 2021b) and we select 12 variables that are governed by our ability to accurately calculate their value from the data. For the PR purpose, similar to the prior work (Thongtanunam et al., 2017; McIntosh et al., 2014), a PR is classified as documentation if the description contains 'doc', 'copyright', or 'license' words, while a PR is classified as bug fixing if the description contains 'fix', 'bug', or 'defect' words. The rest of the PRs are classified as feature introduction. _Model Construction._ To reduce the potential threat resulting from the imbalanced data, we randomly select 49,460 PRs without any emoji reactions, which are equal to the number of PRs with emoji reactions (as shown in Table 1). That is, a total of 98,920 PRs are used in our model construction. To analyze the association between the emoji reaction and review time, we adopt the Ordinary Least Squares (OLS) multiple regression model. This regression model allows us to fit the nonlinear relationship between the dependent variables and the explanatory variables. We carefully follow the construction approach provided by Harrell Jr. et al. (1984) and Mcintosh et al. (2016), consisting of five steps. In step _(I) Estimating budget for degrees of freedom,_ as suggested by Harrell Jr. et al. (1984), we spend no more than \(\frac{n}{15}\) degrees of freedom in our OLS model, where n refers to the number of studied PRs in the dataset. _In step (II) Normality adjustment_, we analyze whether the distribution of review time is skewed using the skewness and kurtosis function of the moments R package, since OLS expects that the dependent variables are normally distributed. If the review time is skewed, we use a log transformation to lessen the skew so as to better fit OLS (Mcintosh et al., 2016). _In step (III) Correlation and redundancy analysis_, similar to the prior work, we use the Spearman rank correlation (\(\rho\)) to assess the correlation between each pair of variables, since highly correlated explanatory variables could interfere with \begin{table} \begin{tabular}{l|l} \hline Confounding variables & Description \\ \hline \hline \# Added lines & The number of added LOC by a PR. \\ \# Deleted lines & The number of deleted LOC by a PR. \\ PR size & The total number of added and deleted LOC by a PR. \\ Purpose & The purpose of a PR, i.e., bug, document, feature. 
\\ Language & The repository language that a PR belongs to. \\ \# Files & The number of files changed by a PR. \\ \# Commits & The number of commits involved in a PR. \\ Description length & The length of a PR description. \\ PR author experience & The number of prior PRs that were submitted by the \\ & PR author. \\ \# Comments & The number of comments left on a PR. \\ \# Author comments & The number of comments left by the PR author. \\ \# Reviewer comments & The number of comments left by the reviewers who \\ \# Reviewers & The number of developers who participate in the discussion. \\ \hline Emoji reaction variables & Description \\ \hline With emoji reaction & Whether or not a PR contains any emoji reactions (binary). \\ \# Emoji reactions & The number of emoji reactions in a PR. \\ \hline \hline \end{tabular} \end{table} Table 2: The studied explanatory variables in RQ1. each other and further lead to spurious conclusions. We repeat the process until the Spearman correlation coefficient values of all pairs of variables are less than 0.7. In addition, to ensure that each studied variable provides a unique signal, we use the redun function of the rms R package to detect redundant variables and remove them from the model. In step _(IV) Allocating degrees of freedom_, we rely on the spearman2 function of the rms R package to calculate the Spearman multiple \(\rho^{2}\) between the explanatory and dependent variables, and effectively allocate degrees of freedom to the remaining variables. To avoid the over-fitting issue, we only allocate three to five degrees of freedom to those variables with high \(\rho^{2}\) values. In step _(V) Fitting OLS models_, similar to the work (Mcintosh et al., 2016), we use restricted cubic splines to fit our modeled dataset. We assign the allocated degrees of freedom to each explanatory variable, using the rcs function of the rms R package. Last, we adopt the ols function to construct the model. _Model Analysis._ After the model construction, we analyze the model to assess its goodness and examine the relationship between the review time and the emoji reaction. Similar to the prior work (Mcintosh et al., 2016; Wang et al., 2021b), we analyze the model using the following three steps: _(I) Assessing model stability_, _(II) Estimating the power of explanatory variables_, and _(III) Examining relationship_. In step (I), we use an adjusted \(R^{2}\) value (Hastie et al., 2009) to evaluate our model. In order to avoid the overfitted model, we apply the bootstrap validation approach to estimate the optimism of the adjusted \(R^{2}\). Finally, we subtract the average \(R^{2}\) optimism from the initial adjusted \(R^{2}\) value to obtain the optimism-reduced adjusted \(R^{2}\). In step (II), we estimate the power of explanatory variables and their corresponding significance, using Wald \(\chi^{2}\) maximum likelihood tests provided by the anova function. In step (III), we examine the direction of the relationship between the explanatory variables (especially emoji reaction related variables) and the review time, using the Predict function of the rms package. **Results.** We now discuss the RQ1 results in the view of model construction and model analysis. _Model Construction._ Table 3 shows the model performance and statistics of the studied variables that are adopted in the regression model. 
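For illustration, the construction steps (I)-(V) described above can be sketched compactly. The paper's analysis is performed in R with the rms package; the snippet below is only a Python stand-in (patsy's natural cubic splines `cr()` in place of `rcs()`), with illustrative column names, and is not the authors' script.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical balanced sample of PRs with and without emoji reactions.
df = pd.read_csv("studied_prs.csv")
# Step (II): log-transform the skewed review time before fitting OLS.
df["log_review_time"] = np.log1p(df["review_time_hours"])

# Steps (IV)-(V): spend 3-5 degrees of freedom on the strongest variables and
# fit the nonlinear model with spline terms (cr() stands in for rms::rcs()).
formula = (
    "log_review_time ~ cr(n_emoji_reactions, df=3)"
    " + cr(n_reviewer_comments, df=5) + cr(n_commits, df=5)"
    " + cr(n_author_comments, df=3) + cr(description_length, df=3)"
    " + C(purpose) + C(language)"
)
model = smf.ols(formula, data=df).fit()
print(model.rsquared_adj)   # goodness of fit (adjusted R^2)
```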
In the correlation and redundancy analysis, we remove those explanatory variables that are highly correlated with one another (\(\rho\) value is greater than 0.7), i.e., PR size, # Added lines, # Comments, # Reviewers, and With Emoji Reaction. For the remaining explanatory variables, we do not find any redundant variable, i.e., a variable that has a fit with an \(R^{2}\) greater than 0.9. We then carefully allocated the degrees of freedom to the surviving variables, based on their potential for sharing a nonlinear relationship with the dependent variable. As shown in the table, 24 degrees of freedom were spent on our constructed models.

_Model Analysis._ We first examine the goodness of our model fit. Table 3 shows that the model achieves an adjusted \(R^{2}\) score of 0.2830. Similar to the prior work (Kononenko et al., 2018), such an adjusted \(R^{2}\) score is acceptable, as our model is meant to be explanatory rather than predictive. After applying the bootstrap validation approach, we observe that the optimism of the adjusted \(R^{2}\) is 0.0004, indicating that the constructed model does not have an overestimation issue and is stable enough to provide meaningful insight.

We now discuss the explanatory power of the focused variables (i.e., the emoji reaction variables) and describe the relationship between these variables and the review time. The _With Emoji Reactions_ variable was removed due to its high correlation with another variable. For the _# Emoji Reactions_ variable, as shown in Table 3, we find that it has a significant correlation with the PR review time, with _p-value_ \(<\) _0.001_. However, according to the Wald \(\chi^{2}\) values, we observe that the explanatory power of the number of emoji reactions is not as large as the explanatory powers of the other dominant confounding variables. The larger the \(\chi^{2}\) of an explanatory variable is, the larger the contribution that the variable makes to the model. Specifically, the Wald \(\chi^{2}\) value of the _# Emoji Reactions_ variable is 624.1, while the Wald \(\chi^{2}\) values of the _# Reviewer Comments_, _# Commits_, _# Author Comments_, and _Description Length_ are 3349.9, 2960.0, 1518.4, and 1338.4, respectively.

Figure 2 depicts the nonlinear relationship between the number of emoji reactions and the review time. As we can see, the number of emoji reactions shares a positive relationship with the review time when the number of emoji reactions is less than three, but a negative relationship is observed if the number of emoji reactions is greater than three. Based on the overall likelihood, the result suggests that, compared to the PRs with no emoji reactions, it tends to take a relatively longer time for PRs with emoji reactions to be closed.

RQ1: Does the emoji reaction used in the review discussion correlate with review time? Findings from the non-linear regression models indicate that the number of emoji reactions has a significant correlation with the review time. In other words, PRs with emoji reactions are likely to have a longer review time when compared to those that do not.
Figure 2: The nonlinear relationship between the likelihood of the review time taken for a PR and the number of emoji reactions (RQ1).

### First-time Contributors and Emoji Reactions (RQ2)

**Approach.** To answer RQ2, we perform a quantitative analysis to investigate to what extent PRs that receive emoji reactions are submitted by first-time contributors. The first-time contributor in our study is defined as a contributor who has never submitted any PRs to the project. Below we describe our approach in detail.

_Proportion of PRs submitted by first-time contributors._ We use the dataset of PRs that contain at least one emoji reaction (i.e., 49,460 PRs as shown in Table 1). We notice that a PR author is able to react with emojis to comments. Since we focus on the emoji reactions that are received from developers (not the PR author), we first remove the emoji reactions that are from the PR author. In the end, 36,920 PRs remained. According to the explanatory variable (# PR author experience) from RQ1, the dataset is then split into two groups: one group is labeled as the first-time contributor group, where the count of # PR author experience is 0, and the other group is labeled as the non first-time contributor group, where the count of # PR author experience is greater than 0. Afterward, we calculate the proportion of the PRs submitted by first-time contributors and non first-time contributors, respectively. To validate the proposed hypothesis _(H1): PRs submitted by first-time contributors receive more emoji reactions_, we use the one-proportion Z-test (Paternoster et al., 1998). The one-proportion Z-test compares an observed proportion to a theoretical one when the categories are binary.

**Results.** Table 4 presents the proportion of PRs that contain emoji reactions (excluding the cases where the emojis are reacted by the PR authors) submitted by first-time contributors and non first-time contributors. As shown in the table, PRs submitted by non first-time contributors are more likely to receive emoji reactions, accounting for 89.6% of the instances. On the other hand, only 10.4% of the PRs containing emoji reactions were submitted by first-time contributors.

_Significant Testing._ The statistical test (Z-test) reveals that there is a significant difference between the proportion of PRs that receive emoji reactions submitted by the first-time contributors and the non first-time contributors, with a p-value \(<\)0.001. This result indicates that the proposed hypothesis _"(H1): PRs submitted by first-time contributors receive more emoji reactions."_ is not established. \begin{table} \begin{tabular}{l c} \hline **PRs that contain emoji reactions** & **Percent (\%)** \\ \hline By first-time contributors & 3,855 (10.4\%) \\ By non first-time contributors & 33,065 (89.6\%) \\ \hline Total PRs & 36,920 \\ \hline \end{tabular} \end{table} Table 4: The proportion of PRs containing emoji reactions submitted by first-time contributors and non first-time contributors (RQ2).

RQ2: Does a PR submitted by a first-time contributor receive more emoji reactions? A PR submitted by a first-time contributor is less likely to receive emoji reactions (i.e., 10.4% of all PRs with emoji reactions). Statistically, the hypothesis that PRs submitted by first-time contributors receive more emoji reactions is not established.
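A minimal sketch of the one-proportion Z-test on the Table 4 counts is shown below; the 0.5 null proportion is an illustrative assumption and not necessarily the theoretical proportion used by the authors.

```python
from statsmodels.stats.proportion import proportions_ztest

count = 3855    # emoji-reacted PRs submitted by first-time contributors (Table 4)
nobs = 36920    # all emoji-reacted PRs after excluding author self-reactions
stat, p_value = proportions_ztest(count, nobs, value=0.5)
print(f"z = {stat:.2f}, p = {p_value:.3g}")
```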
### Comment Intentions and Emoji Reactions (RQ3)

**Approach.** To answer RQ3, we conduct a quantitative analysis to investigate the PR comments that contain emoji reactions in two aspects: (I) the popularity of the comment intentions that contain emoji reactions, and (II) association mining between emoji reaction kinds, and between comment intentions and emoji reaction kinds.

_Popularity of comment intentions with emoji reactions._ To categorize the intentions of the comments, we use the taxonomy of intentions proposed by Huang et al. (2018). They manually categorized 5,408 sentences from issue reports of four projects on GitHub to generalize the linguistic pattern for category identification. The definitions of intentions are described as follows:

* _Information Giving (IG)_: Share knowledge and experience with other people, or inform other people about new plans/updates (e.g., "The typeahead from Bootstrap v2 was removed.").
* _Information Seeking (IS)_: Attempt to obtain information or help from other people (e.g., "Are there any developers working on it?").
* _Feature Request (FR)_: Require to improve existing features or implement new features (e.g., "Please add a titled panel component to Twitter Bootstrap.").
* _Solution Proposal (SP)_: Share possible solutions for discovered problems (e.g., "I fixed this for UI Kit using the following CSS.").
* _Problem Discovery (PD)_: Report bugs, or describe unexpected behaviors (e.g., "the firstletter issue was causing a crash.").
* _Aspect Evaluation (AE)_: Express opinions or evaluations on a specific aspect (e.g., "I think BS3's new theme looks good, it's a little flat style.").
* _Meaningless (ML)_: Sentences with little meaning or importance (e.g., "Thanks for the feedback!").

To facilitate the automation, they proposed a convolution neural network (CNN) based classifier with outstanding performance, which represents the state-of-the-art in the area of automatic intention mining.

_Sanity Check._ To ensure that their automatic classifier is reliable enough to be used, we conducted a sanity check. To do so, we first randomly selected 30 comment samples and ran the automation to label these comments into the above seven intentions. Then, the first two authors opened up a discussion to manually check whether the labeled intentions of the comments are correct or not, referring to the golden datasets from Huang et al. (2018). After the check, we observe that 24 samples out of 30 samples (80%) are correctly labeled by the automatic classifier. We argue that this classifier is reliable enough to be used, since the average accuracy obtained in the original work in the context of issue comments is around 0.83. Encouraged by the sanity check results, we will use the CNN based classifier to automatically label the intention of the comments that have emoji reactions.

After automatically labeling the intentions of comments, we group all the comments into seven intention categories and count their popularity. Then, for each intention category, we calculate the occurrence of each emoji reaction kind (i.e., THUMBS UP, THUMBS DOWN, LAUGH, HOORAY, CONFUSED, HEART, ROCKET, and EYES). To complement the insights, we also investigate how frequently each kind of emoji reaction is used across the seven comment intentions. Table 5 shows the related results.
In general, we observe that positive emoji reactions are frequently used across the intentions. THUMBS UP is the most commonly used reaction, most notably for the aspect evaluation intention (74.4%). For HEART, HOORAY, and LAUGH, they are likely to be used in the Others intention (sentences with little meaning or importance, e.g., "Thanks for the feedback!"), i.e., 14.5%, 13.9%, and 5.5%, respectively. The EYES is commonly used in the problem discovery intention (3.9%).

_Significant Testing._ First of all, the Shapiro-Wilk test suggests that the classified intentions of comments are normally distributed, with a p-value of 0.12 (greater than 0.05). Pearson's Chi-Squared test confirms that there is a significant relationship between comment intentions and emoji reactions, i.e., p-value \(<\)0.001, indicating that the hypothesis (H2) is established.

_Association mining._ We apply the association rule mining at two levels: between emoji reaction kinds, and between emoji reaction kinds and comment intentions.
With regard to the extracted rules at the emoji reaction level, we only consider rules with the support of at least 0.0013 (i.e., the rule must apply to at least six emoji reaction kinds) and the confidence of at least 0.8, similar to the prior work (Prana et al., 2019). Table 6 shows the three extracted association rules at the emoji reaction level. For example, a comment that is reacted to with one particular group of emoji kinds is likely to also receive another specific kind, with a confidence of 0.86. In terms of the intention level, we set the support of at least 0.15 and the confidence of at least 0.7. Table 7 shows the two extracted association rules at the intention level. We observe that the comment intentions of aspect evaluation or information giving are likely to be reacted to with THUMBS UP, the confidence being 0.80 and 0.72, respectively.

### Consistency of Emoji Sentiments (RQ4)

**Approach.** To answer RQ4, we conduct a mixed analysis (qualitative and quantitative) to investigate the PR comments that contain emoji reactions in terms of (I) frequency of sentiment consistency between emoji reactions and comments, and (II) reasons behind the inconsistency. Below, we describe the approach for the two aspects in detail.

_Frequency of sentiment consistency._ To investigate the sentiment consistency, we first need to label the sentiment of emoji reactions and the sentiment of comments, separately. _For the sentiment of emoji reactions_, we refer to our preliminary study in the registered report and classify the sentiments of the emoji reactions into the following four types: Positive, Negative, Neutral, and Mixed. Positive refers to the single or combined usage of THUMBS UP, LAUGH, HOORAY, HEART, and ROCKET reactions. Negative denotes the single or combined usage of THUMBS DOWN and CONFUSED reactions. Neutral represents the usage of the EYES reaction. Mixed refers to the combined usage of emojis across the above categories. _For the sentiment of comments_, we use SentiStrength-SE (Islam and Zibran, 2018), a state-of-the-art sentiment analysis tool that utilizes a domain dictionary and heuristics for software engineering text. In our study, the input is the PR comment that contains emoji reactions and the output is a sentiment score ranging from -5 (very negative) to 5 (very positive). Note that, to reduce the potential threat due to false positives, we exclude the PR descriptions that contain emoji reactions. After we obtain the sentiment labels for the emoji reactions and the comments, we then map them at the level of a PR comment and count the frequency of the possible patterns (e.g., Positive-Positive, Positive-Neutral, Positive-Negative, and so on). To validate the proposed hypothesis _(H3): There is a significant relationship between comment sentiments and emoji reaction sentiments_, similar to RQ3, we perform Pearson's Chi-Squared test to confirm if a significant relationship exists or not.
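A minimal sketch of this pair-wise labeling is given below: the emoji reactions of a comment are mapped to a sentiment category following the rules above and cross-tabulated against the comment's SentiStrength-SE score. The sign-based bucketing of the score and all names are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter
from typing import Iterable

POSITIVE = {"THUMBS_UP", "LAUGH", "HOORAY", "HEART", "ROCKET"}
NEGATIVE = {"THUMBS_DOWN", "CONFUSED"}
NEUTRAL = {"EYES"}

def reaction_sentiment(reactions: Iterable[str]) -> str:
    """Map the emoji reactions of a comment to Positive/Negative/Neutral/Mixed."""
    categories = set()
    for r in reactions:
        if r in POSITIVE:
            categories.add("Positive")
        elif r in NEGATIVE:
            categories.add("Negative")
        elif r in NEUTRAL:
            categories.add("Neutral")
    if not categories:
        return "None"
    return categories.pop() if len(categories) == 1 else "Mixed"

def comment_sentiment(score: int) -> str:
    """Sign-based bucketing of a SentiStrength-SE score in [-5, 5] (assumption)."""
    return "Positive" if score > 0 else "Negative" if score < 0 else "Neutral"

# pairs = Counter((reaction_sentiment(r), comment_sentiment(s))
#                 for r, s in labeled_comments)   # e.g. ("Positive", "Neutral")
```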
Since there is no readily available reason taxonomy to refer to, we apply an open coding approach (Charmaz, 2014) to classify randomly sampled comments from the inconsistent cases. To discover as complete a list of reasons as possible, we strive for _theoretical saturation_ (Eisenhardt, 1989) to achieve analytical generalization. Similar to the prior work (Xiao et al., 2021), we initially set our saturation criterion to 50. The first two authors then continue to code randomly selected inconsistent comments until no new codes have been discovered for 50 consecutive comments. If new codes occurred, we performed another pass over all of the comments to correct miscoded entries and tested the level of agreement of our constructed codes, since codes that emerge late in the process may apply to earlier reviews. The third author joins the open discussion when disagreements occur and validates the suggested codes. Finally, we reach saturation after coding 100 samples. **Results.** _Frequency of sentiment consistency._ Figure 4 shows the pair-wise distribution between the sentiments of emoji reactions and comments. As shown in the figure, we observe that the most frequent pair is Positive-Neutral (i.e., Positive emoji reaction and Neutral comment), accounting for 52.7%. The second most frequent pair is Positive-Positive, at 31.1%. At the same time, we find that the proportion of the Positive-Negative pair (i.e., Positive emoji reaction and Negative comment) is not as high as expected, with 11.6% identified, ranking as the third most common pair. Conversely, the Negative-Positive pair (i.e., Negative emoji reaction and Positive comment) accounts for only 0.2%. Significant Testing. The Pearson's Chi-Squared test suggests that there is a significant relationship between comment sentiments and emoji reaction sentiments, i.e., p-value \(<\) 0.001, indicating that our proposed hypothesis (H3) is established. Reasons behind sentiment inconsistency. Nine reasons are identified from our open coding process (i.e., 100 samples). Table 8 presents the frequency of these reasons, with their representative samples. We find that _Acknowledge a mistake_ is the most common reason behind sentiment inconsistency, accounting for 22%. The next most common reason is _Acknowledge a proposal_, at 15%. The rare reasons include _Irony_ and _Disagree with optimistic proposal_ (1% each). On the other hand, we observe that the rate of False-Positive is relatively high, accounting for 23%. False-Positive refers to cases where, after manual validation, the sentiment of the comment turns out not to be correctly labeled by the tool. Results indicate that a high percentage of emoji sentiments are consistent with their comments, with only 11.8% (i.e., 11.6% + 0.2%) being inconsistent. The statistical test confirms that there is a significant relationship between comment sentiments and emoji reaction sentiments. We then identify nine reasons behind these inconsistencies (acknowledgments, confirmations, counters, irony, etc.). A positive acknowledgment of a negative mistake is the most frequent reason (i.e., 22%). Figure 4: Pair-wise distribution between emoji reactions and comment sentiments (RQ4). The annotations in the plot are percentages. ## 4 Discussion We now discuss the implications of our empirical findings, provide possible suggestions to facilitate code review and social communication, and outline the limitations and potential research topics.
Implications for project development on GitHub. Developers on GitHub should be aware that using an emoji reaction is correlated with the review process. Our RQ1 results indicate that emoji reactions have a statistical correlation with review time, and PRs with emoji reactions overall tend to take a longer review time than the ones with no emoji reactions. It is important to note that we do not claim causality, as there could be other confounding factors that play a role in the regression models. We speculate that PRs with emoji reactions are likely to involve complex contexts that require further discussion and may involve more participants compared to the ones that do not contain emojis. Usually, simple PRs are handled quickly, even when they do not have reactions. Hence, such a finding does not imply that emoji reactions have a negative effect on review efficiency, as the efficiency could be affected by the nature of the PRs. A potential explanation may be related to findings observed in the context of issue reports. Prior work (Borges et al., 2019) reported that issues with reactions usually take more time to be handled and have longer discussions, especially for complex bugs and enhancements. Furthermore, due to longer discussions, cognitive loads combined with textual and visual expressions may increase. For example, Tigwell and Flatla (2016); Bai et al. (2019) pointed out that differences in how emoji are understood could introduce ambiguities in the interpretation of communication and lead to inefficiency. The usage of emoji reactions signals an already friendly environment on GitHub. This is evident from RQ3, where positive emoji reactions are commonly used in PR comments across different intentions, especially THUMBS UP, which accounts for 67.2% on average (Table 5). Negative emoji reactions are rarely used, ranging only from 0.5% to 1.5%. Our RQ3 results suggest that contributors tend to react with positive emojis to comments that share knowledge or call for opinions and evaluations. Specifically, we find that the intentions of information giving and aspect evaluation are more likely to receive emoji reactions (25.0% and 19.4%, respectively, as shown in Figure 3). Emoji reactions on GitHub have the potential to reduce toxicity. In the analysis of reasons behind sentiment inconsistency (RQ4), results show that in most cases, emoji reactions are used as a kind of encouragement in reaction to negative comments. Even in cases where the reaction is not consistent with the sentiment of the comment, as shown in Table 8, the most prevalent reason is to acknowledge a mistake by replying to a negative comment with a positive emoji reaction. This indicates that emoji reactions can reduce or counter negative sentiments, and potentially lighten tense situations. The negative effect of toxicity in open-source projects is widely studied. The literature shows evidence that some newcomers disengage due to negative interactions (Qiu et al., 2019), and frequent contributors often suffer from stress and burnout when facing toxic interactions (Raman et al., 2020). Along the same line of supporting social communication, our results demonstrate the positive role emoji reactions play in the collaboration environment. Suggestions. We provide several possible suggestions for the stakeholders.
On the one hand, contributors should not be afraid of expressing negative or contentious comments, as emoji reactions have the potential to defuse toxicity and can lighten tense situations. Newcomers, in particular, should not interpret the absence of immediate emoji reactions as a sign of hostility. Instead, emoji reactions could be considered an indicator of familiarity and positiveness in the environment. On the other hand, in addition to defusing tense situations, reviewers are encouraged to keep using emoji reactions to properly express their sentiments in order to construct a friendly and positive interaction with review participants. In terms of governance and maintaining consistency, GitHub projects are encouraged to document, broadcast, and mentor the proper usage of emoji reactions for the sustained livelihood of Open Source projects. Limitations and potential research topics. Our research suggests future directions for code review efficiency, the health of communication channels, and the attraction, onboarding, and sustainability of contributions to Open Source Software. To address the limitation of the statistical models, future work should further investigate the causality of the relationship between emoji reactions and review time to understand whether or not the longer time has negative or positive effects on the project. RQ2 results show that non first-time contributors are more likely to receive emoji reactions, hence future work could further explore which features contribute to this. Several studies demonstrated that the usage of emojis could be affected by developer gender, age, and cultural differences (Guntuku et al., 2019; Herring and Dainas, 2020). Therefore, another potential direction is to look into whether such factors play a role in the usage of emojis within the scope of code review. ## 5 Threats to Validity In this section, we disclose the threats to the validity of our study. External validity. This validity refers to the generalization ability of our results. We conduct an empirical study on 1,850 GitHub repositories across seven popular languages. A threat may arise from the number of studied repositories and the choice of languages. To mitigate this threat, we construct a representative dataset, with a confidence level of 95% and a confidence interval of 5, by taking each language-based repository population into account. Meanwhile, these seven popular languages are commonly studied in prior work (Hata et al., 2019). We believe that our study on these 1,850 repositories is sufficient to shed light on the role of emoji reactions in the code review process. Construct validity. This validity denotes the degree to which our measurements capture what they are intended to measure. We summarize four threats. The first threat may exist in the bot removal in the data preparation process. We rely on the combination of two bot detection techniques (the "bot" suffix and a list of ground-truth bots). Admittedly, we cannot ensure that all GitHub bots were removed. However, these two bot detection techniques are recognized as highly accurate in recent research. Second, identifying the purpose of PRs in RQ1 may introduce a potential threat. In fact, PR purposes include not only documentation, bug fixing, and feature introduction, but also others such as refactoring activity. However, there is no reliable method to automatically identify this activity.
Hence, along the same line as the existing work (Thongtanunam et al., 2017; McIntosh et al., 2014; Wang et al., 2021), we decided to adopt the same automated identification method. Third, other diversity-related factors (e.g., gender, age, and cultural differences) could potentially influence the findings on the usage of emoji reactions in RQ3 and RQ4. Thus, future work should further investigate the effect of these factors in the context of code review. The last threat could exist in the RQ4 qualitative analysis used to categorize the reasons behind the sentiment inconsistency. To mitigate this threat, we performed open coding to ensure that all potential codes were captured, carefully following the guidelines provided by the literature. For instance, we continued coding until no new codes emerged, reaching saturation at 100 samples. Internal validity. Internal validity refers to the approximate truth of our inferences. We identify two potential threats. The first threat results from the selected classifier or tool. In RQ3, to label the comment intention, we use an automatic classifier that is considered SOTA in the context of intention mining. The comment intention could be mislabeled due to classifier uncertainty. Thus, to mitigate this threat, we performed a sanity check on 30 samples and the accuracy was promising, i.e., 80%. In RQ4, the comment sentiment could likewise be mislabeled since we rely on the SentiStrength-SE tool. However, the SentiStrength-SE tool is recognized as one of the SOTA sentiment tools and is widely adopted in the SE domain. The second threat could exist in the selection of statistical significance testing techniques. To validate our three proposed hypotheses, we apply several kinds of significance testing techniques (i.e., Z-test, Pearson's Chi-Squared test). The inferred effects may vary with the selection of these techniques. We are, however, confident, as these techniques are commonly used in empirical studies. ## 6 Related Work This section situates this study with respect to the related work, including the effect of non-technical factors in code review, and modern communication in software development. ### Effect of Non-technical Factors in Code Review Software code review is established as one of the best practices to improve software quality (Rigby and Storey, 2011). Tool-based code reviews, a lightweight variation of traditional code reviews, have been widely studied in the last decade. A number of studies point out that the outcome of tool-based review is affected not only by technical factors, but also by non-technical factors (Baysal et al., 2013; Baysal et al., 2016). In terms of developer participation, McIntosh et al. (2014) reported that developer participation in code review is also associated with the incidence of post-release defects. Similarly, Kononenko et al. (2015) found that personal metrics and participation metrics shared a relationship with the quality of the review process. Review comments are one of the main building blocks of code review and are crucial to review quality (Sadowski et al., 2018). Jiang et al. (2013) showed that the amount of discussion is a significant indicator of whether a patch will be accepted for integration into the Linux kernel. El Asri et al. (2019) empirically studied the impact of sentiment embodied within developers' comments and observed that reviews with negative comments on average took more time to complete than reviews with positive/neutral comments. Hirao et al.
(2020) studied divergent review scores in the review discussion and suggested that divisive patches are often integrated through discussion, integration timing, and careful revision. Zhang et al. (2022) conducted an empirical study to systematically investigate the factors that affect pull request latency. They found that comments have a significant impact on the latency of pull requests. With the wider usage of emoji reactions in code review (i.e., pull requests), we argue that, as a non-technical factor, emoji reactions may also have an effect on the review process. ### Modern Communication in Software Development For purposes of collaboration and communication between developers, communication channels (e.g., issue reports, mailing lists, pull requests, and GitHub discussions) are integrated into or supplement development tools (Chui et al., 2012; Tantisuwankul et al., 2019). Various non-textual information is embedded in these communication channels to enrich knowledge sharing between developers. Nayebi (2020) showed an increasing trend of image usage and that images facilitate Q&A communication on Stack Overflow. Recent studies also investigate the usage of such visual content in issue reports (Agrawal et al., 2022; Kuramoto et al., 2022). Link sharing is another common modern communication strategy. Hata et al. (2019) investigated the characteristics and phenomena of links in source code comments and found that referencing links is prevalent in source code comments (e.g., links to software homepages, articles, or tutorials). Wang et al. (2021b) observed that the practice of link sharing has a significant correlation with the code review time. Fu et al. (2022) found that code snippets are not frequently used in code review; however, code snippets aimed at making suggestions are actively accepted by patch authors. Developers also use emojis, visual symbols in computer-mediated communication, to represent their opinions (Bai et al., 2019). Prior work has shown that emoji use in the East and the West reveals recognizable normative and culture-specific patterns (Guntuku et al., 2019). Claes et al. (2018) also observed that various types of emojis are used in the two studied issue trackers, by different developers (i.e., western developers use more emojis than eastern developers), and at different times (e.g., negative emojis are used during weekends). Herring and Dainas (2020) pointed out that females use emoji and emoticons more frequently than males do. Borges et al. (2019) explored the usage of GitHub reactions in issue reports. They found that the usage of emoji reactions is steadily growing and, furthermore, that emoji reactions are associated with more discussion for bug and enhancement issue reports. To support visual representations of sentiments, Venigalla and Chimalakonda (2021) proposed a plugin, StackEmo, to augment comments on Stack Overflow with emojis. Chen et al. (2021) learned representations of SE-related texts through emoji prediction by leveraging Tweets and GitHub posts containing emojis. Rong et al. (2022) investigated the usage of emojis in software development. Their results showed that emojis tend to be used during every phase of a conversation on GitHub regardless of the developers' roles. Although the phenomenon of emoji usage in textual content has been commonly addressed, it is still unclear what role the emoji plays as a reaction during software development (i.e., in the code review process).
Our study complements this body of knowledge in the context of modern communication. ## 7 Deviations from the Registered Report Our study requires unavoidable deviations from the research protocol (Son et al., 2021), which we have carefully documented below: 1. _Studied Repository Dataset_. Due to GitHub API download restrictions within our given time frame, we decided to select a representative sample of the studied repositories instead of the full dataset. Hence, we constructed a representative repository dataset that contains 1,850 repositories across seven languages, instead of the 25,925 repositories outlined in the protocol. As described in the data collection (Section 3), this is a statistical sample that was systematically collected. 2. _Research Approach_. We outline three deviations related to the methodology, which have a minimal effect on the results. First, in the explanatory variable selection (RQ1), we added another two variables (i.e., Language and Description length) that showed an effect in prior work. Meanwhile, we removed the variable "commit size" and instead introduced two related variables, "# Added lines" and "# Deleted lines". In addition, we altered the calculation of the dependent variable "Review time" by taking into account the PR closed time. Second, for the proportion of PRs submitted by first-time contributors (RQ2), we did not follow a control study where we planned to construct a balanced control group. Instead, we divided all PRs that contain emoji reactions into the ones by first-time contributors and the other ones by non first-time contributors. Third, during the manual classification of reasons behind sentiment inconsistency (RQ4), we did not calculate the Kappa score as the open coding process does not require it (Hirao et al., 2019; Xiao et al., 2021). 3. _Hypothesis Testing_. While conducting the experiments, we realized that the hypotheses and corresponding statistical tests reported in the protocol were not appropriate for RQs 2-4. For RQ2, the result was in the form of binary categories, thus we switched to the one-proportion Z-test. For RQ3, we changed the prior hypothesis (H2) to the hypothesis "There is a significant relationship between comment intentions and emoji reaction kinds". To validate it, we used the Pearson's Chi-Squared test. For RQ4, we changed the hypothesis (H3) to the hypothesis "There is a significant relationship between comment sentiments and emoji reaction sentiments". Similarly, we adopted the Pearson's Chi-Squared test to validate it. ## 8 Conclusion In this work, we conducted an empirical study on 1,850 repositories to investigate the role of emoji reactions in GitHub pull requests. Specifically, we analyzed the following four aspects: (i) the correlation between emoji reactions and review time, (ii) whether first-time contributors are likely to receive emoji reactions, (iii) the relationship between comment intentions and emoji reactions, and (iv) the consistency between comment sentiments and emoji reaction sentiments. The results show that (i) the number of emoji reactions has a significant correlation with the review time; (ii) a PR submitted by a first-time contributor is less likely to receive emoji reactions; (iii) PR comments with the intention of information giving are more likely to receive emoji reactions; and (iv) inconsistent pairs (Positive-Negative and Negative-Positive) account for 11.8%, and acknowledging a mistake is the most common reason for sentiment inconsistency.
These empirical results highlight the role emoji reactions play in collaborative communication and specifically suggest that the usage of emoji reactions signals an already positive environment on GitHub and has the potential to reduce toxicity. Future research directions include investigating the causality between emoji reactions and review time, understanding why such reviews take longer to complete, and analyzing diversity-related perspectives of emoji reactions in the scope of code review. **Acknowledgements** This work is supported by Japanese Society for the Promotion of Science (JSPS) KAKENHI Grant Numbers 18H04094, 20K19774, and 20H05706. **Data Availability** The datasets generated and/or analysed during the current study are available in the GitHub repository, [https://github.com/NAIST-SE/EmojiReaction_PR](https://github.com/NAIST-SE/EmojiReaction_PR). ## Declarations **Conflict of Interests** The authors declare that Raula Gaikovina Kula and Yasutaka Kamei are members of the EMSE Editorial Board. All co-authors have seen and agree with the contents of the manuscript and there is no financial interest to report.
2310.03960
Steklov eigenvalues of nearly hyperspherical domains
We consider Steklov eigenvalues of nearly hyperspherical domains in $\mathbb{R}^{d + 1}$ with $d\ge 3$. In previous work, treating such domains as perturbations of the ball, we proved that the Steklov eigenvalues are analytic functions of the domain perturbation parameter. Here, we compute the first-order term of the asymptotic expansion and show that the first-order perturbations are eigenvalues of a Hermitian matrix, whose entries can be written explicitly in terms of the Pochhammer's and Wigner $3j$-symbols. We analyse the asymptotic expansion and show the following isoperimetric results among domains with fixed volume: (1) for an infinite subset of Steklov eigenvalues, the ball is not optimal, and (2) for a different infinite subset of Steklov eigenvalues, the ball is a stationary point.
Chee Han Tan, Robert Viator
2023-10-06T01:03:45Z
http://arxiv.org/abs/2310.03960v1
# Steklov eigenvalues of nearly hyperspherical domains ###### Abstract. We consider Steklov eigenvalues of nearly hyperspherical domains in \(\mathbb{R}^{d+1}\) with \(d\geq 3\). In previous work, treating such domains as perturbations of the ball, we proved that the Steklov eigenvalues are analytic functions of the domain perturbation parameter. Here, we compute the first-order term of the asymptotic expansion and show that the first-order perturbations are eigenvalues of a Hermitian matrix, whose entries can be written explicitly in terms of the Pochhammer's and Wigner \(3j\)-symbols. We analyse the asymptotic expansion and show the following isoperimetric results among domains with fixed volume: (1) for an infinite subset of Steklov eigenvalues, the ball is not optimal, and (2) for a different infinite subset of Steklov eigenvalues, the ball is a stationary point. Key words and phrases: Steklov eigenvalues, perturbation theory, hyperspherical harmonics, isoperimetric inequality 2010 Mathematics Subject Classification: 35C20, 35P05, 41A58 ## 1. Introduction Let \(\Omega\subset\mathbb{R}^{d+1}\) be a bounded domain with \(d\geq 1\). The Steklov eigenvalue problem for \((\lambda,u)\) on \(\Omega\) is given by \[\Delta u =0\quad\quad\text{in }\Omega, \tag{1a}\] \[\partial_{\mathbf{n}}u =\lambda u\quad\text{on }\partial\Omega, \tag{1b}\] where \(\Delta\) is the Laplacian acting on \(H^{1}(\Omega)\), \(\partial_{\mathbf{n}}u=\nabla u\cdot\mathbf{n}\) is the derivative in the direction of the unit outward normal \(\mathbf{n}\) on the boundary \(\partial\Omega\), and \(\lambda\) is the spectral parameter. It is well-known that the Steklov spectrum is discrete as long as the trace operator \(T\colon H^{1}(\Omega)\to L^{2}(\partial\Omega)\) is compact [1]. Moreover, the eigenvalues are real and we enumerate them, counting multiplicity, in increasing order \[0=\lambda_{0}(\Omega)<\lambda_{1}(\Omega)\leq\lambda_{2}(\Omega)\leq\cdots \nearrow\infty.\] The Steklov eigenvalue problem has received considerable attention in the literature; see the survey papers [1, 2] and references therein. The Steklov eigenvalue problem was first introduced by Vladimir Steklov in [10] to describe the stationary heat distribution in a body \(\Omega\) whose heat flux through the boundary is proportional to the temperature. For planar domains, the Steklov eigenvalues are the squares of the natural frequencies of a vibrating free membrane with all its mass concentrated along the boundary [1, p. 95]. Steklov eigenvalues also have applications in optimal material design for both electromagnetism and torsional rigidity [13, 14]. Recently, Cakoni et al. [1] used Steklov eigenvalues in nondestructive testing, where they established a crucial relationship between small changes in the (possibly complex valued) refractive index of a scattering object and the corresponding change in the eigenvalue of a modified Steklov problem. For this problem, numerical results in [1] revealed that a localised defect of the refractive index in a disc perturbs only a small number of modified Steklov eigenvalues. Isoperimetric inequalities for Steklov eigenvalues have been explored since the mid-twentieth century. The first major result was obtained by Weinstock in his 1954 seminal paper [10], where he showed that the disc uniquely maximises the first nontrivial perimeter-normalised Steklov eigenvalue \(\lambda_{1}(\Omega)|\partial\Omega|\) among all bounded simply connected planar domains with smooth boundary.
For higher eigenvalues, Girouard and Polterovich showed that the \(n\)th perimeter-normalised Steklov eigenvalue is maximised in the limit by a sequence of simply connected planar domains degenerating to the disjoint union of \(n\) identical discs. At the same time, it is known that Weinstock's result fails for non simply-connected planar domains [12, Example 4.2.5]. In dimension \(3\) or higher, Fraser and Schoen [10] showed that Weinstock's result fails for general contractible domains, but Bucur et al. [1, Theorem 3.1] showed that Weinstock's result holds for all bounded convex domains with Lipschitz boundary. While it is natural to consider the maximisation of Steklov eigenvalues with prescribed perimeter (because the spectral parameter \(\lambda\) appears on the boundary \(\partial\Omega\)), in this paper we focus on the maximisation of Steklov eigenvalues with prescribed volume. For \(\Omega\subset\mathbb{R}^{d+1}\), let \(\Lambda(\Omega)\coloneqq\lambda(\Omega)\cdot|\Omega|^{\frac{1}{d+1}}\) denote the volume-normalised Steklov eigenvalue. Brock proved that the ball uniquely maximises \(\Lambda_{1}(\Omega)\) for bounded Lipschitz domains \(\Omega\subset\mathbb{R}^{d+1}\) in all dimensions. For higher eigenvalues, Bogosel, Bucur, and Giacomini [1] obtained the existence and regularity results for the shape optimiser for \(\Lambda_{n}(\Omega)\), \(n\geq 2\), on bounded Lipschitz domains. In dimension \(2\), numerical results from [1, 1, 1] suggested that the optimal domain is unique (up to dilations and rigid transformations), has \(n\)-fold symmetry, and has at least one axis of symmetry. In particular, the ball is not a maximiser for an infinite subset of Steklov eigenvalues for planar domains; this was confirmed for reflection-symmetric domains in [14]. Motivated by the asymptotic work of Lord Rayleigh [10] and Wolf and Keller [11] on the minimisation of Laplace-Dirichlet eigenvalues on planar domains, Viator and Osting adopted their perturbative approach to study Steklov eigenvalues on reflection-symmetric nearly circular planar domains [14] and nearly spherical domains [14]. In dimension \(3\), Viator and Osting [14, Theorem 1.1] proved that for \(n=1,2,\dots,\) \(\Lambda_{(n+1)^{2}-1}\) is not maximised by the ball but \(\Lambda_{n^{2}}\) is stationary for a ball, suggesting that the ball is a natural candidate for maximiser of \(\Lambda_{n^{2}}\). However, recent numerical results from Antunes [1] suggest that the ball maximises \(\Lambda_{4}\) but not \(\Lambda_{9}\) and \(\Lambda_{16}\). The same numerical results also suggested that the optimal domain for \(\Lambda_{n}\) seems to have \(n\) "buds" and some of the optimal domains seem to have symmetries that can be related with Platonic solids. Tuning of mixed Steklov-Neumann boundary conditions has also been recently studied by Ammari, Imeri, and Nigam [1], where an algorithm was designed to generate the proper mixed boundary conditions necessary to obtain desired resonance effects. Besides shape optimisation and isoperimetric results, there have been numerous recent results connecting Steklov eigenvalues to free-boundary minimal surfaces, inverse problems, and more; see [15] for an extensive, though not exhaustive, review of recent work in Steklov eigenvalues.
### Nearly hyperspherical domains Given \(d\geq 3\), we consider the Steklov eigenvalue problem on a _nearly hyperspherical domain_ \(\Omega_{\varepsilon}\subset\mathbb{R}^{d+1}\) in hyperspherical coordinates \((r,\hat{\theta})\), where \(\Omega_{\varepsilon}\) has the form \[\Omega_{\varepsilon}=\left\{(r,\hat{\theta}):0\leq r\leq 1+\varepsilon \rho(\hat{\theta}),\,\hat{\theta}\in S^{d}\right\},\ \ \rho(\hat{\theta})=\sum_{p=0}^{\infty}\sum_{q=1}^{N(d,p)}A_{p,q}Y_{p,q}(\hat{ \theta}). \tag{2}\] Here, \(\varepsilon\geq 0\) is a small perturbation parameter, \(\rho\in C^{1}(S^{d})\) is a perturbation function which we expand in the basis of real hyperspherical harmonics (see Section 2.2), and \(S^{d}\subset\mathbb{R}^{d+1}\) is the \(d\)-dimensional unit sphere. For \(\varepsilon=0\), \(\Omega_{0}\) is the \((d+1)\)-dimensional unit ball \(B\subset\mathbb{R}^{d+1}\) and the eigenvalues are nonnegative integers \(\lambda_{\ell,m}=\ell\), with multiplicity \(N(d,\ell)\) given by \[N(d,0)=1\quad\text{and}\quad N(d,\ell)=\binom{d+\ell}{d}-\binom{d+\ell-2}{d},\ \ \ell\geq 1. \tag{3}\] The corresponding eigenfunctions in hyperspherical coordinates \((r,\hat{\theta})\) are given by \[u_{\ell,m}(r,\hat{\theta})=r^{\ell}Y_{\ell}^{m}(\hat{\theta}),\ \ \ell\in\mathbb{N}=\{0,1,2,\dots\},\ \ 1\leq m\leq N(d,\ell),\] where \(Y_{\ell}^{m}(\hat{\theta})\) is a _complex hyperspherical harmonic_ of degree \(\ell\) on \(S^{d}\). Viator and Osting proved that the Steklov eigenvalues \(\lambda^{\varepsilon}\) of nearly circular (\(d=1\)) and nearly spherical (\(d=2\)) domains are analytic with respect to \(\varepsilon\) [23]. This analyticity result was recently extended to nearly-hyperspherical domains [14]. The proof relies on the fact that the Steklov eigenvalues can be interpreted as the eigenvalues of the Dirichlet-to-Neumann map \(G_{\rho,\varepsilon}\colon H^{1/2}(\partial\Omega_{\varepsilon})\to H^{-1/2} (\partial\Omega_{\varepsilon})\). ### Main results In previous work [23, 24], Viator and Osting used perturbation methods to study the asymptotic expansion of Steklov eigenvalues \(\lambda^{\varepsilon}\) for reflection-symmetric nearly circular domains and nearly spherical domains. Moreover, these asymptotic results were used to establish local versions of the isoperimetric inequalities for certain Steklov eigenvalues. In this paper we extend their results to nearly hyperspherical domains. Given \(d\geq 3\), we recall the volume-normalised Steklov eigenvalue \(\Lambda(\Omega)\coloneqq\lambda(\Omega)\cdot|\Omega|^{\frac{1}{d+1}}\). For \(k\in\mathbb{Z}^{+}\), define the index \(N_{d,k}=\sum_{\ell=1}^{k-1}N(d,\ell)\). **Theorem 1.1**.: _Let \(d\geq 3\) and \(k\in\mathbb{Z}^{+}\). Then \(\Lambda_{1+N_{d,k}}\) is stationary in the sense that, for every perturbation function \(\rho\in C^{2}(S^{d})\), the map \(\varepsilon\mapsto\Lambda_{1+N_{d,k}}(\Omega_{\varepsilon})\) is nonincreasing in \(|\varepsilon|\) for \(|\varepsilon|\) sufficiently small._ Theorem 1.1 suggests that the ball is a natural candidate for maximiser of \(\Lambda_{1+N_{d,k}}\) in dimensions \(d+1\geq 4\). However, the recent numerical result from Antunes for \(d+1=4\) suggested that \(\Lambda_{1+N_{3,2}}=\Lambda_{5}\) is maximised by the ball. On the other hand, we show that the \((d+1)\)-dimensional ball does not maximise another infinite subset of Steklov eigenvalues. **Theorem 1.2**.: _Let \(d\geq 3\) and \(k\in\mathbb{Z}^{+}\).
Then \(\Lambda_{N_{d,k+1}}\) is not maximised by the \((d+1)\)-dimensional ball._ ### Outline This paper is organised as follows. In Section 2, we review hyperspherical coordinates and hyperspherical harmonics, and compute the first-order asymptotic expansions for geometric quantities related to \(\Omega_{\varepsilon}\). In Section 3, we derive the first-order asymptotic expansion for Steklov eigenvalues of \(\Omega_{\varepsilon}\); see Theorem 3.1. Section 4 and Section 5 are devoted to proving Theorem 1.1 and Theorem 1.2, respectively. In Section 5, we also include the asymptotic result for the special case where the domain perturbation function is given by \(\rho=Y_{p,q}(\hat{\theta})\); see Theorem 5.4. ## 2. Preliminaries In this section, we first review vector calculus in hyperspherical coordinates. We then define and record several important properties of the hyperspherical harmonics. In particular, we derive the addition theorem for the derivatives of hyperspherical harmonics, which will be crucial in proving Theorem 1.2. Finally, we compute the first-order asymptotic expansions for the volume of \(\Omega_{\varepsilon}\) and the unit outward normal vector \(\mathbf{n}_{\rho,\varepsilon}\) to \(\partial\Omega_{\varepsilon}\). ### Hyperspherical coordinates in \(\mathbb{R}^{d+1}\) Let \((x_{1},x_{2},\dots,x_{d+1})\) denote the \((d+1)\)-dimensional Cartesian coordinates. The \((d+1)\)-dimensional hyperspherical coordinates \((r,\theta_{1},\theta_{2},\dots,\theta_{d})\) are defined by the following equations: \[x_{d+1} =r\cos\theta_{d},\] \[x_{d} =r\sin\theta_{d}\cos\theta_{d-1},\] \[x_{d-1} =r\sin\theta_{d}\sin\theta_{d-1}\cos\theta_{d-2},\] \[\vdots\] \[x_{3} =r\sin\theta_{d}\sin\theta_{d-1}\dots\cos\theta_{2},\] \[x_{2} =r\sin\theta_{d}\sin\theta_{d-1}\dots\sin\theta_{2}\sin\theta_{1},\] \[x_{1} =r\sin\theta_{d}\sin\theta_{d-1}\dots\sin\theta_{2}\cos\theta_{1},\] where the azimuth \(0\leq\theta_{1}=\phi<2\pi\) and inclinations \(0\leq\theta_{2},\theta_{3},\dots,\theta_{d}\leq\pi\) define a \((d+1)\)-dimensional sphere with radius \(r\geq 0\). The hyperspherical coordinates are an orthogonal curvilinear coordinate system in \(\mathbb{R}^{d+1}\). Define \(\theta_{0}:=r\). The associated metric tensor \(g\) is diagonal with components \[g_{ij}=\sum_{k=1}^{d+1}\frac{\partial x_{k}}{\partial\theta_{i}}\frac{\partial x_{k}}{\partial\theta_{j}}=h_{i}^{2}\delta_{i,j},\ \ 0\leq i,j\leq d, \tag{4}\] where the scale factors are given by \(h_{0}=1\) and \(h_{i}=r\prod_{k=i+1}^{d}\sin\theta_{k}\) for \(i=1,2,\dots,d\); the latter includes the empty product which gives \(h_{d}=r\). Here, \(\delta_{i,j}\) denotes the usual Kronecker delta. Let \(\hat{\mathbf{r}}\), \(\hat{\mathbf{\theta}}_{1}\), \(\hat{\mathbf{\theta}}_{2}\),..., \(\hat{\mathbf{\theta}}_{d}\) be orthonormal hyperspherical basis vectors and define \(\eta_{j}=h_{j}/r\) for \(j=1,2,\dots,d\). The gradient operator in hyperspherical coordinates is given by \[\nabla=\frac{\partial}{\partial r}\hat{\mathbf{r}}+\frac{1}{r}\nabla_{S^{d}},\] where \(\nabla_{S^{d}}\) is the gradient on \(S^{d}\): \[\nabla_{S^{d}}=\sum_{j=1}^{d}\frac{1}{\eta_{j}}\frac{\partial}{\partial\theta_{j}}\hat{\mathbf{\theta}}_{j}. 
\tag{5}\] The Laplacian in hyperspherical coordinates is given by \[\Delta=\frac{1}{r^{d}}\frac{\partial}{\partial r}\left(r^{d}\frac{\partial}{ \partial r}\right)+\frac{1}{r^{2}}\Delta_{S^{d}},\] where \(\Delta_{S^{d}}\) is the spherical Laplacian (Laplace-Beltrami operator) on \(S^{d}\): \[\Delta_{S^{d}}=\sum_{j=1}^{d}\frac{1}{\eta_{j}^{2}\sin^{j-1}(\theta_{j})}\frac {\partial}{\partial\theta_{j}}\left(\sin^{j-1}(\theta_{j})\frac{\partial}{ \partial\theta_{j}}\right). \tag{6}\] The volume element in hyperspherical coordinates is given by \(dV=r^{d}dr\,d\sigma_{d}\), where \(d\sigma_{d}\) is the surface element over \(S^{d}\): \[d\sigma_{d}(\hat{\theta})=\left(\prod_{j=2}^{d}\sin^{j-1}(\theta_{j})\right)d \theta_{1}\,d\theta_{2}\dots d\theta_{d}.\] _Remark 2.1_.: Throughout this paper, we denote with \(\partial_{r}\) and \(\partial_{j}\) the partial derivative with respect to \(r\) and \(\theta_{j}\), respectively, for \(j=1,2,\dots,d\). ### Hyperspherical harmonics on \(S^{d}\) For \(\ell\in\mathbb{N}=\{0,1,2,\dots\}\), let \(\mathbf{H}_{\ell}^{d}\) denote the space of all hyperspherical harmonics of order \(\ell\) on \(S^{d}\). The dimension of \(\mathbf{H}_{\ell}^{d}\) is the same as the multiplicity \(N(d,\ell)\) of the Steklov eigenvalue \(\lambda=\ell\) of the \((d+1)\)-dimensional unit ball \(B\). Let \(\{Y_{\ell}^{m}\}_{m=1}^{N(d,\ell)}\) be an orthonormal basis of \(\mathbf{H}_{\ell}^{d}\) with respect to the complex inner product on \(L^{2}(S^{d})\). The spaces \(\mathbf{H}_{\ell}^{d}\) are pairwise orthonormal, _i.e.,_ \[\int_{S^{d}}Y_{\ell}^{m}(\hat{\theta})\overline{Y_{k}^{n}(\hat{\theta})}\,d \sigma_{d}=\delta_{\ell,k}\delta_{m,n}.\] and the family \(\{Y_{\ell}^{m}\}_{\ell\in\mathbb{N},1\leq m\leq N(d,\ell)}\) forms a complete orthonormal basis of \(L^{2}(S^{d})\). It is well-known that each \(Y_{\ell}^{m}\) is an eigenfunction of the spherical Laplacian \(\Delta_{S^{d}}\) corresponding to the eigenvalue \(-\ell(\ell+d-1)\), _i.e.,_ \[\Delta_{S^{d}}Y_{\ell}^{m}(\hat{\theta})=-\ell(\ell+d-1)Y_{\ell}^{m}(\hat{ \theta}),\ \ 1\leq m\leq N(d,\ell). \tag{7}\] Multiplying (7) with \(\overline{Y_{k}^{n}}\) and integrating by parts over \(S^{d}\), we obtain the following integral identity: \[\int_{S^{d}}\nabla_{S^{d}}Y_{\ell}^{m}(\hat{\theta})\cdot\nabla_{S^{d}} \overline{Y_{\ell}^{n}(\hat{\theta})}\,d\sigma_{d}=\ell(\ell+d-1)\delta_{m,n}, \ \ 1\leq m,n\leq N(d,\ell). \tag{8}\] Let \(P_{n}^{\alpha}\) and \(C_{n}^{(\alpha)}\) denote the _associated Legendre polynomial_ and the _Gegenbauer (ultraspherical) polynomial_ of degree \(n\), respectively, which can be defined through the Rodrigues formulas (see [20, Table 18.5.1] and [21, Eqs. 6.27 & 6.29]): \[P_{n}^{\alpha}(z) =\frac{(-1)^{\alpha}}{2^{n}n!}(1-z^{2})^{\alpha/2}\frac{d^{n+ \alpha}}{dz^{n+\alpha}}\left(z^{2}-1\right)^{n},\] \[C_{n}^{(\alpha)}(z) =\frac{(2\alpha)_{n}}{(-2)^{n}\left(\alpha+\frac{1}{2}\right)_{n} n!}\left(1-z^{2}\right)^{-\alpha+\frac{1}{2}}\frac{d^{n}}{dz^{n}}\left(1-z^{2} \right)^{n+\alpha-\frac{1}{2}},\ \ \alpha>-\frac{1}{2},\,\alpha\neq 0,\] where \((z)_{n}=\Gamma(z+n)/\Gamma(z)\) is the Pochhammer's symbol; see [20, Eq. 5.2.5]. 
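As an illustrative sanity check (not part of the original analysis), the following SymPy sketch compares the Rodrigues formula for the Gegenbauer polynomials quoted above against SymPy's built-in \(C_{n}^{(\alpha)}\) at a few sample points in \((-1,1)\); the degrees and parameters tested are arbitrary choices.

```python
import sympy as sp

z = sp.symbols("z")

def gegenbauer_rodrigues(n, alpha):
    """C_n^{(alpha)}(z) assembled from the Rodrigues formula quoted in the text."""
    coeff = sp.RisingFactorial(2 * alpha, n) / (
        (-2) ** n * sp.RisingFactorial(alpha + sp.Rational(1, 2), n) * sp.factorial(n))
    body = (1 - z**2) ** (sp.Rational(1, 2) - alpha) * sp.diff(
        (1 - z**2) ** (n + alpha - sp.Rational(1, 2)), z, n)
    return coeff * body

# Compare against SymPy's built-in Gegenbauer polynomials at sample points.
sample_points = [sp.Rational(k, 10) for k in (-7, -3, 0, 4, 9)]
for n in range(4):
    for alpha in (1, sp.Rational(3, 2), 2):
        lhs, rhs = gegenbauer_rodrigues(n, alpha), sp.gegenbauer(n, alpha, z)
        assert all(abs(float((lhs - rhs).subs(z, pt))) < 1e-9 for pt in sample_points), (n, alpha)
print("Rodrigues formula matches sympy.gegenbauer on all tested cases.")
```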
We define the complex hyperspherical harmonics \(Y_{\ell}^{m}\) of degree \(\ell\) on \(S^{d}\) as \[Y_{\ell}^{m}(\hat{\theta})=\widetilde{Y}_{m_{2}}^{m_{1}}(\phi,\theta_{2}) \prod_{j=3}^{d}Y(\theta_{j};m_{j-1},m_{j}),\] where \(m\coloneqq(m_{1},m_{2},\cdots,m_{d-1})\) is any \((d-1)\)-tuple satisfying the inequality \[0\leq|m_{1}|\leq m_{2}\leq\cdots\leq m_{d-1}\leq m_{d}\coloneqq\ell, \tag{9}\] and \(\widetilde{Y}_{m_{2}}^{m_{1}}\) is the three-dimensional complex spherical harmonics \[\widetilde{Y}_{m_{2}}^{m_{1}}(\phi,\theta_{2})=\sqrt{\frac{(2m_{2}+1)}{4\pi} \frac{(m_{2}-m_{1})!}{(m_{2}+m_{1})!}}\,e^{im_{1}\phi}P_{m_{2}}^{m_{1}}(\cos \theta_{2}).\] The functions \(Y(\theta_{j};m_{j-1},m_{j})\) are real-valued and they are defined by (see [21, Section II]) \[Y(\theta_{j};m_{j-1},m_{j})=\frac{1}{\mu_{j}}\,(\sin\theta_{j})^{m_{j-1}}\,C_ {m_{j}-m_{j-1}}^{\left(m_{j-1}+\frac{j-1}{2}\right)}(\cos\theta_{j}),\ \ j=3,4,\dots,d,\] where \(\mu_{j}\) is the normalisation constant of \(Y(\theta_{j};m_{j-1},m_{j})\) (with respect to the measure \(\sin^{j-1}(\theta_{j})\,d\theta_{j}\)) satisfying \[\mu_{j}^{2}=\frac{4\pi\Gamma(m_{j}+m_{j-1}+j-1)}{2^{2m_{j-1}+j}(m_{j}-m_{j-1})! \left(m_{j}+\frac{j-1}{2}\right)\Gamma^{2}\left(m_{j-1}+\frac{j-1}{2}\right)}.\] Here, \(\Gamma(z)\) is the standard Gamma function and we define \(\Gamma^{2}(z)=\left[\Gamma(z)\right]^{2}\). The real hyperspherical harmonics \(Y_{\ell,m}\) of degree \(\ell\) on \(S^{d}\) can be defined in the same way as the three-dimensional real spherical harmonics \(\widetilde{Y}_{m_{2},m_{1}}\), _i.e.,_ \[Y_{\ell,m}(\hat{\theta})=\widetilde{Y}_{m_{2},m_{1}}(\phi,\theta_{2})\prod_{j=3 }^{d}Y(\theta_{j};m_{j-1},m_{j}),\] where \[\widetilde{Y}_{m_{2},m_{1}}(\phi,\theta_{2})=\left\{\begin{aligned} &\frac{i}{\sqrt{2}}\left[\widetilde{Y}_{m_{2}}^{m_{1}}(\phi, \theta_{2})-(-1)^{m_{1}}\widetilde{Y}_{m_{2}}^{-m_{1}}(\phi,\theta_{2})\right] &\text{if }m_{1}<0,\\ &\widetilde{Y}_{m_{2}}^{0}(\phi,\theta_{2})&\text{ if }m_{1}=0,\\ &\frac{1}{\sqrt{2}}\left[\widetilde{Y}_{m_{2}}^{-m_{1}}(\phi, \theta_{2})+(-1)^{m_{1}}\widetilde{Y}_{m_{2}}^{m_{1}}(\phi,\theta_{2})\right] &\text{if }m_{1}>0.\end{aligned}\right.\] It is straightforward to verify that the set of real hyperspherical harmonics are pairwise orthonormal on \(L^{2}(S^{d})\). For notational simplicity, throughout this paper we will suppress the dependence of \(\hat{\theta}\) on \(\rho\) and hyperspherical harmonics whenever appropriate. _Remark 2.2_.: Whenever we are counting all possible hyperspherical harmonics as \(1\leq m\leq N(d,\ell)\) for a fixed degree \(\ell\in\mathbb{N}\), this should be understood as counting over all tuples \(m=(m_{1},m_{2},\ldots,m_{d-1})\) satisfying the condition (9). Take for instance \(d=3\) and \(\ell=2\). We then have \(1\leq m\leq N(3,2)=9\) and the \(9\) possible tuples \((m_{1},m_{2})\) satisfying \(0\leq|m_{1}|\leq m_{2}\leq m_{3}=\ell=2\) are \[(0,0),\,(0,1),\,(0,2),\,(1,1),\,(1,2),\,(-1,1),\,(-1,2),\,(2,2),\,(-2,2).\] _Remark 2.3_.: For any \(\ell\in\mathbb{N}\), we will assume that the index \(m=1\) corresponds to the trivial tuple \((0,0,\ldots,0)\). With this convention, the constant hyperspherical harmonic is \(Y_{0}^{1}=Y_{0,1}=|S^{d}|^{-1/2}\). Another important result about hyperspherical harmonics is the addition theorem (see [11, Eq. 
51]), which states that \[\sum_{m=1}^{N(d,\ell)}Y_{\ell}^{m}(\hat{\theta})\overline{Y_{\ell}^{m}(\hat{ \theta}^{\prime})}=K(d,\ell)C_{\ell}^{\left(\frac{d-1}{2}\right)}(\hat{\mathbf{u}} \cdot\hat{\mathbf{u}}^{\prime}),\ \ K(d,\ell)\coloneqq\frac{N(d,\ell)}{|S^{d}|\,C_{\ell}^{\left(\frac{d-1}{2} \right)}(1)}, \tag{10}\] for any unit vectors \(\hat{\mathbf{u}},\hat{\mathbf{u}}^{\prime}\in S^{d}\) with corresponding angular coordinates \(\hat{\theta},\hat{\theta}^{\prime}\). Setting \(\hat{\mathbf{u}}=\hat{\mathbf{u}}^{\prime}\), we have \(\hat{\mathbf{u}}\cdot\hat{\mathbf{u}}=1\) and \[\sum_{m=1}^{N(d,\ell)}|Y_{\ell}^{m}(\hat{\theta})|^{2}=\frac{N(d,\ell)}{|S^{d }|}. \tag{11}\] We now establish the addition theorem for the partial derivatives of hyperspherical harmonics when \(\hat{\mathbf{u}}=\hat{\mathbf{u}}^{\prime}\). Our proof is inspired by [12]. **Theorem 2.4**.: _Let \(d\geq 3\) and \(\ell\in\mathbb{N}\). For all \(j=1,2,\ldots,d\), we have_ \[\sum_{m=1}^{N(d,\ell)}\frac{1}{\eta_{j}^{2}}|\partial_{j}Y_{\ell}^{m}(\hat{ \theta})|^{2}=(d-1)K(d,\ell)C_{\ell-1}^{\left(\frac{d+1}{2}\right)}(1).\] Proof.: For simplicity of notation, we write \(C_{\ell}^{\left(\frac{d-1}{2}\right)}(\hat{\mathbf{u}}\cdot\hat{\mathbf{u}}^{\prime})= C(\hat{\mathbf{u}}\cdot\hat{\mathbf{u}}^{\prime})=C(z)\). For any fixed \(j=1,2,\ldots,d\), differentiating (10) with respect to \(\theta_{j}\) first and then \(\theta_{j}^{\prime}\) yields \[\sum_{m=1}^{N(d,\ell)}\partial_{j}Y_{\ell}^{m}(\hat{\theta})\,\partial_{j^{\prime}}\overline{Y_{\ell}^{m}(\hat{\theta}^{\prime})}=K(d,\ell)\left[\frac{d^{2}C}{dz^{2}}\left[\hat{ \mathbf{u}}\cdot\partial_{j^{\prime}}\hat{\mathbf{u}}^{\prime}\right]\left[\partial_ {j}\hat{\mathbf{u}}\cdot\hat{\mathbf{u}}^{\prime}\right]+\frac{dC}{dz}\left[\partial_ {j}\hat{\mathbf{u}}\cdot\partial_{j^{\prime}}\hat{\mathbf{u}}^{\prime}\right]\right]. \tag{12}\] In the case of \(\hat{\mathbf{u}}=\hat{\mathbf{u}}^{\prime}\), we know that \(z=\hat{\mathbf{u}}\cdot\hat{\mathbf{u}}=1\) and this implies \(\hat{\mathbf{u}}\cdot\partial_{j}\hat{\mathbf{u}}=0\). Since \(\hat{\mathbf{u}}\in S^{d}\) can be written as \(\hat{\mathbf{u}}=\left(x_{1},x_{2},\ldots,x_{d+1}\right)/r\), computing \(|\partial_{j}\hat{\mathbf{u}}|^{2}\) gives \[|\partial_{j}\hat{\mathbf{u}}|^{2}=\frac{1}{r^{2}}\sum_{k=1}^{d+1}\left(\frac{ \partial x_{k}}{\partial\theta_{j}}\right)^{2}=\frac{h_{j}^{2}}{r^{2}}=\eta_{ j}^{2},\] thanks to (4). Consequently, setting \(\hat{\mathbf{u}}=\hat{\mathbf{u}}^{\prime}\) in (12) and rearranging yields \[\sum_{m=1}^{N(d,\ell)}\frac{1}{\eta_{j}^{2}}|\partial_{j}Y_{\ell}^{m}(\hat{ \theta})|^{2}=K(d,\ell)\frac{dC}{dz}\bigg{|}_{z=1}=K(d,\ell)\cdot(d-1)C_{\ell -1}^{\left(\frac{d+1}{2}\right)}(1),\] where we use the derivative formula [20, Eq. 18.9.19]. The desired result now follows. ### Asymptotic expansions for geometric quantities Let \(|S^{d}|\) and \(|B|=|S^{d}|/(d+1)\) denote the surface area of \(S^{d}\) and the volume of the \((d+1)\)-dimensional unit ball \(B\), respectively. 
Using the orthogonality of hyperspherical harmonics, we see from (2) that \[\int_{S^{d}}\rho(\hat{\theta})\,d\sigma_{d}=\int_{S^{d}}A_{0,1}Y_{0,1}\,d \sigma_{d}=A_{0,1}|S^{d}|^{1/2}.\] Thus, an asymptotic expansion for the volume of \(\Omega_{\varepsilon}\) is given by \[|\Omega_{\varepsilon}|=\int_{S^{d}}\int_{0}^{1+\varepsilon\rho( \hat{\theta})}r^{d}\,dr\,d\sigma_{d} =\frac{1}{d+1}\int_{S^{d}}\left(1+\varepsilon\rho(\hat{\theta}) \right)^{d+1}\,d\sigma_{d}\] \[=\frac{|S^{d}|}{d+1}+\varepsilon\int_{S^{d}}\rho(\hat{\theta})\, d\sigma_{d}+O(\varepsilon^{2})\] \[=|B|+\varepsilon A_{0,1}|S^{d}|^{1/2}+O(\varepsilon^{2}).\] In particular, we have that \[|\Omega_{\varepsilon}|^{\frac{1}{d+1}} =|B|^{\frac{1}{d+1}}+\varepsilon\left(\frac{1}{d+1}|B|^{\frac{1} {d+1}-1}A_{0,1}|S^{d}|^{1/2}\right)+O(\varepsilon^{2})\] \[=|B|^{\frac{1}{d+1}}+\varepsilon\left(\frac{A_{0,1}|B|^{\frac{1} {d+1}}}{|S^{d}|^{1/2}}\right)+O(\varepsilon^{2}). \tag{13}\] Next we find an asymptotic expansion for the unit outward normal vector \(\mathbf{n}_{\rho,\varepsilon}\) to \(\partial\Omega_{\varepsilon}\). By identifying \(\partial\Omega_{\varepsilon}\) as the zero level set of an implicit function, it can be shown that (see [14, Section 5]) \[\mathbf{n}_{\rho,\varepsilon}=\Big{(}(1+\varepsilon\rho)^{2}+\varepsilon^{2}| \nabla_{S^{d}}\rho|^{2}\Big{)}^{-1/2}\Big{[}(1+\varepsilon\rho)\hat{\mathbf{r}}- \varepsilon\nabla_{S^{d}}\rho\Big{]}.\] It follows that \[\mathbf{n}_{\rho,\varepsilon}=\Big{(}1-\varepsilon\rho+O(\varepsilon^{2}) \Big{)}\Big{[}\hat{\mathbf{r}}+\varepsilon\rho\,\hat{\mathbf{r}}-\varepsilon\nabla_{S ^{d}}\rho\Big{]}=\hat{\mathbf{r}}-\varepsilon\nabla_{S^{d}}\rho+O(\varepsilon^{2}). \tag{14}\] ## 3. An Asymptotic Expansion for Steklov Eigenvalues of Nearly Hyperspherical Domains In this section, we derive an asymptotic expansion for the Steklov eigenvalues \(\lambda(\varepsilon)\coloneqq\lambda^{\varepsilon}\) on a nearly hyperspherical domain \(\Omega_{\varepsilon}\) of the form in (2). Recall that the unperturbed eigenvalues of \(\Omega_{0}=B\) are the nonnegative integers \(\ell\in\mathbb{N}\), with corresponding eigenfunctions \(r^{\ell}Y_{\ell}^{m}(\hat{\theta})\), \(1\leq m\leq N(d,\ell)\). Following [13], for fixed positive integer \(k\in\mathbb{Z}^{+}\), we make the following perturbation ansatz in \(\varepsilon\) for a Steklov eigenpair \((\lambda_{k}^{\varepsilon},u_{k}^{\varepsilon})\) of \(\Omega_{\varepsilon}\) (not counting multiplicity): \[\lambda_{k}^{\varepsilon}=k+\varepsilon\lambda_{k}^{(1)}+O(\varepsilon^{2}), \tag{15a}\] \[u_{k}^{\varepsilon}(r,\hat{\theta})=\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(d,\ell)} \Big{(}\delta_{\ell,k}\alpha_{m}+\varepsilon\beta_{\ell,m}+O(\varepsilon^{2}) \Big{)}r^{\ell}Y_{\ell}^{m}(\hat{\theta}). \tag{15b}\] Note that we cannot apriori determine the coefficients \(\alpha_{m}\) that will select the \(O(1)\) eigenfunction from the \(N(d,k)\)-dimensional eigenspace. The ansatz (15b) satisfies (1a) exactly and we will determine the eigenvalue perturbation \(\lambda_{k}^{(1)}\) and the coefficients \(\alpha_{m}\) and \(\beta_{\ell,m}\) so that the boundary condition (1b) is satisfied. 
Using the gradient (5) in hyperspherical coordinates, we have that \[\nabla u_{k}^{\varepsilon}(r,\hat{\theta})=\sum_{\ell=0}^{\infty}\sum_{m=1}^{ N(d,\ell)}\Big{(}\delta_{\ell,k}\alpha_{m}+\varepsilon\beta_{\ell,m}+O( \varepsilon^{2})\Big{)}r^{\ell-1}\,\hat{\boldsymbol{v}}_{\ell,m}, \tag{16}\] where \[\hat{\boldsymbol{v}}_{\ell,m}=\ell Y_{\ell}^{m}\hat{\boldsymbol{r}}+\nabla_{S ^{d}}Y_{\ell}^{m}. \tag{17}\] The boundary condition (1b) reads \[\nabla u_{k}^{\varepsilon}\cdot\mathbf{n}_{\rho,\varepsilon}=\lambda_{k}^{ \varepsilon}u_{k}^{\varepsilon}\quad\text{on }r=1+\varepsilon\rho(\hat{\theta}). \tag{18}\] Substituting (16) and the asymptotic expansion (14) for \(\mathbf{n}_{\rho,\varepsilon}\) into the left-hand side (LHS) of (18) and collecting terms in powers of \(\varepsilon\), we obtain \[\nabla u_{k}^{\varepsilon}\cdot\mathbf{n}_{\rho,\varepsilon}=\left(\sum_{m=1 }^{N(d,k)}k\alpha_{m}Y_{k}^{m}\right)+\varepsilon L_{1}+O(\varepsilon^{2}),\] where \[L_{1} =\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(d,\ell)}\Big{(}\delta_{\ell, k}\alpha_{m}\left((\ell-1)\rho\,\hat{\boldsymbol{r}}-\nabla_{S^{d}}\rho\right)+ \beta_{\ell,m}\hat{\boldsymbol{r}}\Big{)}\cdot\hat{\boldsymbol{v}}_{\ell,m}\] \[\stackrel{(17)}{=}\sum_{m=1}^{N(d,k)} \alpha_{m}\Big{(}k(k-1)\rho Y_{k}^{m}-\nabla_{S^{d}}\rho\cdot\nabla_{S^{d}}Y_{ k}^{m}\Big{)}+\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(d,\ell)}\ell\beta_{\ell,m}Y_{ \ell}^{m}.\] Substituting the perturbation ansatz (15) into the right-hand side (RHS) of (18) and collecting terms in powers of \(\varepsilon\), we obtain \[\lambda_{k}^{\varepsilon}u_{k}^{\varepsilon}=\left(\sum_{m=1}^{N(d,k)}k\alpha _{m}Y_{k}^{m}\right)+\varepsilon R_{1}+O(\varepsilon^{2}),\] where \[R_{1} =\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(d,\ell)}\Big{(}k\,(\beta_{ \ell,m}+\delta_{\ell,k}\alpha_{m}\ell\rho)+\lambda_{k}^{(1)}\delta_{\ell,k} \alpha_{m}\Big{)}Y_{\ell}^{m}\] \[=\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(d,\ell)}k\beta_{\ell,m}Y_{ \ell}^{m}+\sum_{m=1}^{N(d,k)}\Big{(}k^{2}\rho+\lambda_{k}^{(1)}\Big{)}\alpha_ {m}Y_{k}^{m}.\] The \(O(1)\) terms in the LHS and RHS of (18) coincide, as expected. Rearranging the \(O(\varepsilon)\) equation \(L_{1}=R_{1}\), we obtain \[\sum_{m=1}^{N(d,k)}\lambda_{k}^{(1)}\alpha_{m}Y_{k}^{m}=-\sum_{m=1}^{N(d,k)} \alpha_{m}\Big{(}k\rho Y_{k}^{m}+\nabla_{S^{d}}\rho\cdot\nabla_{S^{d}}Y_{k}^{m }\Big{)}+\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(d,\ell)}(\ell-k)\beta_{\ell,m}Y_{ \ell}^{m}. \tag{19}\] If we now multiply (19) by \(\overline{Y_{k}^{n}}\) for \(1\leq n\leq N(d,k)\), integrate over \(S^{d}\) with respect to \(d\sigma_{d}\), and use the pairwise orthonormality of the hyperspherical harmonics, we see that the resulting sum on the left is nonzero only for \(m=n\) and the third sum vanishes for all \(m\). This yields \[\lambda_{k}^{(1)}\alpha_{n}=\sum_{m=1}^{N(d,k)}M_{m,n}^{(d,k)}\alpha_{m},\ \ 1\leq n \leq N(d,k),\] or more succinctly, \(M^{(d,k)}\hat{\boldsymbol{\alpha}}=\lambda_{k}^{(1)}\hat{\boldsymbol{\alpha}}\), where the complex matrix \(M^{(d,k)}\in\mathbb{C}^{N(d,k)\times N(d,k)}\) has entries given by \[M_{m,n}^{(d,k)}=-\int_{S^{d}}k\rho Y_{k}^{m}\overline{Y_{k}^{n}}\,d\sigma_{d} -\int_{S^{d}}\left(\nabla_{S^{d}}\rho\cdot\nabla_{S^{d}}Y_{k}^{m}\right) \overline{Y_{k}^{n}}\,d\sigma_{d}. \tag{20}\] This shows that the first-order perturbations of these \(N(d,k)\) eigenvalues are characterised by the eigenvalues of \(M^{(d,k)}\). 
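In practice, once the entries (20) have been assembled (for instance by numerical quadrature over \(S^{d}\)), the first-order corrections follow from a Hermitian eigensolver. A minimal NumPy sketch is given below; the matrix used there is a random Hermitian placeholder standing in for an actual \(M^{(d,k)}\), with the size \(N(3,2)=9\) and the values of \(k\) and \(\varepsilon\) chosen only for illustration.

```python
import numpy as np

def perturbed_eigenvalues(M, k, eps):
    """First-order Steklov eigenvalues k + eps*lambda^{(1)} from the Hermitian M^{(d,k)}."""
    M = np.asarray(M)
    assert np.allclose(M, M.conj().T), "M^{(d,k)} should be Hermitian"
    lam1 = np.linalg.eigvalsh(M)      # real eigenvalues, returned in increasing order
    return k + eps * lam1

# Placeholder: a random Hermitian matrix standing in for a numerically assembled M^{(d,k)}.
rng = np.random.default_rng(0)
N = 9                                  # e.g. N(3,2) = 9, cf. Remark 2.2 below
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
M = (A + A.conj().T) / 2
print(perturbed_eigenvalues(M, k=2, eps=0.01))
```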
Since \(M^{(d,k)}\) is Hermitian, there are \(N(d,k)\) real eigenvalues \(\lambda_{k,j}^{(1)}\), \(1\leq j\leq N(d,k)\), which we enumerate in increasing order. Moreover, the components of the corresponding eigenvectors \(\hat{\boldsymbol{\alpha}}_{j}\) are the coefficients of the \(O(1)\) eigenfunctions in \(u_{k,j}^{\varepsilon}\), _i.e.,_ \[u_{k,j}^{(0)}(r,\hat{\theta})=\sum_{m=1}^{N(d,k)}\left(\hat{\boldsymbol{\alpha }}_{j}\right)_{m}r^{k}Y_{k}^{m}(\hat{\theta}),\ \ 1\leq j\leq N(d,k).\] We summarise these results and the analyticity result from [13] in the following theorem. **Theorem 3.1**.: _Given \(d\geq 3\) and \(k\in\mathbb{Z}^{+}\), define \(N_{k}=\sum_{\ell=1}^{k-1}N(d,\ell)\), where \(N(d,\ell)\) is defined by (3). For \(N_{k}+1\leq n\leq N_{k+1}\), the Steklov eigenvalues \(\lambda_{n}(\varepsilon)\) of a nearly hyperspherical domain \(\Omega_{\varepsilon}\) of the form (2) consist of at most \(N(d,k)\) branches of analytic functions which have at most algebraic singularities near \(\varepsilon=0\). At first order in \(\varepsilon\), the perturbation is given by the real eigenvalues of the Hermitian matrix \(M^{(d,k)}\) of size \(N(d,k)\), whose entries are given by (20)._ ## 4. Analysis of \(M^{(d,k)}\) This section is dedicated to the proof of Theorem 1.1. Following [10, Theorem 1.1], the crux of the proof lies in showing that the trace of \(M^{(d,k)}\) is proportional to \(\int_{S^{d}}\rho\,d\sigma_{d}\), the mean of the domain perturbation function \(\rho\). This is achieved by rewriting \(M^{(d,k)}\) as the sum of a scalar multiple of the identity and a trace-zero Hermitian matrix. For notational simplicity, we write the surface element over \(S^{d}\) as \(d\sigma_{d}=\sin^{j-1}(\theta_{j})\,d\theta_{j}\,d\sigma_{d-1,j}\), where \[d\sigma_{d-1,j}=\prod_{i=2,i\neq j}^{d}\sin^{i-1}(\theta_{i})\prod_{i=1,i\neq j }^{d}d\theta_{i}.\] **Lemma 4.1**.: _Given \(d\geq 3\) and \(k\in\mathbb{Z}^{+}\), let \(M^{(d,k)}\) be the Hermitian matrix defined in Theorem 3.1. The entries of \(M^{(d,k)}\) can be written as_ \[M_{m,n}^{(d,k)}=\int_{S^{d}}\rho(\hat{\theta})\left(-k(k+d)Y_{k}^{m}(\hat{ \theta})\overline{Y_{k}^{n}(\hat{\theta})}+\nabla_{S^{d}}Y_{k}^{m}(\hat{\theta })\cdot\nabla_{S^{d}}\overline{Y_{k}^{n}(\hat{\theta})}\right)d\sigma_{d}. \tag{21}\] Proof.: From Theorem 3.1 and (5), we have \[M_{m,n}^{(d,k)}=-\int_{S^{d}}k\rho Y_{k}^{m}\overline{Y_{k}^{n}}\,d\sigma_{d} -\sum_{j=1}^{d}\int_{S^{d}}\frac{\partial_{j}\rho}{\eta_{j}^{2}}\partial_{j}Y _{k}^{m}\cdot\overline{Y_{k}^{n}}\,d\sigma_{d}. \tag{22}\] For each integral from the sum above, we integrate by parts with respect to \(\theta_{j}\). Note crucially that \(\eta_{j}\) is independent of \(\theta_{j}\). For the case \(j=1\), the determinant of the Jacobian in \(d\sigma_{d}\) is independent of \(\theta_{1}\) and the boundary term vanishes due to the \(2\pi\)-periodicity of \(\rho,\partial_{1}Y_{k}^{m},Y_{k}^{n}\) with respect to \(\theta_{1}\). This yields \[-\int_{S^{d}}\frac{\partial_{1}\rho}{\eta_{1}^{2}}\partial_{1}Y_{k }^{m}\cdot\overline{Y_{k}^{n}}\,d\sigma_{d} =\int_{S^{d}}\frac{\rho}{\eta_{1}^{2}}\partial_{1}\left(\partial_{ 1}Y_{k}^{m}\cdot\overline{Y_{k}^{n}}\right)d\sigma_{d}\] \[=\int_{S^{d}}\frac{\rho}{\eta_{1}^{2}}\Big{[}\partial_{1}Y_{k}^{m }\cdot\partial_{1}\overline{Y_{k}^{n}}+\partial_{1}^{2}Y_{k}^{m}\cdot\overline {Y_{k}^{n}}\Big{]}\,d\sigma_{d}.\] For the case \(j=2,3,\ldots,d\), the boundary term vanishes because \(\sin^{j-1}(0)=\sin^{j-1}(\pi)=0\) for \(j\geq 2\). 
This yields \[-\int_{S^{d-1}}\int_{\theta_{j}}\frac{\partial_{j}\rho}{\eta_{j}^ {2}}\partial_{j}Y_{k}^{m}\cdot\overline{Y_{k}^{n}}\sin^{j-1}(\theta_{j})\,d \theta_{j}\,d\sigma_{d-1,j}\] \[=\int_{S^{d-1}}\int_{\theta_{j}}\frac{\rho}{\eta_{j}^{2}} \partial_{j}\Big{(}\partial_{j}Y_{k}^{m}\cdot\overline{Y_{k}^{n}}\sin^{j-1}( \theta_{j})\Big{)}d\theta_{j}\,d\sigma_{d-1,j}\] \[=\int_{S^{d}}\frac{\rho}{\eta_{j}^{2}}\left[\partial_{j}Y_{k}^{m }\cdot\partial_{j}\overline{Y_{k}^{n}}+\frac{\partial_{j}\left(\sin^{j-1}( \theta_{j})\partial_{j}Y_{k}^{m}\right)}{\sin^{j-1}(\theta_{j})}\,\overline{Y_ {k}^{n}}\right]d\sigma_{d}.\] Summing over all \(j=1,2,\ldots,d\) and recalling the gradient (5) and the spherical Laplacian (6) on \(S^{d}\), we obtain \[-\sum_{j=1}^{d}\int_{S^{d}}\frac{\partial_{j}\rho}{\eta_{j}^{2}} \partial_{j}Y_{k}^{m}\cdot\overline{Y_{k}^{n}}\,d\sigma_{d}\] \[=\sum_{j=1}^{d}\int_{S^{d}}\rho\frac{\partial_{j}Y_{k}^{m}}{\eta_ {j}}\cdot\frac{\partial_{j}\overline{Y_{k}^{n}}}{\eta_{j}}\,d\sigma_{d}+\sum_ {j=1}^{d}\int_{S^{d}}\rho\left(\frac{\partial_{j}\left(\sin^{j-1}(\theta_{j}) \partial_{j}Y_{k}^{m}\right)}{\eta_{j}^{2}\sin^{j-1}(\theta_{j})}\right) \overline{Y_{k}^{n}}\,d\sigma_{d}\] \[=\int_{S^{d}}\rho\nabla_{S^{d}}Y_{k}^{m}\cdot\nabla_{S^{d}} \overline{Y_{k}^{n}}\,d\sigma_{d}+\int_{S^{d}}\rho\left(\Delta_{S^{d}}Y_{k}^{ m}\right)\overline{Y_{k}^{n}}\,d\sigma_{d}\] \[\stackrel{(7)}{=}\int_{S^{d}}\rho \nabla_{S^{d}}Y_{k}^{m}\cdot\nabla_{S^{d}}\overline{Y_{k}^{n}}\,d\sigma_{d}- \int_{S^{d}}k(k+d-1)\rho Y_{k}^{m}\overline{Y_{k}^{n}}\,d\sigma_{d}. \tag{23}\] Substituting (23) into (22) and rearranging gives the desired expression (21) for \(M_{m,n}^{(d,k)}\). We are now ready to prove that the trace of \(M^{(d,k)}\) is proportional to the mean of \(\rho\). **Lemma 4.2**.: _Given \(d\geq 3\) and \(k\in\mathbb{Z}^{+}\), let \(M^{(d,k)}\) and \(\rho\) be the Hermitian matrix and the perturbation function defined in Theorem 3.1 and (2), respectively. The trace of \(M^{(d,k)}\) is given by_ \[\operatorname{tr}\Big{(}M^{(d,k)}\Big{)}=-\frac{kN(d,k)}{|S^{d}|}\int_{S^{d}} \rho\,d\sigma_{d}=-\frac{kA_{0,1}N(d,k)}{|S^{d}|^{1/2}}.\] Proof.: From Lemma 4.1, we have \[\operatorname{tr}\Big{(}M^{(d,k)}\Big{)}=\sum_{m=1}^{N(d,k)}M_{m,m}^{(d,k)}=\sum _{m=1}^{N(d,k)}\int_{S^{d}}\rho\Big{(}-k(k+d)Y_{k}^{m}\overline{Y_{k}^{m}}+ \nabla_{S^{d}}Y_{k}^{m}\cdot\nabla_{S^{d}}\overline{Y_{k}^{m}}\Big{)}\,d \sigma_{d}.\] The lemma is a direct consequence of the addition theorem for hyperspherical harmonics and its gradient; see (10), (11), and Theorem 2.4. Indeed, \[\operatorname{tr}\Big{(}M^{(d,k)}\Big{)}=\int_{S^{d}}\rho\left(-\frac{k(k+d)N( d,k)}{|S^{d}|}+d(d-1)K(d,k)C_{k-1}^{\left(\frac{d+1}{2}\right)}(1) \right)d\sigma_{d}\] \[=-\frac{N(d,k)}{|S^{d}|}\left[k(k+d)-d(d-1)\frac{C_{k-1}^{\left(\frac{d+1}{2} \right)}(1)}{C_{k}^{\left(\frac{d-1}{2}\right)}(1)}\right]\int_{S^{d}}\rho\,d \sigma_{d}.\] We need only show the expression in the bracket above is equal to \(k\). From [Nis, Table 18.6.1] and the definition of Pochhammer's symbol [Nis, Eq. 5.2.5], we have that \[C_{n}^{(\alpha)}(1)=\frac{(2\alpha)_{n}}{n!}=\frac{\Gamma(2\alpha+n)}{n!\Gamma (2\alpha)}.\] Consequently, \[d(d-1)\cdot\frac{C_{k-1}^{\left(\frac{d+1}{2}\right)}(1)}{C_{k}^{\left(\frac{d -1}{2}\right)}(1)}=\frac{\Gamma(d+k)}{\Gamma(d+1)(k-1)!}\cdot\frac{d(d-1) \Gamma(d-1)k!}{\Gamma(d+k-1)}=k(d+k-1),\] where we use the fact that \(\Gamma(z+1)=z\Gamma(z)\) for any positive integer \(z\). The claim now follows. 
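The identity \(d(d-1)\,C_{k-1}^{((d+1)/2)}(1)/C_{k}^{((d-1)/2)}(1)=k(d+k-1)\) established above is easy to confirm symbolically; the short SymPy check below (illustrative only, with an arbitrary range of \(d\) and \(k\)) verifies it via the value \(C_{n}^{(\alpha)}(1)=(2\alpha)_{n}/n!\) quoted in the proof.

```python
import sympy as sp

def gegenbauer_at_one(n, alpha):
    """C_n^{(alpha)}(1) = (2*alpha)_n / n!, using the Pochhammer (rising factorial) symbol."""
    return sp.RisingFactorial(2 * alpha, n) / sp.factorial(n)

for d in range(3, 9):
    for k in range(1, 7):
        ratio = d * (d - 1) * gegenbauer_at_one(k - 1, sp.Rational(d + 1, 2)) \
                / gegenbauer_at_one(k, sp.Rational(d - 1, 2))
        assert sp.simplify(ratio - k * (d + k - 1)) == 0, (d, k)
print("d(d-1) * C_{k-1}^{((d+1)/2)}(1) / C_k^{((d-1)/2)}(1) == k(d+k-1) for all tested d, k.")
```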
**Corollary 4.3**.: _Given \(d\geq 3\) and \(k\in\mathbb{Z}^{+}\), let \(M^{(d,k)}\) and \(\rho\) be the Hermitian matrix and the perturbation function defined in Theorem 3.1 and (2), respectively. We have_ \[M^{(d,k)}=-\frac{kA_{0,1}}{|S^{d}|^{1/2}}I_{N(d,k)}+E^{(d,k)}, \tag{24}\] _where \(I_{N(d,k)}\) is the identity matrix of size \(N(d,k)\) and \(E^{(d,k)}\) is a Hermitian, zero-trace matrix, whose entries are given by_ \[E^{(d,k)}_{m,n}=\sum_{p=1}^{\infty}\sum_{q=1}^{N(d,p)}\int_{S^{d}}A_{p,q}Y_{p, q}\left(-k(k+d)Y_{k}^{m}(\hat{\theta})\overline{Y_{k}^{n}(\hat{\theta})}+ \nabla_{S^{d}}Y_{k}^{m}(\hat{\theta})\cdot\nabla_{S^{d}}\overline{Y_{k}^{n}( \hat{\theta})}\right)d\sigma_{d}.\] Proof.: We begin by substituting the expression for \(\rho\) (see (2)) into (21) to obtain \[M^{(k)}_{m,n}=\sum_{p=0}^{\infty}\sum_{q=1}^{N(d,p)}\int_{S^{d}}A_{p,q}Y_{p,q} \left(-k(k+d)Y_{k}^{m}(\hat{\theta})\overline{Y_{k}^{n}(\hat{\theta})}+\nabla _{S^{d}}Y_{k}^{m}(\hat{\theta})\cdot\nabla_{S^{d}}\overline{Y_{k}^{n}(\hat{ \theta})}\right)d\sigma_{d}.\] Separating the infinite sum into \(p=0\) and \(p>0\), we may write \(M^{(k)}_{m,n}=D^{(d,k)}_{m,n}+E^{(k)}_{m,n}\), where \(E^{(k)}_{m,n}\) has the desired expression and \[D^{(d,k)}_{m,n}=\int_{S^{d}}A_{0,1}Y_{0,1}\left(-k(k+d)Y_{k}^{m}(\hat{\theta}) \overline{Y_{k}^{n}(\hat{\theta})}+\nabla_{S^{d}}Y_{k}^{m}(\hat{\theta})\cdot \nabla_{S^{d}}\overline{Y_{k}^{n}(\hat{\theta})}\right)d\sigma_{d}.\] Using the integral identity (8) and the orthonormality of the hyperspherical harmonics, we deduce that the matrix \(D^{(d,k)}\) is diagonal. Moreover, \[D^{(d,k)}_{m,m}=A_{0,1}Y_{0,1}\Big{(}-k(k+d)+k(k+d-1)\Big{)}=-\frac{kA_{0,1}}{ |S^{d}|^{1/2}},\ \ 1\leq m\leq N(d,k),\] as desired. Thanks to Lemma 4.2, we see that \(E^{(d,k)}\) has zero trace since the trace is linear. We are now ready to prove Theorem 1.1. Proof of Theorem 1.1.: We recall the volume-normalised Steklov eigenvalue \[\Lambda_{k,j}(\Omega_{\varepsilon})=\lambda_{k,j}^{\varepsilon}|\Omega_{ \varepsilon}|^{\frac{1}{d+1}}. \tag{25}\] Substituting the ansatz (15a) for \(\lambda_{k,j}^{\varepsilon}\) and the asymptotic expansion (13) for \(|\Omega_{\varepsilon}|^{\frac{1}{d+1}}\), we obtain \[\Lambda_{k,j}(\Omega_{\varepsilon})=\left(k+\varepsilon\lambda_{k,j}^{(1)}+O( \varepsilon^{2})\right)\left(|B|^{\frac{1}{d+1}}+\varepsilon\left(\frac{A_{0, 1}|B|^{\frac{1}{d+1}}}{|S^{d}|^{1/2}}\right)+O(\varepsilon^{2})\right)\] \[=k|B|^{\frac{1}{d+1}}+\varepsilon\left(\lambda_{k,j}^{(1)}|B|^{\frac{1}{d+1}}+ \frac{kA_{0,1}|B|^{\frac{1}{d+1}}}{|S^{d}|^{1/2}}\right)+O(\varepsilon^{2}).\] From Corollary 4.3, we have that \[\lambda_{k,j}^{(1)}=-\frac{kA_{0,1}}{|S^{d}|^{1/2}}+e_{k,j},\] where \(e_{k,j}\) is the \(j\)th eigenvalue (in increasing order) of the matrix \(E^{(d,k)}\) which is real. It follows that \[\Lambda_{k,j}(\Omega_{\varepsilon})=k|B|^{\frac{1}{d+1}}+\varepsilon\left(e_{ k,j}|B|^{\frac{1}{d+1}}\right)+O(\varepsilon^{2}).\] Since \(E^{(d,k)}\) is Hermitian with zero trace, either \(e_{k,j}=0\) for all \(j=1,2,\ldots,N(d,k)\) or \(e_{k,1}<0\). Together, we see that \(e_{k,1}\leq 0\) and this completes the proof since \(\Lambda_{k,1}=\Lambda_{1+N_{d,k}}\). ## 5. \(M^{(d,k)}\) and the Wigner \(3j\)-symbols To prove Theorem 1.2, it suffices to find a perturbation function \(\rho\) such that the corresponding matrix \(M^{(d,k)}\) has at least one positive eigenvalue. The first step is to express \(M^{(d,k)}\) in terms of the integral of the triple product of hyperspherical harmonics. 
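The proof of Theorem 1.1 above and the strategy for Theorem 1.2 below both rest on the same elementary observation: a nonzero Hermitian matrix with zero trace must have a negative smallest eigenvalue and a positive largest eigenvalue. A quick numerical illustration (ours, with an arbitrary random matrix standing in for \(E^{(d,k)}\)):

```python
# Illustration (ours) of the linear-algebra fact used above: removing the trace from a
# nonzero Hermitian matrix, as in the decomposition (24), forces its smallest eigenvalue
# below zero and its largest eigenvalue above zero.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                       # Hermitian
E = H - (np.trace(H).real / n) * np.eye(n)     # subtract the trace, cf. (24)
evals = np.linalg.eigvalsh(E)                  # real eigenvalues, ascending
assert abs(evals.sum()) < 1e-10 and evals[0] < 0 < evals[-1]
print("smallest:", evals[0], " largest:", evals[-1])
```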
**Lemma 5.1**.: _Given \(d\geq 3\) and \(k\in\mathbb{Z}^{+}\), let \(M^{(d,k)}\) and \(\rho\) be the Hermitian matrix and the perturbation function defined in Theorem 3.1 and (2), respectively. Then the entries of \(M^{(d,k)}\) can be written as_ \[M^{(d,k)}_{m,n}=-\frac{1}{2}\sum_{p=0}^{\infty}\sum_{q=1}^{N(d,p)}A_{p,q}\Big{(}p(p+d-1)+2k\Big{)}W^{p,k}_{q,m,n}, \tag{26}\] _where_ \[W^{p,k}_{q,m,n}=\int_{S^{d}}Y_{p,q}(\hat{\theta})Y_{k}^{m}(\hat{\theta})\overline{Y_{k}^{n}(\hat{\theta})}\,d\sigma_{d}.\] Proof.: From Theorem 3.1 and (5), we have \[M^{(d,k)}_{m,n}=-\int_{S^{d}}k\rho Y_{k}^{m}\overline{Y_{k}^{n}}\,d\sigma_{d}-\sum_{j=1}^{d}\int_{S^{d}}\frac{\partial_{j}\rho}{\eta_{j}^{2}}\partial_{j}Y_{k}^{m}\cdot\overline{Y_{k}^{n}}\,d\sigma_{d}. \tag{27}\] Integrating by parts with respect to \(\theta_{j}\) and noting that \(\eta_{j}\) is independent of \(\theta_{j}\), we have that \[-\sum_{j=1}^{d}\int_{S^{d}}\frac{\partial_{j}\rho}{\eta_{j}^{2}}\partial_{j}Y_{k}^{m}\cdot\overline{Y_{k}^{n}}\,d\sigma_{d}=-\sum_{j=1}^{d}\int_{S^{d-1}}\int_{\theta_{j}}\frac{\partial_{j}Y_{k}^{m}}{\eta_{j}^{2}}\Big{(}\sin^{j-1}(\theta_{j})\partial_{j}\rho\cdot\overline{Y_{k}^{n}}\Big{)}\,d\theta_{j}\,d\sigma_{d-1,j}\] \[=\sum_{j=1}^{d}\int_{S^{d-1}}\int_{\theta_{j}}\frac{Y_{k}^{m}}{\eta_{j}^{2}}\partial_{j}\Big{(}\sin^{j-1}(\theta_{j})\partial_{j}\rho\cdot\overline{Y_{k}^{n}}\Big{)}\,d\theta_{j}\,d\sigma_{d-1,j}\] \[=\sum_{j=1}^{d}\int_{S^{d}}\frac{Y_{k}^{m}}{\eta_{j}^{2}}\left[\partial_{j}\rho\cdot\partial_{j}\overline{Y_{k}^{n}}+\frac{\partial_{j}\left(\sin^{j-1}(\theta_{j})\partial_{j}\rho\right)}{\sin^{j-1}(\theta_{j})}\overline{Y_{k}^{n}}\right]d\sigma_{d}\] \[=\left(\sum_{j=1}^{d}\int_{S^{d}}\frac{\partial_{j}\rho}{\eta_{j}^{2}}Y_{k}^{m}\partial_{j}\overline{Y_{k}^{n}}\,d\sigma_{d}\right)+\int_{S^{d}}\left(\Delta_{S^{d}}\rho\right)Y_{k}^{m}\overline{Y_{k}^{n}}\,d\sigma_{d}, \tag{28}\] where the boundary terms vanish because (1) for \(j=1\), \(\partial_{1}\rho\) and the hyperspherical harmonics are \(2\pi\)-periodic with respect to \(\theta_{1}\), and (2) for \(j=2,3,\ldots,d\), \(\sin^{j-1}(0)=\sin^{j-1}(\pi)=0\). On the other hand, we deduce from (23) in the proof of Lemma 4.1 that \[-\sum_{j=1}^{d}\int_{S^{d}}\frac{\partial_{j}\rho}{\eta_{j}^{2}}\partial_{j}Y_{k}^{m}\cdot\overline{Y_{k}^{n}}\,d\sigma_{d}=-\sum_{j=1}^{d}\int_{S^{d}}\frac{\partial_{j}\rho}{\eta_{j}^{2}}Y_{k}^{m}\partial_{j}\overline{Y_{k}^{n}}\,d\sigma_{d}. \tag{29}\] Taking the average of (28) and (29), it follows that \[-\sum_{j=1}^{d}\int_{S^{d}}\frac{\partial_{j}\rho}{\eta_{j}^{2}}\left(\partial_{j}Y_{k}^{m}\right)\overline{Y_{k}^{n}}\,d\sigma_{d}=\frac{1}{2}\int_{S^{d}}\left(\Delta_{S^{d}}\rho\right)Y_{k}^{m}\overline{Y_{k}^{n}}\,d\sigma_{d}. \tag{30}\] Finally, we substitute (30) and the expansion (2) for \(\rho\) into (27) to obtain \[M_{m,n}^{(d,k)}=-\frac{1}{2}\int_{S^{d}}\left(-\Delta_{S^{d}}\rho+2k\rho\right)Y_{k}^{m}\overline{Y_{k}^{n}}\,d\sigma_{d}\] \[=-\frac{1}{2}\sum_{p=0}^{\infty}\sum_{q=1}^{N(d,p)}A_{p,q}\int_{S^{d}}\left(-\Delta_{S^{d}}Y_{p,q}+2kY_{p,q}\right)Y_{k}^{m}\overline{Y_{k}^{n}}\,d\sigma_{d}\] \[=-\frac{1}{2}\sum_{p=0}^{\infty}\sum_{q=1}^{N(d,p)}A_{p,q}\left(p(p+d-1)+2k\right)\int_{S^{d}}Y_{p,q}Y_{k}^{m}\overline{Y_{k}^{n}}\,d\sigma_{d},\] which gives the desired result, where the last equality uses \(-\Delta_{S^{d}}Y_{p,q}=p(p+d-1)Y_{p,q}\). In order to use Lemma 5.1, we require the evaluation of \(W_{q,m,n}^{p,k}\). 
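Theorem 5.2 below reduces these triple-product integrals to products of Wigner \(3j\)-symbols. The two-sphere building block, DLMF Eq. 34.3.22, can be checked numerically; the sketch below (ours, not taken from the paper) compares brute-force quadrature against the \(3j\) formula. Function names are ours, and SciPy's `sph_harm` (deprecated in recent releases but still available) is assumed for the complex spherical harmonics.

```python
# Illustrative check (ours) of the S^2 building block behind W^{p,k}_{q,m,n}:
# the integral of a triple product of complex spherical harmonics equals a product of
# two Wigner 3j-symbols (DLMF 34.3.22), the identity later used for I(T_1, T_2).
import numpy as np
from scipy.special import sph_harm          # deprecated alias in newer SciPy, still works
from sympy import sqrt, pi, N
from sympy.physics.wigner import wigner_3j

def triple_product_quadrature(l1, l2, l3, m1, m2, m3, n_leg=24, n_phi=64):
    # Gauss-Legendre in cos(theta), uniform (exact) quadrature in the periodic azimuth
    x, w = np.polynomial.legendre.leggauss(n_leg)
    theta = np.arccos(x)
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    vals = sph_harm(m1, l1, P, T) * sph_harm(m2, l2, P, T) * sph_harm(m3, l3, P, T)
    return (w[:, None] * vals).sum() * (2 * np.pi / n_phi)

def triple_product_3j(l1, l2, l3, m1, m2, m3):
    pref = sqrt((2 * l1 + 1) * (2 * l2 + 1) * (2 * l3 + 1) / (4 * pi))
    return float(N(pref * wigner_3j(l1, l2, l3, 0, 0, 0)
                        * wigner_3j(l1, l2, l3, m1, m2, m3)))

for lm in [(2, 3, 3, 0, 2, -2), (2, 2, 2, 0, 1, -1), (1, 2, 3, 1, 1, -2)]:
    assert abs(triple_product_quadrature(*lm) - triple_product_3j(*lm)) < 1e-8, lm
print("S^2 triple-product / Wigner 3j identity verified on sample indices")
```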
We introduce additional notation that simplifies our presentation in deriving the explicit expression for \(W_{q,m,n}^{p,k}\). For any \((d-1)\)-tuples \(q,m,n\) satisfying (9) with \(q_{d}=p\), \(m_{d}=n_{d}=k\), we define the 3-tuple \(T_{j}=(t_{j}^{1},t_{j}^{2},t_{j}^{3})\coloneqq(q_{j},m_{j},n_{j})\) and introduce the following variables for \(j=1,2,\ldots,d\): \[s_{j}=q_{j}+m_{j}+n_{j},\qquad\operatorname{diff}_{j}^{i}=t_{j}^{i}-t_{j-1}^{i },\qquad\nu_{j}=\frac{j-1}{2}.\] Since the real and complex hyperspherical harmonics are both separable in hyperspherical coordinates (see Section 2.2), it follows that \(W_{q,m,n}^{p,k}\) can be written as a product of integrals \[W_{q,m,n}^{p,k}=I(T_{1},T_{2})\prod_{j=3}^{d}I(T_{j-1},T_{j}),\ \ 1\leq q\leq N(d,p),\,1\leq m,n\leq N(d,k),\] where the integrals \(I(T_{1},T_{2})\) and \(I(T_{j-1},T_{j})\), \(j=3,4,\ldots,d\), are given by \[I(T_{1},T_{2}) =\int_{0}^{2\pi}\int_{0}^{\pi}\widetilde{Y}_{q_{2},q_{1}}(\phi, \theta_{2})\widetilde{Y}_{m_{2}}^{m_{1}}(\phi,\theta_{2})\overline{\widetilde {Y}_{n_{2}}^{n_{1}}(\phi,\theta_{2})}\sin(\theta_{2})\,d\theta_{2}\,d\phi,\] \[I(T_{j-1},T_{j}) =\int_{0}^{\pi}\left(\prod_{i=1}^{3}Y(\theta_{j};t_{j-1}^{i},t_{j }^{i})\right)\sin^{j-1}(\theta_{j})\,d\theta_{j}\] \[=\left(\prod_{i=1}^{3}\mu_{j}^{(i)}\right)^{-1}\int_{0}^{\pi} \left(\prod_{i=1}^{3}C_{\operatorname{diff}_{j}^{i}}^{(t_{j-1}^{i}+\nu_{j})}( \cos\theta_{j})\right)(\sin\theta_{j})^{s_{j-1}+2\nu_{j}}\,d\theta_{j} \tag{31}\] \[=\left(\prod_{i=1}^{3}\mu_{j}^{(i)}\right)^{-1}\int_{-1}^{1}(1-z^ {2})^{\frac{s_{j-1}}{2}+\nu_{j}-\frac{1}{2}}\prod_{i=1}^{3}C_{\operatorname{ diff}_{j}^{i}}^{(t_{j-1}^{i}+\nu_{j})}(z)\,dz.\] The constant \(\mu_{j}^{(i)}\) for \(i=1,2,3\) and \(j=3,4,\ldots,d\) satisfies \[\Big{(}\mu_{j}^{(i)}\Big{)}^{2}=\frac{4\pi\Gamma(t_{j}^{i}+t_{j-1}^{i}+2\nu_{j})} {2^{2t_{j-1}^{i}+j}\left(\mathrm{diff}_{j}^{i}\right)!\left(t_{j}^{i}+\nu_{j} \right)\Gamma^{2}(t_{j-1}^{i}+\nu_{j})}.\] Our next theorem provides an explicit expression for these \((d-1)\) integrals above involving the Pochhammer's symbol and the Wigner \(3j\)-symbol \(\begin{pmatrix}j_{1}&j_{2}&j_{3}\\ m_{1}&m_{2}&m_{3}\end{pmatrix}\); see [20] for the definition of Wigner \(3j\)-symbols. A crucial property of the Wigner \(3j\)-symbol is the following: If \(\begin{pmatrix}j_{1}&j_{2}&j_{3}\\ m_{1}&m_{2}&m_{3}\end{pmatrix}\neq 0\), then all the following _selection rules_ must be satisfied: 1. \(m_{i}\in\{-j_{i},-j_{i}+1,-j_{i}+2,\ldots,j_{i}\}\) for \(i=1,2,3\). 2. \(m_{1}+m_{2}+m_{3}=0\). 3. The triangle conditions \(|j_{1}-j_{2}|\leq j_{3}\leq j_{1}+j_{2}\). 4. \((j_{1}+j_{2}+j_{3})\geq 0\) is an integer (and, moreover, an even integer if \(m_{1}=m_{2}=m_{3}=0\)). **Theorem 5.2**.: _Fix \(d\geq 3\), \(p\in\mathbb{N}\), and \(k\in\mathbb{Z}^{+}\). Write \(c_{T_{2}}=\sqrt{\frac{(2q_{2}+1)(2m_{2}+1)(2n_{2}+1)}{4\pi}}\) and \(Q(q_{1})=\begin{pmatrix}q_{2}&m_{2}&n_{2}\\ q_{1}&m_{1}&-n_{1}\end{pmatrix}\). 
We have \(I(T_{1},T_{2})=c_{T_{2}}\begin{pmatrix}q_{2}&m_{2}&n_{2}\\ 0&0&0\end{pmatrix}Q(T_{1},T_{2})\), where_ \[Q(T_{1},T_{2})=\begin{cases}(-1)^{m_{1}}\delta_{m_{1},n_{1}}\begin{pmatrix}q_{2 }&m_{2}&n_{2}\\ 0&m_{1}&-m_{1}\end{pmatrix}&\text{if $q_{1}=0$},\\ \frac{(-1)^{n_{1}}}{\sqrt{2}}\Big{[}Q(-q_{1})+(-1)^{q_{1}}Q(q_{1}) \Big{]}&\text{if $q_{1}>0$},\\ \frac{i(-1)^{n_{1}}}{\sqrt{2}}\Big{[}Q(q_{1})-(-1)^{q_{1}}Q(-q_{1}) \Big{]}&\text{if $q_{1}<0$}.\end{cases}\] _For \(j=3,4,\ldots,d\), we have \(I(T_{j-1},T_{j})=\Big{(}\mu_{j}^{(1)}\mu_{j}^{(2)}\mu_{j}^{(3)}\Big{)}^{-1}\,H( T_{j-1},T_{j})\), where_ \[H(T_{j-1},T_{j})= \sum_{\begin{subarray}{c}0\leq\ell_{1}\leq\lfloor\mathrm{diff}_{j }^{1}/2\rfloor\\ 0\leq\ell_{2}\leq\lfloor\mathrm{diff}_{j}^{2}/2\rfloor\\ 0\leq\ell_{3}\leq\lfloor\mathrm{diff}_{j}^{3}/2\rfloor\end{subarray}}\prod_{i=1 }^{3}V(\mathrm{diff}_{j}^{\mathrm{ri}},t_{j-1}^{i}+\nu_{j},\ell_{i})\sum_{ \begin{subarray}{c}\tau_{1},\tau_{2}\in\mathbb{N}\\ \tau_{2}\text{ even}\end{subarray}}(2\tau_{1}+1)(2\tau_{2}+1)L(s_{j-1},\nu_{j}, \tau_{2})\] \[\times\begin{pmatrix}\mathrm{diff}_{j}^{2}-2\ell_{2}&\mathrm{ diff}_{j}^{3}-2\ell_{3}&\tau_{1}\\ 0&0&0\end{pmatrix}^{2}\begin{pmatrix}\mathrm{diff}_{j}^{1}-2\ell_{1}&\tau_{1}& \tau_{2}\\ 0&0&0\end{pmatrix}^{2}.\] _The constants \(V\) and \(L\) are defined by_ \[V(\beta,\alpha,\ell) =(1+2\beta-4\ell)\frac{(\alpha)_{\beta-\ell}}{\left(\frac{3}{2} \right)_{\beta-\ell}}\frac{(\alpha-\frac{1}{2})_{\ell}}{\ell!},\] \[L(s_{j-1},\nu_{j},\tau_{2}) =\frac{\pi\Gamma^{2}\left(\frac{s_{j-1}}{2}+\nu_{j}+\frac{1}{2} \right)}{\Gamma\left(\frac{s_{j-1}}{2}+\nu_{j}+1+\frac{\tau_{2}}{2}\right) \Gamma\left(\frac{s_{j-1}}{2}+\nu_{j}+\frac{1}{2}-\frac{\tau_{2}}{2}\right) \Gamma\left(\frac{\tau_{2}}{2}+1\right)\Gamma\left(-\frac{\tau_{2}}{2}+\frac {1}{2}\right)}.\] _Here, \(\lfloor\cdot\rfloor\) and \((\alpha)_{n}=\frac{\Gamma(\alpha+n)}{\Gamma(\alpha)}\) denote the floor function and the Pochhammer's symbol, respectively._ Proof.: Write \(d\sigma_{2}=\sin\theta_{2}\,d\theta_{2}\,d\phi\) with \(0\leq\phi<2\pi\) and \(0\leq\theta_{2}\leq\pi\). Recall that the product of three complex three-dimensional spherical harmonics can be written in terms of the Wigner \(3j\)-symbol by [Nis, Eq. 
34.3.22]: \[\int_{S^{2}(\phi,\theta_{2})}\widetilde{Y}_{a_{2}}^{a_{1}}\widetilde{Y}_{b_{2}}^ {b_{1}}\widetilde{Y}_{c_{2}}^{c_{1}}\,d\sigma_{2}=\sqrt{\frac{(2a_{2}+1)(2b_{2} +1)(2c_{2}+1)}{4\pi}}\begin{pmatrix}a_{2}&b_{2}&c_{2}\\ 0&0&0\end{pmatrix}\begin{pmatrix}a_{2}&b_{2}&c_{2}\\ a_{1}&b_{1}&c_{1}\end{pmatrix}.\] Now, using the complex conjugate formula for the normalised three-dimensional complex spherical harmonics, we have that \[I(T_{1},T_{2})=(-1)^{n_{1}}\int_{S^{2}(\phi,\theta_{2})}\widetilde{Y}_{q_{2},q _{1}}\widetilde{Y}_{m_{2}}^{m_{1}}\widetilde{Y}_{n_{2}}^{-n_{1}}\,d\sigma_{2}.\] If \(q_{1}=0\), then \[I(T_{1},T_{2}) =(-1)^{n_{1}}\int_{S^{2}}\widetilde{Y}_{q_{2}}^{0}\widetilde{Y}_ {m_{2}}^{m_{1}}\widetilde{Y}_{n_{2}}^{-n_{1}}\,d\sigma_{2}\] \[=(-1)^{n_{1}}c_{T_{2}}\begin{pmatrix}q_{2}&m_{2}&n_{2}\\ 0&0&0\end{pmatrix}\begin{pmatrix}q_{2}&m_{2}&n_{2}\\ 0&m_{1}&-n_{1}\end{pmatrix}\] \[=(-1)^{m_{1}}\delta_{m_{1},n_{1}}c_{T_{2}}\begin{pmatrix}q_{2}&m_{ 2}&n_{2}\\ 0&0&0\end{pmatrix}\begin{pmatrix}q_{2}&m_{2}&n_{2}\\ 0&m_{1}&-m_{1}\end{pmatrix}.\] If \(q_{1}>0\), then \[I(T_{1},T_{2}) =\frac{(-1)^{n_{1}}}{\sqrt{2}}\int_{S^{2}}\left[\widetilde{Y}_{q _{2}}^{-q_{1}}\widetilde{Y}_{m_{2}}^{m_{1}}\widetilde{Y}_{n_{2}}^{-n_{1}}+(-1) ^{q_{1}}\widetilde{Y}_{q_{2}}^{q_{1}}\widetilde{Y}_{m_{2}}^{m_{1}}\widetilde{ Y}_{n_{2}}^{-n_{1}}\right]d\sigma_{2}\] \[=\frac{(-1)^{n_{1}}}{\sqrt{2}}c_{T_{2}}\begin{pmatrix}q_{2}&m_{2} &n_{2}\\ 0&0&0\end{pmatrix}\left[\begin{pmatrix}q_{2}&m_{2}&n_{2}\\ -q_{1}&m_{1}&-n_{1}\end{pmatrix}+(-1)^{q_{1}}\begin{pmatrix}q_{2}&m_{2}&n_{2} \\ q_{1}&m_{1}&-n_{1}\end{pmatrix}\right].\] If \(q_{1}<0\), then \[I(T_{1},T_{2}) =\frac{i(-1)^{n_{1}}}{\sqrt{2}}\int_{S^{2}}\left[\widetilde{Y}_{q _{2}}^{q_{1}}\widetilde{Y}_{m_{2}}^{m_{1}}\widetilde{Y}_{n_{2}}^{-n_{1}}-(-1) ^{q_{1}}\widetilde{Y}_{q_{2}}^{-q_{1}}\widetilde{Y}_{m_{2}}^{m_{1}}\widetilde{ Y}_{n_{2}}^{-n_{1}}\right]d\sigma_{2}\] \[=\frac{i(-1)^{n_{1}}}{\sqrt{2}}c_{T_{2}}\begin{pmatrix}q_{2}&m_{2} &n_{2}\\ 0&0&0\end{pmatrix}\left[\begin{pmatrix}q_{2}&m_{2}&n_{2}\\ q_{1}&m_{1}&-n_{1}\end{pmatrix}-(-1)^{q_{1}}\begin{pmatrix}q_{2}&m_{2}&n_{2} \\ -q_{1}&m_{1}&-n_{1}\end{pmatrix}\right].\] This completes the proof for \(I(T_{1},T_{2})\). To establish the formula for \(I(T_{j-1},T_{j})\), we need only show the integral in (31) is equal to \(H(T_{j-1},T_{j})\). We follow the proof in [20, Section V]. Let \(P_{\beta}(z)\) and \((\alpha)_{\beta}=\Gamma(\alpha+\beta)/\Gamma(\alpha)\) denote the Legendre polynomial of degree \(\beta\) and the Pochhammer's symbol, respectively. The first step is to combine the connection sum formula [Nis, Eq. 18.18.16] for Gegenbauer polnomials with \(\lambda=1/2\) together with the fact that \(C_{\beta}^{(1/2)}(z)=P_{\beta}(z)\) [Nis, Eq. 18.7.8]: \[C_{\beta}^{(\alpha)}(z)=\sum_{\ell=0}^{\lfloor\beta/2\rfloor}V(\beta,\alpha, \ell)P_{\beta-2\ell}(z),\quad\text{where }V(\beta,\alpha,\ell)\coloneqq(1+2\beta-4\ell)\frac{(\alpha)_{\beta-\ell}}{ (\frac{3}{2})_{\beta-\ell}}\frac{(\alpha-\frac{1}{2})_{\ell}}{\ell!}.\] Fix \(j\in\{3,4,\ldots,d\}\) and choose \(\beta_{j}^{i}=\text{diff}_{j}^{i}\) and \(\alpha_{j}^{i}=t_{j-1}^{i}+\nu_{j}\). 
The integral in (31) is equal to \[\begin{split}\int_{-1}^{1}&(1-z^{2})^{\frac{s_{j-1}}{2}+ \nu_{j}-\frac{1}{2}}\left(\prod_{i=1}^{3}\sum_{\ell_{i}=0}^{\lfloor\beta_{j} ^{i}/2\rfloor}V(\beta_{j}^{i},\alpha_{j}^{i},\ell_{i})P_{\beta_{j}^{i}-2\ell_{ i}}(z)\right)dz\\ &=\sum_{\begin{subarray}{c}0\leq t_{1}\leq\lfloor\beta_{j}^{1}/2 \rfloor\\ 0\leq\ell_{2}\leq\lfloor\beta_{j}^{2}/2\rfloor\\ 0\leq\ell_{3}\leq\lfloor\beta_{j}^{3}/2\rfloor\end{subarray}}\prod_{i=1}^{3}V( \beta_{j}^{i},\alpha_{j}^{i},\ell_{i})\int_{-1}^{1}(1-z^{2})^{\frac{s_{j-1}}{2 }+\nu_{j}-\frac{1}{2}}\prod_{i=1}P_{\beta_{j}^{i}-2\ell_{i}}(z)\,dz.\end{split} \tag{32}\] To evaluate the integral involving the triple product of Legendre polynomials, we use the fact that the product of Legendre polynomials can be written in terms of the Wigner \(3j\)-symbol by [20, Eq. 34.3.19] \[P_{b_{1}}(z)P_{b_{2}}(z)=\sum_{\tau_{1}\in\mathbb{N}}(2\tau_{1}+1)\begin{pmatrix} b_{1}&b_{2}&\tau_{1}\\ 0&0&0\end{pmatrix}^{2}P_{\tau_{1}}(z).\] Applying this identity twice and using the fact that odd permutations of columns of Wigner \(3j\)-symbols produce a phase factor [20, Eq. 34.3.9], we obtain \[\begin{split}&\prod_{i=1}^{3}P_{\beta_{j}^{i}-2\ell_{i}}(z)=P_{ \beta_{j}^{1}-2\ell_{1}}(z)\sum_{\tau_{1}\in\mathbb{N}}(2\tau_{1}+1)\begin{pmatrix} \beta_{j}^{2}-2\ell_{2}&\beta_{j}^{3}-2\ell_{3}&\tau_{1}\\ 0&0&0\end{pmatrix}^{2}P_{\tau_{1}}(z)\\ &=\sum_{\tau_{1},\tau_{2}\in\mathbb{N}}(2\tau_{1}+1)(2\tau_{2}+1) \begin{pmatrix}\beta_{j}^{2}-2\ell_{2}&\beta_{j}^{3}-2\ell_{3}&\tau_{1}\\ 0&0&0\end{pmatrix}^{2}\begin{pmatrix}\beta_{j}^{1}-2\ell_{1}&\tau_{1}&\tau_{2} \\ 0&0&0\end{pmatrix}^{2}P_{\tau_{2}}(z).\end{split} \tag{33}\] Substituting (33) into (32) and comparing the resulting expression with the given expression for \(H(T_{j-1},T_{j})\), we need only show \[\int_{-1}^{1}(1-z^{2})^{\frac{s_{j-1}}{2}+\nu_{j}-\frac{1}{2}}P_{\tau_{2}}(z) \,dz=L(s_{j-1},\nu_{j},\tau_{2}).\] This follows from applying the following integration formula for Legendre polynomials with \(\gamma=(s_{j-1}+2\nu_{j}+1)/2\)[1, Eq. 7.132.1 with \(\mu=0\)]: \[\int_{-1}^{1}(1-z^{2})^{\gamma-1}P_{\tau_{2}}(z)\,dz=\frac{\pi\Gamma^{2}( \gamma)}{\Gamma\left(\gamma+\frac{\tau_{2}}{2}+\frac{1}{2}\right)\Gamma\left( \gamma-\frac{\tau_{2}}{2}\right)\Gamma\left(\frac{\tau_{2}}{2}+1\right)\Gamma \left(-\frac{\tau_{2}}{2}+\frac{1}{2}\right)},\ \ \text{Re}(\gamma)>0.\] Finally, we may assume that \(\tau_{2}\) is even, since otherwise \((1-z^{2})^{(s_{j-1}+2\nu_{j}-1)/2}P_{\tau_{2}}(z)\) is an odd function which results in \(L(s_{j-1},\nu_{j},\tau_{2})=0\). We now combine Lemma 5.1 and Theorem 5.2 to prove Theorem 1.2. Proof of Theorem 1.2.: Choose the perturbation function \(\rho=Y_{2,q}\) with the tuple \(q=(0,2,\ldots,2)\). With this choice of \(\rho\), Lemma 4.2 tells us that the trace of \(M^{(d,k)}\) is \(0\). Since \(M^{(d,k)}\) is Hermitian which is diagonalisable, it suffices to show that \(M^{(d,k)}\) has a nonzero entry. In that case, we must have \(\lambda_{k,N(d,k)}^{(1)}=\lambda_{N_{d,k+1}}^{(1)}>0\) since we defined it to be the largest eigenvalue of \(M^{(d,k)}\). We claim that the diagonal entry of \(M^{(d,k)}\) corresponding to the tuple \(m=(k,k,\ldots,k)\) is one such nonzero entry. From Lemma 5.1, we need only show that \(W_{q,m,m}^{2,k}\neq 0\) with our choice of tuples. Recall the definition of the \(3\)-tuple \(T_{j}=(q_{j},m_{j},n_{j})\) for \(j=1,2,\ldots,d\). From Theorem 5.2, we have \(c_{T_{2}}=\sqrt{\frac{5}{4\pi}}(2k+1)\neq 0\) and it follows from [20, Eqs. 
34.3.5 & 34.3.7] that \[I(T_{1},T_{2})=(-1)^{k}c_{T_{2}}\begin{pmatrix}2&k&k\\ 0&0&0\end{pmatrix}\begin{pmatrix}2&k&k\\ 0&k&-k\end{pmatrix}\neq 0.\] Next, observe that for each \(j=3,4,\ldots,d\), we have \[\operatorname{diff}_{j}^{i}=t_{j}^{i}-t_{j-1}^{i}=0,\ \ i=1,2,3,\quad\text{and} \quad s_{j-1}=q_{j-1}+m_{j-1}+n_{j-1}=2+2k.\] For a fixed \(j\in\{3,4,\ldots,d\}\), Theorem 5.2 gives \[I(T_{j-1},T_{j})=\frac{V(0,2+\nu_{j},0)V(0,k+\nu_{j},0)^{2}}{\mu_{j}^{(1)}\mu_{ j}^{(2)}\mu_{j}^{(3)}}\sum_{\begin{subarray}{c}\tau_{1},\tau_{2}\in\mathbb{N}\\ \tau_{2}\text{ even}\end{subarray}}(2\tau_{1}+1)(2\tau_{2}+1)L(2+2k,\nu_{j}, \tau_{2})\] \[\times\begin{pmatrix}0&0&\tau_{1}\\ 0&0&0\end{pmatrix}^{2}\begin{pmatrix}0&\tau_{1}&\tau_{2}\\ 0&0&0\end{pmatrix}^{2}.\] Since the Pochhammer's symbol satisfies \((\alpha)_{0}=1\) for any \(\alpha\geq 0\), we see that \(V(0,2+\nu_{j},0)\) and \(V(0,k+\nu_{j},0)\) are both \(1\). Next, using the triangle conditions for Wigner \(3j\)-symbols, we must have \(\tau_{1}=0\) and subsequently \(\tau_{2}=0\). Recalling \(\nu_{j}=(j-1)/2\) and using the fact that \(\begin{pmatrix}0&0&0\\ 0&0&0\end{pmatrix}=1\) (see [20, Eq. 34.3.1]), we find \[I(T_{j-1},T_{j})=\frac{L(2+2k,\nu_{j},0)}{\mu_{j}^{(1)}\mu_{j}^{(2)}\mu_{j}^{ (3)}}\begin{pmatrix}0&0&0\\ 0&0&0\end{pmatrix}^{4}=\frac{\sqrt{\pi}\Gamma\left(k+1+\frac{j}{2}\right)}{ \mu_{j}^{(1)}\mu_{j}^{(2)}\mu_{j}^{(3)}\Gamma\left(k+\frac{j+3}{2}\right)}>0.\] This completes the proof since \(W_{q,m,m}^{2,k}\) is a product of \(I(T_{1},T_{2})\) and \(\{I(T_{j-1},T_{j})\}_{j=3}^{d}\). Following [11, Corollary 2.3], we wish to establish the first-order behaviour of the Steklov eigenvalues for a nearly hyperspherical domain \(\Omega_{\varepsilon}\) of the form (2) with \(\rho=Y_{p,q}(\hat{\theta})\), for both \(p\in\mathbb{N}\) and the \((d-1)\)-tuple \(q\) fixed. The proof in [11] is based on a delicate analysis of the integral \(W_{q,m,n}^{p,k}\) using the selection rules for Wigner \(3j\)-symbols, where the authors proved that the entry (26) for \(M^{(d,k)}\) involving the infinite sum reduced to a finite sum over the set \(\{p\leq 2k,p\text{ even}\}\). **Lemma 5.3**.: _Given \(d\geq 3\) and \(k\in\mathbb{Z}^{+}\), let \(M^{(d,k)}\) and \(\rho\) be the Hermitian matrix and the perturbation function defined in Theorem 3.1 and (2), respectively. Then the entries of \(M^{(d,k)}\) can be written as_ \[M^{(d,k)}_{m,n}=-\frac{1}{2}\sum_{\begin{subarray}{c}p=0\\ p\text{ even}\end{subarray}}^{\infty}\sum_{q=1}^{N(d,k)}A_{p,q}\Big{(}p(p+d-1) +2k\Big{)}W_{q,m,n}^{p,k}. \tag{34}\] Proof.: Fix \(p\in\mathbb{N}\) and \(k\in\mathbb{Z}^{+}\). We claim that if \(p\) is odd, then \(W_{q,m,n}^{p,k}=0\) for all \(q,m,n\) satisfying (9). Recall that \(s_{j}=q_{j}+m_{j}+n_{j}\) for \(j=1,2,\ldots,d\). Looking at the formula for \(I(T_{1},T_{2})\) in Theorem 5.2, we see that \(W_{q,m,n}^{p,k}=0\) if \(\begin{pmatrix}q_{2}&m_{2}&n_{2}\\ 0&0&0\end{pmatrix}\) is zero. Thus, we may assume \(\begin{pmatrix}q_{2}&m_{2}&n_{2}\\ 0&0&0\end{pmatrix}\neq 0\) without loss of generality. By the fourth selection rule for Wigner \(3j\)-symbols, we must have \(s_{2}=q_{2}+m_{2}+n_{2}\) even. Next, we turn our attention to \(I(T_{2},T_{3})\). Recall that \(\operatorname{diff}_{j}^{i}=t_{j}^{i}-t_{j-1}^{i}\) and \(\tau_{2}\) is even. 
The Wigner \(3j\) symbols appearing in \(I(T_{2},T_{3})\) have the form \[\begin{pmatrix}\operatorname{diff}_{3}^{2}-2\ell_{2}&\operatorname{diff}_{3}^{ 3}-2\ell_{3}&\tau_{1}\\ 0&0&0\end{pmatrix}^{2}\begin{pmatrix}\operatorname{diff}_{1}^{1}-2\ell_{1}& \tau_{1}&\tau_{2}\\ 0&0&0\end{pmatrix}^{2}.\] Looking at the Wigner \(3j\)-symbols from \(I(T_{2},T_{3})\), the fourth selection rule tells us that we may assume both \(\operatorname{diff}_{3}^{1}+\tau_{1}\) and \((\operatorname{diff}_{3}^{2}-2\ell_{2}+\operatorname{diff}_{3}^{3}-2\ell_{3}+ \tau_{1})\) are even. In this case, \[\operatorname{diff}_{3}^{1}+\tau_{1} =q_{3}-q_{2}+\tau_{1}\] \[\operatorname{diff}_{3}^{2}-2\ell_{2}+\operatorname{diff}_{3}^{3}-2 \ell_{3}+\tau_{1} =m_{3}-m_{2}-2\ell_{2}+n_{3}-n_{2}-2\ell_{3}+\tau_{1}\] \[=(s_{3}-q_{3})-(s_{2}-q_{2})+\tau_{1}-2\ell_{2}-2\ell_{3}\] \[=(s_{3}-s_{2})-(\operatorname{diff}_{3}^{1}-\tau_{1})-2\ell_{2}-2 \ell_{3}.\] Since \(\operatorname{diff}_{3}^{1}-\tau_{1}=\operatorname{diff}_{3}^{1}+\tau_{1}-2 \tau_{1}\) and \(s_{2}\) are both even, we may conclude that \(s_{3}\) is even as well. Continuing inductively, we conclude that \(\operatorname{diff}_{j}^{1}+\tau_{1}\) and \(s_{j}\) is even (so that \(I(T_{j-1},T_{j})\) is nonzero) for each \(j=3,4,\ldots,d-1\). We now consider \(I(T_{d-1},T_{d})\). By the arguments above, we can assume that \(\operatorname{diff}_{d}^{1}+\tau_{1}\) is even, which in turn implies that \(\operatorname{diff}_{d}^{1}-\tau_{1}\) is also even. Thus, we compute \[\operatorname{diff}_{d}^{2}-2\ell_{d-1}+\operatorname{diff}_{d}^{3} -2\ell_{d}+\tau_{1} =m_{d}-m_{d-1}-2\ell_{d-1}+n_{d}-n_{d-1}-2\ell_{d}+\tau_{1}\] \[=(s_{d}-q_{d})-(s_{d-1}-q_{d-1})+\tau_{1}-2\ell_{d-1}-2\ell_{d}\] \[=(s_{d}-s_{d-1})-(\operatorname{diff}_{d}^{1}-\tau_{1})-2\ell_{d- 1}-2\ell_{d}\] \[=p+2k-s_{d-1}-(\operatorname{diff}_{d}^{1}-\tau_{1})-2\ell_{d-1}- 2\ell_{d},\] where we have used that that \((t_{d}^{1},t_{d}^{2},t_{d}^{3})=(q_{d},m_{d},n_{d})=(p,k,k)\). But if \(p\) is odd, then \(\operatorname{diff}_{d}^{2}-2\ell_{d-1}+\operatorname{diff}_{d}^{3}-2\ell_{d}+\tau _{1}\) must be odd as well. By the fourth selection rule, we conclude that \(I(T_{d-1},T_{d})=0\), so that \(W_{q;m,n}^{p,k}=0\) as well, completing the proof. The following theorem is immediate from Corollary 4.3 and Lemma 5.3. **Theorem 5.4**.: _Given \(d\geq 3\), consider a nearly hyperspherical domain \(\Omega_{\varepsilon}\) of the form (2) with \(A_{p,q}=\delta_{p,p^{\prime}}\delta_{q,q^{\prime}}\). For any positive integer \(k\in\mathbb{Z}^{+}\) and any \(j=1,2,\ldots,N(d,k)\),_ 1. _if_ \(p^{\prime}=0\) _and_ \(q^{\prime}=(0,0,\ldots,0)\) _is the trivial_ \((d-1)\)_-tuple, then_ \(\lambda_{k,j}^{(1)}=-k|S^{d}|^{-1/2}\)_._ 2. _if_ \(p^{\prime}\) _is odd, then the Steklov eigenvalue_ \(\lambda_{k,j}^{\varepsilon}\) _is unperturbed at first-order in_ \(\varepsilon\)_._ Reducing the infinite sum (26) to the case where \(p\leq 2k\) is intractable for \(d\geq 3\). More specifically, we compute the entry of the matrix \(M^{(d,k)}\) using (26) for \(p=2k+2\xi\) with \(\xi\in\mathbb{Z}^{+}\), \(q=(0,2k,\ldots,2k)\), \(m=n=(k,k,\ldots,k)\), and we see numerically that this particular entry is zero due to cancellations of Wigner \(3j\)-symbols appearing in the formula of \(W_{q,m,n}^{p,k}\) (see Theorem 5.2). #### Acknowledgements We would like to express our gratitude to Chiu-Yen Kao and Nathan Schroeder for sharing their helpful insights into the Steklov eigenvalue problem in arbitrary dimensions. 
We would also like to thank Yerim Kone, Lucas Alland, and Amy Liu for their work in 2D Steklov shape perturbation, and Swarthmore College for funding their summer work.
2305.01319
Long-Term Rhythmic Video Soundtracker
We consider the problem of generating musical soundtracks in sync with rhythmic visual cues. Most existing works rely on pre-defined music representations, leading to the incompetence of generative flexibility and complexity. Other methods directly generating video-conditioned waveforms suffer from limited scenarios, short lengths, and unstable generation quality. To this end, we present Long-Term Rhythmic Video Soundtracker (LORIS), a novel framework to synthesize long-term conditional waveforms. Specifically, our framework consists of a latent conditional diffusion probabilistic model to perform waveform synthesis. Furthermore, a series of context-aware conditioning encoders are proposed to take temporal information into consideration for a long-term generation. Notably, we extend our model's applicability from dances to multiple sports scenarios such as floor exercise and figure skating. To perform comprehensive evaluations, we establish a benchmark for rhythmic video soundtracks including the pre-processed dataset, improved evaluation metrics, and robust generative baselines. Extensive experiments show that our model generates long-term soundtracks with state-of-the-art musical quality and rhythmic correspondence. Codes are available at \url{https://github.com/OpenGVLab/LORIS}.
Jiashuo Yu, Yaohui Wang, Xinyuan Chen, Xiao Sun, Yu Qiao
2023-05-02T10:58:29Z
http://arxiv.org/abs/2305.01319v2
# Long-Term Rhythmic Video Soundtracker ###### Abstract We consider the problem of generating musical soundtracks in sync with rhythmic visual cues. Most existing works rely on pre-defined music representations, leading to the incompetence of generative flexibility and complexity. Other methods directly generating video-conditioned waveforms suffer from limited scenarios, short lengths, and unstable generation quality. To this end, we present Long-Term Rhythmic Video Soundtracker (LORIS), a novel framework to synthesize long-term conditional waveforms. Specifically, our framework consists of a latent conditional diffusion probabilistic model to perform waveform synthesis. Furthermore, a series of context-aware conditioning encoders are proposed to take temporal information into consideration for a long-term generation. Notably, we extend our model's applicability from dances to multiple sports scenarios such as floor exercise and figure skating. To perform comprehensive evaluations, we establish a benchmark for rhythmic video soundtracks including the pre-processed dataset, improved evaluation metrics, and robust generative baselines. Extensive experiments show that our model generates long-term soundtracks with state-of-the-art musical quality and rhythmic correspondence. Codes are available at [https://github.com/OpenGVLab/LORIS](https://github.com/OpenGVLab/LORIS). Machine Learning, ICML, ICML ## 1 Introduction Automatic music generation has always been regarded as an iconic step towards a creative AI-generated content system. Continuous efforts (Dhariwal et al., 2020; Pasini and Schluter, 2022; Caillon and Esling, 2021; Kumar et al., 2019; Huang et al., 2018; von Rutte et al., 2022; Roberts et al., 2018; Ren et al., 2020; Dong et al., 2022; Mittal et al., 2021) have been made to drive machines interactively generating melodious music steered by given conditionings such as genre, tempo, and style. In this paper, we work on a tightly-coupled conditioning scenario, that is, video-conditioned music generation (a.k.a. video soundtracks) which is more challenging than other conditional music generation tasks due to its cross-modality and temporal-correlated nature. Rhythmic video soundtracks require the model to consider the intrinsic correlations between human movements and music rhythms, and further leverage such temporal alignments as guidance for a conditional generation. To date in the literature, some works (Gan et al., 2020; Di et al., 2021; Su et al., 2020; Qi et al., 2020; Qi et al., 2021) investigate cross-modality soundtracks by using pre-defined symbolic musical representations such as MIDI, REMI, and piano-roll that can be autoregressively generated. However, this kind of representation is not expressive enough to cover the diverse range of sounds we hear in typical soundtracks, which hinders the model from synthesizing complex and diverse music. Recently, some advances (Zhu et al., 2022; 20) directly generate waveforms in a non-autoregressive manner, yet these works heavily rely on the computationally-expensive pre-trained music encoder (Dhariwal et al., 2020), thereby resulting in the short-length (2\(\sim\)6s) and low-quality consequences. Moreover, owing to the insufficiency of paired music-video data, video soundtracks are limited to the dancing scenarios, which severely restrains the model's generalizability for Figure 1: Overview of the approach. We tackle the task of generating long-term soundtracks based on given rhythmic videos. 
The LORIS framework is first proposed to generate long-term rhythm-correlated waveforms. We then establish a novel benchmark, including a large-scale dataset varying from dancing to sports and a set of improved metrics for long-term soundtracks. downstream applications. In this paper, we introduce LORIS, the Long-Term Rhythmic Video Soundtracker to efficiently synthesize high-quality waveforms. At the heart of our model lies a latent conditional diffusion model where multiple conditionings (e.g., RGB, motions, genre) are hierarchically infused into the diffusion procedure. Specifically, we extract visual rhythms based on the cadent movement of human motions, then introduce the Hawkes Process (Hawkes, 1971; Mei and Eisner, 2017) on the visual rhythms to take temporal context into consideration. Besides, we also model the temporal relationship by adding a Bi-LSTM (Hochreiter and Schmidhuber, 1997) over the RGB embedding. These visual and motion features are conditioned via a cross-modal attention block. For the music generation part, we adopt a latent diffusion model (LDM) to encode the input waveforms into the latent feature spaces, then add and remove the Gaussian noise to/from the compressed features according to a discrete T-step schedule (Karras et al., 2022). We also establish a comprehensive benchmark to facilitate the exploration of the rhythmic video soundtrack task. First, we build a large-scale dataset based on existing dancing and sports datasets to provide 86.43h long-term, high-quality raw videos with corresponding 2D poses, RGB features, and ameliorated audio waveforms. Next, we show the incapability of existing short-length music metrics in assessing long-term video soundtracks and propose an improved version. Finally, we conduct experiments on the established benchmark to fully evaluate LORIS on music quality and rhythmic correspondence. We show that our model, surpassing the existing methods on all metrics, can play the role of a strong baseline for the following works. In conclusion, our main contributions are three-fold: * We are the first to propose a context-aware conditional diffusion framework to perform long-term video soundtrack generation on complex rhythmic scenarios. * We propose a robust benchmark, including a large-scale rhythmic video soundtrack dataset, a set of improved evaluation metrics, and a carefully-designed baseline for the subsequent research. * Extensive experiments demonstrate that our framework is capable of generating long-term, visual-correlated musical waveforms, which benefits the creation of the musical art community. ## 2 Related Work **Uni-modal Music Generation.** The family of uni-modal music generation embraces two branches. The first is in favor of using pre-defined music representations for editable music generation. Some methods (Huang et al., 2018; Huang and Yang, 2020; Ren et al., 2020; Dong et al., 2022; von Rute et al., 2022; Su et al., 2020) focus on transformer-based (Vaswani et al., 2017) autoregressive models, while other advances utilize generative models such as VAE (Brunner et al., 2018; Roberts et al., 2018), GAN (Dong et al., 2018), and DDPM (Hawthorne et al., 2022) for the fast and conditional music synthesis. The other line of work tries to directly generate musical waveforms with less explicit constraints. WaveNet (Oord et al., 2016) first shows the feasibility of autoregressively generating audio waveforms. 
RAVE (Caillon and Esling, 2021) and Jukebox (Dhariwal et al., 2020) leverage the variational autoencoder to perform high-quality audio synthesis. Some GAN-based models (Kumar et al., 2019; Pasini and Schluter, 2022) also manifest promising performance on conditional music generation. **Cross-Modal Music Generation.** To create more flexible music compositions, cross-modality generation has been studied to synthesize music correlated with inter-modality conditionings, e.g. images-to-music generation (Zhang et al., 2022; Sheffer and Adi, 2022) and text-based music generation (Yang et al., 2022; Kreuk et al., 2022; Schneider et al., 2023; Agostinelli et al., 2023; Huang et al., 2023). These tasks usually rely on the correspondence of overall styles, while do not require fine-grained temporal alignments. More recently, several advances try to extend image-to-music generation to videos, the multi-frame scenario which needs the correlation of visual movements and musical melodies. Though some MIDI-based works (Gan et al., 2020; Su et al., 2020; ) generate music in a non-regressive way, the synthesized results are highly formulated and usually mono-instrumental. More recently, D2M-GAN (Zhu et al., 2022) and CDCD (Zhu et al., 2022) directly generate video-conditioned musical waveforms. Though the results are diverse, frames are compressed into a single image as conditioning, thus temporal information is overlooked and synthesized results cannot reflect the alternation of visual movements. Besides, due to their reliance on a large music encoder (Dhariwal et al., 2020), the computation cost is extremely high, thus the music length is constrained to 2\(\sim\)6 seconds. Unlike prior works, our framework generates long-term waveforms (25s\(\sim\)50s) with affordable costs, and our context-aware design learns the audio-visual rhythmic correlation to ensure inter-modality coherence. **Generalized Cross-Modal Generation.** Remarkable advances also exist in other inter-modality generation tasks. Text-to-image synthesis has drawn increasing attention (Ramesh et al., 2022; Rombach et al., 2022; Gu et al., 2022; Tang et al., 2022) in pace with the gorgeous growth of contrastive language-image pre-training (Radford et al., 2021) and diffusion models (Ho et al., 2020), where the synthesized images exhibit high-resolution quality with great diversity and compute-efficiency. Some works (Singer et al., 2022; Hong et al., 2022) also extend text-to-image to conditional video generation, while more recent pioneering methods investigate text-based pose sequences (Xie et al., 2022) and 3D scenes (Poole et al., 2022) generation. In this work, we refer to the latent conditional diffusion mechanism utilized by text-to-image approaches and attach more visual cues to stabilize long-term music synthesis. ## 3 Methodology LORIS is depicted in Figure 2. Given a music-video pair, the latent diffusion model (Section 3.1) is used to synthesize auditory waveforms, and a set of conditioning encoders (Section 3.2) is designed to generate context-aware visual cues. Besides, a hierarchical conditional diffusion module (Section 3.3) is proposed to add cross-modality constraints. ### Unconditional Latent Diffusion Inspired by the recent image-to-text generation advance (Rombach et al., 2022), we use a similar open-source architecture audio-diffusion-pytorch (Schneider, 2023) pretrained on a large-scale YouTube music dataset as our unimodal diffusion backbone. 
Due to the numerous amount of audio sampling points, we encode the input waveforms into a compressed latent representation \(z\sim p_{data}\) to lower the computation cost. Unconditional latent diffusion consists of a forward diffusion process and a reverse denoising procedure. The forward process can be regarded as a Markov chain that progressively corrupts the initial latent codes \(z_{0}\) into Gaussian noise \(z_{T}\sim\mathcal{N}(0,\mathbf{I})\) with a sequence of \(T\) steps. In contrast, the objective of the denoising process is to reverse the Gaussian distribution to the original vectors in identity steps. The denoising error can be optimized via the L2 objective. We seek to directly predict \(z\) rather than utilizing the \(\epsilon\)-prediction formulation (Ho et al., 2020): \[L_{LD}:=\mathbb{E}_{z\sim p_{data},t\sim[1,T]}\left[\lambda(\sigma_{t})\|D_{ \theta}(z,\sigma_{t})-z\|\right]_{2}^{2}], \tag{1}\] where \(D_{\theta}\) is the denoising network parameterized by \(\theta\), \(\lambda(\sigma_{t})\) denotes an optional weighting function. Figure 2: Illustration of our LORIS framework. We adopt a latent diffusion probabilistic model to perform conditional audio generation. Given an input of music-video pairs, a set of context-aware conditioning encoders first transform video frames, human poses, and categorical labels into visual embeddings, visual rhythm, and genre embeddings. Then a hierarchical conditional diffusion procedure is employed to serially attend these conditionings into the audio diffusion model, where visual rhythm is first embedded into rhythm conditioning via a Hawkes position encoding module. The entire LORIS framework is optimized jointly. In practice, EDM (Karras et al., 2022) is employed to improve the denoiser: \[D_{\theta}(z,\sigma_{t})=c_{skip}(\sigma_{t})z+c_{out}(\sigma_{t})f_{\theta}(c_{ in}(\sigma_{t})z,\frac{1}{4}ln(\sigma_{t})), \tag{2}\] where \(c_{skip}(\sigma_{t}),c_{out}(\sigma_{t})\), and \(c_{in}(\sigma_{t})\) are scaling parameters and \(\lambda(\sigma_{t})\) is ameliorated as \(1/c_{out}(\sigma_{t})^{2}\). Details about scaling parameters are listed in Appendix A.2. ### Context-Aware Conditioning Encoders Previous waveform-based methods (Zhu et al., 2022;b) share a common paradigm that compresses the temporal dimension and encodes frames into a global visual embedding \(f_{e}\in\mathbb{R}^{1\times C}\), where C denotes the hidden dimension. Although these global features are flexible to act as conditional guidance, contextual information is overlooked, thence the model is incapable of synthesizing correlated music that responds to the change of visual contents. Such a phenomenon also elucidates why existing waveform-based methods can only tackle short-length videos. To this end, we model the temporal correspondence explicitly and construct visual conditioning \(c_{v}\), rhythm conditioning \(c_{r}\), and genre conditioning \(c_{g}\) (if necessary) via different conditioning encoders. **Visual Conditioning.** For the visual encoder, we follow previous methods (Zhu et al., 2022;b) that use pre-trained I3D (Carreira and Zisserman, 2017) network as feature extractor. 
Differently, we do not perform feature aggregation across the temporal dimension and leverage a Bi-LSTM (Hochreiter and Schmidhuber, 1997) layer to capture long-range temporal dependencies: \[c_{v},(h,mc)=BiLSTM(Enc(i_{1},i_{2},...,i_{T}),h_{0},mc_{0}), \tag{3}\] where \(Enc\) is the visual encoder, \(I=\{i_{1},i_{2},...,i_{T}\}\) are input visual frames, \(h\) and \(mc\) denote hidden state and memory cell state vectors, \(h_{0},mc_{0}\) indicate their initial state. The parameters of I3D are frozen during training while the Bi-LSTM layer is involved in optimization. **Rhythm Conditioning.** Several approaches have been proposed to extract dance rhythms, such as measuring the rapid changes of optical flows (Davis and Agrawala, 2018), performing Short-Time-Fourier Transform on human skeletons (Su et al., 2021), or merging neural networks with traditional graphic functions (Yu et al., 2022). Considering the commonality of rhythmic videos, we put forward an improved rule-based method to encode all frames of each video into a binary vector to represent visual rhythm points. Concretely, we first extract 2D poses \(P(t,j,x,y)\) via pre-trained models, where \(t\) and \(j\) denote the current temporal position and key joint, \(x,y\) denotes the joint coordinate, then calculate the first-order difference of as 2D motions \(M(t,j,x,y)\). To comprehensively estimate the kinematic amplitude and strength, we utilize the directogram (Davis and Agrawala, 2018), a 2D matrix \(D\) analogous to the audio spectrogram to represent the change of motions. Motions in each timestamp are first divided into \(K\) bins based on their angles with x-axis by \(tan^{-1}\frac{y}{x}\), and the weighted summation is computed as the directogram: \[D(t,\theta)=\sum_{j}\|M(t,j)\|_{2}\mathbb{1}_{\theta}(\angle M(t,j)), \tag{4}\] \[\mathbb{1}_{\theta}(\phi):=\begin{cases}1,&\text{if }|\theta-\phi|\leq\frac{2 \pi}{K},\\ 0,&\text{otherwise}.\end{cases} \tag{5}\] Similar to audio onset envelopes, we calculate the bin-wise difference of the directogram, sum all positive values in each angular column, and normalize the resulting curves into the range of \([0,1]\) as the visual onset envelopes \(O\): \[O(t)=\eta(\sum_{k=1}^{K}max(0,|D(t,k)|-|D(t-1,k)|)), \tag{6}\] where \(D(t,k)\) denotes the directogram volume at \(t\)-th time step and \(k\)-th bin, \(\eta\) is the normalized function. Although \(O(t)\) can already be regarded as the visual rhythmic conditioning, we further employ a peak-picking strategy (Bock et al., 2012) to simplify the continuous curves into discrete binary codes for the convenience of conditional generation. Specifically, the temporal point \(t\) can be identified as the \(i\)-th visual rhythmic point only when the following prerequisites are satisfied: \[c_{r}(t_{i})=max(O[t_{i}-pre_{m}:t_{i}+post_{m}]),\] \[c_{r}(t_{i})\geq mean(O[t_{i}-pre_{a}:t_{i}+post_{a}])+\delta, \tag{7}\] \[t_{i}-t_{i-1}>\omega\] where \(c_{r}(t_{i})\) is the \(i\)-th rhythm peak in temporal position \(t\), \(pre_{m}\) and \(post_{m}\) denote the distance of finding local maxima before and after the current position; \(pre_{a}\) and \(post_{a}\) indicate the distance of computing the local average, \(\delta\) is the threshold that the local maxima must be above the local average. Finally, we get a binary vector \(c_{r}\in\mathbb{R}^{T\times 1}\) that represents visual rhythm peaks, where 1 denotes that the current time step is one of the rhythm points. 
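A minimal NumPy sketch of the rule-based rhythm extraction in Eqs. (4)-(7) is given below. This is our illustrative reading of the procedure, not the released LORIS code: the angular indicator of Eq. (5) is simplified to hard binning, the normalisation \(\eta\) is taken as min-max scaling, and all window parameters are placeholder values.

```python
import numpy as np

def visual_rhythm(poses, K=8, pre_m=3, post_m=3, pre_a=10, post_a=10, delta=0.05, omega=5):
    """poses: (T, J, 2) array of 2D joint coordinates -> binary rhythm-peak vector."""
    motion = np.diff(poses, axis=0)                     # first-order difference M(t, j)
    mag = np.linalg.norm(motion, axis=-1)               # ||M(t, j)||_2
    ang = np.arctan2(motion[..., 1], motion[..., 0])    # angle with the x-axis
    bins = ((ang + np.pi) / (2 * np.pi) * K).astype(int) % K   # hard binning (simplifies Eq. 5)
    D = np.zeros((mag.shape[0], K))                     # directogram, Eq. (4)
    for t in range(mag.shape[0]):
        for k in range(K):
            D[t, k] = mag[t][bins[t] == k].sum()
    flux = np.maximum(0.0, np.abs(D[1:]) - np.abs(D[:-1])).sum(axis=1)   # Eq. (6), before eta
    O = (flux - flux.min()) / (flux.max() - flux.min() + 1e-8)           # normalise to [0, 1]
    peaks, last = np.zeros(len(O), dtype=int), -omega
    for t in range(len(O)):                             # peak picking, Eq. (7)
        local_max = O[max(0, t - pre_m): t + post_m + 1].max()
        local_avg = O[max(0, t - pre_a): t + post_a + 1].mean()
        if O[t] == local_max and O[t] >= local_avg + delta and t - last > omega:
            peaks[t], last = 1, t
    return peaks                                        # binary rhythm vector c_r

# toy usage: a 4-joint "skeleton" drifting with random jerks
rng = np.random.default_rng(0)
poses = np.cumsum(rng.normal(scale=0.5, size=(200, 4, 2)), axis=0)
print(visual_rhythm(poses).sum(), "rhythm peaks detected")
```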
We also explain the rationale of our improved visual rhythm extraction method in Appendix C. **Genre Conditioning.** Rhythmic videos can be categorized into different types based on their characteristics, such as the choreography styles of dancing videos. Taking genre into consideration could facilitate some interesting applications like style transfer and music editing. We regard the musical genre as global conditioning and embed one-hot categorical labels \(G\) into genre features via linear projection: \[c_{g}=Embed(G). \tag{8}\] It is noted that genre conditioning can only be utilized for datasets that include musical genre or category labels. ### Hierarchical Conditional Diffusion Considering the enormous success of the cross-attention mechanism (Vaswani et al., 2017) in conditional generation (Ramesh et al., 2022; Rombach et al., 2022), we adopt such an approach to model the correlation between the latent feature \(z\) and the conditioning \(c\). One obstacle for conditional diffusion is that rhythm peaks are binary vectors, whose dimensionality is too low to perform feature interactions. Therefore, we employ a trainable linear layer \(W_{r}\), similar to the genre encoder, to project the binary vector to a rhythm embedding matrix \(c_{r}W_{r}\in\mathbb{R}^{T\times C}\). We further argue that in a sequence of rhythm points, the peak-picking mechanism ensures that a temporal point neighboring a rhythm peak is unlikely to be another rhythm peak, thus we can explicitly add positional penalties to those temporal points adjacent to rhythm peaks. To this end, we introduce the Hawkes process (Hawkes, 1971; Mei and Eisner, 2017; Zhang et al., 2020), where additional temporal offsets are attached over the raw positional encoding in each dimension: \[Hawkes^{k}(t_{i})=Tri(\omega_{k}\times i+w_{k}\times t_{i}), \tag{9}\] where \(Tri\) denotes \(sin\) and \(cos\) for the even and odd dimensions, \(\omega_{k},w_{k}\) are the learnable parameters in the \(k\)-th dimension for positional encoding and Hawkes encoding, respectively, and \(t_{i}\) denotes the temporal position of the \(i\)-th rhythm peak. In practice, the shifted position can be computed as \(i^{\prime}=i+\frac{w_{k}}{\omega_{k}}t_{i}\), where the ratio \(\frac{w_{k}}{\omega_{k}}\) collected over all dimensions can be regarded as a learnable parameter matrix. In this way, our model takes contextual rhythm information into consideration for more accurate rhythmic control. Then we add the shifted positional embeddings to the rhythm embedding and use a Transformer (Vaswani et al., 2017) decoder block to perform feature integration: \[\hat{c}_{r}=TrmDec(c_{r}W_{r}+Hawkes(t_{i})). \tag{10}\] After acquiring the conditional embeddings \(\{c_{v},\hat{c}_{r},c_{g}\}\), cross-modal attention is employed to interact conditional embeddings with intermediate layers of the U-Net (Ronneberger et al., 2015) by computing feature similarity: \[Att(c_{\alpha},\psi_{i}(z))=Softmax(\frac{W_{Q}^{i}\psi_{i}(z)\cdot(W_{K}^{i}c_{\alpha})^{T}}{\sqrt{d}})\cdot W_{V}^{i}c_{\alpha}, \tag{11}\] where \(c_{\alpha}\in\{c_{v},\hat{c}_{r},c_{g}\}\) denotes the conditional embeddings, \(\psi_{i}(z)\) denotes the \(i\)-th intermediate tensor of the latent feature \(z\), and \(W_{Q}^{i},W_{K}^{i},W_{V}^{i}\) are learnable projection matrices. Notably, we put conditionings serially to adapt to the divergent temporal lengths of visual cues, by virtue of which we can use one cross-modal attention block to attend RGB, rhythm, and genre embeddings with far less computation cost. 
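The conditioning path of Eqs. (9)-(11) can be sketched in PyTorch as follows. This is a hedged illustration rather than the authors' implementation: module and variable names are ours, the Transformer decoder block of Eq. (10) is omitted for brevity, and single-head attention stands in for the cross-modal block.

```python
import math
import torch
import torch.nn as nn

class HawkesPositionalEncoding(nn.Module):
    """Sinusoidal encoding Tri(omega_k * i + w_k * t_i) with learnable offsets w_k (Eq. 9)."""
    def __init__(self, dim):
        super().__init__()
        self.dim = dim
        k = torch.arange(0, dim, 2).float()
        self.register_buffer("omega", torch.exp(-math.log(10000.0) * k / dim))  # PE frequencies
        self.w = nn.Parameter(torch.zeros(dim // 2))   # Hawkes weights, one per sin/cos pair

    def forward(self, t):                              # t: (B, n) temporal positions of peaks
        i = torch.arange(t.shape[1], device=t.device).float()                 # peak index i
        phase = self.omega * i[None, :, None] + self.w * t[..., None].float() # (B, n, dim/2)
        return torch.cat([torch.sin(phase), torch.cos(phase)], dim=-1)        # (B, n, dim)

class CrossModalAttention(nn.Module):
    """Single-head cross-attention from U-Net activations psi(z) onto a conditioning sequence (Eq. 11)."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim, bias=False) for _ in range(3))
        self.scale = dim ** -0.5

    def forward(self, psi_z, cond):                    # psi_z: (B, L, dim), cond: (B, n, dim)
        att = torch.softmax(self.q(psi_z) @ self.k(cond).transpose(1, 2) * self.scale, dim=-1)
        return att @ self.v(cond)

# toy usage: 4 rhythm peaks per sample, latent sequence of length 256
B, n, L, dim = 2, 4, 256, 128
peak_times = torch.randint(0, 1000, (B, n))
rhythm_emb = nn.Linear(1, dim)(torch.ones(B, n, 1))    # c_r W_r on the selected peak entries
c_r_hat = rhythm_emb + HawkesPositionalEncoding(dim)(peak_times)
out = CrossModalAttention(dim)(torch.randn(B, L, dim), c_r_hat)
print(out.shape)  # torch.Size([2, 256, 128])
```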
Given the conditioning \(C=\{c_{v},\hat{c}_{r},c_{g}\}\), the objective for the latent conditional denoising can be formulated as: \[L_{CLD}:=\mathbb{E}_{z\sim p_{data},t\sim[1,T]}\left[\lambda(\sigma_{t}) \right]\|D_{b}(z,\sigma_{t},C)-z\|_{2}^{2}]. \tag{12}\] ## 4 Benchmark ### Dataset Since prior waveform-based methods tackle short-length videos, the primary obstacle for long-term soundtrack generation is the shortage of paired audio-visual data. To this end, we curate the LORIS dataset based on existing datasets, which involves 86.43h paired videos varying from dances to multiple sports events. The comparison of our dataset with existing datasets is listed in Table 1. To be specific, our dataset incorporates three rhythmic categories: dance, figure skating, and floor exercise. The dancing videos are curated from AIST++ (Li et al., 2021) dataset, a fine-annotated subset of AIST (Tsuchida et al., 2019). We select all videos longer than 25 seconds and preserve their categorical labels to perform genre conditioning. Although 3D meshes and skeletons are available, we only curate the original videos. Figure skating videos are collected from FisV (Xu et al., 2019) dataset and FS1000 (Xia et al., 2022) dataset, and floor exercise videos are from Finegym (Shao et al., 2020) dataset. For the sports videos, we only use the raw videos and do not utilize any annotation or provided features. After curating the raw videos, we make the following pre-processes: 1). We cut off the first and last 5 seconds of each sports video, and divide these videos into 25s and 50s segments. 2). We adopt the sound source-separated framework Spleeter (Hennequin et al., 2020) and employ its 2stem pre-trained model to remove vocals, commentaries, and cheers to acquire pure 16kHz musical accompanies. 3). We manually filter video splits and remove the videos with noisy audio, unseparated vocals and cheers, absent background music, and overmuch missing frames. 4). We upsample the music sample rate to 22kHz. 5). We extract the RGB features of visual frames using I3D (Carreira and Zisserman, 2017) pre-trained on Kinetics (Kay et al., 2017) and Charades (Zhang et al., 2020) datasets, and employ mmPose (Contributors, 2020) to obtain 2D skeletons using HRNet (Sun et al., 2019) pre-trained on MS COCO (Lin et al., 2014) dataset. 6). Finally, we randomly split the dataset with a 90%/5%/5% proportion. To sum up, we curate 12,446 25-second paired videos, including 1,881 dancing videos, 8,585 figure skating videos, and 1,950 floor exercise videos. For the 50-second versions, our dataset includes 4,147 figure skating videos and 660 floor exercise videos. ### Evaluation Metrics We follow the general paradigm of previous works (Zhu et al., 2022;b) that measure musical quality and cross-modality correspondence. For the musical quality, the subjective metrics Mean Opinion Scores (MOS) for the general quality are reported. To investigate rhythm correspondence, we use the improved versions of beats coverage scores (BCS) and beats hit scores (BHS) for evaluation. To be specific, BCS and BHS are first proposed for music-guided dance generation (Davis and Agrawala, 2018; Lee et al., 2019) which measures the alignment of musical rhythms and dancing patterns. 
Similarly, prior dance-to-music methods (Zhu et al., 2022a;b) employ these metrics to count the aligned rhythm points of synthesized music and ground-truth music by computing the rhythm point number of generated music \(B_{g}\), the rhythm point number of ground-truth music \(B_{t}\), and the number of aligned rhythm points \(B_{a}\). Then, BCS is calculated as the ratio of generated musical beats to ground-truth musical beats (\(B_{g}/B_{t}\)), and BHS measures the ratio of aligned beats to the ground-truth beats (\(B_{a}/B_{t}\)). However, we found that these metrics are only suitable for short-length (2\(\sim\)6s) music, and two main problems emerge when evaluating long-term soundtracks: 1). the second-wise rhythm detection algorithm results in an extremely sparse vector for any long music sequence, thus the constantly low BCS and BHS values are unable to reflect the real performance. 2). BCS can easily exceed 1 if the generated music involves more rhythm points than the ground truth. Consider a batch with two samples whose scores are 0.5 and 1.5: the average is 1, which looks perfect even though each sample performs unsatisfactorily. Hence, the reported value cannot reflect the real quality under such metrics. Accordingly, we make two corresponding modifications: 1. We adjust the parameters of the audio onset detection algorithm (Bock et al., 2012) (more details in Appendix B.3) to avoid sparse rhythm vectors. 2. We calculate BCS by dividing the aligned beats by the total beats from the generated music (\(B_{a}/B_{g}\)), by which BCS and BHS play the roles of precision and recall, respectively (a short illustrative sketch of this computation is given below). Besides, we calculate the F1 scores of BCS and BHS as an integrated assessment and report the standard deviations of BCS and BHS (termed CSD and HSD, respectively) to evaluate generative stability. ### Baselines To make an exhaustive evaluation, we choose several well-performing methods with available code as baselines. Concretely, we re-implement the MIDI-based methods Foley (Gan et al., 2020) and CMT (Di et al., 2021) and the waveform-based methods D2M-GAN (Zhu et al., 2022) and CDCD (Zhu et al., 2022) on our dataset. Experimental results of the baseline methods and our LORIS framework on the established benchmark are reported in Section 5. 50s floor exercise videos, and LORIS\({}_{FS25}\) and LORIS\({}_{FS50}\) subsets for 25s and 50s figure skating videos. Results on the 25s dancing subset are shown in Table 2, where our model outperforms all previous methods both on rhythmic coherence and musical quality. In particular, all methods show satisfactory BCS and low CSD since dancing beats are periodic and easy to perceive, so the rhythm point counts of the generated music \(B_{g}\) and the ground-truth music \(B_{t}\) are similar. However, the waveform-based methods D2M-GAN (Zhu et al., 2022) and CDCD (Zhu et al., 2022) achieve higher BHS than the MIDI-based methods Foley (Gan et al., 2020) and CMT (Di et al., 2021), which suggests that waveforms synthesized by generative models are more flexible in performing rhythm alignment. Performance on the sports subsets is demonstrated in Table 3 and Table 4. We find that all methods perform worse in rhythmic coherence and musical quality on the sports datasets, indicating that sports are more challenging rhythmic scenarios. Nevertheless, our model still achieves considerable boosts compared with CDCD (Zhu et al., 2022): about +8.4% and +8.2% F1 scores for 25s and 50s floor exercise videos, and +9.4% and +8.2% F1 scores for figure skating videos. 
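As referenced above, a minimal sketch of the improved BCS/BHS/F1 computation is given here. It is our illustration rather than the authors' evaluation code; in particular, the frame-level alignment tolerance is an assumption.

```python
# BCS = B_a / B_g (precision-like), BHS = B_a / B_t (recall-like), and their F1 score.
# Beats are binary vectors on a shared time grid; a generated beat counts as aligned
# if a ground-truth beat lies within `tol` frames.
import numpy as np

def beat_scores(gen_beats, gt_beats, tol=2):
    g = np.flatnonzero(gen_beats)          # generated beat positions
    t = np.flatnonzero(gt_beats)           # ground-truth beat positions
    aligned = sum(1 for x in g if t.size and np.min(np.abs(t - x)) <= tol)
    bcs = aligned / max(len(g), 1)         # B_a / B_g
    bhs = aligned / max(len(t), 1)         # B_a / B_t
    f1 = 2 * bcs * bhs / (bcs + bhs) if (bcs + bhs) > 0 else 0.0
    return bcs, bhs, f1

gen = np.zeros(100); gen[[10, 30, 52, 75, 90]] = 1
gt = np.zeros(100); gt[[11, 29, 50, 76]] = 1
print(beat_scores(gen, gt))   # -> (0.8, 1.0, 0.888...)
```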
These numerical results verify that our LORIS framework is capable of generating high-quality musical soundtracks with accurate rhythmic alignment for both dances and sports. **Model Architecture.** We further evaluate the necessity of each model component, and results are shown in Table 6. To investigate the effectiveness of temporal modeling, we first drop the Bi-LSTM layer, termed 'LORIS w/o LSTM'. Results show that though the ablated model achieves comparable performance on BHS, the coverage scores decline significantly (-5.6%), indicating that temporal modeling for visual features is essential. We then analyze the impact of the rhythm conditioning module by ablating several variants: 'LORIS w/o Hawkes', which removes the Hawkes process attached to the positional encoding; 'LORIS w/o PE', which abolishes the entire positional encoding module; and 'LORIS w/ rhythm', which adopts a simpler conditioning strategy that directly multiplies the visual rhythm envelopes by the auditory latent embedding (rather than using the cross-modality block). We observe that both the positional encoding and the Hawkes process contribute to the rhythmic alignment. Besides, embedding and cross-attending the rhythm peaks is a better conditioning approach than plain multiplication. ### Qualitative Results As illustrated in Figure 3, we visualize music generated by LORIS together with the raw video-music pairs to illustrate rhythm correspondence and spectrogram similarity. The upper part contrasts visual rhythms with auditory rhythms extracted from the ground-truth waveform and our generated music, where the synthesized contents reveal accurate alignment with visual appearances even when the ground-truth rhythms are mismatched (the 11th rhythm point). The lower part compares the log-melspectrograms of our generated music and the ground truths. We find that the synthesized prosody patterns differ from the raw audio, since we do not add explicit reconstruction regularization on the raw audio sampling points and the stochastic nature of the diffusion sampling strategy guarantees the diversity of the synthesized results. However, the positions of audio peaks are likely to share a similar distribution, which indicates that LORIS models the audio-visual correspondence and leverages this prior to generate music with a comparable rhythm distribution. Besides, to show our model's ability to generate superior long-term soundtracks, we provide complementary qualitative demos in the **Supplementary Material**. ## 6 Conclusion We have presented LORIS, a long-term rhythmic video soundtracker that generates video-conditioned musical waveforms via a context-aware latent diffusion model. A comprehensive benchmark for video soundtracks is also established, which includes a large-scale rhythmic video-music dataset varying from dancing to multiple sports events and a set of improved evaluation metrics. Experiments demonstrate that LORIS generates soundtracks with the best rhythm correspondence and satisfactory quality compared with existing methods. Nonetheless, LORIS only tackles fixed-length videos, thereby limiting its practicability, and the overall quality is still in need of improvement. In the future, we will seek different audio generation backbones for better musical quality and explore context-aware modules to achieve unconstrained, even real-time, generation. Figure 3: Visualizations of rhythms and musical log-melspectrograms. 
The example in the upper part shows the correspondence between audio and visual rhythms, where the green curve indicates visual rhythm peaks extracted via our rule-based strategy, and the blue and yellow curves denote the ground-truth and generated musical rhythm points. The results show that our model synthesizes music with satisfactory rhythm coherence. The lower part compares the log-melspectrograms of the generated and ground-truth music, where the synthesized results exhibit a crest distribution analogous to that of the ground-truth music. ## Acknowledgements This work is partially supported by the Shanghai Committee of Science and Technology (Grant No. 21DZ1100100). This work was supported in part by the National Natural Science Foundation of China under Grant 62102150.
2308.07794
Post-superhumps maximum on intranight time scales of the AM CVn star CR Boo
We present observations of the intranight brightness variability of CR Boo, a member of the AM CVn stars group. The observational data are obtained with the 2m telescope of the Rozhen National Astronomical Observatory and the 60 cm telescope of the Belogradchik Observatory, Bulgaria, in BVR bands. We report the appearance of superhumps, with an amplitude from 0.08 to 0.25 mag, when the maximum brightness reaches the magnitude 14.08 in the V band, and 14.13 in the B band. A secondary maximum of each superhump is detected with the same periodicity as the superhumps: Psh = 24.76 - 24.92 min. In our results, the post maxima are shifted in time from $\approx 7.62$ min to $\approx 16.35$ min in different nights, with an amplitude of $\approx 0.06 - 0.09$ mag and an amplitude difference of $\approx 0.035$ mag towards the superhumps' maximum. We find a correlation of the post maxima with the accretion processes at the outer side of the disc.
Daniela Boneva, Georgi Latev, Svetlana Boeva, Krasimira Yankova, Radoslav Zamanov
2023-08-15T14:18:36Z
http://arxiv.org/abs/2308.07794v1
# Post-superhumps maximum on intranight time scales of the AM CVn star CR Boo ###### Abstract We present observations of the intranight brightness variability of CR Boo, a member of the AM CVn stars group. The observational data are obtained with the 2m telescope of the Rozhen National Astronomical Observatory and the 60 cm telescope of the Belogradchik Observatory, Bulgaria, in BVR bands. We report the appearance of superhumps, with an amplitude from 0.08 to 0.25 mag, when the maximum brightness reaches magnitude 14.08 in the V band and 14.13 in the B band. A secondary maximum of each superhump is detected with the same periodicity as the superhumps: \(Psh=24.76-24.92\) min. In our results, the post maxima are shifted in time from \(\approx 7.62\) min to \(\approx 16.35\) min on different nights, with an amplitude of \(\approx 0.06-0.09\) mag and an amplitude difference of \(\approx 0.035\) mag towards the superhumps' maximum. We find a correlation of the post maxima with the accretion processes at the outer side of the disc. white dwarfs; binary stars; double white dwarfs; AM CVns ## 1 Introduction AM CVn stars are double white dwarf binaries, initially detectable only through their helium emission lines (Wood et al. 1987, Provencal et al. 1997, Patterson et al. 1997, Kato et al. 2000). Ramsay et al. (2018) report \(\approx 56\) known AM CVn stars. They are binaries with short orbital periods, of 5 - 65 minutes (Podsiadlowski et al. 2003, Solheim 2010). AM CVns belong to the family of cataclysmic variable (CV) stars (Warner 1995). The most probable evolutionary channels of AM CVns involve a helium-rich donor star, in contrast with the hydrogen-rich secondary component of ordinary CVs. Low-frequency gravitational wave (GW) radiation from AM CVn stars is expected to be detectable by eLISA (Evolved Laser Interferometer Space Antenna) (Nelemans 2013). The GW emission plays an important role in evolving the binary into its semidetached phase (Solheim 2010). One main feature of AM CVns is that the white dwarf accretes from another white-dwarf companion (Nelemans et al. 2001, Paczynski 1967, Faulkner et al. 1972), where the donor star can be semi- or fully degenerate. The stability or instability of further mass transfer between the components has a significant effect on the evolution of the white-dwarf binary configuration in AM CVn stars (Marsh et al. 2004). For binaries with sufficiently short orbital periods, angular momentum is lost efficiently through gravitational wave emission (Paczynski 1967, Faulkner et al. 1972). As a member of the AM CVn stars group, CR Boo is an interacting double white dwarf binary (Paczynski 1967, Faulkner et al. 1972, Kato et al. 2000, Nelemans et al. 2004), discovered in 1986 by the Palomar-Green Survey (Green et al. 1986). From the first observations of this object (Wood et al. 1987) up to now, its brightness has varied in the range 13.0 - 18.0 mag in the V band. He I lines are observed in the spectrum of CR Boo (Wood et al. 1987). The orbital period of CR Boo is estimated as \(\mbox{Porb}=0.017\) days (Provencal et al. 1997, Isogai et al. 2016), which is about 24.5 min or 1471.3 s. Judging by this orbital period, CR Boo belongs to a group characterized by regular outbursts or occasional superoutbursts and by a variable disc size (Solheim 2010). As an outburst system (Kato et al. 2000, Groot et al.
2001), CR Boo periodically passes from faint to bright states, showing brightness variability in a range of 1-3 magnitudes at optical wavelengths (Isogai et al. 2016, Duffy et al. 2021, Boneva et al. 2020, 2022). The object shows characteristics of SU UMa type dwarf novae (Patterson et al. 1997, Kato et al. 1999), which exhibit short normal outbursts and longer superoutbursts that can last for weeks. During the outburst periods of CR Boo, superhumps are observed (Isogai et al. 2016, Boneva et al. 2022). Superhumps are short-period, low-amplitude brightness variations, and when they are positive, their periodicity is a few percent longer than the binary period (Kato et al. 2000, Patterson 2005). Superhumps can be observed during the outburst state of cataclysmic variables and AM CVn stars (Warner 1995). Here, we construct phase-average diagrams based on the superhump periodicity (Sections 3.1 and 3.2). The appearance of the post-superhump maxima and the corresponding discussion are presented in Sections 3.2 and 4. ## 2 Observations and data reduction We report observational data obtained with the 2.0 m telescope of the National Astronomical Observatory (NAO) Rozhen, Bulgaria and the 60 cm telescope of the Belogradchik Observatory, Bulgaria. The 2m telescope, equipped with an Andor iKON-L CCD camera and the 2-channel focal reducer FoReRo, was used on February 12, 2021 in the BVR bands. The observations on April 16, 2020 in the B band were obtained with the 60 cm telescope of the Belogradchik Observatory (CCD camera FLI PL16803). Data reduction was performed with standard tools for CCD image processing and aperture photometry, and photometric standards were applied. Six comparison stars were used, based on the standards in the APASS9 catalog with their original data (see the table in Boneva et al. 2022). The periodicity of the maximum brightness was obtained with the PDM (Phase Dispersion Minimization) method of Stellingwerf (1978). We used the PGRAM ([https://exoplanetarchive.ipac.caltech.edu](https://exoplanetarchive.ipac.caltech.edu)) and PerSea (Maciejewski & Niedzielski, 2005) software packages to check the results. ## 3 Observational results ### Superhump periods of CR Boo We present our observational results for CR Boo for two nights, obtained in different campaigns: 16 April 2020 (hereafter 20200416) and 12 February 2021 (hereafter 20210212). On both nights CR Boo was in an outburst state (see Boneva et al. 2022). On the night of 20200416, the average magnitude of the star was \(14.06\pm 0.02\) in the B band. We detected superhumps on this date (Fig. 1a), with a periodicity of \(Psh\approx 24.76\pm 0.023\) min, estimated in Boneva et al. (2022). On the second night, 20210212, the magnitude was \(14.17\pm 0.01\) in B and \(14.22\pm 0.01\) in R (Boneva et al. 2022). Here, the light curve is shown over a shorter time interval in more detail, where the superhumps on this night are clearly distinguished (Fig. 1b); data in the V band are also included. We estimated the superhump periodicity on this date as \(Psh\approx 24.92\pm 0.0012\) min in Boneva et al. (2022). ### Phase-average diagrams and post-superhump maxima During the superhumps, the observations show secondary, lower maxima of the brightness, which we call post-superhump maxima or, for short, post-superhumps; they appear with a period similar to the superhump periodicity.
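To illustrate how a phase-average diagram of this kind can be built, the following is a minimal sketch that folds a light curve on the superhump period and averages the magnitudes in phase bins; the function name, bin count, and reference epoch are illustrative assumptions, and the authors' actual analysis relied on the PDM, PGRAM, and PerSea tools mentioned above.

```python
import numpy as np

def phase_average(times_jd, mags, period_min, t0=None, n_bins=20):
    """Fold a light curve on a trial period (in minutes) and average the
    magnitudes in phase bins, as in a phase average - magnitude diagram.
    `times_jd` are observation times in days (e.g., JD); `mags` are magnitudes."""
    period_d = period_min / (24.0 * 60.0)            # convert minutes to days
    times_jd, mags = np.asarray(times_jd), np.asarray(mags)
    t0 = times_jd[0] if t0 is None else t0
    phase = ((times_jd - t0) / period_d) % 1.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phase, edges) - 1              # bin index for each point
    centers = 0.5 * (edges[:-1] + edges[1:])
    mean_mag = np.array([np.mean(mags[idx == k]) if np.any(idx == k) else np.nan
                         for k in range(n_bins)])
    return centers, mean_mag

# Usage sketch: fold on the superhump period reported for 20210212.
# centers, mean_mag = phase_average(times_jd, mags_B, period_min=24.92)
```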
The secondary maxima are difficult to see on the light curve of the night of 20200416, but they are well distinguishable in the phase-average diagram obtained for this night (Fig. 2). We estimate their shift from the superhump maxima as \(\approx 16.35\pm 0.05\) min, with an amplitude of \(0.098\pm 0.012\) mag relative to the brightness minimum. Similar post-superhumps are seen in the phase average - magnitude diagram (Fig. 3), constructed with the data from 20210212. Their average time shift is \(\approx 7.62\pm 0.005\) min, with an amplitude difference of \(\approx 0.035\) mag towards the superhumps' maximum. These post-superhump maxima appear at intervals similar to the superhump period, from \(\approx 24.92\) min to \(\approx 25.03\) min, in the B and R bands. They can also be seen on the light curve of this night (Fig. 1b). The periodograms in Figs. 2 and 3 are the averaged result of the PGRAM, PerSea, and PDM analyses (see Section 2). Figure 1: Light curve of CR Boo's intranight observations in the B band (left - a) and BVR bands (right - b). Superhumps are detected on both nights. The data are obtained with the 60 cm telescope of the Belogradchik Observatory, Bulgaria and the 2m telescope of NAO Rozhen. ## 4 Discussion ### An intranight post-superhump detection According to our observational data, the small-amplitude brightness modulations appear with periods close to the orbital period. Post-superhump maxima can be observed within a single night because of the short orbital period of CR Boo. They appear when the star is in an outburst state. In comparison, we did not detect any secondary maxima on the night of July 5, 2019, when the star was in a low state, although it clearly showed hump activity. They are not seen on the intranight light curve (see Boneva et al. 2020, 2022) or on the phase-average diagram of this date (Fig. 4). Superhumps have also been detected in many SU UMa variables. The phase-average diagrams of V 1047 Aql, GS Cet and NY Her show the appearance of post-superhumps during their superoutbursts in 2016 (Kato et al. 2017). Figure 3: Phase average - magnitude diagram, constructed from the superhump periodicity of the intranight observations on 20210212. Figure 2: Phase average - magnitude diagram in B, based on the obtained superhump periodicity \(Psh=24.76\pm 0.023\) min, on the night of 20200416. ### Post-superhumps sources In Boneva et al. (2022) we discussed the possible mechanisms and sources of superhump production in CR Boo. Several mechanisms could cause the appearance of superhumps, the most probable being disc precession, tidal waves, and spiral-density wave formation. With its superhump and superoutburst characteristics, CR Boo is also counted among the SU UMa class of dwarf novae (Warner 1995). The superhump production in these objects is explained by the precession of an eccentric disc, which causes a kind of periodic beating (Whitehurst 1988). The appearance of the post-superhump maxima could be caused by the switching-on of a second mechanism during the outburst period, i.e. by a second instability. It is very likely that, at some point, a tidal wave flowing into the primary star's accretion disc, in combination with the precession of such a disc, produces superhumps (Hirose & Osaki 1990, Wood et al. 2011, Kato et al. 2017). It is known that even a small change in the velocity of the inflowing tidal wave can transform the outer accretion disc (Bisikalo et al.
2008, Boneva et al. 2009). This usually destabilises the hot spot structure, which at a later stage could turn into a hot line formation. We assume that the hot line might produce the secondary maxima during the superhumps. In that case the system should become bright and blue during the superhumps, which, on the other hand, contradicts the results in Boneva et al. (2022), where we found that the star is bright but redder on the second date, 20210212. Honeycutt et al. (2013) present a detailed analysis of CR Boo's light curves over longer-term observations. They make the interesting suggestion that the star, as a physical system, and its light curves behave chaotically, following the principles of deterministic chaos. Figure 4: Phase average - magnitude diagram in the B, V bands, based on the hump periodicity of the intranight observations on 20190705. ### On the source's parameters In this section, we make an analytical estimate of the probable size of the superhump source, relying on the temperature and luminosity estimates for the two observational dates. We use the Stefan-Boltzmann law, adapted for our study: \(L_{s}=\sigma T_{eff}^{4}(4\pi R_{s}^{2})\), where \(L_{s}\) is the observed luminosity of the source, \(T_{eff}\) is the effective temperature of the source, and \(\sigma\) is the Stefan-Boltzmann constant. We define \(R_{s}\) as the size of the superhump source. Further, we use the relation between the effective and color (\(T_{col}\)) temperatures in geometrically thin radiative layers, \(T_{eff}^{4}=\tau T_{col}^{4}\), where \(\tau\) is the optical thickness of the layer. We denote with subscript "1" the quantities referring to the night of 20200416 and with "2" those referring to the night of 20210212. The luminosities of the two nights are then related as: \(\frac{L_{s1}}{L_{s2}}=\frac{T_{eff1}^{4}}{T_{eff2}^{4}}\frac{R_{s1}^{2}}{R_{s2}^{2}}=\frac{\tau_{1}T_{col1}^{4}R_{s1}^{2}}{\tau_{2}T_{col2}^{4}R_{s2}^{2}}\) Then, using the distance to the object, \(d(\mathrm{CR\,Boo})=337\) pc (Sion et al., 2011), and its apparent magnitudes, we obtain the observed luminosity ratio \(L_{s1}/L_{s2}=0.95\pm 0.02\). For the color temperature we have \(T_{col1}/T_{col2}=1.218\pm 0.024\), with the values for the two nights taken from Boneva et al. (2022). Applying these values to the luminosity relation above gives: \(\frac{\tau_{1}(\rho_{1})R_{s1}^{2}}{\tau_{2}(\rho_{2})R_{s2}^{2}}\approx 0.43\pm 0.02\) The ratio between the sizes \(R_{s1}\) and \(R_{s2}\) thus depends on the optical thickness as a function of the mass density: \(\frac{R_{s1}}{R_{s2}}\approx\sqrt{0.43\frac{\tau_{2}(\rho_{2})}{\tau_{1}(\rho_{1})}}\) This leads to the rough estimate that the size \(R_{s1}\) on the first night, when CR Boo is bluer, is \(\approx 0.66\sqrt{\frac{\tau_{2}(\rho_{2})}{\tau_{1}(\rho_{1})}}\) times the size \(R_{s2}\) on the second night, when the object is redder. ## 5 Conclusion We presented our observations of the intranight brightness variability of the AM CVn star CR Boo in the BVR bands. We reported the appearance of superhumps, with an amplitude from 0.08 to 0.25 mag, and post-superhump maxima with the same periodicity as the superhumps: \(Psh=24.76-24.92\) min. In our results, these secondary maxima in brightness are shifted in time from the primary maxima by \(\approx 7.62\) min to \(\approx 16.35\) min on different nights.
They have an amplitude of \(\approx 0.06-0.09\) mag and an amplitude difference of \(\approx 0.035\) mag towards the superhumps' maximum. We found that in the current observations the post-superhump maxima of CR Boo are detected during periods of outburst activity. This is visible both in the light curves and in the phase average - magnitude diagrams. We estimated the ratio of the superhump source sizes between the two nights. A correlation of the post-superhump maxima with the accretion processes at the outer side of the disc is very possible. **Acknowledgments**: This work is supported by the grant "Binary stars with compact objects", \(K\Pi-06-H28/2\) 08.12.2018 (Bulgarian National Science Fund). D.B. thanks the organizers of the EUROWD22 workshop, where part of these results were presented, for their support.
2306.01649
Optimal Transport and Generalized Ricci Flow
We prove results relating the theory of optimal transport and generalized Ricci flow. We define an adapted cost functional for measures using a solution of the associated dilaton flow. This determines a formal notion of geodesics in the space of measures, and we show geodesic convexity of an associated entropy functional. Finally, we show monotonicity of the cost along the backwards heat flow, and use this to give a new proof of the monotonicity of the energy functional along generalized Ricci flow.
Eva Kopfer, Jeffrey Streets
2023-06-02T16:18:30Z
http://arxiv.org/abs/2306.01649v2
# Optimal transport and generalized Ricci flow ###### Abstract. We prove results relating the theory of optimal transport and generalized Ricci flow. We define an adapted cost functional for measures using a solution of the associated dilaton flow. This determines a formal notion of geodesics in the space of measures, and we show geodesic convexity of an associated entropy functional. Finally we show monotonicity of the cost along the backwards heat flow, and use this to give a new proof of the monotonicity of the energy functional along generalized Ricci flow. We warmly dedicate this article to Jean-Pierre Bourguignon on the occasion of his 75th birthday. The first named author gratefully acknowledges support by the German Research Foundation through the Hausdorff Center for Mathematics and the Collaborative Research Center 1060. The second named author was supported by the NSF via DMS-2203536. A fundamental observation about the generalized Ricci flow is that the time-dependent metric is gauge-equivalent to a supersolution of Ricci flow. As noted above, McCann-Topping showed monotonicity of Wasserstein distance for measures evolving by the backwards heat flow along a supersolution to Ricci flow. Our first result explicitly derives this for generalized Ricci flow (cf. Corollary 3.6), with the proof using a notion of the energy of a path of measures which explicitly incorporates the dilaton weight \(f\). Next we extend results of [18, 7] and define an adapted cost for paths of measures in terms of a solution of the associated continuity equation, where again the associated dilaton flow plays a key role. This cost determines a formal Riemannian geometry on the space of probability measures. There is furthermore a natural entropy for such measures, and our second main result establishes geodesic convexity of this entropy (cf. Corollary 4.4). We furthermore show that the cost of paths is monotone along the backwards heat flow (cf. Corollary 4.6). Finally we use this to give a new proof of the monotonicity of the \(\mathcal{F}\)-functional along generalized Ricci flow (cf. Corollary 4.7). **Acknowledgements:** We thank Micah Warren for helpful comments. ## 2. Background In this section we recall some fundamental results related to the generalized Ricci flow equation. Given a smooth manifold, fix \(g\) a Riemannian metric, \(H=\bigoplus_{k=1}^{n}H_{k}\in\Lambda^{*}T^{*}M\), and a smooth function \(f\). We recall the weighted sum defined in the introduction, and furthermore introduce \[|H|^{2}_{\frac{k-1}{k}}:=\sum_{k=1}^{n}\frac{k-1}{k}|H_{k}|^{2},\qquad|H|^{2}_ {\frac{1}{k}}:=\sum_{k=1}^{n}\frac{1}{k}\,|H_{k}|^{2}\,.\] This data also determines notions of Ricci and scalar curvature: **Definition 2.1**.: Given \((g,H,f)\) as above, the _Ricci tensor_ is \[\operatorname{Rc}^{H,f}:=\operatorname{Rc}-\frac{1}{4}H^{2}+\nabla^{2}f- \frac{1}{2}\left(d_{g}^{*}H+i_{\nabla f}H\right)\in\operatorname{Sym}^{2}T^{ *}M\oplus\bigoplus_{k=0}^{n-1}\Lambda^{k}T^{*}M.\] Furthermore, the _scalar curvature_ is \[R^{H,f}=R-\frac{1}{4}\,|H|^{2}_{\frac{1}{k}}+2\Delta f-|\nabla f|^{2}\,.\] **Remark 2.2**.: If a superscript in \(\operatorname{Rc}^{H,f}\) or \(R^{H,f}\) is dropped then the notation refers to the corresponding quantity with that term set to zero, i.e. \(\operatorname{Rc}^{f}=\operatorname{Rc}+\frac{1}{2}\nabla^{2}f\). 
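For concreteness (this specialization is not stated in the source but follows immediately from Definition 2.1), when \(H=H_{3}\) consists of a single three-form the weighted norms and curvatures reduce to \[|H|^{2}_{\frac{k-1}{k}}=\tfrac{2}{3}\,|H_{3}|^{2},\qquad|H|^{2}_{\frac{1}{k}}=\tfrac{1}{3}\,|H_{3}|^{2},\qquad R^{H,f}=R-\tfrac{1}{12}\,|H_{3}|^{2}+2\Delta f-|\nabla f|^{2},\] \[\operatorname{Rc}^{H,f}=\operatorname{Rc}-\tfrac{1}{4}H_{3}^{2}+\nabla^{2}f-\tfrac{1}{2}\left(d_{g}^{*}H_{3}+i_{\nabla f}H_{3}\right),\] which is the setting discussed next in connection with the Bismut connection.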
We note that in the case \(H\in\Lambda^{3}\) and \(f\) is constant, the Ricci tensor above is precisely the Ricci tensor of the Bismut connection, which is a two-tensor with a symmetric and skew-symmetric part. For \(H\in\Lambda^{3}\) and \(f\) arbitrary, this tensor was defined in [13] and named the twisted Bakry-Emery tensor. For general \(H\) but constant \(f\), this tensor is in the spirit of the generalized Ricci tensor used in [2]. The coupling of the Ricci tensor to forms of arbitrary degree arises naturally in supergravity theories. Taking a hint from this, it may be possible to describe this Ricci curvature in general in terms of the curvature of a generalized connection on some augmented tangent bundle, as in the case of three-forms and the Bismut connection [4]. For our purposes here these definitions are justified by a key monotonicity formula for the scalar curvature along generalized Ricci flow. Given \((g_{t},H_{t},f_{t})\) a solution to generalized Ricci flow as described in the introduction, we let \[\square_{f}:=\frac{\partial}{\partial t}-\Delta_{f}=\frac{\partial}{\partial t }-\Delta+\nabla f,\qquad\operatorname{div}_{f}X:=e^{f}\operatorname{div}(e^{- f}X)\] denote the forward weighted heat operator and weighted divergence. Before stating the result we record some consequences of the fact that \(H\) is closed which are left as exercises (cf. [4] Lemma 3.19 for the case \(H\) is a three-form): **Lemma 2.3**.: _Given \((g,H)\) as above, one has_ \[\operatorname{div}H^{2}= \ -\left\langle d^{*}H,H\right\rangle+\frac{1}{2}d\left|H\right| _{\frac{1}{k}}^{2},\] \[\operatorname{div}\operatorname{div}H^{2}= \ \frac{1}{2}\Delta\left|H\right|_{\frac{1}{k}}^{2}+\sum_{k=1}^{n} \frac{1}{k}\left\langle\Delta_{d}H,H\right\rangle+\left|d_{g}^{*}H\right|^{2},\] _where for a \((k-1)\)-form \(\alpha\) and \(k\)-form \(\beta\), the notation \(\left\langle\alpha,\beta\right\rangle\) denotes the \(1\)-form uniquely defined by \(\left\langle\alpha,\beta\right\rangle(X)=\left\langle\alpha,i_{X}\beta\right\rangle\)._ **Proposition 2.4**.: _([13] Proposition 2.11) Given \((g_{t},H_{t},f_{t})\) a solution to generalized Ricci flow, one has_ \[\square_{f}R^{H,f}=2\left|\operatorname{Rc}^{H,f}\right|^{2}.\] Proof.: The result is claimed in [13] without proof, so we include the short calculation here for convenience. Note furthermore that we are working here with the flow modified by diffeomorphisms generated by \(\nabla f\). We compute the time derivative of each term in \(R^{H,f}\) separately. 
First we compute that \[\begin{split}\partial_{t}R=&-\left\langle \operatorname{Rc},\partial_{t}g\right\rangle+\operatorname{div}\operatorname{ div}\partial_{t}g-\Delta(\operatorname{tr}\partial_{t}g)\\ =& 2\left\langle\operatorname{Rc},\operatorname{Rc}- \frac{1}{4}H^{2}\right\rangle+\Delta R+\frac{1}{2}\operatorname{div} \operatorname{div}H^{2}-\left\langle\nabla R,\nabla f\right\rangle-\frac{1}{2 }\Delta|H|^{2},\end{split} \tag{2.1}\] where we used the Bianchi identity and that \[\operatorname{div}\nabla^{2}f= \ \nabla\Delta f+\operatorname{Rc}(\nabla f),\] \[\operatorname{div}\operatorname{div}\nabla^{2}f= \ \Delta^{2}f+\frac{1}{2}\left\langle\nabla R,\nabla f\right\rangle+ \left\langle\operatorname{Rc},\nabla^{2}f\right\rangle.\] Then we observe using Bochner's formula \[\begin{split}\partial_{t}|\nabla f|^{2}=&\ 2( \operatorname{Rc}+\nabla^{2}f-\frac{1}{4}H^{2})(\nabla f,\nabla f)+2\left\langle \nabla f,\nabla\left(\Delta f-|\nabla f|^{2}+\frac{1}{4}\left|H\right|_{\frac{ 1-1}{k}}^{2}\right)\right\rangle\\ =&\ \Delta|\nabla f|^{2}-2|\nabla^{2}f|^{2}-\left\langle \nabla f,\nabla|\nabla f|^{2}\right\rangle+\frac{1}{2}\left\langle\nabla\left| H\right|_{\frac{k-1}{k}}^{2},\nabla f\right\rangle-\frac{1}{2}\left\langle H^{2}, \nabla f\otimes\nabla f\right\rangle.\end{split} \tag{2.2}\] Next we compute \[\begin{split}\partial_{t}\Delta f=&\ \Delta\partial_{t}f-\left\langle \partial_{t}g,\nabla^{2}f\right\rangle-\left\langle\operatorname{div}(\partial _{t}g)-\frac{1}{2}\nabla(\operatorname{tr}\partial_{t}g),\nabla f\right\rangle \\ =&\ \Delta^{2}f-\Delta|\nabla f|^{2}+2\left\langle \operatorname{Rc}+\nabla^{2}f-\frac{1}{4}H^{2},\nabla^{2}f\right\rangle-2 \left\langle-\operatorname{div}\nabla^{2}f+\frac{1}{2}\nabla\Delta f,\nabla f \right\rangle\\ &+\left\langle-\frac{1}{2}\operatorname{div}H^{2}+\frac{1}{4} \nabla|H|^{2},\nabla f\right\rangle+\frac{1}{4}\Delta\left|H\right|_{\frac{ k-1}{k}}^{2}.\end{split} \tag{2.3}\] Finally one has easily \[\partial_{t}|H_{k}|^{2}=-k\left\langle\partial_{t}g,H_{k}^{2}\right\rangle+2 \left\langle H_{k},\Delta_{d}H_{k}-di_{\nabla f}H_{k}\right\rangle. \tag{2.4}\] Inserting (2.1), (2.2), (2.3) and (2.4) into the definition of \(R^{H,f}\) yields \[\partial_{t}R^{H,f}=\ \Delta R^{H,f}+2|\operatorname{Rc}^{H,f}|^{2}-\left\langle \nabla R^{H,f},\nabla f\right\rangle,\] where we used the identities for \(\operatorname{div}\nabla^{2}f\) above and Lemma 2.3. The proposition follows. ## 3. Wasserstein distance monotonicity for generalized Ricci flow Given a smooth manifold \(M\), let \(P(M)\) denote the space of Borel probability measures. This space is naturally endowed with the Wasserstein distance \(W\), defined for \(\mu_{1},\mu_{2}\in P(M)\) by the optimal transport problem \[W(\mu_{1},\mu_{2})^{2}:=\inf\int_{M\times M}d^{2}(x,y)\,d\gamma(x,y),\] where the infimum is taken over all couplings \(\gamma\in P(M\times M)\) with marginals \(\gamma(\cdot\times M)=\mu_{1}\) and \(\gamma(M\times\cdot)=\mu_{2}\). 
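For readers who want a concrete feel for this quantity, here is a small numerical illustration of the squared 2-Wasserstein distance between two discrete measures on the line; the use of the POT (Python Optimal Transport) package and the toy data are assumptions made for illustration only and are not part of the paper.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed available)

# Two discrete probability measures mu_1, mu_2 supported on points of the real line.
x = np.array([0.0, 1.0, 2.0]).reshape(-1, 1)   # support of mu_1
y = np.array([0.5, 1.5, 3.0]).reshape(-1, 1)   # support of mu_2
a = np.array([0.2, 0.5, 0.3])                  # weights of mu_1 (sum to 1)
b = np.array([0.4, 0.4, 0.2])                  # weights of mu_2 (sum to 1)

# Cost matrix with squared distance d^2(x, y), matching the definition of W^2.
M = ot.dist(x, y, metric='sqeuclidean')

# Optimal transport cost = infimum over couplings of the integral of d^2 dgamma,
# i.e. the squared Wasserstein distance W(mu_1, mu_2)^2.
W2_squared = ot.emd2(a, b, M)
print(W2_squared)
```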
In the case \((M,g,e^{-f}dV)\) is a weighted Riemannian manifold, it is useful to consider the subspace \(P^{\infty}(M)\subset P(M)\) consisting of smooth positive densities with respect to the weighted volume measure \[P^{\infty}(M):=\{\mu\in P(M):d\mu=\rho\,e^{-f}\,dV,\,\rho\in C^{\infty}(M),\, \,\rho>0\}.\] If \(\mu\colon[0,1]\to P^{\infty}(M)\) is a smooth path we write \(d\mu(s)=\rho(s)\,e^{-f}dV\) and define \(\phi(s)\) as a solution to the continuity equation \[\partial_{s}\rho=-\operatorname{div}_{f}(\rho\nabla\phi).\] Such a \(\phi(s)\) exists and is unique up to an additive constant. Thus for such a path we may define the Lagrangian \[E(\mu)=\frac{1}{2}\int_{0}^{1}\int_{M}|\nabla\phi|^{2}\rho e^{-f}\,dV\,ds.\] A result known as the Benamou-Brenier formula shows that this formal notion of the length of a path can be used to recover the Wasserstein distance, in the following sense (see Proposition 4.3 in [10]): **Theorem 3.1**.: _Let \(\mu_{1},\mu_{2}\in P^{\infty}(M)\) be probability measures. Then the infimum of \(E\) over smooth curves in \(P^{\infty}(M)\) satisfying the continuity equation and connecting these probability measures is \(\frac{1}{2}W(\mu_{1},\mu_{2})^{2}\)._ In this section we will analyze the monotonicity of the Wasserstein distance between two backward heat flows of probability measures under generalized Ricci flow. To begin we record a fundamental lemma, whose proof is elementary and left to the reader: **Lemma 3.2**.: _Given \((g_{t},H_{t},f_{t})\) a solution to generalized Ricci flow, one has_ \[\frac{d}{dt}e^{-f}dV=-R^{H,f}dV.\] Also we derive a preliminary computation varying a certain integral along a curve of measures in a fixed time-slice. **Lemma 3.3**.: _Let \((\rho(s,t),\phi(s,t))_{[0,1]\times[t_{0}-\epsilon,t_{0}+\epsilon]}\) be a smooth two-parameter family of curves_ \[\partial_{s}\rho=-\operatorname{div}_{f}(\rho\nabla\phi).\] _Then for any fixed \(t\) we have_ \[\frac{d}{ds}\int_{M}\left\langle\nabla\phi,\nabla\rho\right\rangle e^{-f}\,dV =\,\int_{M}\left[-(\partial_{s}\phi+\frac{1}{2}|\nabla\phi|^{2}) \Delta_{f}\rho+|\nabla^{2}\phi|^{2}\rho+\operatorname{Rc}^{f}(\nabla\phi, \nabla\phi)\rho\right]e^{-f}\,dV.\] Proof.: Note that \[\frac{d}{ds}\int_{M}\left\langle\nabla\phi,\nabla\rho\right\rangle e^ {-f}\,dV\] \[= \,\int_{M}\left[\left\langle\nabla(\partial_{s}\phi+\frac{1}{2}| \nabla\phi|^{2}),\nabla\rho\right\rangle-\frac{1}{2}\left\langle\nabla|\nabla \phi|^{2},\nabla\rho\right\rangle+\left\langle\nabla\phi,\nabla(-\operatorname{ div}_{f}(\rho\nabla\phi))\right\rangle\right]e^{-f}\,dV\] \[= \,\int_{M}\left[-(\partial_{s}\phi+\frac{1}{2}|\nabla\phi|^{2}) \Delta_{f}\rho+\frac{1}{2}\Delta_{f}|\nabla\phi|^{2}\rho-\left\langle\nabla \Delta_{f}\phi,\nabla\phi\right\rangle\rho\right]e^{-f}\,dV.\] We obtain the result by applying the weighted Bochner identity \[\frac{1}{2}\Delta_{f}|\nabla\phi|^{2}-\left\langle\nabla\Delta_{f}\phi,\nabla \phi\right\rangle=|\nabla^{2}\phi|^{2}+\operatorname{Rc}^{f}(\nabla\phi, \nabla\phi).\] Now we compute the time-derivative of the Lagrangian \(E\) of a one-parameter family of curves in \(P^{\infty}(M)\) along generalized Ricci flow. **Proposition 3.4**.: _Let \((g_{t},H_{t},f_{t})\) be a generalized Ricci flow for \(t\in[t_{0}-\epsilon,t_{0}+\epsilon]\). 
Let \((\rho(s,t),\phi(s,t))_{[0,1]\times[t_{0}-\epsilon,t_{0}+\epsilon]}\) be a smooth two-parameter family of curves solving_ \[\partial_{s}\rho=-\operatorname{div}_{f}(\rho\nabla\phi).\] _Let_ \[E(t):=E(\mu(\cdot,t))=\frac{1}{2}\int_{0}^{1}\int_{M}|\nabla\phi(s,t)|^{2}\rho (s,t)e^{-f}\,dV\,ds,\] _where \(\mu(\cdot,t):=\rho(\cdot,t)\,e^{-f_{t}}\,dV_{t}\). Then_ \[\frac{d}{dt}\bigg{|}_{t=t_{0}}\,E(t) =\int_{M}\phi(\partial_{t}\rho+\Delta_{f}\rho-R^{H,f}\rho)e^{-f} \,dV\bigg{|}_{s=0}^{1}\] \[\quad+\int_{0}^{1}\int_{M}\left[|\nabla^{2}\phi|^{2}\rho+\frac{1 }{4}H^{2}(\nabla\phi,\nabla\phi)\rho-(\partial_{s}\phi+\frac{1}{2}|\nabla\phi |^{2})(\partial_{t}\rho+\Delta_{f}\rho-R^{H,f}\rho)\right]e^{-f}\,dV\,ds.\] Proof.: Using the generalized Ricci flow equations and Lemma 3.2 we have \[\frac{d}{dt}\bigg{|}_{t=t_{0}}\,E(t)=\int_{0}^{1}\int_{M}\left[ \operatorname{Rc}^{H,f}(\nabla\phi,\nabla\phi)\rho+\left\langle\nabla\phi, \nabla\partial_{t}\phi\right\rangle\rho+\frac{1}{2}|\nabla\phi|^{2}\partial_{ t}\rho-\frac{1}{2}R^{H,f}|\nabla\phi|^{2}\rho\right]e^{-f}\,dV\,ds.\] For a fixed \(\psi\in C^{\infty}(M)\) we have \[\int_{M}\psi\partial_{s}\rho e^{-f}\,dV=\int_{M}\left\langle \nabla\psi,\nabla\phi\right\rangle\rho e^{-f}\,dV.\] Hence, integrating by parts in \(t\), \[\int_{M}\psi(\partial_{s}\partial_{t}\rho-R^{H,f}\partial_{s} \rho)e^{-f}\,dV=\int_{M}\left[2\operatorname{Rc}^{H,f}(\nabla\psi,\nabla\phi) \rho+\left\langle\nabla\psi,\nabla\partial_{t}\phi\right\rangle\rho\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad\qquad\left.+\left\langle \nabla\psi,\nabla\phi\right\rangle\partial_{t}\rho-R^{H,f}\left\langle\nabla \psi,\nabla\phi\right\rangle\rho\right]e^{-f}\,dV.\] For \(\psi=\phi\) this yields \[\int_{M}\phi(\partial_{s}\partial_{t}\rho-R^{H,f}\partial_{s} \rho)e^{-f}\,dV=\int_{M}\left[2\operatorname{Rc}^{H,f}(\nabla\phi,\nabla\phi) \rho+\left\langle\nabla\phi,\nabla\partial_{t}\phi\right\rangle\rho\right.\] \[\left.\qquad\qquad\qquad\qquad\left.+\left\langle\nabla\phi, \nabla\phi\right\rangle\partial_{t}\rho-R^{H,f}\left\langle\nabla\phi,\nabla \phi\right\rangle\rho\right]e^{-f}\,dV.\] Inserting this into the derivative of \(E\) and integrating by parts in \(s\) produces \[\frac{d}{dt}E= \int_{0}^{1}\int_{M}\phi(\partial_{s}\partial_{t}\rho-R^{H,f} \partial_{s}\rho)e^{-f}\,dV\,ds\] \[+\int_{0}^{1}\int_{M}\left[-\operatorname{Rc}^{H,f}(\nabla\phi, \nabla\phi)\rho-\frac{1}{2}|\nabla\phi|^{2}\partial_{t}\rho+\frac{1}{2}R^{H,f} |\nabla\phi|^{2}\rho\right]e^{-f}\,dV\,ds\] \[=\int_{M}\phi\partial_{t}\rho e^{-f}\,dV\bigg{|}_{s=0}^{1}\] \[+\int_{0}^{1}\int_{M}\left[-R^{H,f}\phi\partial_{s}\rho- \operatorname{Rc}^{H,f}(\nabla\phi,\nabla\phi)\rho-(\partial_{s}\phi+\frac{1} {2}|\nabla\phi|^{2})\partial_{t}\rho+\frac{1}{2}R^{H,f}|\nabla\phi|^{2}\rho \right]e^{-f}\,dV\,ds.\] Note that by Lemma 3.3, \[-\int_{M}\phi\Delta_{f}\rho\,e^{-f}\,dV\bigg{|}_{s=0}^{1}=\int_{0}^{1}\int_{M} \left[|\nabla^{2}\phi|^{2}\rho+\operatorname{Rc}^{f}(\nabla\phi,\nabla\phi) \rho-(\partial_{s}\phi+\frac{1}{2}|\nabla\phi|^{2})\Delta_{f}\rho\right]e^{-f }\,dV,\] and \[\frac{d}{ds}\int_{M}R^{H,f}\phi\rho e^{-f}\,dV=\int_{M}R^{H,f} \partial_{s}\phi\rho e^{-f}\,dV+\int_{M}R^{H,f}\phi\partial_{s}\rho e^{-f}\,dV.\] So, combining the above computations gives \[\frac{d}{dt}E= \int_{M}\phi(\partial_{t}\rho+\Delta_{f}\rho-R^{H,f}\rho)e^{-f} \,dV\bigg{|}_{s=0}^{1}\] \[+\int_{0}^{1}\int_{M}\left[|\nabla^{2}\phi|^{2}\rho+\frac{1}{4}H^ {2}(\nabla\phi,\nabla\phi)\rho-(\partial_{s}\phi+\frac{1}{2}|\nabla\phi|^{2})( 
\partial_{t}\rho+\Delta_{f}\rho-R^{H,f}\rho)\right]e^{-f}\,dV\,ds,\] as claimed. As a corollary from this proposition we obtain the Wasserstein contraction of the backward heat flow of two probability measures under generalized Ricci flow. Here, we denote by \(W_{t}\) the Wasserstein distance associated to time \(t\). We first record an elementary lemma showing an equivalent formulation of the backward heat equation in terms of the density, whose proof is left to the reader: **Lemma 3.5**.: _Given \((M^{n},g_{t},H_{t},f_{t})\) a solution to generalized Ricci flow, suppose \(\mu_{t}=\rho_{t}e^{-f_{t}}dV_{t}\in P^{\infty}(M)\) is a smooth one-parameter family of probability measures. Then \(\mu_{t}\) satisfies the backwards heat flow_ \[\partial_{t}\mu_{t}=-\Delta\mu_{t}\] _if and only if_ \[\partial_{t}\rho_{t}=-\Delta_{f}\rho_{t}+R^{H,f}\rho_{t}.\] **Corollary 3.6**.: _Let \(\mu_{t}^{1},\mu_{t}^{2}\) be two solutions of the backward heat equation_ \[\partial_{t}\mu_{t}=-\Delta\mu_{t}\] _in \(P^{\infty}(M)\). Then \(W_{t}(\mu_{t}^{1},\mu_{t}^{2})\) is nondecreasing in \(t\)._ Proof.: Fix \(t_{0}\). For each \(\varepsilon>0\) we may choose according to Theorem 3.1 a curve \(\mu\colon[0,1]\to P^{\infty}(M)\) with \(\mu(0)=\mu_{t_{0}}^{1}\) and \(\mu(1)=\mu_{t_{0}}^{2}\) satisfying \[E(\mu)\leq\frac{1}{2}W_{t_{0}}(\mu_{t_{0}}^{1},\mu_{t_{0}}^{2})^{2}+\varepsilon,\] where \(E(\mu)\) is the Lagrangian of the curve \(\mu\) at time \(t_{0}\). Let \(t\leq t_{0}\) and let \(\mu_{t}(s)\) be the backward heat flow with \(\mu_{t_{0}}(s)=\mu(s)\). Observe that this implicitly defines two-parameter families \((\rho(s,t),\phi(s,t))\) as described above. Then we know by Proposition 3.4 and Lemma 3.5 that \[W_{t}(\mu_{t}^{1},\mu_{t}^{2})^{2}\leq E(\mu_{t})\leq E(\mu_{t_{0}})\leq\frac{1 }{2}W_{t_{0}}(\mu_{t_{0}}^{1},\mu_{t_{0}}^{2})^{2}+\varepsilon.\] As \(\varepsilon>0\) is arbitrary, the result follows. ## 4. Adapted cost for generalized Ricci flow In this section we define a cost adapted to generalized Ricci flow akin to the \(\mathcal{L}_{0}\)-cost in Ricci flow [7]. We will show monotonicity of the cost along the weighted backwards heat equation, and furthermore use this to recapture the monotonicity of the \(\mathcal{F}\)-functional. Fix \((g_{t},H_{t},f_{t})\) a solution to generalized Ricci flow on \([0,T]\). Given \(\mu_{t}\) a smooth one-parameter family of probability measures in \(P^{\infty}(M)\) which have densities \(\rho_{t}\) with respect to \(e^{-f_{t}}dV_{t}\), it follows that there exists a smooth family \(\phi_{t}\) such that \[\partial_{t}\rho=-\operatorname{div}_{f}(\rho\nabla\phi)+R^{H,f}\rho. \tag{4.1}\] For such paths \(\mu\) defined on \([t^{\prime},t^{\prime\prime}]\subset[0,T]\) we define the Lagrangian \[E_{0}(\mu):=\frac{1}{2}\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\left[| \nabla\phi|^{2}+R^{H,f}\right]\rho e^{-f}\,dVdt.\] This functional can be interpreted as an optimal transport cost for a length functional modified by integrating the weighted scalar curvature \(R^{H,f}\) along the curve. This choice is natural given the gradient flow interpretation of generalized Ricci flow [9]. ### Geodesic entropy convexity In this subsection we prove a convexity property for a natural entropy associated to the cost functional \(E_{0}\). We first derive the geodesic equation associated to this cost, then show convexity of the entropy along these geodesics. **Lemma 4.1**.: _Let \((g_{t},H_{t},f_{t})\) be a solution to generalized Ricci flow. 
Let \((\rho(t,s),\phi(t,s))\) be a two-parameter family of densities and functions satisfying (4.1). Then_ \[\frac{d}{ds}E_{0}(\mu(\cdot,s))=\,\int_{M}\phi\partial_{s}\rho e^{-f}\,dV\Big{|} _{t=t^{\prime}}^{t^{\prime\prime}}-\int_{t^{\prime}}^{t^{\prime\prime}}\int_{ M}\left[\partial_{t}\phi+\frac{1}{2}|\nabla\phi|^{2}-\frac{1}{2}R^{H,f} \right]\partial_{s}\rho e^{-f}\,dVdt.\] _In particular, a one-parameter \((\rho(t),\phi(t))\) is a geodesic if and only if_ \[\partial_{t}\rho= \,-\operatorname{div}_{f}(\rho\nabla\phi)+R^{H,f}\rho,\] \[\partial_{t}\phi= \,-\frac{1}{2}|\nabla\phi|^{2}+\frac{1}{2}R^{H,f}. \tag{4.2}\] Proof.: First of all we compute \[\frac{d}{ds}E_{0}(\mu(\cdot,s))=\,\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M }\left[\left\langle\nabla\phi,\nabla\partial_{s}\phi\right\rangle\rho+\frac{1 }{2}\left(|\nabla\phi|^{2}+R^{H,f}\right)\partial_{s}\rho\right]e^{-f}dV.\] Observe that for an arbitrary function \(\psi\) we have by integration by parts \[\int_{M}\psi\partial_{t}\rho e^{-f}dV=\,\int_{M}\left[\left\langle\nabla\psi, \nabla\phi\right\rangle+\psi R^{H,f}\right]\rho e^{-f}dV.\] It follows that \[\int_{M}\psi\partial_{s}\partial_{t}\rho e^{-f}dV=\,\int_{M}\left[\left\langle \nabla\psi,\nabla\partial_{s}\phi\right\rangle\rho+\left\langle\nabla\psi, \nabla\phi\right\rangle\partial_{s}\rho+\psi R^{H,f}\partial_{s}\rho\right]e^ {-f}dV.\] We choose \(\psi=\phi\) to yield \[\int_{M}\phi\partial_{s}\partial_{t}\rho e^{-f}dV=\,\int_{M}\left[\left\langle \nabla\phi,\nabla\partial_{s}\phi\right\rangle\rho+|\nabla\phi|^{2}\partial_{s} \rho+\phi R^{H,f}\partial_{s}\rho\right]e^{-f}dV.\] Combining the above discussion yields \[\frac{d}{ds}E_{0}(\mu(\cdot,s))=\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M} \left[\phi\partial_{s}\partial_{t}\rho-\frac{1}{2}|\nabla\phi|^{2}\,\partial_{ s}\rho-\phi R^{H,f}\partial_{s}\rho+\frac{1}{2}R^{H,f}\partial_{s}\rho \right]e^{-f}\,dVdt.\] Note that Lemma 3.2 further implies \[\partial_{t}(\phi\partial_{s}\rho e^{-f}\,dV)=(\partial_{t}\phi\,\partial_{s} \rho+\phi\,\partial_{s}\partial_{t}\rho-R^{H,f}\phi\,\partial_{s}\rho)e^{-f} \,dV.\] Consequently we obtain \[\frac{d}{ds}E_{0}(\mu(\cdot,s))= \int_{t^{\prime}}^{t^{\prime\prime}}\partial_{t}\left[\int_{M} \phi\partial_{s}\rho e^{-f}\,dV\right]\,dt-\int_{t^{\prime}}^{t^{\prime\prime} }\int_{M}\left[\partial_{t}\phi+\frac{1}{2}|\nabla\phi|^{2}-\frac{1}{2}R^{H,f }\right]\partial_{s}\rho e^{-f}\,dVdt,\] which is, after integrating the first term in time, the claim. Next we show the geodesic convexity of a natural entropy quantity associated to this cost. First we prove two propositions containing useful evolution equations for geodesics along a solution to generalized Ricci flow. **Proposition 4.2**.: _Fix \((g_{t},H_{t},f_{t})\) a solution to generalized Ricci flow, and suppose \((\rho_{t},\phi_{t})\) solves the geodesic equations (4.2). 
Then_ \[\frac{d}{dt}\int_{M}\phi\rho e^{-f}\,dV= \,\frac{1}{2}\int_{M}\left[|\nabla\phi|^{2}+R^{H,f}\right]\rho e^ {-f}\,dV,\] \[\frac{1}{2}\frac{d}{dt}\int_{M}|\nabla\phi|^{2}\rho e^{-f}\,dV= \,\int_{M}\left[\operatorname{Rc}^{H,f}(\nabla\phi,\nabla\phi)+ \frac{1}{2}\left\langle\nabla\phi,\nabla R^{H,f}\right\rangle\right]\rho e^{- f}\,dV.\] Proof.: We compute using the geodesic equation and Lemma 3.2 \[\frac{d}{dt}\int_{M}\phi\rho e^{-f}\,dV= \int_{M}\left[\left(-\frac{1}{2}|\nabla\phi|^{2}+\frac{1}{2}R^{H,f}\right)\rho+|\nabla\phi|^{2}\rho+(R^{H,f}-R^{H,f})\phi\rho\right]e^{-f}\,dV\] \[= \int_{M}\left[\frac{1}{2}|\nabla\phi|^{2}+\frac{1}{2}R^{H,f} \right]\rho e^{-f}\,dV,\] which yields the first claim. For the second claim we compute first of all \[\frac{d}{dt}\frac{1}{2}|\nabla\phi|^{2}=\,\operatorname{Rc}^{H,f}(\nabla\phi, \nabla\phi)+\left\langle\nabla\phi,\nabla\left(-\frac{1}{2}|\nabla\phi|^{2}+ \frac{1}{2}R^{H,f}\right)\right\rangle.\] Hence \[\frac{d}{dt}\frac{1}{2}\int_{M}|\nabla\phi|^{2}\rho e^{-f}\,dV= \int_{M}\left[\operatorname{Rc}^{H,f}(\nabla\phi,\nabla\phi)+ \left\langle\nabla\phi,\nabla\left(-\frac{1}{2}|\nabla\phi|^{2}+\frac{1}{2}R^{ H,f}\right)\right\rangle\right]\rho e^{-f}\,dV\] \[+\int_{M}\frac{1}{2}|\nabla\phi|^{2}\left(-\operatorname{div}_{f} (\rho\nabla\phi)\right)e^{-f}dV\] \[= \int_{M}\left[\operatorname{Rc}^{H,f}(\nabla\phi,\nabla\phi)+ \left\langle\nabla\phi,\frac{1}{2}R^{H,f}\right\rangle\right]\rho e^{-f}\,dV,\] as claimed. **Proposition 4.3**.: _Fix \((g_{t},H_{t},f_{t})\) a solution to generalized Ricci flow, and suppose \((\rho_{t},\phi_{t})\) solves the geodesic equations (4.2). Then_ \[\frac{d}{dt}\int_{M}\rho\log\rho\,e^{-f}\,dV= \int_{M}\left[\left\langle\nabla\rho,\nabla\phi\right\rangle+R^{H,f}\rho\right]e^{-f}\,dV,\] \[\frac{d}{dt}\int_{M}\left\langle\nabla\rho,\nabla\phi\right\rangle e ^{-f}\,dV= \int_{M}\left[|\nabla^{2}\phi|^{2}+\operatorname{Rc}^{f}(\nabla \phi,\nabla\phi)-2\left\langle\operatorname{Rc}^{H,f},\nabla^{2}\phi\right\rangle \right]\rho e^{-f}\,dV\] \[+\int_{M}\left[\left\langle\frac{1}{2}\operatorname{div}H^{2}- \frac{1}{4}\nabla\left|H\right|_{\frac{1}{k}}^{2},\nabla\phi\right\rangle- \frac{1}{2}H^{2}(\nabla f,\nabla\phi)\right]\rho e^{-f}\,dV\] \[+\frac{1}{2}\int_{M}\left\langle\nabla\rho,\nabla R^{H,f}\right\rangle e ^{-f}\,dV,\] \[\frac{d}{dt}\int_{M}R^{H,f}\rho e^{-f}\,dV= \int_{M}\left[\partial_{t}R^{H,f}+\left\langle\nabla R^{H,f}, \nabla\phi\right\rangle\right]\rho e^{-f}\,dV.\] Proof.: We show the first claim by noting \[\frac{d}{dt}\int_{M}\rho\log\rho\,e^{-f}\,dV= \int_{M}\left[(\log\rho+1)(-\operatorname{div}_{f}(\rho\nabla \phi)+R^{H,f}\rho)-\rho\log\rho R^{H,f}\right]e^{-f}\,dV\] \[= \int_{M}\left[\left\langle\nabla\rho,\nabla\phi\right\rangle+R^{H,f}\rho\right]e^{-f}\,dV.\] To show the second claim we will need to decompose the Ricci tensor into its symmetric piece \(\operatorname{Rc}^{H,f}_{s}\) and anti-symmetric piece \(\operatorname{Rc}^{H,f}_{a}\) (which in general is a polyform). 
We first compute \[\frac{d}{dt}\int_{M}\left\langle\nabla\rho,\nabla\phi\right\rangle e ^{-f}\,dV= \int_{M}\left[2\operatorname{Rc}^{H,f}_{s}(\nabla\rho,\nabla\phi)+ \left\langle\nabla(-\operatorname{div}_{f}(\rho\nabla\phi)+R^{H,f}\rho),\nabla \phi\right\rangle\right]e^{-f}\,dV\] \[+\int_{M}\left[\left\langle\nabla\rho,\nabla\left(-\frac{1}{2}| \nabla\phi|^{2}+\frac{1}{2}R^{H,f}\right)\right\rangle-\left\langle\nabla\rho, \nabla\phi\right\rangle R^{H,f}\right]e^{-f}\,dV.\] Using Lemma 2.3 we have the Bianchi identity \[2\operatorname{div}\operatorname{Rc}^{H,f}_{s}=\nabla R^{H,f}+\nabla|\nabla f |^{2}+\frac{1}{4}\nabla\left|H\right|_{\frac{1}{k}}^{2}+2\operatorname{Rc}( \nabla f)-\frac{1}{2}\operatorname{div}H^{2},\] Using this we integrate by parts to yield \[\int_{M} 2\operatorname{Rc}^{H,f}_{s}(\nabla\rho,\nabla\phi)e^{-f}\,dV\] \[= -2\int_{M}\left(\left\langle\operatorname{div}\operatorname{Rc}^{ H,f}_{s},\nabla\phi\right\rangle+\left\langle\operatorname{Rc}^{H,f},\nabla^{2} \phi\right\rangle-\operatorname{Rc}^{H,f}_{s}(\nabla f,\nabla\phi)\right)\rho e ^{-f}\,dV\] \[= -\int_{M}\left(\left\langle\nabla R^{H,f}+\frac{1}{4}\nabla\left| H\right|_{\frac{1}{k}}^{2}-\frac{1}{2}\operatorname{div}H^{2},\nabla\phi\right\rangle+2 \left\langle\operatorname{Rc}^{H,f},\nabla^{2}\phi\right\rangle+\frac{1}{2}H^ {2}(\nabla f,\nabla\phi)\right)\rho e^{-f}\,dV.\] Further by integration by parts and Bochner's formula \[\int_{M}\left\langle\nabla(-e^{f}\operatorname{div}(\rho e^{-f} \nabla\phi)),\nabla\phi\right\rangle e^{-f}\,dV+\int_{M}\left\langle\nabla\rho, \nabla(-\frac{1}{2}|\nabla\phi|^{2})\right\rangle e^{-f}\,dV\] \[= -\int_{M}\left\langle\nabla\phi,\nabla\Delta\phi\right\rangle\rho e ^{-f}\,dV+\frac{1}{2}\int_{M}\Delta|\nabla\phi|^{2}\rho e^{-f}\,dV+\int_{M} \left\langle\nabla\phi,\nabla\left\langle\nabla f,\nabla\phi\right\rangle \right\rangle\,dV\] \[\qquad-\frac{1}{2}\int_{M}\left\langle\nabla|\nabla\phi|^{2}, \nabla f\right\rangle\rho e^{-f}\,dV\] \[= \int_{M}(|\nabla^{2}\phi|^{2}+\operatorname{Rc}(\nabla\phi,\nabla \phi))\rho e^{-f}\,dV+\int_{M}\left\langle\nabla\phi,\nabla\left\langle\nabla f,\nabla\phi\right\rangle\right\rangle\rho e^{-f}\,dV-\frac{1}{2}\int_{M} \left\langle\nabla|\nabla\phi|^{2},\nabla f\right\rangle\rho e^{-f}\,dV.\] Consequently, \[\frac{d}{dt}\int_{M}\left\langle\nabla\rho,\nabla\phi\right\rangle e ^{-f}\,dV\] \[\qquad=\,-\int_{M}\left(\left\langle\nabla R^{H,f}+\frac{1}{4} \nabla\left|H\right|_{\frac{1}{k}}^{2}-\frac{1}{2}\operatorname{div}H^{2}, \nabla\phi\right\rangle+2\left\langle\operatorname{Rc}^{H,f},\nabla^{2}\phi \right\rangle+\frac{1}{2}H^{2}(\nabla f,\nabla\phi)\right)\rho e^{-f}\,dV\] \[\qquad\quad+\int_{M}\left(\left|\nabla^{2}\phi\right|^{2}+ \operatorname{Rc}(\nabla\phi,\nabla\phi)+\left\langle\nabla\phi,\nabla\left\langle \nabla f,\nabla\phi\right\rangle\right\rangle-\frac{1}{2}\left\langle\nabla| \nabla\phi|^{2},\nabla f\right\rangle\right)\rho e^{-f}\,dV\] \[\qquad\quad+\int_{M}\left\langle\nabla(R^{H,f}\rho),\nabla\phi \right\rangle e^{-f}\,dV+\int_{M}\left\langle\nabla\rho,\frac{1}{2}\nabla R^ {H,f}\right\rangle e^{-f}\,dV-\int_{M}\left\langle\nabla\rho,\nabla\phi \right\rangle R^{H,f}e^{-f}\,dV.\] Reordering terms yields \[\frac{d}{dt}\int_{M}\left\langle\nabla\rho,\nabla\phi\right\rangle e ^{-f}\,dV\] \[\qquad= \int_{M}\left(\left|\nabla^{2}\phi\right|^{2}+\operatorname{Rc}( \nabla\phi,\nabla\phi)-2\left\langle\operatorname{Rc}^{H,f},\nabla^{2}\phi \right\rangle\right)\rho e^{-f}\,dV\] 
\[\qquad+\int_{M}\left\langle\frac{1}{2}\operatorname{div}H^{2}- \frac{1}{4}\nabla\left|H\right|_{\frac{1}{k}}^{2},\nabla\phi\right\rangle\rho e ^{-f}\,dV-\frac{1}{2}\int_{M}H^{2}(\nabla f,\nabla\phi)\rho e^{-f}\,dV\] \[\qquad+\frac{1}{2}\int_{M}\left\langle\nabla\rho,\nabla R^{H,f} \right\rangle e^{-f}\,dV+\int_{M}\left\langle\nabla\phi,\nabla\left\langle \nabla f,\nabla\phi\right\rangle\right)\rho e^{-f}\,dV-\frac{1}{2}\int_{M} \left\langle\nabla|\nabla\phi|^{2},\nabla f\right\rangle\rho e^{-f}\,dV\] \[\qquad= \int_{M}(\left|\nabla^{2}\phi\right|^{2}+\operatorname{Rc}^{f} (\nabla\phi,\nabla\phi)-2\left\langle\operatorname{Rc}^{H,f},\nabla^{2}\phi \right\rangle)\rho e^{-f}\,dV+\frac{1}{2}\int_{M}\left\langle\nabla\rho, \nabla R^{H,f}\right\rangle e^{-f}\,dV\] \[\qquad+\int_{M}\left\langle\frac{1}{2}\operatorname{div}H^{2}- \frac{1}{4}\nabla\left|H\right|_{\frac{1}{k}}^{2},\nabla\phi\right\rangle\rho e ^{-f}\,dV-\frac{1}{2}\int_{M}H^{2}(\nabla f,\nabla\phi)\rho e^{-f}\,dV,\] which is the claim. For the last claim we simply compute \[\frac{d}{dt}\int_{M}R^{H,f}\rho e^{-f}\,dV= \int_{M}\partial_{t}R^{H,f}\rho e^{-f}+\left\langle\nabla R^{H,f },\nabla\phi\right\rangle\rho e^{-f}+(R^{H,f})^{2}\rho e^{-f}-(R^{H,f})^{2} \rho e^{-f}\,dV\] \[= \int_{M}\partial_{t}R^{H,f}\rho e^{-f}+\left\langle\nabla R^{H,f },\nabla\phi\right\rangle\rho e^{-f}\,dV.\] **Corollary 4.4**.: _Fix \((g_{t},H_{t},f_{t})\) a solution to generalized Ricci flow, and suppose \((\rho_{t},\phi_{t})\) solves the geodesic equations (4.2). Then_ \[\frac{d^{2}}{dt^{2}}\int_{M}\rho\log\rho e^{-f}\,dV= \int_{M}\left(\left|\operatorname{Rc}^{H,f-\phi}\right|^{2}+ \operatorname{Rc}^{H,f}(\nabla\phi,\nabla\phi)+\frac{1}{2}\partial_{t}R^{H,f}+ \left\langle\nabla R^{H,f},\nabla\phi\right\rangle\right)\rho e^{-f}\,dV.\] _Also_ \[\frac{d^{2}}{dt^{2}}\int_{M}(\rho\log\rho-\phi\rho)e^{-f}\,dV= \int_{M}\left|\operatorname{Rc}^{H,f-\phi}\right|^{2}\rho e^{-f}\,dV.\] Proof.: We obtain from Proposition 4.3 \[\frac{d^{2}}{dt^{2}}\int_{M}\rho\log\rho e^{-f}\,dV= \int_{M}\left(|\nabla^{2}\phi|^{2}+\mathrm{Rc}^{f}(\nabla f,\nabla f )-2\left\langle\mathrm{Rc}^{H,f},\nabla^{2}\phi\right\rangle\right)\rho e^{-f} \,dV\] \[+\int_{M}\left\langle\frac{1}{2}\operatorname{div}H^{2}-\frac{1} {4}\nabla\left|H\right|_{\frac{1}{k}}^{2},\nabla\phi\right\rangle\rho e^{-f} \,dV\] \[-\frac{1}{2}\int_{M}H^{2}(\nabla f,\nabla\phi)\rho e^{-f}\,dV+ \frac{1}{2}\int_{M}\left\langle\nabla\rho,\nabla R^{H,f}\right\rangle e^{-f} \,dV\] \[+\int_{M}\left(\partial_{t}R^{H,f}+\left\langle\nabla R^{H,f}, \nabla\phi\right\rangle\right)\rho e^{-f}\,dV.\] Using Proposition 2.4, Lemma 2.3 and noting \[\left|\mathrm{Rc}^{H,f-\phi}\right|^{2}=\,\left|\mathrm{Rc}^{H,f}\right|^{2}+ \left|\nabla^{2}\phi\right|^{2}-2\left\langle\mathrm{Rc}^{H,f},\nabla^{2} \phi\right\rangle-\frac{1}{2}\left\langle d^{*}H+i\nabla_{f}H,i\nabla_{\phi} H\right\rangle+\frac{1}{4}\left|i_{\nabla\phi}H\right|^{2},\] we find \[\frac{d^{2}}{dt^{2}}\int_{M}\rho\log\rho e^{-f}\,dV= \int_{M}\left(\left|\mathrm{Rc}^{H,f-\phi}\right|^{2}+\mathrm{Rc}^{H,f}(\nabla\phi,\nabla\phi)+\frac{1}{2}\partial_{t}R^{H,f}+\left\langle\nabla R ^{H,f},\nabla\phi\right\rangle\right)\rho e^{-f}\,dV,\] as claimed. The second claim of the proposition then follows easily from Proposition 4.2. 
### Cost monotonicity Given the setup as above, for \(\mu^{\prime},\mu^{\prime\prime}\in P^{\infty}(M)\) define the distance \[C_{0}^{\prime^{\prime},t^{\prime\prime}}(\mu^{\prime},\mu^{\prime\prime}):= \inf_{\mu}E_{0}^{t^{\prime},t^{\prime\prime}}(\mu),\] where the infimum is taken among all paths of smooth measures \(\mu:=\rho e^{-f}\,dV\colon[t^{\prime},t^{\prime\prime}]\to P^{\infty}(M)\) with \(\mu(t^{\prime})=\mu^{\prime}\) and \(\mu(t^{\prime\prime})=\mu^{\prime\prime}\), and such that (4.1) is satisfied. **Proposition 4.5**.: _Let \(\mu\colon[t^{\prime},t^{\prime\prime}]\times(-\epsilon,\epsilon)\to P^{ \infty}(M)\) be a smooth map, where \(\mu=\mu(t,u)\). Define \(\mu_{u}\colon[t^{\prime}+u,t^{\prime\prime}+u]\to P(M)\) by \(\mu_{u}(t):=\mu(t-u,u)\). Suppose that \(\mu_{0}=\mu(\cdot,0)\) is a minimizer for \(E_{0}^{t^{\prime},t^{\prime\prime}}\), i.e. there exists \(\phi_{0}=\phi(\cdot,0)\) such that (4.2) holds. Then_ \[\frac{d}{du}\bigg{|}_{u=0}\,E_{0}^{t^{\prime}+u,t^{\prime\prime}+ u}(\mu_{u})= \int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\left|\mathrm{Rc}^{H,f-\phi_{0}}\right|^{2}\rho_{0}e^{-f}\,dVdt\] \[+\int_{M}\phi(\left.\partial_{u}\right|_{u=0}\rho(\cdot,u)-R^{H, f}\rho_{0}+\Delta_{f}\rho_{0})e^{-f}\,dV\bigg{|}_{t=t^{\prime}}^{t^{\prime \prime}}\,.\] Proof.: Note that we express \[E_{0}^{t^{\prime}+u,t^{\prime\prime}+u}(\mu_{u})=\frac{1}{2}\int_{t^{\prime}}^ {t^{\prime\prime}}\int_{M}(|\nabla\phi(t,u)|^{2}+R^{H,f})\rho(t,u)e^{-f}\,dVdt,\] where the metric, volume and \(f\) are evaluated at time \(t+u\). Then we compute \[\frac{d}{du}\bigg{|}_{u=0}\,E_{0}^{t^{\prime}+u,t^{\prime\prime}+ u}(\mu_{u})= \int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\left[\mathrm{Rc}^{H,f}( \nabla\phi,\nabla\phi)+\langle\nabla\phi,\nabla\partial_{u}\phi\rangle+\frac{1 }{2}\partial_{t}R^{H,f}\right]\rho e^{-f}\,dVdt\] \[+\frac{1}{2}\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}(| \nabla\phi|^{2}+R^{H,f})(\partial_{u}\rho-R^{H,f}\rho)e^{-f}\,dVdt, \tag{4.3}\] where \(\rho(t,u)\) and \(\phi(t,u)\) are evaluated at \(u=0\). For each \(\psi\in C^{\infty}(M)\), by equation (4.1) we have \[\int_{M}\psi\partial_{t}\rho e^{-f}\,dV=\int_{M}(\langle\nabla\psi,\nabla\phi \rangle+\psi R^{H,f})\rho e^{-f}\,dV.\] Hence \[\int_{M}\psi(\partial_{u}\partial_{t}\rho-R^{H,f}\partial_{t}\rho)e^ {-f}\,dV= \int_{M}(2\operatorname{Rc}^{H,f}(\nabla\psi,\nabla\phi)+\langle \nabla\psi,\nabla\partial_{u}\phi\rangle+\partial_{t}R^{H,f}\psi)\rho e^{-f}\,dV\] \[+\int_{M}(\langle\nabla\psi,\nabla\phi\rangle+R^{H,f}\psi)( \partial_{u}\rho-R^{H,f}\rho)e^{-f}\,dV.\] Choosing \(\psi=\phi\) we obtain \[\int_{M}\phi(\partial_{u}\partial_{t}\rho-R^{H,f}\partial_{t}\rho )e^{-f}\,dV= \int_{M}(2\operatorname{Rc}^{H,f}(\nabla\psi,\nabla\phi)+\langle \nabla\phi,\nabla\partial_{u}\phi\rangle+\partial_{t}R^{H,f}\phi)\rho e^{-f}\,dV\] \[+\int_{M}(|\nabla\phi|^{2}+R^{H,f}\phi)(\partial_{u}\rho-R^{H,f} \rho)e^{-f}\,dV. 
\tag{4.4}\] Combining (4.3) and (4.4) we obtain \[\frac{d}{du}\bigg{|}_{u=0}E_{0}^{t^{\prime}+u,t^{\prime\prime}+u} (\mu_{u})= \int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\phi(\partial_{u} \partial_{t}\rho-R^{H,f}\partial_{t}\rho)e^{-f}\,dVdt\] \[+\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\left[-\operatorname {Rc}^{H,f}(\nabla\phi,\nabla\phi)+\frac{1}{2}\partial_{t}R^{H,f}-\partial_{t} R^{H,f}\phi\right]\rho e^{-f}\,dVdt\] \[-\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\left(\frac{1}{2}| \nabla\phi|^{2}-\frac{1}{2}R^{H,f}\right)(\partial_{u}\rho-R^{H,f}\rho)e^{-f} \,dVdt\] \[-\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}R^{H,f}\phi(\partial _{u}\rho-R^{H,f}\rho)e^{-f}\,dVdt.\] Integrating by parts in \(t\) we have \[\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\phi(\partial_{u} \partial_{t}\rho-R^{H,f}\partial_{t}\rho)e^{-f}\,dVdt =\int_{M}\phi(\partial_{u}\rho-R^{H,f}\rho)e^{-f}\,dV\bigg{|}_{t=t ^{\prime}}^{t^{\prime\prime}}\] \[+\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\left[-\partial_{t} \phi\partial_{u}\rho+\phi\partial_{u}\rho R^{H,f}\right]e^{-f}\,dVdt\] \[+\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\left[\partial_{t}R^ {H,f}\phi+\partial_{t}\phi R^{H,f}-(R^{H,f})^{2}\phi\right]\rho e^{-f}\,dVdt,\] thus yielding \[\frac{d}{du}\bigg{|}_{u=0}E_{0}^{t^{\prime}+u,t^{\prime\prime}+u }(\mu_{u}) =\int_{M}\phi(\partial_{u}\rho-R^{H,f}\rho)e^{-f}\,dV\bigg{|}_{t=t ^{\prime}}^{t^{\prime\prime}}\] \[+\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\left[-\operatorname {Rc}^{H,f}(\nabla\phi,\nabla\phi)+\frac{1}{2}\partial_{t}R^{H,f}\right]\rho e^{ -f}\,dVdt.\] We know from Proposition 4.3 that \[\frac{d}{dt}\int_{M}\left\langle\nabla\rho,\nabla\phi\right\rangle e ^{-f}\,dV= \int_{M}(|\nabla^{2}\phi|^{2}+\operatorname{Rc}^{f}(\nabla\phi, \nabla\phi)-2\left\langle\operatorname{Rc}^{H,f},\nabla^{2}\phi\right\rangle) \rho e^{-f}\,dV\] \[+\int_{M}\left\langle\frac{1}{2}\operatorname{div}H^{2}-\frac{1}{ 4}\nabla\,|H|^{2}_{\frac{1}{k}}\,,\nabla\phi\right\rangle\rho e^{-f}\,dV\] \[-\frac{1}{2}\int_{M}H^{2}(\nabla f,\nabla\phi)\rho e^{-f}\,dV+ \frac{1}{2}\int_{M}\left\langle\nabla\rho,\nabla R^{H,f}\right\rangle e^{-f} \,dV.\] Inserting this and using the result of Proposition 2.4 and Lemma 2.3 gives \[\left.\frac{d}{du}\right|_{u=0}E_{0}^{t^{\prime}+u,t^{\prime\prime}+ u}(\mu_{u}) =\int_{M}\phi(\partial_{u}\rho+\Delta\rho-\langle\nabla\rho,\nabla f \rangle-R^{H,f}\rho)e^{-f}\,dV\bigg{|}_{t=t^{\prime}}^{t^{\prime\prime}}\] \[\quad+\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}|\operatorname {Rc}^{H,f}-\nabla^{2}\phi|^{2}\rho e^{-f}\,dVdt\] \[\quad+\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\left\langle \frac{1}{2}\operatorname{div}H^{2}-\frac{1}{4}\nabla\left|H\right|_{\frac{1}{ k}}^{2},\nabla\phi\right\rangle\rho e^{-f}\,dVdt\] \[\quad+\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}\frac{1}{4}H^ {2}(\nabla\phi,\nabla\phi)\rho e^{-f}-\frac{1}{2}H^{2}(\nabla f,\nabla\phi) \rho e^{-f}\,dV\,dt\] \[=\int_{M}\phi(\partial_{u}\rho+\Delta\rho-\langle\nabla\rho, \nabla f\rangle-R^{H,f}\rho)e^{-f}\,dV\bigg{|}_{t=t^{\prime}}^{t^{\prime\prime}}\] \[\quad+\int_{t^{\prime}}^{t^{\prime\prime}}\int_{M}|\operatorname {Rc}^{H,f-\phi}|^{2}\rho e^{-f}\,dVdt,\] as claimed. Using this we establish monotonicity of the cost along the backwards heat flow, and use it to obtain the monotonicity of the energy functional along generalized Ricci flow. **Corollary 4.6**.: _Under the hypothesis of Proposition 4.5, suppose that each \(\mu_{u}\) is a minimizer for \(E_{0}^{t^{\prime}+u,t^{\prime\prime}+u}\). 
Suppose that the endpoint measures \(\mu_{u}(t^{\prime}+u)=\mu(t^{\prime},u)\) and \(\mu_{u}(t^{\prime\prime}+u)=\mu(t^{\prime\prime},u)\) satisfy the backward heat equation in \(u\), i.e. for the densities_ \[\partial_{u}\rho=-\Delta_{f}\rho+R^{H,f}\rho.\] _Then_ \[u\mapsto C_{0}^{t^{\prime}+u,t^{\prime\prime}+u}(\mu_{u}(t^{\prime}+u),\mu_{u }(t^{\prime\prime}+u))\] _is nondecreasing._ **Corollary 4.7**.: _Suppose that \(\mu_{t}=\rho_{t}e^{-f_{t}}dV_{t}\in P^{\infty}(M)\) is a smooth solution of the backward heat equation \(\partial_{t}\mu=-\Delta\mu\). Then_ \[\mathcal{F}=\int_{M}[|\nabla\log\rho|^{2}+R^{H,f}]\rho e^{-f}\,dV\] _is nondecreasing in \(t\)._ Proof.: Fix a time \(t^{\prime}\). Using ellipticity of the linearized geodesic equation and an inverse function theorem argument, for \(t^{\prime\prime}\) all sufficiently close to \(t^{\prime}\) and \(u>0\) sufficiently small, the minimizing geodesic connecting \(\mu(t^{\prime}+u)\) and \(\mu(t^{\prime\prime}+u)\) is smooth. By Lemma 3.5 and Corollary 4.6 we have \[\frac{C_{0}^{t^{\prime}+u,t^{\prime\prime}+u}(\mu(t^{\prime}+u),\mu^{\prime \prime}(t^{\prime\prime}+u))}{t^{\prime\prime}-t^{\prime}}\geq\frac{C_{0}^{t ^{\prime},t^{\prime\prime}}(\mu(t^{\prime}),\mu(t^{\prime\prime}))}{t^{\prime \prime}-t^{\prime}}.\] Letting \(t^{\prime\prime}\to t^{\prime}\) \[\frac{1}{2}\int_{M}[|\nabla\phi|^{2}+R^{H,f}]\rho e^{-f}\,dV\bigg{|}_{t^{ \prime}+u}\geq\frac{1}{2}\int_{M}[|\nabla\phi|^{2}+R^{H,f}]\rho e^{-f}\,dV \bigg{|}_{t^{\prime}}\,.\] As \(\rho\) solves \(\partial_{t}\rho=-\Delta_{f}\rho+R^{H,f}\rho\) it follows that \(\nabla\phi=\nabla\log\rho\), giving the claim.
2304.08772
Multi-robot Motion Planning based on Nets-within-Nets Modeling and Simulation
This paper focuses on designing motion plans for a heterogeneous team of robots that has to cooperate in fulfilling a global mission. The robots move in an environment containing some regions of interest, and the specification for the whole team can include avoidances, visits, or sequencing when entering these regions of interest. The specification is expressed in terms of a Petri net corresponding to an automaton, while each robot is also modeled by a state machine Petri net. With respect to existing solutions for related problems, the current work brings the following contributions. First, we propose a novel model, denoted High-Level robot team Petri Net (HLPN) system, for incorporating the specification and the robot models into the Nets-within-Nets paradigm. A guard function, named Global Enabling Function (gef), is designed to synchronize the firing of transitions such that the robot motions do not violate the specification. Then, the solution is found by simulating the HLPN system in a specific software tool that accommodates Nets-within-Nets. An illustrative example based on a Linear Temporal Logic (LTL) mission is described throughout the paper, complementing the proposed rationale of the framework.
Sofia Hustiu, Eva Robillard, Joaquin Ezpeleta, Cristian Mahulea, Marius Kloetzer
2023-04-18T07:06:07Z
http://arxiv.org/abs/2304.08772v3
# Multi-robot Motion Planning based on Nets-within-Nets Modeling and Simulation ###### Abstract This paper focuses on designing motion plans for a heterogeneous team of robots that has to cooperate in fulfilling a global mission. The robots move in an environment containing some regions of interest, and the specification for the whole team can include avoidances, visits, or sequencing when entering these regions of interest. The specification is expressed in terms of a Petri net corresponding to an automaton, while each robot is also modeled by a state machine Petri net. With respect to existing solutions for related problems, the current work brings the following contributions. First, we propose a novel model, denoted _High-Level robot team Petri Net_ (HLPN) system, for incorporating the specification and the robot models into the Nets-within-Nets paradigm. A guard function, named _Global Enabling Function (gef)_, is designed to synchronize the firing of transitions such that the robot motions do not violate the specification. Then, the solution is found by simulating the HLPN system in a specific software tool that accommodates Nets-within-Nets. An illustrative example based on a Linear Temporal Logic (LTL) mission is described throughout the paper, complementing the proposed rationale of the framework. Discrete event systems, Motion Planning, Multi-robot system ## I Introduction An important part of robotics research is dedicated to planning the motion of mobile agents such that a desired mission is accomplished. Classic scenarios relate to standard navigation problems, where a mobile agent has to reach a desired position without colliding with obstacles [1, 2]. Multiple extensions exist, from which an important part refers to planning a team of mobile robots [3, 4]. At the same time, solutions are proposed for ensuring that the team of robots fulfills a high-level mission, for example visiting some regions only after other regions were explored. Such missions are expressed in various formal languages, such as classes of Temporal Logic formulas [5, 6, 7, 8, 9], Boolean expressions [10], \(\mu\)-calculus specifications [11, 12]. In general, solutions for the above problems rely on different types of discrete-event-system models for both the mobile robots and the imposed mission. The mobile agents are usually modeled by transition systems or Petri nets (PN), while the specification model has specific forms, such as Buchi automata in the case of Linear Temporal Logic (LTL) tasks. The models for agents and specifications are usually combined, for example, based on a synchronous product of automata, and a solution is found. However, due to the involved synchronous product, the number of states in the resulting discrete-event model may grow exponentially with the number of agents. In the case of identical robots, a PN model for the robotic team has the advantage of maintaining the same topology, irrespective of the team size. In this work we consider a team of mobile robots evolving in an environment cluttered with regions of interest. The robots may be different, in the sense that some of them can visit only a subset of the regions of interest, and the possible movements of each agent can be modeled by a PN system. The user imposes a global specification over the regions of interest, and we aim at generating motion plans for the agents such that the specification is accomplished.
Rather than assuming a specific formal language for the specification, we assume that the task is satisfiable by a finite sequence of movements of the agents, and this task is given in form of a PN. The task can express multiple useful missions for mobile agents, for example, it can include visits of regions of interest based on some conditions (as previously entering some other regions), synchronizations when different agents enter into disjoint regions, or avoidance of some areas until some other are reached. Different than other works, we include both the robots and the task in the same model, and for this, we use the Nets-within-Nets formalism. The Nets-within-Nets belong to the family of high-level nets, in which each token can transfer information such as the state of another process [13]. For this particular type of net, each token is represented by another net denoted as _Object net_. The relations between these objects nets and other nets are captured in _System net_, which contains a global view of the entire system [14]. In the proposed solution, robots are implemented as Object nets. Additionally, the team's mission is also implemented as an Object net. The interactions between robots and mission models take place via the System net. The model is implemented using the Renew tool [15], which is used to obtain path-planning solutions that accomplish the mission through simulation. Therefore, the contributions of this paper are as follows: * Introducing a general framework called the _High-Level robot team Petri Net_ system (HLPN) for path planning a heterogeneous multi-agent robotic system that ensures a global mission; * Designing a synchronization function (_Global Enabling _Function_) between the nets in the model that verifies and acts on a set of logical Boolean formulas to ensure their correct connotations; * Incorporating scalability and adaptability properties in the proposed model to address the problem of a flexible number of agents and different spatial constraints with respect to the environment. In particular, we present a case study based on a partition that maps the environment, an LTL specification as a global mission given to the team of robots, and agents of two different types in terms of their capability of accessing partition regions. ## II Intuitive Reasoning of the proposed approach Assuming that a global specification for a team of mobile robots is modeled as a PN system, this PN can be referred to as the _Specification Object Petri net_ (SpecOPN) system. It is assumed that a final reachable marking exists, and the formula is satisfied when this marking is reached. Transitions within the SpecOPN are labeled with Boolean formulas and can fire only when they are marking-enabled and their Boolean formulas are evaluated to \(True\). This will ensure that the system adheres to the imposed specification. An example of a SpecOPN is illustrated in Fig. 2 (ii). In the initial marking, there is one token in \(p_{1}^{S}\), and both transitions \(t_{1}^{S}\) and \(t_{3}^{S}\) are marking-enabled. However, for transition \(t_{1}^{S}\) to be fired, the atomic proposition \(b_{3}\) must be evaluated as \(True\). On the other hand, transition \(t_{3}^{S}\) is always fireable at this marking, since it is labeled with \(T\) (\(True\)). The specification is fulfilled when place \(p_{2}^{S}\) is marked, which means that transition \(t_{1}^{S}\) should fire, and for this proposition \(b_{3}\) should become \(True\). 
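To make the guarded firing rule concrete, the behaviour just described can be sketched in a few lines of Python; this is a toy encoding of the example of Fig. 2 (ii), with class, place, and transition names chosen only for illustration, and it is not the Renew implementation used later in the paper.

```python
# Minimal sketch of a SpecOPN: a transition fires only if its input place is marked
# and its Boolean guard evaluates to True for the current atomic propositions.
class SpecOPN:
    def __init__(self, transitions, final_places, marking):
        self.transitions = transitions          # name -> (input place, output place, guard)
        self.final_places = set(final_places)   # marking a final place fulfils the spec
        self.marking = dict(marking)            # place -> 0/1 token count

    def enabled(self, t, atomic_props):
        src, _dst, guard = self.transitions[t]
        return self.marking[src] == 1 and guard(atomic_props)

    def fire(self, t, atomic_props):
        assert self.enabled(t, atomic_props)
        src, dst, _guard = self.transitions[t]
        self.marking[src] -= 1
        self.marking[dst] += 1

    def fulfilled(self):
        return any(self.marking[p] == 1 for p in self.final_places)


# "eventually b3": t1_S is guarded by b3, t3_S is labelled True (assumed here to be
# a self-loop on p1_S), and p2_S is the final place.
spec = SpecOPN(
    transitions={
        "t1_S": ("p1_S", "p2_S", lambda ap: ap["b3"]),
        "t3_S": ("p1_S", "p1_S", lambda ap: True),
    },
    final_places=["p2_S"],
    marking={"p1_S": 1, "p2_S": 0},
)

ap = {"b1": False, "b2": False, "b3": False}
print(spec.enabled("t1_S", ap))   # False: b3 is not yet observed
ap["b3"] = True                   # a robot enters region y3
spec.fire("t1_S", ap)
print(spec.fulfilled())           # True: the mission "eventually b3" is satisfied
```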
We note that this SpecOPN models the LTL formula \(\diamondsuit b_{3}\) ("eventually \(b_{3}\)"). Furthermore, a heterogeneous team of robots is assumed to operate in an environment consisting of several regions denoted \(y_{i}\), which may intersect. For each region \(y_{i}\) of the environment, an atomic proposition (otherwise known as observation) \(b_{i}\) is defined. If a robot is in the region \(y_{i}\), the atomic proposition \(b_{i}\) evaluates to \(True\). As the regions may intersect, more than one atomic proposition may evaluate to \(True\). For example, in Fig. 1, if a robot is inside \(y_{2}\cap y_{3}\), then both \(b_{2}\) and \(b_{3}\) are evaluated to \(True\), hence the label \(b_{2}\wedge b_{3}\) is assigned to the region \(y_{2}\cap y_{3}\). To model each robot, a _Robotic Object Petri Net_ (RobotOPN) is used, where each place represents a part of the environment and it is labeled with a Boolean formula composed of atomic propositions \(b_{i}\) corresponding to the environment's part to which the place belongs. If a place in RobotOPN has one token, the Boolean formula assigned to the place becomes \(True\). As robots evolve within the environment, the truth values of atomic propositions change, thus enabling or disabling transitions in the SpecOPN. For example, Fig. 1, illustrates an example of a 2D environment having four regions of interest \(y_{1}\) - purple region, \(y_{2}\) - blue region, \(y_{3}\) - green region and \(y_{4}\) - the free space (white), regions \(y_{2}\) and \(y_{3}\) being partially overlapped. If all three robots are in region \(y_{4}=E\setminus(y_{1}\cup y_{2}\cup y_{3})\) (\(E\) being the full known environment), only \(b_{4}\) is evaluated to \(True\), while \(b_{1}\), \(b_{2}\), and \(b_{3}\) are \(False\). If robot 1 leaves \(y_{4}\) and enters \(y_{3}\), by firing transition \(t_{1}^{O}\) in the RobotOPN of Fig. 2 (iii), the atomic proposition \(b_{3}\) becomes \(True\), while \(b_{1}\) and \(b_{2}\) remain \(False\). Since a transition firing in the RobotOPN implies a change in the values of atomic propositions, it must be fired synchronously with a transition in the SpecOPN. In this case, \(t_{1}^{O}\) should be fired synchronously with either \(t_{1}^{S}\) or \(t_{3}^{S}\). Synchronization is accomplished by using a high-level Petri net called the _System Petri Net_, shown in Fig. 2 (i). Each token in place \(Rb\) corresponds to a RobotOPN, while the token in \(Ms\) corresponds to the SpecOPN. When a transition \(t_{i},i=\overline{1,3}\) is fired, at least one transition from the RobotOPNs and one transition from the SpecOPN are fired synchronously, respecting the conditions imposed by the _Global Enabling Function_. ## III Notations and problem statement Let us denote a set of regions of interest (ROIs) labeled with \(\mathcal{Y}=\{y_{1},y_{2},\ldots,y_{|\mathcal{Y}|}\}\), with \(|\mathcal{Y}|\) being the cardinality of set \(\mathcal{Y}\). We are going to assume that there is a special region that does not intersect with any other region, and that we will call the _free space region_. For the sake of simplicity, let us assume this region is \(y_{|\mathcal{Y}|}\). A mission may specify visits or avoidances of regions \(\mathcal{Y}\setminus\{y_{|\mathcal{Y}|}\}\) by a team \(\mathcal{R}=\{r_{1},r_{2},\ldots r_{|R|}\}\) of omnidirectional mobile robots (agents). Let us also assume that, at the initial state, all the robots are in the free space region. 
To capture the behavior of the robots in the environment, the continuous space \(E\) is partitioned over set \(\mathcal{P_{Y}}=\{p_{1},p_{2},\ldots p_{|P_{Y}|}\}\) of discrete elements, returned by a mapping technique that preservers the borders of ROIs, e.g., SLAM [16], occupancy grid map [17], cell decomposition [4]. For simplicity, we assume to have the fewest possible number of elements in the partitioned environment \(\mathcal{P_{Y}}\). Thus, in this work, the discrete representation of the workspace is returned by a cell decomposition technique [4] which is further altered into a reduced model denoted Quotient Petri net. This algorithm is described in our work [18]. The result consists in a discrete model of the environment, which can be handled afterwards with respect to the motion of the robots. Each agent has a pre-established set of constraints that defines its allowed movements in \(E\). At any moment a robot should physically be placed in one \(p\in\mathcal{P_{Y}}\). In addition, a characteristic of each element \(p\in\mathcal{P_{Y}}\) is Fig. 1: Example of an environment with 3 regions of interest and 3 robots represented by its capacity in terms of the maximum number of agents that can be in \(p\) at the same time. Let us denote with \(\mathcal{B}_{\mathcal{Y}}=\{b_{1},b_{2},\ldots b_{|\mathcal{Y}|}\}\) the set of Atomic Propositions (AP) associated with the set of ROIs \(\mathcal{Y}\). The power-set \(2^{\mathcal{Y}}\) represents all the combinations of regions of interest. For any subset \(A\in 2^{\mathcal{Y}}\) let us define the characteristic conjunction formula of \(A\) as \(A_{\wedge}=\bigwedge\{b_{i}\in\mathcal{B}_{\mathcal{Y}}\mid y_{i}\in A\}\). For any partition element \(p\in\mathcal{P}_{\mathcal{Y}}\), let \(h(p)=\{y_{i}\in\mathcal{Y}\mid p\subseteq y_{i}\}\). Let \(p_{\wedge}\equiv h(p)_{\wedge}\) be the labeling function that assigns a characteristic conjunction formula to each element \(p\in\mathcal{P}_{\mathcal{Y}}\). This paper considers the following problem: **Problem III.1**: _Given a heterogeneous multi-agent robotic system in a known environment, and a global mission (specification) requiring visiting and/or avoiding regions of interest, design motion plans for the team of agents such that the specification is fulfilled._ **Remark.** Although similar problems are studied in the literature [4] (but mainly for identical agents), here we are concerned with designing a different formalism that allows us to combine the motion of the robots with the given specification in the same model. This model is suitable for running simulations in dedicated software tools, and thus the current method obtains a sub-optimal solution through simulations, rather than following the ideas of existing methods that search for an optimal solution by either exploring the reachability graph of various models or solving complex optimization problems. In this work we solve the motion planning problem for heterogeneous robotic systems with a flexible number of robots, which can synchronize among them. Therefore, we propose a framework that encapsulates the advantages of a compact nested model. The current work models the motion of the robots in a physical workspace with respect to a given mission, all under the Petri net formalism, known as _Nets-within-Nets_[14]. Such representation allows us to handle various models in a structured manner, which is easier to handle compared with a non-nested structure. 
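As a small illustration of the multi-set notation, Python's `collections.Counter` can stand in for `Bag(U)`; this substitution is made only for exposition.

```python
# mu = 1'u1 + 2'u2 over U = {u1, u2}, with Counter playing the role of Bag(U).
from collections import Counter

mu = Counter({"u1": 1, "u2": 2})
nu = Counter({"u2": 1})                   # nu = 1'u2

print(mu + nu)                            # addition: Counter({'u2': 3, 'u1': 1})
print(all(nu[u] <= mu[u] for u in nu))    # element-wise comparison: nu <= mu -> True
```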
In addition, the coordination between different levels of Petri nets benefits from an object-oriented methodology which is applied here in the field of high-level motion planning. Since the modeling methodology considers high-level Petri nets, the marking of the places will be a multi-set. Given a non-empty set \(U\), let \(\mu\) be a function \(\mu:U\longrightarrow\mathbb{N}\) which assigns a non-negative integer number (coefficient) for each element \(u\in U\)[19]. The multi-set \(\mu\) over a set \(U\) is expressed by: symbol \({}^{\prime}\) used for an easier clarity of the multiplicity \(u\in U\) and by a symbolic addition \(\sum_{u\in U}\mu(u)u\), e.g., \(\mu=1^{\prime}u_{1}+2^{\prime}u_{2}\), with \(U=\{u_{1},u_{2}\}\). The set of all multi-sets over \(U\) is denoted with \(Bag(U)\). The algebra of multi-sets is defined in [19], containing multiple operations such as: addition, comparison, which we will omit here due to paper space constraints. ## IV Object Petri net systems As the proposed framework is encapsulated under Nets-within-Nets paradigm, let us define the Object nets that model the given mission for the multi-agent system, and the allowed movements of robots, respectively. ### _Specification Object Petri net_ The following definition is designated to a subclass of global missions, which can be modeled as a state machine Petri net, being strongly connected, and bounded by one token. **Definition IV.1** (SpecOPN): _A Petri net modeling the global mission given to a multi-agent system is called Specification Object Petri net (SpecOPN) and it is a tuple \(Spec=\langle P^{S},P^{S}_{f},T^{S},F^{S},\lambda^{S}\rangle\), where: \(P^{S}\) and \(T^{S}\) are the disjoint finite set of places and transitions, \(P^{S}_{f}\subset P^{S}\) is the set of final places, \(F^{S}\subseteq(P^{S}\times T^{S})\cup(T^{S}\times P^{S})\) is the set of arcs. \(\lambda^{S}_{\wedge}(t^{S})\equiv t^{S}_{\wedge}\) is the transition labeling function, which assigns to each transition \(t^{S}\in T^{S}\) a Boolean formula based on conjunctions of atomic propositions \(\mathcal{B}_{\mathcal{Y}}\) or negation of them. A disjunctive Boolean formula assigned to a transition \(\lambda^{S}_{\wedge}(t^{S})=b_{i}\lor b_{j}\) is converted into a conjunctive Boolean formula by modifying the topology of \(SpecOPN\) such that there exist two transitions with \(\lambda^{S}_{\wedge}(t^{S}_{i})=b_{i}\) and \(\lambda^{S}_{\wedge}(t^{S}_{j})=b_{j}\). \(\square\)_ A marking is a \(|P^{S}|\)-sized natural-valued vector, while a SpecOPN system is a pair \(\langle Spec,\mathbf{m}^{S}_{0}\rangle\) where \(\mathbf{m}^{S}_{0}\) is the initial marking. The specification is fulfilled when SpecOPN Fig. 2: Example of Nets-within-nets formalism: (i) System Petri net, (ii) Specification Object Petri net, (iii) Robotic Object Petri net reaches a marking with a token in a place of \(P_{f}^{S}\), through firing a sequence of enabled transitions. A transition \(t^{S}\in T^{S}\) in the SpecOPN system is enabled at a marking \(\mathbf{m}^{S}\) when two conditions are met: (i) \(\mathbf{m}^{S}[\mathbf{\iota}^{S}]=1\)1 and (ii) the Boolean formula \(t^{S}_{\wedge}\) is _True_. Informally, condition (i) is a witness of the part of the specification already fulfilled, while (ii) means that the movement of the robots with respect to the regions of interest \(\mathcal{Y}\) can imply the firing of a transition in SpecOPN by changing the truth value of \(t^{S}_{\wedge}\). 
Footnote 1: \(\mathbf{\iota}\) and \(\mathbf{t}^{\bullet}\) are the input and output places of the transition \(t\in T\) - singletons, since the Object Petri nets are considered state machine. For exemplification purposes, we consider SpecOPN associated with a Buchi automaton that can be obtained as in [18], where the global mission to the team is given by a LTL specification. The latter formalism denotes a high-level abstraction of the natural language based on a set of atomic propositions, Boolean and temporal operators, e.g., \(\varphi=\Diamond b_{3}\) specifies that region \(y_{3}\) should be eventually visited, as in Fig. 2. The model of an LTL formula is often represented by a Buchi automaton [20], which accepts only input strings satisfying the LTL formula. In our case, the formula should be satisfiable by a finite string (also known as co-safe LTL), since the SpecOPN has to reach a final marking. ### _Robotic Object Petri net_ The robotic system which evolves in the known environment can be modeled by a set of Petri nets systems, denoted _Robotic Object Petri nets_, as each model is assigned to one robot (Fig. 2 (iii)). We assume that the defined model is a state machine PN, where each transition has only one input and one output place. The model can be considered as an extended _Labeled Petri net_[21] representation by the addition of a labeling function over the set of places. **Definition IV.2** (RobotOPN): _A Robotic Object Petri net modeling the robot \(k\) is a tuple \(o^{k}=\langle P^{o},T^{o},F^{o},h^{o},\lambda^{o},\gamma^{o}\rangle\):_ * \(P^{o}\) _is the finite set of places, each place being associated to an element_ \(p\in\mathcal{P_{Y}}\) _in which robot_ \(k\) _is allowed to enter;_ * \(T^{o}\) _is the finite set of transitions. A transition_ \(t^{o}_{ij}\in T^{o}\) _is added between two places_ \(p^{o}_{i},p^{o}_{j}\in P^{o}\) _only if the robot_ \(k\) _can move from any position in_ \(p_{i}\) _to a position in_ \(p_{j}\)_;_ * \(F^{o}\subseteq(P^{o}\times T^{o})\cup(T^{o}\times P^{o})\) _is the set of arcs. If_ \(t^{o}_{ij}\) _is the transition modeling the movement from_ \(p^{o}_{i}\) _to_ \(p^{o}_{j}\)_, then_ \((p^{o}_{i},t^{o}_{ij})\in F^{o}\) _and_ \((t^{o}_{ij},p^{o}_{j})\in F^{o}\)_;_ * \(h^{o}_{\wedge}\) _is the labeling function of places_ \(p^{o}\in P^{o}\)_, defined in the previous section and associating to each place a Boolean formula over the set of propositions_ \(\mathcal{B_{Y}}\)_;_ * \(\lambda^{o}_{\wedge}\) _is the Boolean labeling function of transitions_ \(t^{o}\in T^{o}\)_, such that_ \(\lambda^{o}_{\wedge}(t^{o}_{i})=h\left(t^{\bullet\bullet}_{i}\right)_{\wedge}\)_;_ * \(\gamma^{o}:P^{o}\rightarrow\mathcal{P_{Y}}\) _is the associating function. If place_ \(p^{o}_{i}\in P^{o}\) _is associated to_ \(p_{i}\in\mathcal{P_{Y}}\)_, then_ \(\gamma^{o}(p^{o}_{i})=p_{i}\)_._ Notice that if the robots are omnidirectional, the resulting RobotOPN model is a state machine, which is, indeed, safe. A marking of a RobotOPN is a vector \(\mathbf{m}^{k}\in\{0,1\}^{|P^{o}|}\). The initial marking is denoted \(\mathbf{m}^{k}_{0}\) such that \(\mathbf{m}^{k}_{0}[p^{o}_{i}]=1\) if one robot is initially in \(p^{o}_{i}\), and \(\mathbf{m}^{k}_{0}[p^{o}_{j}]=0\) for the rest of places \(p^{o}_{j}\in P^{o}\setminus\{p^{o}_{i}\}\). A RobotOPN system of robot \(r_{k}\) is a pair \(\langle o^{k},\mathbf{m}^{k}_{0}\rangle\). The heterogeneity of the robotic system relates to the differences in RobotOPN models in terms of topology and labels. 
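A RobotOPN can be sketched analogously as a safe state-machine net; in the sketch below the adjacency between partition elements is a plausible reading of Fig. 1 and the place-to-region labeling follows the case study of Section VI, both assumptions made only for illustration.

```python
# Sketch of a RobotOPN as a safe state-machine net: one token marks the current place,
# gamma maps places to partition elements, and h lists the regions covering each element.
class RobotOPN:
    def __init__(self, moves, gamma, h, start):
        self.moves = moves    # place -> set of places reachable by firing one transition
        self.gamma = gamma    # place -> partition element p in P_Y
        self.h = h            # partition element -> set of region indices covering it
        self.place = start    # currently marked place (state machine => a single token)

    def fire(self, target):
        assert target in self.moves[self.place], "no transition models this movement"
        self.place = target

    def true_propositions(self):
        """Indices i of atomic propositions b_i made True by the robot's position."""
        return self.h[self.gamma[self.place]]


# Robot r3 of the case study cannot enter p2 = y2 ∩ y3, so its net simply has no place
# (and no transitions) for p2; the adjacency below is an assumption for illustration.
h = {"p1": {1}, "p2": {2, 3}, "p3": {3}, "p4": {4}, "p5": {2}}
r3 = RobotOPN(
    moves={"p4": {"p1", "p3", "p5"}, "p1": {"p4"}, "p3": {"p4"}, "p5": {"p4"}},
    gamma={p: p for p in ("p1", "p3", "p4", "p5")},
    h=h,
    start="p4",
)
r3.fire("p5")
print(r3.true_propositions())   # {2}: the robot is in y2 \ y3, so b2 holds
```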
## V System Petri Net and Global Enabling Function ### _System Petri net_ This subsection introduces the System Petri Net for the motion planning problem, being denoted with _High-Level robot team Petri net_. This representation is designed to enable synchronization among the robots and specification net models, following the Nets-within-Nets paradigm. **Definition V.1**: _A High-Level robot team Petri net (HLPN) is a tuple \(\mathcal{N}=\langle P,T,\mathcal{O},\mathcal{S},Vars,F,W,\mu_{cap}\rangle\), where:_ * \(P=\{Rb,Ms\}\) _is the set of places;_ * \(T=\{t_{1},t_{2},\ldots,t_{|R|}\}\) _is the set of transitions;_ * \(\mathcal{O}=\{\langle o^{1},\mathbf{m}^{1}_{0}\rangle,\langle o^{2},\mathbf{m}^{2}_{0 }\rangle,\ldots,\langle o^{|R|},\mathbf{m}^{|R|}_{0}\rangle\}\) _is a set of_ \(|R|\) _RobotOPN systems, one for each robot;_ * \(\mathcal{S}=\langle Spec,\mathbf{m}^{S}_{0}\rangle\) _is a SpecOPN system;_ * \(Vars=\{n,x_{1},x_{2},\ldots,x_{|R|}\}\) _is a set of variables;_ * \(F\) _is the set of arcs:_ \(\forall t\in T\) _and_ \(\forall p\in P\)_,_ \((t,p)\in F\) _and_ \((p,t)\in F\)_;_ * \(W\) _is the_ _inscription function assigning to each arc a set of variables from_ \(Vars\) _such that for every_ \(t_{i}\in T\)_,_ \(W(Rb,t_{i})=W(t_{i},Rb)=(x_{1},x_{2},\ldots,x_{i})\)_,_ \(W(Ms,t_{i})=W(t_{i},Ms)=n\)_;_ * \(\mu_{cap}\in Bag(\mathcal{P_{Y}})\) _is the capacity multi-set, with_ \(\mu_{cap}[p_{i}]>0,\forall i\in\{1,\ldots,|\mathcal{P_{Y}}|-1\}\) _and_ \(\mu_{cap}[p_{|\mathcal{P_{Y}}|}]\leq|R|\)_._ \(Rb\) _and_ \(Ms\) _are called, respectively, the robot and mission places. Transition_ \(t_{i}\) _is used for the synchronized movement of_ \(i\) _robots according to the specification (this is the reason for having exactly_ \(|R|\) _transitions). Furthermore, each element_ \(p_{i}\in\mathcal{P_{Y}}\) _has a given number of space units, its capacity. The capacity of each discrete element_ \(p_{i}\) _(the maximum number of robots that can exist simultaneously in_ \(p_{i}\)_) is_ \(\mu_{cap}[p_{i}]\)_, being given by the multi-set_ \(\mu_{cap}\)_. Therefore, each element_ \(p_{i}\in\mathcal{P_{Y}}\) _has a strictly positive capacity, considering that the free space_ \(p_{|\mathcal{P_{Y}}|}\) _can accommodate the whole team, as mentioned in the last bullet of Def._ 5 __ Notice in Fig. 2 (i) that the places \(Rb\) and \(Ms\) are connected with the transitions via bidirectional arcs. The firing of a transition manipulates an Object net through the use of variables, e.g., \(x_{1}\) is bound to RobotOPN \(o^{1}\). Although the state of an Object net is modified when a transition in the HLPN system is fired, by using the reference semantics as in [14], a token is a reference to an Object net, the same variable being used for both directions: from place towards transitions, respectively vice-versa. A HLPN system is a tuple \(\langle\mathcal{N},\mathbf{m},\mu_{occ}\rangle\) where \(\mathcal{N}\) is a HLPN as in Def. 5, \(m\) is the marking associating to each place in \(P\) a multi-set. The initial marking \(\mathbf{m}_{0}\) is * \(\mathbf{m}_{0}[R_{b}]=1^{7}\langle o^{1},\mathbf{m}_{0}^{1}\rangle+1^{7}\langle o^{2},\mathbf{ m}_{0}^{2}\rangle+\ldots+1^{7}\langle o^{|R|},\mathbf{m}_{0}^{|R|}\rangle\); * \(\mathbf{m}_{0}[Ms]=1^{7}\langle Spec,\mathbf{m}_{0}^{S}\rangle\). Finally, \(\mu_{occ}\in Bag(\mathcal{B}_{\mathcal{Y}})\) is the _occupancy multi-set_ representing the actual position of the robots with respect to \(\mathcal{Y}\). 
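A toy update of the occupancy multi-set (again with `Counter` standing in for the multi-set, and anticipating the initial occupancy given below, where all robots start in the free-space region) reads as follows.

```python
# Illustrative occupancy bookkeeping for |R| = 3 robots that all start in y4.
from collections import Counter

mu_occ = Counter({"b1": 0, "b2": 0, "b3": 0, "b4": 3})
mu_occ["b4"] -= 1          # one robot leaves the free space ...
mu_occ["b3"] += 1          # ... and enters region y3
print(mu_occ["b3"] >= 1)   # True: atomic proposition b3 now evaluates to True
```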
At a given time, \(\mu_{occ}[b_{i}]\) is the actual number of robots in the region \(y_{i}\). The initial occupancy multi-set is \(\mu_{occ_{0}}=\sum_{i=1}^{|\mathcal{B}_{\mathcal{Y}}|-1}0^{i}b_{i}+|R|^{ \prime}b_{|\mathcal{Y}|}\) since we assume that initially, all robots are in the free space region. A transition \(t\in T\) of the HLPN is _enabled_ at a given state \(\langle\mathbf{m},\mu_{occ}\rangle\) iff * \(\mathbf{m}[Ms]\) has a transition \(t^{S}\in T^{S}\) enabled; * being \(W(Rb,t)=(x_{1},x_{2},\ldots,x_{i})\), \(\mathbf{m}[Rb]\) has \(i\) RobotOPN net systems \((\langle o^{1},\mathbf{m}^{1}\rangle,\langle o^{2},\mathbf{m}^{2}\rangle,\ldots, \langle o^{i},\mathbf{m}^{i}\rangle)\) such that each of these nets has a transition \(t^{o^{i}}\) enabled, with \(j=\overline{1,i}\), and also \(gef(\mu_{occ},\mu_{cap},t^{S},(t^{o^{1}},t^{o^{2}},\ldots t^{o^{i}}))=1\)2. Footnote 2: The _Global Enabling Function (gef)_ is detailed in subsection V-B. An enabled transition \(t\in T\) may fire yielding the system from \(\langle\mathbf{m},\mu_{occ}\rangle\) to \(\langle\mathbf{m}^{\prime},\mu^{\prime}_{occ}\rangle\) such that, * \(\mathbf{m}^{\prime}[Ms]\) has fired transition \(t^{S}\); * at \(\mathbf{m}^{\prime}[Rb]\), each \(o^{i}\) has fired transition \(t^{o^{i}}\); * \(\mu^{\prime}_{occ}\) is updated based on the new position of the robots. ### _The Global Enabling Function (gef)_ When firing a HLPN transition \(t_{j}\in T\), the system must coordinate transitions in both RobotOPNs from \(\mathbf{m}[Rb]\) and SpecOPN from \(\mathbf{m}[Mb]\). However, this synchronization must comply with multiple compatibility rules that take into account the current state of the system, including \(\mu_{occ}\). To ensure that these rules are satisfied, the _Global Enabling Function_ (_gef_) acts as a gatekeeper (guard), verifying the compatibility of the system's state with the transition rules before enabling the synchronous transitions. The _gef_ plays a critical role in ensuring that the firing of a HLPN transition, along with its associated RobotOPNs and SpecOPN enabled transitions, can proceed without violating any rules. For any transition \(t\in T\), _gef_ takes inputs from \(Bag(\mathcal{P}_{\mathcal{Y}})\times Bag(\mathcal{B}_{y})\times T^{S}\times \left(\bigcup_{i=1}^{|R|}\prod_{j=1}^{i}T_{j}^{O}\right)\), and returns either \(True\) or \(False\) to enable or disable \(t\). The _gef_ considers the assignment of variables to the input arcs (i.e., \(n\) and \((x_{1},x_{2},\ldots x_{i})\)), along with global information such as the occupancy multi-set \(\mu_{occ}\), the capacity multi-set \(\mu_{cap}\), a marking-enabled transition \(t^{S}\) in the SpecOPN \(n\), and a set of \(i\) marking-enabled transitions \((t^{o^{1}},t^{o^{2}},\ldots t^{o^{i}})\) in \((x_{1},x_{2},\ldots x_{i})\). If by the firing of the \(i\) transitions in the RobotOPN the Boolean label assigned to \(t^{S}\) is satisfied then _gef_ returns \(True\); otherwise, it returns \(False\). The algorithm of this function is shown in Alg. 1. ``` Input:\(\mu_{occ},\mu_{cap},t^{S},(t^{o^{1}},t^{o^{2}},\ldots t^{o^{i}})\) Output:\(True\) or \(False\) Data:\((\langle o^{1},\mathbf{m}^{1}\rangle,\langle o^{2},\mathbf{m}^{2}\rangle,\ldots, \langle o^{|R|},\mathbf{m}^{|R|}\rangle),\mathcal{P}_{\mathcal{Y}}\) 1 Let \(\chi\) be the simulated occupancy multi-set w.r.t. 
\(\mathcal{P}_{\mathcal{Y}}\) after firing \((t^{o^{1}},t^{o^{2}},\ldots t^{o^{i}})\); 2forall\(p_{j}\in\mathcal{P}_{\mathcal{Y}}\)do 3if\((\chi[p_{j}]>\mu_{cap}[p_{j}])/*\) if the capacity of \(p_{j}\) will not be satisfied */ 4then 5return \(False\) 6if\(\left(t^{S}_{\Lambda}==True\right)/*\) if the label of the enabled transition in the SpecOPN is \(True\) */ 7then 8return \(True\) 9else 10 Let \(\mu^{\prime}_{occ}\) be the simulated update of \(\mu_{occ}\) w.r.t. \(\mathcal{B}_{\mathcal{Y}}\) after firing \((t^{o^{1}},t^{o^{2}},\ldots t^{o^{i}})\); 11forall\((b_{j}\in\mathcal{B}_{\mathcal{Y}})\)do 12if\(\left(b_{j}\in t^{S}_{\Lambda}\wedge\mu^{\prime}_{occ}[b_{j}]==0\right)\lor\)\(\left(-b_{j}\in t^{S}_{\Lambda}\wedge\mu^{\prime}_{occ}[b_{j}]\geq 1\right)\)then 13return \(False\) 14 15 return \(True\) ``` **Algorithm 1**The Global Enabling Function (_gef_) The verification of enabling a transition \(t\in T\) is made possible via the simulation of the firing of the corresponding \(i\) transitions in RobotOPN synchronized by means of transition \(t_{i}\) of the HLPN. This artificial process is carried out by the simulation of multi-set \(\chi\) which is computed (line 1). In order to compute it, transitions \((t^{o^{1}},t^{o^{2}},\ldots,t^{o^{i}})\) are fictitiously fired in the corresponding RobotOPN (from \(o^{1}\) to \(o^{i}\)), while in the rest of RobotOPN (from \(o^{i+1}\) to \(o^{|\mathcal{R}|}\)) no transition is fired. Thus, the marked places of all RobotOPN are considered. By using the associating function \(\gamma^{o}\) in each RobotOPN, multi-set \(\chi\) is obtained. Next, the _gef_ verifies that the firing of transitions satisfies the capacity of each \(p_{j}\in\mathcal{P}_{\mathcal{Y}}\) (line 2-5). If the capacity constraints are satisfied and the Boolean formula assigned to \(t^{S}\) (i.e., \(t^{S}_{\Lambda}\)) is identical with \(True\), then the transitions \(\langle t^{S},(t^{o^{1}},t^{o^{2}},\ldots,t^{o^{i}})\rangle\) can fire synchronously without being necessary the evaluation w.r.t. robots positions. As a result, the _gef_ returns \(True\) (line 8). Otherwise, a new simulation is performed and \(\mu^{\prime}_{occ}\) is updated. Notice that \(\chi\), respectively \(\mu^{\prime}_{occ}\) represent the simulated occupancy multi-sets w.r.t. \(\mathcal{P}_{\mathcal{Y}}\), respectively \(\mathcal{B}_{\mathcal{Y}}\). In the case of \(\mu^{\prime}_{occ}\), there are two additional conditions that could prevent the considered transitions to fire. These conditions are checked in lines 11-13 and can be described as: 1. If an atomic proposition \(b_{j}\in\mathcal{B}_{\mathcal{Y}}\) is part of the formula \(t^{S}_{\Lambda}\), but in the simulated occupancy state obtained by the firing of the transitions (\(\mu^{\prime}_{occ}\)), no robot will be in \(y_{j}\), then this movement of robots does not fulfill the Boolean function assigned to the transition \(t^{S}\) (first condition in line 12). 2. If a negated atomic proposition \(-b_{j}\) (with \(b_{j}\in\mathcal{B}_{\mathcal{Y}}\)) is part of the \(t^{S}_{\Lambda}\) formula, but the simulated update of the occupancy \(\mu^{\prime}_{occ}\) after the firing of the involved transitions is such that \(\mu^{\prime}_{occ}[b_{j}]>1\), this means that one robot would be in \(y_{j}\) and the formula \(t_{\wedge}^{S}\) would not be fulfilled (second condition in line 12). If at least one of the previous conditions is true, then the _gef_ will return \(False\) (line 13). Otherwise, it will return \(True\) (line 14). 
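Algorithm 1 condenses into a short Python sketch; here the guard of the SpecOPN transition is assumed to be given as a set of literal strings (or the constant `True`), and the simulated occupancy multi-sets \(\chi\) and \(\mu^{\prime}_{occ}\) are assumed to have been computed beforehand by fictitiously firing the selected RobotOPN transitions, so the Renew-specific binding machinery is omitted.

```python
# Compact sketch of the Global Enabling Function (Algorithm 1), under the assumptions
# stated above; guard literals use "!" to denote negated atomic propositions.
def gef(chi, mu_cap, guard_literals, mu_occ_prime):
    # 1) Capacity check: no partition element may exceed its capacity after firing.
    for p, occupancy in chi.items():
        if occupancy > mu_cap[p]:
            return False
    # 2) A guard identically True needs no evaluation of robot positions.
    if guard_literals is True:
        return True
    # 3) Every positive literal needs at least one robot in that region, and every
    #    negated literal needs that region to be empty in the simulated state.
    for lit in guard_literals:
        if lit.startswith("!"):
            if mu_occ_prime[lit[1:]] >= 1:
                return False
        elif mu_occ_prime[lit] == 0:
            return False
    return True


# Example: guard b1 & b2 & b3 with all three regions occupied after the move.
print(gef(chi={"p1": 1, "p2": 0, "p3": 1, "p4": 0, "p5": 1},
          mu_cap={"p1": 2, "p2": 2, "p3": 2, "p4": 3, "p5": 2},
          guard_literals={"b1", "b2", "b3"},
          mu_occ_prime={"b1": 1, "b2": 1, "b3": 1, "b4": 0}))   # True
```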
The firing of a transition \(t\) updates the marking of the system and the multi-set \(\mu_{occ}\). ## VI Simulations ### _RENEW Implementation_ The proposed framework is implemented in the Renew software tool [22], which is a Java-based high-level Petri net simulator suitable for modeling the Nets-within-Nets paradigm in a versatile approach, based on transitions labels, and Java functions among others. As mentioned in the beginning of the paper, the proposed framework is assessed through simulations. The simulations return non-deterministic solutions for the paths of the robots, as there are several possibilities to verify the Boolean formulas required by the SpecOPN. Based on the versatility of the algorithmic approach for the current work, several metrics can be defined to assess the quality of simulations. In the following study case, we will refer to the shortest paths of robots out of a given total number of simulations, based on the number of fired transitions in each RobotOPN assigned to the motion of each agent. Of course, this does not guarantee that obtained robotic plans yield the optimal possible value for the chosen metric, but only the best one from the performed set of simulations. For an easier visualization in the Renew simulator, let us define notations without subscripts. Thus, let \(\mathcal{Y}=\{c,b,a,free\}\) be the set of atomic propositions replacing the set \(\mathcal{B}_{\mathcal{Y}}\) associated with set \(\mathcal{Y}\), such that \(c\equiv b_{1},b\equiv b_{2},a\equiv b_{3}\), and \(b_{4}\equiv free\). In addition, the symbols \(\neg\) or \(\wedge\) are replaced in Renew with the syntax "\(\vdash\)", respectively "\(\cdot\)". The Specification net models a Buchi automaton assigned to the given LTL formula (described in the next sub-section), thus the \(True\) value in the automaton is expressed in Renew through "\(\vdash\)". The elements of the partitioned workspace are associated with the regions of interest in such way: \(p_{1}\) for region \(y_{1}\), \(p_{2}\) for the overlapped area \(y_{2}\cap y_{3}\), \(p_{3}\) for area \(y_{3}\setminus y_{2}\), \(p_{4}\) for free space \(y_{4}\), while \(p_{5}\) for area \(y_{2}\setminus y_{3}\). In the following, one example is provided in addition to the mathematical formalism described throughout the paper. ### _Case-study_ Let us provide a complete example tackling the problem formulation from Sec. III. We exemplify a path planning strategy for a team of three robots evolving in a known workspace (Fig. 1) for which a global specification is given, using the Linear Temporal Logic language. The mission \(\varphi=\hat{\Diamond}b_{3}\wedge\Diamond b_{2}\wedge\Diamond b_{1}\wedge( \neg b_{3}\ \mathcal{U}\ b_{1})\) implies the visit of regions of interest \(y_{1},y_{2},y_{3}\), but requires that region \(y_{3}\) to be visited before \(y_{1}\). From the point of view of multi-agent spatial constraints, each partition element has a maximum capacity equal with two, e.g., \(\mu_{cap}[p_{1}]=2\) means that no more than two robots can be present at the same time in the area \(p_{1}\), where \(p_{1}\) is the partition element corresponding to \(y_{1}\). The robots are different w.r.t. to their spacial capabilities, in the sense that robots \(r_{1}\) and \(r_{2}\) are allowed to move freely in the entire environment, while robot \(r_{3}\) cannot reach the overlapping part of regions \(y_{2}\) and \(y_{3}\), denoted with partition element \(p_{2}\) in Fig. 3. 
For each agent of the team, one RobotOPN is built as follows: \(r_{1},r_{2}\) are modeled identically by \(o^{1},o^{2}\), while \(r_{3}\) is modeled by \(o^{3}\) (Fig. 2 (iii)), which are represented in Renew as in Fig. 3. The SpecOPN (Fig. 4) is represented by the Buchi Petri net assigned to the mentioned LTL formula, based on the algorithm from [18]. Out of 100 simulations (the execution time per simulation was 18 milliseconds in mean, with a standard deviation of 12 milliseconds), we could compute the shortest path of the multi-agent system in terms of the truth values of atomic propositions, such as: \(\langle r_{1},r_{2},r_{3}\rangle=\langle b_{1},b_{3},b_{2}\rangle\), meaning that \(r_{1},r_{2}\) respectively \(r_{3}\) moves synchronously into regions \(y_{1},y_{3}\), respectively \(y_{2}\). The motion of the robots is synchronized (based on the truth value of \(gef\)) by firing the most right transition in SpecOPN, having the Boolean label \(b_{1}\wedge b_{2}\wedge b_{3}\) (blue path in Fig. 4). As mentioned before, Renew returns non-deterministic solutions. Therefore, other path in the SpecOPN (purple color) can be returned by another run of 100 simulations, having as robots shortest paths \(\langle r_{1},r_{2},r_{3}\rangle=\langle b_{2},b_{1},b_{2}\rangle,\langle free,b_{1}\wedge b_{3},free\rangle\), meaning that firstly \(r_{1},r_{3}\) move to region \(y_{2}\), while \(r_{2}\) moves towards \(y_{3}\); secondly \(r_{1},r_{3}\) returns to the free space \(y_{4}\), while \(r_{2}\) reaches the overlapped area of \(y_{2}\cap y_{3}\). Notice that if the number of robots in the team decreases, having one robot for each type, then the shortest path of the multi-agent system will be longer compared with the formerly result, due to the fact that the transition in SpecOPN model with label \(b_{1}\wedge b_{2}\wedge b_{3}\) cannot be fired. The entire implementation of this example can be accessed on the GitHub link. Although our approach provides a scalable and independent method of modeling the motion of heterogeneous Fig. 3: RobotOPN models in Renew tool: \(r_{1}\) and \(r_{2}\) can move freely in the workspace (left side), while \(r_{3}\) is not allowed to enter the overlapped region between \(y_{2}\) and \(y_{3}\) (right side) systems based on RobotOPN's structure topology, two questions are raised in terms of (i) inferring the computational tractability scales when the number of robots increases, and (ii) encapsulating time constraints under high-level Petri nets formalism, which would be suitable in the motion planning field. ## VII Conclusion This paper tackles the problem of motion planning for a team of heterogeneous robots while ensuring a global mission that includes visits and avoidances of some regions of interest. The planning strategy is returned by a newly proposed framework under the Nets-within-Nets paradigm, denoted _High-Level robot team Petri net_. Specifically, the current work offers an adaptable and flexible solution with respect to the number of agents in the team, by formally defining the model and simulating it for a case study. The approach makes use of the advantages of nested high-level Petri nets systems, such as object-oriented methodology, by modeling the movement of the robots as a set of _Robotic Object Petri nets_, while the given mission is modeled by a _Specification Petri net_. 
These nets become part of a so-called _System Petri net_, which coordinates their transitions, contributing to a global view of the entire robotic system due to the fact that the Object nets are represented as tokens in the latter net. Moreover, the relation between the nets is directed by a guard denoted _Global Enabling Function_. Thus, the synchronization between the nets is handled by the nested structure of the proposed model, joined by the designed guard function. Future work envisions handling collaborative task assignments of the robotic system, based on multiple missions given to sub-groups of robots. In addition, we are interested in coordinating the Object Petri nets when time constraints are added alongside space constraints, under the proposed _High-Level robot team Petri net_ system.
2306.06126
Deep Learning Method for Cell-Wise Object Tracking, Velocity Estimation and Projection of Sensor Data over Time
Current Deep Learning methods for environment segmentation and velocity estimation rely on Convolutional Recurrent Neural Networks to exploit spatio-temporal relationships within obtained sensor data. These approaches derive scene dynamics implicitly by correlating novel input and memorized data utilizing ConvNets. We show how ConvNets suffer from architectural restrictions for this task. Based on these findings, we then provide solutions to various issues on exploiting spatio-temporal correlations in a sequence of sensor recordings by presenting a novel Recurrent Neural Network unit utilizing Transformer mechanisms. Within this unit, object encodings are tracked across consecutive frames by correlating key-query pairs derived from sensor inputs and memory states, respectively. We then use resulting tracking patterns to obtain scene dynamics and regress velocities. In a last step, the memory state of the Recurrent Neural Network is projected based on extracted velocity estimates to resolve aforementioned spatio-temporal misalignment.
Marco Braun, Moritz Luszek, Mirko Meuter, Dominic Spata, Kevin Kollek, Anton Kummert
2023-06-08T07:33:05Z
http://arxiv.org/abs/2306.06126v2
Deep Learning Method for Cell-Wise Object Tracking, Velocity Estimation and Projection of Sensor Data over Time ###### Abstract Current Deep Learning methods for environment segmentation and velocity estimation rely on Convolutional Recurrent Neural Networks to exploit spatio-temporal relationships within obtained sensor data. These approaches derive scene dynamics implicitly by correlating novel input and memorized data utilizing ConvNets. We show how ConvNets suffer from architectural restrictions for this task. Based on these findings, we then provide solutions to various issues on exploiting spatio-temporal correlations in a sequence of sensor recordings by presenting a novel Recurrent Neural Network unit utilizing Transformer mechanisms. Within this unit, object encodings are tracked across consecutive frames by correlating key-query pairs derived from sensor inputs and memory states, respectively. We then use resulting tracking patterns to obtain scene dynamics and regress velocities. In a last step, the memory state of the Recurrent Neural Network is projected based on extracted velocity estimates to resolve aforementioned spatio-temporal misalignment. Deep Learning, Perception, Recurrent Neural Networks, Sensor Data Processing, Velocity Estimation ## I Introduction For automated driving functionalities, data from sensors such as camera, radar and lidar is processed to reason about environmental semantics. As each sensor scan captures a sparse subset of the environment, temporal integration of consecutive scans enriches the information density of the data and thus improves the perceptual capability of the system. State of the art approaches for perceiving the environment of a vehicle [1, 2, 3, 4, 5, 6, 7, 8] utilize Deep Learning methods. A combination of Recurrent Neural Network (RNN) layers like Long Short-Term Memories (LSTM) [9] or Gated Recurrent Units (GRU) [10] and Convolutional Neural Networks (ConvNets) [11] such as [12] can be applied to integrate sensor recordings over time. These networks process a memory state, or hidden state (\(h\)), which contains accumulated features from previous sensor scans up to _t-1_ as well as the novel input (\(I\)) that contains encodings from sensor scans at time \(t\). By matching \(h_{t-1}\) and \(I_{t}\), the network is able to increase the information density of the captured environment as well as extract patterns from [\(h_{t-1}\), \(I_{t}\)]. Scene dynamics induced by the ego-motion of the host vehicle as well as movements from external objects, however, cause spatial misalignment between \(h_{t-1}\) and \(I_{t}\). While knowledge about ego-motion can be used to resolve misalignment between the memory states and novel input encodings due to dynamics of the host vehicle, movements of external objects are unknown to the model and therefore harder to resolve. As a result, the model maintains spatially obsolete information in its memory which causes ambiguities and increases noise in the feature domain. While these spatially inaccurate artifacts in the hidden state potentially reduce the prediction performance of the network, correlation between \(h_{t-1}\) and \(I_{t}\) may serve as a source to exploit dynamic scene context, e.g., relative movement of underlying objects between consecutive frames. Knowledge about the relative movement of underlying objects could then be utilized by the model to resolve spatio-temporal misalignment between \(h_{t-1}\) and \(I_{t}\). 
As shown in Figure 1, this does not happen within conventional Convolutional RNNs. Another challenge when recurrently processing sensor data involves the extraction of velocities within the model. As we show in a later section of this work, the network essentially relies on spatio-temporal patterns extracted by ConvNets from [\(h_{t-1}\), \(I_{t}\)] to predict velocities of objects. The limited receptive field size of these ConvNets thus defines the limits of object velocities that can still be perceived. This motivates the need for an advanced method to extract scene dynamics from sensor scans within the Deep Learning-based model. Fig. 1: Feature maps (L2-norm) showing the output of two Gated Recurrent Units processing sensor scans on a highway. _Left_: Conventional RNN retaining trails that contain obsolete data behind moving vehicles. _Right_: Our approach performs projections of the memory state based on dynamics of underlying objects to maintain spatio-temporal alignment of the data. In this paper, we present a novel RNN structure for processing consecutive sensor scans that 1. Extracts scene dynamics from [\(h_{t-1}\), \(I_{t}\)] on a cell basis by tracking object-related characteristics over time while being independent of the grid resolution 2. Uses these velocity patterns to resolve spatio-temporal misalignment between accumulated information from previous iterations and novel sensor scans This work is structured as follows: First, we summarize related work showing alternative and supplementary approaches in section II. We then present a novel recurrent cell that solves aforementioned issues in section III. Finally, in section IV we show how this recurrent unit is capable of improving the performance of our network on a grid segmentation and velocity regression task. ## II Related Work Our work intersects research directions of dynamic grid segmentation, optical flow for semantic video segmentation and Deep Learning-based methods for object tracking. In this section, we provide an overview about central approaches to these topics that are related to our work. ### _Exploiting Inter-Frame Correlations_ As a central contribution of this paper, we extract scene dynamics to resolve spatio-temporal misalignments between consecutive frames. Optical flow approaches [13, 14] learn dynamics between consecutive images utilizing 2D ConvNet architectures and potentially warp these image data respectively. [15] improves temporal consistency of semantic segmentation outputs by postprocessing the network output based on optical flow estimation. However, this method processes sensor scans individually without leveraging temporal correlations within the data while we focus on assuring spatio-temporal consistency within the RNN unit. In contrast, [16] uses a Spatial Transformer Network [17] to perform optical flow warping as a pre-processing step of hidden state feature maps within an RNN to effectively propagate video data over time. This concept is similar to ours. However, our approach exploits scene dynamics by tracking object patterns across frames. For this purpose, we explicitly define velocities of objects within the scene that the model needs to learn as opposed to [16] where the network, to some precision, learns them implicitly based on the backpropagated loss signal. 
Since we present a recurrent unit that extracts movements by tracking object-related patterns between consecutive frames, our approach shows relations to tracking algorithms such as [18] that are based on Deep Learning methods like [19, 20, 21]. By combining a network like [22] to detect objects and RNN structures like LSTMs [9] to exploit inter-frame correlations, these approaches discriminate targets on object level while we deploy tracking-like mechanisms to extract motion patterns from object representations within the model. To our knowledge, there is no work performing this kind of object pattern tracking on cell level to validate and refine warping of feature maps. ### _Dynamic Grid Segmentation for Autonomous Driving_ Various approaches build on Deep Learning models to process lidar [23, 24, 25, 3] or radar data [7, 8] for grid segmentation, i.e. segmenting the area around the vehicle into equally sized grid cells and classifying each cell individually. While recurrently processing sensor scans allows the model to temporally integrate environmental information, RNNs that are deployed for this purpose in [26, 3, 24] show two major weaknesses: First, for moving objects, patterns from previous frames \(h_{t-1}\) show a spatial misalignment to novel feature encodings \(I_{t}\) from sensor scans as described above. This misalignment could be resolved by warping \(h_{t-1}\) according to scene-inherent dynamics as proposed by optical flow mechanisms like [16]. However, this requires a reliable estimation of velocities on a cell basis. Measurements from various sensors like lidar sensors do not contain intrinsic velocity information, and even radar sensors merely measure radial velocity components of detected objects. Therefore, models need to rely on movement patterns of captured objects between consecutive frames, i.e. offsets between patterns in \(h_{t-1}\) and \(I_{t}\) in order to extract scene dynamics. However, any ordinary Convolutional RNN like Convolutional GRUs and Convolutional LSTMs [12] which process \(h_{t-1}\) and \(I_{t}\) has a receptive field which is restricted by the kernel size. **Example**: A kernel of size 3\(\times\)3 on a quadratic grid with a resolution of \((0.5\,m)^{2}\) and a sampling rate of 20 _Hz_ is able to capture movements inherent in [\(h_{t-1}\), \(I_{t}\)] in each direction of 10 \(\frac{m}{s}\). This is due to the fact that a 3x3 kernel is able to fetch information from a maximum distance of one neighboring cell in each direction towards its central grid cell within one subsequent frame. Even though the receptive field can be further increased by deploying bigger filters or pyramid structures as presented in [26, 24, 3], these modifications come at a cost of increased time and space complexity while not entirely solving the problem of receptive field size dependency. In section III, we therefore present a recurrent cell that tracks objects in between frames to estimate their velocities while being independent of the underlying grid resolution. These velocities can then be used to resolve the spatio-temporal misalignment between memory state and novel sensor input. ## III Method ### _Recurrent State Projection Cell_ Figure 2 shows the recurrent cell that we present as a wrapper for an RNN to estimate scene dynamics and to maintain spatio-temporal alignment between the memory state \(h_{t-1}\) and novel RNN inputs \(I_{t}\). 
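Before detailing the cell, the receptive-field bound from the example above can be checked with a short computation; the kernel size, cell resolution, and frame rate are the values assumed in that example.

```python
# Worked check of the receptive-field bound: a k x k kernel reaches (k - 1) / 2
# neighbouring cells per frame, so the largest per-axis speed it can still match
# between consecutive frames is cell_size * (k - 1) / 2 * frame_rate.
def max_trackable_speed(kernel_size, cell_size_m, frame_rate_hz):
    reach_cells = (kernel_size - 1) // 2
    return reach_cells * cell_size_m * frame_rate_hz

print(max_trackable_speed(3, 0.5, 20))   # 10.0 m/s, as stated in the example
print(max_trackable_speed(5, 0.5, 20))   # 20.0 m/s: bigger kernels help, at higher cost
```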
In order to provide an initial estimate about dynamics within the scene, a velocity vector per cell is predicted by processing the hidden state \(h_{t}\) using neural network layers \[v_{t}=f_{\theta_{vel}}(h_{t}) \tag{1}\] with parameters \(\theta_{vel}\) that are optimized during backpropagation. For this purpose, an auxiliary regression loss is applied so that velocities can be trained in a supervised manner. Note that this initial velocity estimation will be refined recurrently to minimize the difference to actual speeds. We translate predicted velocity vectors \(v_{t}\)\([\frac{m}{s}]\) to offset vectors _off\({}_{t}\)\([m]\)_ by incorporating the frame rate of the network. For a given frame rate \(FR\)\([\frac{1}{s}]\), predicted offsets correspond to \[\textit{off}_{t}=\frac{v_{t}}{FR}. \tag{2}\] The predicted offset vector _off\({}_{t}\)_ is a second recurrent state alongside the hidden state \(h_{t}\) so that a concatenation of \(h_{t}\) and _off\({}_{t}\)_, denoted as \(H_{t}\), is propagated to the next iteration. In the following iteration, the Transformation module from Figure 2 projects \(H_{t-1}\) according to previously predicted projection vectors _off\({}_{t-1}\)_ (see Figure 3). We denote the transformed hidden state and relative offsets as \(h^{\prime}_{t-1}\) and _off\({}_{t-1}^{\prime}\)_, respectively. For each projected data entry, we then want the model to verify the correctness of state projection to respective target locations such that correctly predicted velocities are maintained while false predictions of \(v_{t}\) are suppressed. We therefore extract cell-wise embeddings as query-key pairs from both the hidden state \(h_{t-1}\) and inputs to the recurrent cell \(I_{t}\), where \[k=f_{\theta_{q}}(I_{t}) \tag{3}\] and \[q=f_{\theta_{k}}(h_{t-1}). \tag{4}\] A Scaled Dot-Product Attention \[Att(q^{\prime},k)=\text{sigmoid}(\frac{q^{\prime}k^{T}}{\sqrt{d_{h}}}) \tag{5}\] as presented in [27] then denotes correlations between the environment within novel input and memorized data. Here, \(d_{h}\) represents the channel size of the query and key vectors per cell. Finally, before \(h^{\prime}_{t-1}\) is processed by a conventional GRU, it is weighted by \(Att(q,k)\) so that the network is able to control information flow from source to target locations. Finally, the second recurrent state, _off\({}_{t-1}^{\prime}\)_, is recurrently refined by utilizing Att(q', k) to calculate a weighted sum \[\textit{off}_{t}^{+}\ =\ Att(q^{\prime},k)\cdot\textit{off}_{t-1}^{\prime}\ +\ (1-Att(q^{\prime},k))\cdot\textit{off}_{t} \tag{6}\] between memorized and newly predicted offsets. The concept behind this mechanism can be described as follows: Predicting speed vectors using ConvNets within the GRU is a viable inital estimation of velocities. The matching Fig. 3: Representation of an object which is subject to translation between two successive iterations. Object-specific encodings on a cell-basis are projected according to _off\({}_{t-1}\)_. Fig. 2: **Recurrent State Projection Cell**: Spatio temporal misalignment caused by external scene dynamics is resolved by a transformation of \(H_{t-1}\) based on regressed velocities of underlying objects _off\({}_{t-1}\)_. This transformation is then controlled by a gating mechanism which validates the matching accuracy between embeddings of source and target patterns of the underlying objects. The resulting gated memory state is then processed with the input state \(I_{t}\) by an RNN, e.g., a GRU [10]. 
Finally, scene dynamics that determine memory state projection in the next iteration are estimated. attention \(Att(q,k)\) in the following iteration correlates with the accuracy of the prediction of _off\({}_{t}\)_. Utilizing the gating mechanism from Equation 6, the model is able to decide on a per cell basis to keep accurate offset predictions in memory while using newly predicted offsets in case of low correlation. This mechanism allows for a sequential refinement of predicted scene dynamics. **Optional**: The presented approach can by extended by treating predicted offsets _off_ as Gaussian distributions \(\mathcal{N}(\mu_{\textit{off}},\sigma_{\textit{off}}^{2})\) for both directions x and y where all of \(\mu_{\textit{off},x},\ \sigma_{\textit{off},x},\ \mu_{\textit{off},y}\) and \(\sigma_{\textit{off},y}\) are regression targets of the network output presented in Equation 1. We assume an isotropic uncertainty distribution where \(\sigma_{\textit{off},x}=\sigma_{\textit{off},y}\). Incorporating uncertainty scores into iterative refinement of scene dynamics potentially improves velocity estimation as it increases the chance of finding correlating cells for uncertain offset predictions. Loss function \[L_{V}=\frac{1}{N}\ \sum_{i=1}^{N}\ \frac{1}{2\sigma_{\textit{off},i}^{2}}( \textit{off}_{i}^{\prime}-\mu_{\textit{off},i})^{2}\ +\ \frac{1}{2}\ \text{log}\ \sigma_{\textit{off},i}^{2} \tag{7}\] as presented in [28] could be deployed where \(N\) is the amount of training samples and _off\({}^{\prime}\)_ defines the Ground Truth (GT) regression target. Multiple offsets for each state projection could then be sampled from \(\mathcal{N}(\mu_{\textit{off}},\sigma_{\textit{off}}^{2})\) as depicted in Figure 5 such that multiple attention scores per emitting cell are obtained. A softmax function processing correlations for each target location then defines probabilities for various movement patterns. By projecting information based on these probabilities, the model is able to account for a variety of possible motions of external road users. ### _Network Design_ The model we deploy to sequentially process sensor scans for dynamic grid segmentation is shown in Figure 4. Our approach is independent of the utilized sensor type as long as received data is available in a 2D bird's eye view grid structure. Therefore, we define an input stream of sensor data being processed by a generic Preprocessing module (Prep.). For experiments presented in this work, this Preprocessing module consists of successive 3x3 2D ConvNets. These layers transfer input sensor scans of size \(\mathbb{R}^{X\times Y\times S}\) to environment encodings of size \(\mathbb{R}^{X\times Y\times F}\). Here, X and Y define the amount of cells in x and y direction, respectively, and S defines the amount of sensor-specific input channels. Resulting feature maps with a depth of F features are then processed by the recurrent spatio-temporal processing unit from subsection III-A we denote as RNN+ in Figure 4. For an iteration at time \(t\), this module receives two data streams \(I_{t}\in\mathbb{R}^{X\times Y\times F}\) and \(h_{t-1}\in\mathbb{R}^{X\times Y\times M}\) where M denotes the amount of channels per cell in the memory state. RNN+ then outputs predictions _off\({}_{t}\)_ on cell dynamics as well as spatio-temporal patterns of size \(\mathbb{R}^{X\times Y\times F}\) extracted from [\(I_{t}\), \(h_{t-1}\)]. 
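The computations collected in Eqs. (1)-(7) can be sketched in plain NumPy; the tensor shapes, the matrices standing in for learned layers, the assumed 20 Hz frame rate, the log-variance parameterization of the offset head, and the random data are illustrative choices rather than the implementation evaluated in Section IV.

```python
# Minimal NumPy sketch of the Recurrent State Projection Cell's core computations.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X, Y, F, M, d_h = 4, 4, 8, 8, 8
rng = np.random.default_rng(0)

h_proj = rng.standard_normal((X, Y, M))          # projected hidden state h'_{t-1}
inputs = rng.standard_normal((X, Y, F))          # novel input encodings I_t
W_q, W_k = rng.standard_normal((M, d_h)), rng.standard_normal((F, d_h))

q = h_proj @ W_q                                 # Eq. (4): queries from the memory state
k = inputs @ W_k                                 # Eq. (3): keys from the new input
att = sigmoid(np.sum(q * k, axis=-1) / np.sqrt(d_h))   # Eq. (5): per-cell gate in (0, 1)

frame_rate = 20.0                                # assumed sensor frame rate in Hz
v_t = rng.standard_normal((X, Y, 2))             # Eq. (1): predicted velocities [m/s]
off_new = v_t / frame_rate                       # Eq. (2): per-frame offsets [m]
off_prev = rng.standard_normal((X, Y, 2)) / frame_rate  # off'_{t-1} carried in memory

h_gated = h_proj * att[..., None]                # gate the projected memory before the GRU
off_plus = att[..., None] * off_prev + (1 - att[..., None]) * off_new   # Eq. (6)

def offset_regression_loss(mu_off, log_sigma2, target_off):
    """Eq. (7) per cell, with a log-variance head and isotropic uncertainty in x and y."""
    sq_err = np.sum((target_off - mu_off) ** 2, axis=-1)
    return float(np.mean(0.5 * np.exp(-log_sigma2) * sq_err + 0.5 * log_sigma2))

gt_off = rng.standard_normal((X, Y, 2)) / frame_rate    # ground-truth offsets off'
print(att.shape, off_plus.shape,
      offset_regression_loss(off_plus, np.zeros((X, Y)), gt_off))
```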
We then deploy a module we call grid segmentation head consisting of four successive Atrous Spatial Pyramid Pooling (ASPP) [2] layers to map outputs in II to class probabilities per cell to perform grid segmentation. These ASPP layers allow an increased receptive field for grid segmentation by using 2D ConvNets with various dilation rates in parallel. ## IV Experiments In this section, we describe experiments we conducted to evaluate our approach as presented in section III. To this end, we describe how we preprocessed sensor data utilized for our experiments and elaborate on the training setup we applied. Finally, various mechanisms as part of the recurrent spatio-temporal processing unit defined in section III are presented and evaluated against alternative approaches building Fig. 4: **Network Design**: Input data in a two-dimensional Cartesian grid is processed by a preprocessing module consisting of 2D ConvNets. Resulting feature maps are then fed into a recurrent unit, RNN+, where they are processed together with 2D memory states. As depicted in Figure 2, RNN+ outputs cell-wise velocity estimates in x and y direction together with patterns that are extracted from spatio-temporal relations in the data. Finally, a grid segmentation head consisting of Atrous Spatial Pyramid Pooling (ASPP) Layers [2] maps resulting feature maps to class probabilities for each cell. Fig. 5: Offsets are sampled in each iteration from joint Gaussian Distributions in X and Y. on pyramid-like network architectures to perform dynamic grid segmentation. ### _Data Preprocessing_ We obtain data that is used for training and evaluation of approaches presented in this section from a vehicle equipped with various sensors. We then train the model presented in section III to map input sensor scans to GT we derive from lidar recordings. In our egocentric setup, a grid of 160 x 160 cells with a cell resolution of \((0.5m)^{2}\) is projected around the vehicle. For experiments presented in this work, we use data from a sensor returning detections on a x-y plane as input to our model. Ground Truth grid semantics are derived from annotated lidar point clouds. For static environment, lidar recordings from static objects are accumulated over various past and future scans and then projected to the x-y plane. Cells are then assigned to the class that was most dominant in each grid projection. We assign the class label _unknown_ to any cell that is not covered by a single projection. Lidar point clouds are additionally used to annotate bounding boxes for objects like vehicles. We then derive the movement of objects from the absolute shift of these bounding boxes between consecutive frames and transfer resulting velocity vectors in between object boundaries to underlying grid cells. These movement patterns are then used twofold: First, cell-wise speed vectors can directly be used for the velocity estimation. Second, cells containing objects with a velocity above 2\(\frac{m}{s}\) are annotated as moving cells for grid segmentation. Furthermore, we want to train our network merely on those cells that are visible to the sensors. To this end, we calculate the density of lidar rays passing each cell in the vicinity of the host vehicle. Observability maps are then created by assigning a value between 0 and 1 to each cell depending on their lidar rays coverage. These observability maps are then used as a cell-wise loss weighting during training such that unobserved cells do not contribute to the final loss. 
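The observability-based loss weighting described above can be illustrated with a short sketch. The saturation constant, the tensor layout, the restriction of the velocity loss to moving cells, and the normalization are assumptions for illustration; the class-frequency weighting mentioned in the Settings subsection below is simply passed in as a per-class weight vector.

```python
# Hedged sketch: cell-wise observability weighting of the grid segmentation and
# velocity losses. Constants and layouts are assumptions, not the exact setup.
import torch
import torch.nn.functional as F

def observability_weights(ray_density, saturation=8.0):
    # Map per-cell lidar ray counts to weights in [0, 1]; cells that are not
    # covered by any ray receive weight 0 and do not contribute to the loss.
    return torch.clamp(ray_density / saturation, 0.0, 1.0)

def weighted_grid_losses(logits, labels, vel_pred, vel_gt, obs, class_weights):
    """logits: (B, C, X, Y); labels: (B, X, Y) long; vel_*: (B, 2, X, Y);
    obs: (B, X, Y) observability weights; class_weights: (C,) frequency weights."""
    # Cross-entropy per cell, weighted by class frequency and observability.
    ce = F.cross_entropy(logits, labels, weight=class_weights, reduction="none")
    seg_loss = (ce * obs).sum() / obs.sum().clamp(min=1.0)
    # L2 velocity loss, here restricted to observable cells that contain motion
    # (one simple way to handle the moving/static imbalance).
    moving = (vel_gt.abs().sum(dim=1) > 0).float() * obs
    l2 = ((vel_pred - vel_gt) ** 2).sum(dim=1)
    vel_loss = (l2 * moving).sum() / moving.sum().clamp(min=1.0)
    return seg_loss, vel_loss
```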
### _Settings_ Models that are presented in this work are trained utilizing Adam Optimizer [29] with parameter settings \(\beta_{1}\) = 0.9, \(\beta_{2}\) = 0.999 and a learning rate of \(10^{-4}\). We apply Cross Entropy loss for grid segmentation and L2 loss for cell-wise speed prediction. Due to the high imbalance between cells containing velocity values above 0 and cells with a speed regression target of 0, we deploy a cell-wise loss weighting based on global class frequency. We compare models for grid segmentation by calculating the Intersection over Union (IoU) scores for each of the classes _free_, _occupied_, _moving object_ and _unknown_ and then average over all classes to obtain the mean IoU value (mIoU). In order to merely evaluate the models based on those cells that are observable, we only consider cells with an observability weight greater than 0 for calculating the IoU. Accuracy of speed predictions is evaluated based on the Mean Absolute Error (MAE) score across both directions x and y. Here, only cells containing objects with a speed greater than 0 \(\frac{m}{s}\) are considered for evaluation. We train each model for 10 epochs and display results from the last epoch for comparison. ### _Results_ #### Iv-C1 Temporal Integration We initially stated that temporal integration of consecutive sensor scans enables the estimation of external velocities and increases the quality of network predictions due to effects like a richer coverage of the environment and the ability of the network to compensate noisy sensor measurements. In order to elaborate on this assumptions, we trained three networks: The first model is comparable to the one defined in Figure 4 with a standard GRU [10] utilized for RNN+. This network consumes 12 consecutive frames during training and is evaluated statefully. We then compare this model to a second approach which processes consecutive sensor scans individually without utilizing an RNN. Since this model does not contain a memory state, it is unable to integrate information sequentially and must rely entirely on single-frame sensor scans. This model is similar to the model defined in Figure 4 without an RNN placed between I and II. Since this model has a reduced capacity in trainable parameters due to the missing recurrent unit, we train a third model with inflated size for better comparability. For the latter two networks, we further process feature maps in II (Figure 4) by ConvNets comparable to those defined in Equation 2 to regress velocity estimates cell-wise in x and y direction. Evaluating this additional regression target helps to understand to which extent scene dynamics can be derived from correlations between memory state and novel input within an RNN. Results on these experiments, depicted in Table I, show a superior performance of the sequentially processing approach compared to networks processing single sensor scans individually. Approaches without an RNN show a reduced IoU performance but most importantly fail to output an accurate \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & \multicolumn{4}{c}{Intersection over Union} & \multicolumn{2}{c}{MAE} \\ Recurrent Unit & Mean & Free & Unknown & Occupied & Moving & Velocity & Param. 
\\ \hline **None** & 41.67 & 66.71 & 33.97 & 34.03 & 32.0 & 9.29 & 169.0K \\ **None, aligned cap.** & 44.21 & 70.28 & 35.85 & 36.03 & 34.7 & 8.49 & 369.3K \\ **GRU [10]** & **46.61** & **74.73** & **36.62** & **37.45** & **37.65** & **4.69** & 359.9K \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison of a model processing sensor scans individually (Single Frame) with a model exploiting temporal dimension (Multi Frame) by utilizing a GRU. Grid segmentation (IoU) and velocity estimation results (MAE) are shown. velocity prediction. These findings show the strong reliance of models on the exploitation of patterns within sequential sensor scans to capture scene dynamics. #### Iv-B2 Exploiting Scene Dynamics In the previous section we showed that leveraging temporal context within the data significantly improves grid segmentation as well as scene dynamics prediction capabilities when processing sensor scans. The GRU that was deployed uses 3\(\times\)3 ConvNets to exploit spatio-temporal correlations in [\(I_{t}\), \(h_{t-1}\)]. As mentioned in section II, these ConvNets are heavily restricted in capturing a broad range of scene dynamics due to their limited receptive field size. Approaches like [3, 26] partially overcome this problem by deploying ConvNets within the RNN on subsequently downsampled spatio-temporal domains such as [\(I_{t}\), \(h_{t-1}\)]. In Table II, we present results of such an architecture inspired by [3] which we denote Pyramid RNN due to its pyramid-like structure. In this implementation, we use the network architecture from Figure 4 with RNN+ being replaced by the Multi-Scale RNN presented in [3] with reduced amount of channels to make it comparable with the baseline model in terms of trainable parameters. Table II shows that the larger receptive field hereby introduced leads to a small improvement in the velocity MAE compared to the baseline model utilizing a standard GRU. However, we obtain a slightly reduced grid segmentation capability. We then compare the two presented approaches with the purely deterministic method defined in section III. Summing up, this method introduces the following enhancements: 1. Grid independent object tracking within the recurrent unit to derive scene dynamics 2. Utilize these scene dynamics to resolve the spatio-temporal misalignment between [\(I_{t}\), \(h_{t-1}\)] Besides being able to exploit spatio-temporal dependencies while being independent of the grid resolution by utilizing arbitrary speed offsets, our approach explicitly formulates a tracking-based method to capture scene dynamics compared to previous methods that build on intrinsic properties of ConvNets for this task. A visualization of grid segmentation as well as velocity estimation results of our method can be seen from Figure 6. Furthermore, Figure 1 shows how the hidden state projection mechanism as part of our approach leads to a significant reduction of obsolete data from moving objects within the memory state of the network. The quantitative improvement of our method compared to \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{4}{c}{Intersection over Union} & \multicolumn{2}{c}{MAE} \\ Recurrent Unit & Mean & Free & Unknown & Occupied & Moving & Velocity & Param. 
\\ \hline **GRU**[10] & 46.61 & **74.73** & 36.62 & 37.45 & 37.65 & 4.69 & 359.9K \\ **Pyramid RNN**[3] & 45.92 & 73.79 & **37.4** & 36.46 & 36.04 & 4.56 & 346.7K \\ **Ours** & **46.8** & 73.14 & 37.35 & **38.41** & **38.43** & **3.95** & 411.6K \\ \hline \hline \end{tabular} \end{table} TABLE II: Comparison of three approaches for RNN+. Grid segmentation (IoU) and velocity estimation (MAE) results are shown. Fig. 6: Visualization of grid segmentation (top row) and cell-wise velocity estimation (bottom row) results on highway, urban and road scenes. For grid segmentation, green, yellow, grey and red cells indicate network predictions on drivable, occupied, unknown and moving areas, respectively. Cells that are not observable by the input sensor are depicted as white areas in the grid segmentation visualization. prior approaches is presented in Table II. While this evaluation shows a slight improvement of our approach for the grid segmentation task, a superior performance with an improvement of \(\sim\)**15%** can be obtained for velocity regression compared to approaches using an ordinary GRU or Pyramid RNN as defined in [3]. For reasons of comparability, this network uses the same fundamental model as in the GRU-only experiments, while the increased capacity can be attributed to the appended matching function. These results show that our approach, which combines the tracking mechanism introduced in this work for enhanced velocity estimation with the state projection mechanism that resolves spatio-temporal misalignment, presents a novel and improved procedure for temporally integrating sensor data using Recurrent Neural Networks. ## V Conclusion In this work, we presented a Deep Learning-based method to extract scene dynamics by tracking object-related patterns across consecutive sensor scans without being restricted to grid resolutions or velocity limits. Furthermore, we show how our method is able to utilize these velocity patterns by performing a memory state projection to avoid spatio-temporal misalignment between memorized and novel data within the Recurrent Neural Network. Embeddings that are extracted by the network to validate this state projection can be interpreted as object-specific patterns and can therefore be used to track entities on a grid basis over time. We show that our approach outperforms previous Recurrent Neural Networks both on the task of semantically segmenting the environment around a vehicle and on velocity estimation.
2304.08014
Self-Supervised Learning from Non-Object Centric Images with a Geometric Transformation Sensitive Architecture
Most invariance-based self-supervised methods rely on single object-centric images (e.g., ImageNet images) for pretraining, learning features that are invariant to geometric transformations. However, when images are not object-centric, the semantics of the image can be significantly altered due to cropping. Furthermore, as the model becomes insensitive to geometric transformations, it may struggle to capture location information. For this reason, we propose a Geometric Transformation Sensitive Architecture designed to be sensitive to geometric transformations, specifically focusing on four-fold rotation, random crop, and multi-crop. Our method encourages the student to be sensitive by predicting rotation and using targets that vary with those transformations through pooling and rotating the teacher feature map. Additionally, we use a patch correspondence loss to encourage correspondence between patches with similar features. This approach captures long-term dependencies in a more appropriate way than encouraging local-to-global correspondence, which occurs when learning to be insensitive to multi-crop. Our approach demonstrates improved performance when using non-object-centric images as pretraining data compared to other methods that train the model to be insensitive to geometric transformations. We surpass the DINO [Caron et al. 2021b] baseline in tasks including image classification, semantic segmentation, detection, and instance segmentation with improvements of 4.9 $Top-1 Acc$, 3.3 $mIoU$, 3.4 $AP^b$, and 2.7 $AP^m$. Code and pretrained models are publicly available at: https://github.com/bok3948/GTSA
Taeho Kim, Jong-Min Lee
2023-04-17T06:32:37Z
http://arxiv.org/abs/2304.08014v7
Self-Supervised Learning from Non-Object Centric Images with a Geometric Transformation Sensitive Architecture ###### Abstract Most invariance-based self-supervised methods rely on single object-centric images (e.g., ImageNet images) for pretraining, learning features that invariant to geometric transformation. However, when images are not object-centric, the semantics of the image can be significantly altered due to cropping. Furthermore, as the model becomes insensitive to geometric transformations, it may struggle to capture location information. For this reason, we propose a Geometric Transformation Sensitive Architecture designed to be sensitive to geometric transformations, specifically focusing on four-fold rotation, random crop, and multi-crop. Our method encourages the student to be sensitive by predicting rotation and using targets that vary with those transformations through pooling and rotating the teacher feature map. Additionally, we use patch correspondence loss to encourage correspondence between patches with similar features. This approach allows us to capture long-term dependencies in a more appropriate way than capturing long-term dependencies by encouraging local-to-global correspondence, which occurs when learning to be insensitive to multi-crop. Our approach demonstrates improved performance when using non-object-centric images as pretraining data compared to other methods that train the model to be insensitive to geometric transformation. We surpass DINO[Caron et al. (2021)] baseline in tasks including image classification, semantic segmentation, detection, and instance segmentation with improvements of 4.9 \(Top-1Acc\), 3.3 \(mIoU\), 3.4 \(AP^{b}\), and 2.7 \(AP^{m}\). Code and pretrained models are publicly available at: [https://github.com/bok3948/GTSA](https://github.com/bok3948/GTSA) ## 1 Introduction Invariance-based methods are one of the primary self-supervised learning approaches for computer vision. These methods learn to be insensitive to various transformations, such as rotations, flips, crops, color jittering, blurring, and random grayscale, which provide an inductive bias that helps with representation learning[Chen and He (2020); Bardes et al. (2022); Zbontar et al. (2021)]. Augmentations employed in self-supervised learning methods can be divided into two categories: photometric transformations and geometric transformations. Photometric transformations, such as color jittering, Gaussian blurring, and grayscale conversion, involve changes to the appearance of an image, like color, brightness, or contrast. geometric transformations, including random crop, multi-crop, flip and rotation, deal with changes to the spatial configuration of the image. In the case of pretraining with non-object centric image, Learning invariant features from crop-related geometric transformations can be problematic. This is because cropped views may not always depict the same object[Purushwalkam and Gupta (2020); Zhang et al. (2022)]. In contrast, object-centric images are less prone to such issues due to their inherent focus on specific objects, which remain semantically consistent across various augmentations. This explains the significant performance drop observed when applying invariant methods to non-object centric images[Purushwalkam and Gupta (2020); El-Nouby et al. (2021)]. It can also be one of the reasons that, to obtain comparable results with curated datasets, a considerably larger amount of uncurated data is required[Goyal et al. (2021)]. 
Furthermore, when learning to be insensitive to geometric transformations, there is a risk of not capturing location information, and dense prediction models need to be sensitive to these transformations rather than insensitive. Therefore, being insensitive to geometric transformations may not be appropriate. As mentioned earlier, training a model to be insensitive to geometric transformations may lead to noise in learning. However, these transformations can still be beneficial for representation learning since they prevent pathological training behavior[Chen et al. (2020)] and provide diversity in inputs. Therefore, we propose a method that focuses on training a model to be sensitive with respect to those transformations, instead of insensitive to those transformations. To achieve this, we must provide a target that varies according to the input's geometric transformation during training. to create a target that varies with cropping, we pool the overlapping region from the teacher's feature map, which can be seen as cropping, and provide it to the student as a target. Additionally, to make the model sensitive to four-fold rotations, we rotate the target feature map to align it appropriately with the student input, and we include a prediction task to predict the degree of rotation of the student's input. Furthermore, we use a patch correspondence loss in our approach. When learning invariant features through multi-crop inputs, it will encourage global-to-local correspondence[Caron et al. (2021); Caron et al. (2021)], resulting in the capture of long-term dependencies. Our model uses an additional loss that encourages correspondence between patch representations through cosine similarity, allowing us to capture long-term dependencies [Bardes et al. (2022)]. Unlike encouraging correspondence between randomly selected crops, our approach induces correspondence between those that are similar in feature space, leading to more accurate correspondence. Our experiments demonstrate that when using non-centric images as pretraining data, it is more advantageous to train a model to be sensitive to geometric transformations rather than insensitive. We significantly outperformed prominent invariance-based methods in various tasks, including image classification, semantic segmentation, detection, and instance segmentation. ## 2 Related Work **Non-Contrative Learning** Non-contrastive learning methods aim to learn an invariant bias towards transformations by training on different views of the same image, without explicit negative samples[Garrido et al. (2022)]. In the absence of negative samples, non-contrastive learning methods employ various alternative approaches to prevent representation collapse. These include non-contrastive losses that minimize redundancy across embeddings[Bardes et al. (2022);Zbontar et al. (2021)], clustering-based techniques that maximize the entropy of the average embedding[Caron et al. (2019); Caron et al. (2021); Goyal et al. (2021); Assran et al. (2022)], centering and sharpening output features[Caron et al. (2021)], and heuristic strategies utilizing asymmetric architectural design with stop-gradient, additional predictors, and momentum encoders[Richemond et al. (2020); Chen and He (2020); Tian et al. (2021)]. Our method belongs to the non-contrastive learning category and adopts an asymmetric architectural design to prevent representation collapse. 
**Self-Supervised Learning with Uncurated Dataset** Several self-supervised pretraining methods have been proposed for uncurated datasets, such as the clustering-based method presented in [Caron et al. (2021); Tian et al. (2021)]. These methods have shown good performance even when using uncurated datasets, and [Goyal et al. (2021)] demonstrated the scalability of their method to larger datasets for increased performance. Additionally, [El-Nouby et al. (2021)] showed that, given sufficient iterations, even a small non-object centric dataset can yield results that are comparable to those obtained using a larger, highly curated dataset. However, our approach differs from other methods that aim to adapt to uncurated datasets. While clustering-based techniques are used in these methods, they still learn invariant representations to augmentations, whereas our approach learns features that are sensitive to geometric transformations. On the other hand, [Tian et al. (2021)] aims to address the shift in the distribution of image classes rather than object-centric bias. **Self-Supervised Methods that Learn to be Sensitive to Transformations** Early self-supervised learning methods, such as [Noroozi and Favaro (2017); Yamaguchi et al. (2021)], train the model to be sensitive to transformations by predicting the permutation or rotation applied to the input. As contrastive learning has gained prominence in representation learning, the importance of learning transformation-invariant representations has become increasingly evident[Misra and van der Maaten (2019); Chen et al. (2020); He et al. (2020)]. More recent work has utilized a hybrid approach that is sensitive to some transformations and insensitive to others [Dangovski et al. (2022)]. Performance improvement has been achieved by training to be sensitive to four-fold rotation while insensitive to other transformations. Similarly, our model also learns to be sensitive to four-fold rotations and insensitive to other photometric transformations. However, our method additionally becomes sensitive to crop-related transformations. ## 3 Methods In this section, we describe the training procedure for our proposed GTSA method, as illustrated in Figure 1. We adopt an asymmetric Teacher-Student architecture, similar to those in [Richemond et al. (2020); Chen and He (2020)]. The Teacher comprises an encoder and a projector, while the Student includes an encoder, projector, and predictor. Following the multi-crop strategy used in [Caron et al. (2021); Caron et al. (2021)], we feed only the global view to the Teacher and both global and local views to the Student. Our objectives involve maximizing the similarity between overlapping region representations and similar patch representations and predicting rotation. **Inputs.** Similar to [Ziegler and Asano (2022)], we utilized various augmentation techniques including color jitter, random grayscale, random Gaussian noise, Gaussian blur, random resize crop, and multi-crop. Additionally, we employed four-fold rotation. For each image, we apply random augmentations and generate G global views and L local views. The inputs are \([x_{1}^{g},x_{2}^{g},\dots,x_{1}^{l},x_{2}^{l},\dots]\), where \(g\) and \(l\) indicate global and local view, respectively. \(x\) represents batchified images, and \(x\in\mathbb{R}^{B\times C\times H\times W}\). Here, \(B\) is the batch size, \(C\) represents the number of image channels, and \(H\) and \(W\) denote the image size. Figure 1: **The GTSA. 
The Geometric Transformation Sensitive Architecture (GTSA). The teacher receives global views only, while the student receives both global and local views. The learning process is designed to increase the similarity in overlapping regions and to predict the four-fold rotation. Additionally, to capture long-term dependencies, GTSA encourages similarity between the teacher's patch representations and the student's patch representations by matching these patch representations using cosine similarity.** **Teacher and Student.** Apart from the additional predictor attached to the Student, the Teacher and Student share the same structural design. We utilized the ViT[Dosovitskiy et al. (2021)] as the encoder and employed stacked CNN blocks for the projector, each block comprising a convolution layer, layer normalization[Ba et al. (2016)], GELU activation[Hendrycks and Gimpel (2020)] and a residual connection[He et al. (2015)]. The predictor has a similar architecture to the projector but uses fewer CNN blocks in its composition. We denote the predictor and projector as H and U, respectively. Note that neither the projector nor the predictor reduces the spatial resolution of the encoder's output. The Teacher does not get updated through gradient descent; instead, its weights follow the Student's weights using an exponential moving average [Tarvainen and Valpola (2018); He et al. (2019)]. **Correspondence Region Pooling Operator.** We introduce a correspondence region pooling operator, denoted as \(\Phi(\cdot)\). To be sensitive to crop-related augmentations, the student must receive a target that reflects the crop augmentation. The \(\Phi(\cdot)\) operator serves this purpose by cropping specific locations in the feature space. It extracts the overlapping portions between the teacher view and student view in the feature space. To accomplish this, we first calculate the overlap region bounding boxes in the input space and scale them to match the feature spatial resolution. We then apply the \(\Phi(\cdot)\) operator to both the student and teacher feature maps. We implement this operator using RoI-Align [He et al. (2018)]. **Rotation Operator.** To be sensitive to rotation, we propose a rotation operator. This operator rotates the teacher output feature map according to the input rotation. We denote this as \(R(\cdot)\). By using this operator, the student receives a target that reflects the input rotation. **Rotation Predictor.** As we employ the rotation prediction pretext task, we extract a vector from the student encoder output using a Global Average Pooling layer[Lin et al. (2014)]. This vector is then input into the rotation predictor, which generates logits for the rotation prediction pretext task. The architecture of this process includes a linear layer, GELU activation, and a normalization layer. We denote the rotation predictor as P. **Loss Function.** We denote the output feature map of the student predictor as \(z\) and the output feature map of the teacher's projection layer as \(\tilde{z}\). \(z\in\mathbb{R}^{B\times D\times h_{s}\times w_{s}}\) and \(\tilde{z}\in\mathbb{R}^{B\times D\times h_{t}\times w_{t}}\), where \(B\) is the batch size, \(D\) is the feature dimension, \(h_{s}\) and \(w_{s}\) represent the spatial size of the student feature map, and \(h_{t}\) and \(w_{t}\) represent the spatial size of the teacher feature map. We apply \(\Phi\) to both \(z\) and \(\tilde{z}\), and additionally apply \(R\) to \(\Phi(\tilde{z})\).
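For illustration, a minimal sketch of how the correspondence region pooling operator \(\Phi(\cdot)\) and the rotation operator \(R(\cdot)\) could be realized is shown below. The box convention, the output size, and the use of torchvision's roi_align are assumptions for the sketch rather than the exact implementation; the pooled and rotated maps can then be compared per location by the cosine-similarity loss defined next.

```python
# Hedged sketch of the pooling operator Phi and rotation operator R.
# Helper names and argument conventions are assumptions for illustration.
import torch
from torchvision.ops import roi_align

def phi(feat, overlap_boxes_img, img_size, out_size):
    """Pool the overlapping region from a feature map.

    feat:              (B, D, h, w) student or teacher feature map.
    overlap_boxes_img: (B, 4) overlap boxes (x1, y1, x2, y2) in input-image pixels,
                       computed from the crop parameters of the two views.
    img_size:          (H, W) of the view that produced `feat`.
    out_size:          (h_o, w_o) spatial size of the pooled region.
    """
    B = feat.shape[0]
    scale_x = feat.shape[3] / img_size[1]   # map image coords to feature coords
    scale_y = feat.shape[2] / img_size[0]
    scale = torch.tensor([scale_x, scale_y, scale_x, scale_y], device=feat.device)
    boxes = overlap_boxes_img * scale
    idx = torch.arange(B, device=feat.device, dtype=boxes.dtype).unsqueeze(1)
    rois = torch.cat([idx, boxes], dim=1)   # (B, 5): batch index + box
    return roi_align(feat, rois, output_size=out_size, aligned=True)

def rotate(feat, k):
    """R: rotate the teacher feature map by k * 90 degrees so that its orientation
    matches the (four-fold rotated) student input."""
    return torch.rot90(feat, k=k, dims=(2, 3))
```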
Both \(\Phi(z)\) and \(R(\Phi(\tilde{z}))\) will have the same dimensions, \(\Phi(z),R(\Phi(\tilde{z}))\in\mathbb{R}^{B\times D\times h_{o}\times w_{o}}\). We then compute the cosine similarity between them along the feature dimension. The equation is as follows: \[l(z,\tilde{z})=-\frac{1}{B\times T}\sum_{i=1}^{B}\sum_{t=1}^{T}\frac{\Phi(z_{i})_{t}\cdot R(\Phi(\tilde{z}_{i}))_{t}}{\|\Phi(z_{i})_{t}\|\|R(\Phi(\tilde{z}_{i}))_{t}\|} \tag{1}\] where \(i\) is the index of the batch, \(t\) indicates the \(t\)-th spatial location in the output of the operator, and \(T\) is \((h_{o}\times w_{o})\). The cosine similarity is calculated as the dot product of the two vectors divided by the product of their magnitudes. For the rotation prediction pretext task, we use a rotation prediction loss, denoted \(l_{rp}\), which we design as a cross-entropy loss. The equation is as follows: \[l_{rp}(\mathbf{y},\mathbf{\hat{y}})=-\frac{1}{B}\sum_{i=1}^{B}\mathbf{y}_{i}\cdot\log\mathbf{\hat{y}}_{i} \tag{2}\] where \(\mathbf{y}_{i}\) is the target indicating the \(i\)-th sample's rotation angle and \(\mathbf{\hat{y}}_{i}\) is the Rotation Predictor's output probability distribution for the \(i\)-th sample. In this case, as we use four-fold rotation, the possible rotation angles are [0, 90, 180, 270] degrees. Additionally, we use a patch correspondence loss, denoted \(l_{pc}\). We pair semantically similar patches from the teacher and student patch representations based on their cosine similarity and make them more alike using the \(l_{pc}\) loss. However, as noise may exist in this process, we do not encourage correspondence between all patches. Instead, we only encourage the similarity of the top-\(K\) most similar features among all patches, the same as [Bardes et al. (2022b)]. This helps to ensure that we are aligning semantically similar patches while avoiding pairing patches that are too dissimilar. \[l_{pc}(z,\tilde{z})=-\frac{1}{B\times K}\sum_{i=1}^{B}\sum_{p=1}^{K}\frac{z_{i,p}\cdot\tilde{z}_{i,\tilde{p}}}{\|z_{i,p}\|\|\tilde{z}_{i,\tilde{p}}\|} \tag{3}\] Here, \(z_{i,p}\) refers to the \(p\)-th patch representation of the \(i\)-th sample from the student, and \(\tilde{z}_{i,\tilde{p}}\) denotes the patch representation that is closest to \(z_{i,p}\) among the patch representations of the teacher's \(i\)-th sample. \(K\) represents the total number of matched pairs that are kept after filtering. To consider multi-crop scenarios, we use the following total loss function: \[\text{Loss}=\frac{1}{G(L+G)-G}\sum_{g=1}^{G}\sum_{\begin{subarray}{c}l=1\\ l\neq g\end{subarray}}^{L+G}l(z_{l},\tilde{z}_{g})+\alpha\cdot\frac{1}{G(L+G)-G}\sum_{g=1}^{G}\sum_{\begin{subarray}{c}l=1\\ l\neq g\end{subarray}}^{L+G}l_{pc}(z_{l},\tilde{z}_{g})+\beta\cdot\frac{1}{L+G}\sum_{l=1}^{L+G}l_{rp}(\mathbf{y}_{l},\mathbf{\hat{y}}_{l}) \tag{4}\] Here, \(\alpha\) and \(\beta\) are hyperparameters that control the impact of \(l_{pc}\) and \(l_{rp}\), respectively. \(G\) and \(L\) denote the total number of global views and local views, respectively. ## 4 Experiments Our experiments are organized into three subsections. In Section 4.1, we demonstrate the effectiveness of GTSA in learning high-quality representations from non-object centric images. We pretrained our model on the COCO train2017 dataset [Lin et al. (2015)], a collection composed of non-object centric images.
Then, we evaluated the performance of our model by fine-tuning it on various downstream tasks, specifically Classification, Detection, Instance Segmentation, and Semantic Segmentation. In addition, we pretrained our model on the ADE20K train dataset [Zhou et al. (2018)], which also consists of non-object centric images. However, due to the larger size of other downstream task datasets compared to the pretraining dataset, we restricted our evaluation to Semantic Segmentation on the ADE20K dataset. In Section 4.2, we demonstrated that our model operates as intended, showcasing its sensitivity to rotation and crop-related transformations. In Section 4.3, we compiled the results from our ablation study. We illustrated the effects of rotation prediction loss and patch correspondence, and through visualization of encouraged patch pairs, we verified that our method encourages correspondence even with patches that are far apart. **Pretrain Setup.** We used the same hyperparameters as DINO, as much as possible. Specifically, we set the batch size to 512, the global view size to 224x224, and the local view size to 96x96. We also used a scheduler to start the momentum at 0.996, just like DINO, and gradually increased it to 1. For the optimizer, we employed AdamW[Loshchilov and Hutter (2019)] and set both \(\alpha\) and \(\beta\) to 0.5. When pretraining with the ADE20K train dataset, we changed the jitter strength, which is a \begin{table} \begin{tabular}{l c c} \hline \hline Method & Top-1 Acc & Top-5 Acc \\ \hline rand init & 40.6 & 69.0 \\ MoCo v3 & 48.8 & 76.3 \\ DINO & 54.8 & 82.9 \\ GTSA (Ours) & **59.7** & **85.7** \\ \hline \hline \end{tabular} \end{table} Table 1: **The iNaturalist 2019 image classification performance. performance was obtained by pretraining all models with 100 epochs on COCO train2017, followed by fine-tuning for 300 epochs using the same settings for all models.** hyperparameter used for color jittering, from 1.0 to 0.2 and didn't normalize the encoder's output, and also \(\beta\) to 0.25. Apart from these differences, all other settings were the same. Our default model is ViT-S/16, and we pretrained it for 100 epochs with 8 NVIDIA GeForce RTX 3090 GPUs. ### Fine-tuning **Baseline.** We chose Dino and MoCo v3[Chen et al. (2021)] as our baseline methods. These methods are highly relevant to our research as they, like us, employ the Vision Transformer (ViT) as an encoder and focus on training the model to be insensitive in a self-supervised manner. Thus, they provide a compelling counterpoint to our approach which train the model to be sensitive to geometric transformations. The official codes from these baseline methods were leveraged in producing our results. To ensure a fair and balanced comparison, all methods underwent pretraining under the same conditions: 100 pretraining epochs and the use of the ViT-S/16 encoder. Moreover, the fine-tuning process was executed in an entirely same manner across all methods. **Image Classification.** We compared our method with other self-supervised methods in terms of image classification performance when fine-tuning on the iNaturalist 2019 dataset [Horn et al. (2018)]. Table 1 shows that our method outperforms other methods that learn only invariant features. We achieve a 4.9 and 10.9 higher accuracy compared to DINO and MoCo-v3, respectively, and a 19.1 accuracy improvement compared to random initialization. 
**Detection and Instance Segmentation.** Table 2 shows the performance of our method on COCO detection and instance segmentation tasks. GTSA outperforms DINO and MoCo-v3 by 3.4 and 6.2 \(AP^{b}\) in detection, and by 2.7 and 5.2 \(AP^{m}\) in instance segmentation, respectively. All models were fine-tuned using Mask R-CNN [He et al. (2018)] and FPN [Lin et al. (2017)] under the standard 1x schedule. **Semantic Segmentation.** Table 3 reports the performance of on ADE20K semantic segmentation using the Acc, mIoU, and mAcc metrics with all methods are pretrained with COCO train2017 dataset. While DINO achieves a 27.3 mIoU, GTSA attains a higher performance of 30.6 mIoU, which is a 3.3 mIoU improvement. Moreover, our method outperforms MoCo-v3 by 7.1 mIoU. All models were fine-tuned using Semantic FPN [Kirillov et al. (2019)] under the standard 40k iteration schedule, following the same approach as in [Yun et al. (2022)]. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & \multicolumn{2}{c}{Detection} & \multicolumn{3}{c}{Instance Segmentation} \\ \cline{2-6} & \(AP^{\text{b}}\) & \(AP^{\text{b}}_{50}\) & \(AP^{\text{b}}_{75}\) & \(AP^{\text{m}}\) & \(AP^{\text{m}}_{50}\) & \(AP^{\text{m}}_{75}\) \\ \hline rand init & 23.2 & 42.3 & 22.5 & 23.0 & 40.0 & 23.3 \\ MoCo v3 & 29.6 & 50.1 & 30.4 & 28.3 & 47.7 & 29.2 \\ DINO & 32.4 & 54.2 & 33.8 & 30.8 & 51.1 & 32.2 \\ GTSA(ours) & **35.8** & **57.8** & **38.5** & **33.5** & **54.7** & **35.3** \\ \hline \hline \end{tabular} \end{table} Table 2: **The COCO 2017 detection and instance segmentation performance.** Performance was obtained by pretraining all models with 100 epochs on COCO train2017, followed by fine-tuning for a standard \(1\times\) schedule using the same settings for all models. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & aAcc & mIoU & mAcc \\ \hline rand init & 64.5 & 12.1 & 16.1 \\ MoCo v3 & 72.7 & 23.5 & 31.7 \\ DINO & 74.7 & 27.3 & 35.9 \\ GTSA (Ours) & **76.4** & **30.6** & **40.0** \\ \hline \hline \end{tabular} \end{table} Table 3: **The ADE20K image semantic segmentation performance.** Performance was obtained by pretraining all models with 100 epochs on COCO train2017, followed by fine-tuning for a standard 40K iteration schedule. Table 4, also reports the performance of on ADE20K semantic segmentation but differ in that in this table use ADE20K train dataset as pretraining data. GTSA outperforms DINO and MoCo with improvement 2.6 mIoU. fine-tuning setting are all same to Table 3. ### Proving sensitivity to geometric transformations In this section, we present Figure 2 to showcase our model's sensitivity to various transformations. We designed this experiment by measuring the variance in the output based on input transformations. Specifically, we generated ten views with single type of augmentation and fed these views into a model pretrained using the DINO method and another pretrained using our method. We then measured the variance of the encoder output generated by global average pooling. Here, we utilized the COCO val2017 dataset to compute the mean of variance, which is denoted on the y-axis. As shown in Figure 2, both DINO and GTSA learned to be invariant to color jittering, resulting in a very low variance. However, for four-fold rotation and crop-related transformations, GTSA exhibited a substantially higher variance compared to DINO. The crop-related transformations was implemented using a random resize crop, creating two global views and eight local views. 
It is important to note that the exact same inputs were fed into both the DINO and GTSA. ### Ablation study. In this section, we demonstrate the performance enhancement achieved through \(l_{pc}\) and \(l_{rp}\), and we present a figure that visualizes matched pairs. This illustrates that even distant patches are matched, confirming that correspondence is encouraged over long distances. As displayed in Table 5, we set \(l\) as the baseline and showed performance improvement with the addition of \(l_{pc}\) and \(l_{rp}\). For simplicity, we pretrain on the ADE20K train dataset for 100 epochs and report the results for Semantic Segmentation on the ADE20K dataset. We observed a 0.4 mIoU increase upon adding \(l_{pc}\), and an additional 0.4 mIoU increase when \(l_{rp}\) was incorporated. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & aAcc & mIoU & mAcc \\ \hline rand init & 64.5 & 12.1 & 16.1 \\ MoCo v3 & 66.3 & 13.6 & 19.0 \\ DINO & 65.5 & 13.6 & 18.8 \\ GTSA (Ours) & **68.4** & **16.2** & **22.6** \\ \hline \hline \end{tabular} \end{table} Table 4: **The performance of ADE20K image semantic segmentation when pretraining on the ADE20K train dataset.** Performance was obtained by pretraining all models with 100 epochs on ADE20K train dataset, followed by fine-tuning for a standard 40K iteration schedule. Figure 2: **Output variance comparison.** This figure demonstrates how sensitive the DINO and our model are to input transformations. The y-axis represents the mean of output variance of the pretrained encoder. It demonstrates that our model is more sensitive to geometric augmentations than DINO. Figure 3 visualizes matched pairs. We used GTSA, which was pretrained for 100 epochs on the COCO train2017 dataset, and input images from COCO val2017 to obtain the matched pairs. From the left image, we can see that matching occurs between parts that depict columns, even if they are not precisely the same column. Similarly, in the right image, we see matching between two parts, both depicting a wall, despite being located at a distance from each other. This demonstrates that our method encourages the capture of long-term dependencies as we intended. ## 5 Conclusion We propose the Geometric Transformation Sensitive Architecture (GTSA) as a self-supervised method designed for non-object centric images. Our approach trains the model to be sensitive to geometric transformations, specifically rotation and crop-related transformations, by utilizing targets that reflect geometric transformations. Experimental results demonstrate that our method outperforms other transformation-invariant methods when pretrained on non-object centric images. **Limitations and Future Works.** Our method does not learn to be sensitive to all types of geometric transformations. Specifically, it is trained to be sensitive to four-fold rotations and crop-related transformations. In the future, we aim to explore its effectiveness when made sensitive to a broader range of geometric transformations. Moreover, we will conduct research to achieve superior performance on curated datasets.
2308.01483
Efficient neural supersampling on a novel gaming dataset
Real-time rendering for video games has become increasingly challenging due to the need for higher resolutions, framerates and photorealism. Supersampling has emerged as an effective solution to address this challenge. Our work introduces a novel neural algorithm for supersampling rendered content that is 4 times more efficient than existing methods while maintaining the same level of accuracy. Additionally, we introduce a new dataset which provides auxiliary modalities such as motion vectors and depth generated using graphics rendering features like viewport jittering and mipmap biasing at different resolutions. We believe that this dataset fills a gap in the current dataset landscape and can serve as a valuable resource to help measure progress in the field and advance the state-of-the-art in super-resolution techniques for gaming content.
Antoine Mercier, Ruan Erasmus, Yashesh Savani, Manik Dhingra, Fatih Porikli, Guillaume Berger
2023-08-03T00:42:30Z
http://arxiv.org/abs/2308.01483v1
# Efficient neural supersampling on a novel gaming dataset ###### Abstract Real-time rendering for video games has become increasingly challenging due to the need for higher resolutions, framerates and photorealism. Supersampling has emerged as an effective solution to address this challenge. Our work introduces a novel neural algorithm for supersampling rendered content that is \(4\times\) more efficient than existing methods while maintaining the same level of accuracy. Additionally, we introduce a new dataset which provides auxiliary modalities such as motion vectors and depth generated using graphics rendering features like viewport jittering and mipmap biasing at different resolutions. We believe that this dataset fills a gap in the current dataset landscape and can serve as a valuable resource to help measure progress in the field and advance the state-of-the-art in super-resolution techniques for gaming content. ## 1 Introduction Real-time rendering has become increasingly difficult for video games due to the demand for higher resolutions, framerates and photorealism. One solution that has recently emerged to address this challenge consists in rendering at lower resolution and then use an upscaling technique to achieve the desired resolution. However, developing efficient upscaling solutions that balance speed and accuracy remains a challenge. Recently, several commercial solutions have been developed for gaming super-resolution, including those that are based on deep learning (DL) such as Nvidia's DLSS [36] or Intel's XeSS [11], as well as solutions that do not rely on machine learning, such as AMD's FSR [18, 19]. Despite the availability of these commercial solutions, there has been relatively little published research on the application of DL-based super-resolution for gaming. We believe that one of the reasons why DL-based super-resolution for gaming has received little attention compared to super-resolution of natural content is that there is currently no standard, publicly available dataset for developing gaming-specific super-resolution solutions. Researchers and developers who want to study or improve upon existing methods must create their own datasets, which can be a time-consuming and resource-intensive process. Our work makes the following contributions: * we release a dataset specifically designed for the research and development of gaming super-resolution algorithms. We show that models trained on this dataset Figure 1: Example images produced by our solution using neural networks of different sizes. These models produce 1080p outputs in respectively \(1.08\) ms, \(1.53\) ms, and \(3.52\) ms on an RTX 3090, which is \(4\times\) to \(12\times\) faster than previous work by Xiao _et al_. [59]. Figure 2: Example of data modalities available in the QRISP dataset. _First row, from left to right:_ Native 270p, Negative 2 mipmap biased 270p, Negative 1.58 mipmap biased 360p, Negative 1 mipmap biased 540p. _Second row, from left to right:_ 540p depth, 540p motion vectors, Native 1080p, Enhanced 1080p can compete and outperform the quality levels obtained by commercial solutions such as DLSS [36]. * we propose an efficient gaming super-resolution architecture which leverages auxiliary modalities (sub-pixel accurate motion vectors, depth) and graphics rendering features (viewport jittering, mipmap biasing) commonly-used for temporal anti-aliasing. Our solution is \(4\times\) more efficient than previous published work [59] for the same level of accuracy. 
Overall, we believe that this work provides a new resource to measure progress in the field and help advance the state-of-the-art in gaming super-resolution. ## 2 Related work Generic super-resolution.In recent years, DL-based approaches for super-resolution of natural content have become increasingly popular [13, 14, 52, 25, 38, 34, 37, 48], yielding state-of-the-art visual quality compared to interpolation and other algorithmic solutions. In this work, we focus mainly on approaches that exploit information gathered from consecutive frames, as multi-frame super-resolution (also called temporal supersampling in the gaming field) has become the de facto standard for video gaming [36, 11, 19]. Specifically, we consider online super-resolution architectures that can be efficiently stepped forward, as offline video enhancement approaches based on bidirectional mechanisms [24, 53, 9, 32] or sliding windows of input frames [8, 57, 56, 31] are not suitable for gaming applications. Efficient online multi-frame super-resolution is often based on recurrent convolutional architectures, either with explicit motion compensation [49] or without [16, 27, 26]. Alternatives to explicit motion compensation include architectures based on deformable convolutions [57], transformers [51, 1, 33] or dynamic upsampling filters [28]. In gaming, however, explicit motion compensation is usually preferred, as the game engine can provide precise motion vectors and the neural network can be made much smaller if it doesn't have to learn how to copy past information over long distances. Gaming supersampling.Temporal Anti-Aliasing (TAA) [62, 61, 29] and its upscaling forms [22, 17, 19] exploit samples from past frames to recover missing details in the current frame. Compared to single-frame anti-aliasing techniques [3, 39, 46], TAA has gained popularity over the past decade as it provides a good trade-off between accuracy and practicality, even in the context of deferred rendering where MSAA [3] becomes bandwidth prohibitive. A typical TAA pipeline includes a re-projection step [44, 50] to re-align historical data using accurate motion vectors, a history validation step to reject or rectify past samples that are invalid or stale due to _e.g_. occlusion, lighting or shading changes, and a blending (or accumulation) step to produce the final output. While TAA-based super-resolution approaches such as FSR2 [19] leverage hand-engineered heuristics to perform history validation and accumulation, DLSS [36], XeSS [11] and Xiao _et al_.'s work [59] have showed that these steps can be replaced by a neural net. In the rest of the paper, we compare our algorithm mainly against Xiao _et al_.'s [59], as the implementation details of DLSS and XeSS are not publicly available and, therefore, not reproducible. Graphics features traditonally used with supersampling.Viewport jittering and negative mipmap biasing are two rendering techniques that are traditionally used to boost super-resolution accuracy. Viewport jittering consists in applying a sub-pixel shift to the camera sampling grid, and it is most useful when the camera is stationary as it ensures that consecutive frames contain complementary information about the scene. The subpixel offset typically follows Figure 3: Example of color, depth and motion vector images from the QRISP dataset. In total, our dataset contains 8760 frames (7260 for training, 1500 for testing) from 13 distinct scenes, rendered at different resolutions ranging from 270p to 1080p. 
More details can be found in the supplementary materials. a fixed-length sequence parameterized by, for example, a Halton sequence. Negative mipmap biasing, on the other hand, reduces the amount of low-pass prefiltering applied to textures, resulting in low-resolution renders with more high-frequency details. Related datasetsWhile there are many datasets for single-frame [2, 6, 63, 40, 23] and video [43, 60] super-resolution of natural content, there is no publicly available dataset for gaming super-resolution. To the best of our knowledge, among existing datasets, Sintel [7] would be the closest candidate as it consists of synthetic images and provides motion vectors. It is however available at only one resolution, which is problematic because DL-based super-resolution models trained to reconstruct images from artificially downsized images tend to overfit the degradation method [35]. Besides, Sintel does not provide jittered or mipbiaised samples, two key ingredients for gaming super-sampling. Our dataset is also significantly larger than Sintel (\(5\times\) more frames, and available at higher resolution). ## 3 The Qualcomm Rasterized Images for Super-resolution Processing dataset The _Qualcomm Rasterized Images for Super-resolution Processing_ dataset, referred later in this paper as QRISP, was specifically designed to facilitate the development and research of super-resolution algorithms for gaming applications. To the best of our knowledge, this dataset has no publicly available equivalent. Data modalities.The QRISP dataset consists of sequences of rasterized images captured at 60 frames per second with multiple modalities rendered at different resolutions ranging from 270p to 1080p. For each frame, the dataset includes color, depth, and motion vectors with different properties such as mipmapbiasing, jittering, or both. Jittered samples are achieved by shifting the camera using a sub-pixel offset drawn from a cyclic Halton\((2,3)\) sequence of length 16 and we occasionally include stationary segments in the camera path to make models trained on this dataset more robust to the "static" scenario. For low-resolution renders, we disable MSAA or any other frame-blurring anti-aliasing techniques and adjust the texture mip levels using a negative offset set to \(-log_{2}(S)\) where \(S\) is the per-dimension scaling factor, as typically done in gaming supersampling [29, 36, 18, 19, 59]. For high-res images, we target high-quality 1080p color images which were obtained by 2x-downsizing 2160p renders with MSAAx8 applied, as done in [59]. Figure 2 shows an example of such an "enhanced" target image (see the bottom-right crop), along with the corresponding low-resolution renders. Scene diversity and composition.The dataset is diverse, with a variety of backgrounds and models to enable better generalization to new video games. There are 13 scenes in Figure 4: High-level overview of our multi-frame supersamping approach _(top-left)_ and detailed description of its individual components: a warping module _(top-right)_ and a reconstruction neural network _(bottom)_. \(m\) and \(f\) refer to the number of intermediate conv layers and the number of features in these layers, respectively. total, with 10 scenes allocated for training and the remaining 3 reserved for evaluation. Some of these scenes can be seen in Figure 3 and more samples can be found in the supplementary material. 
The data was generated using the Unity game engine [20], with 3D assets sourced either from the Unity Asset Store 1, or from open-source projects. The list of Unity assets used in this work can be found in the supplementary material. To make the data more representative of realistic gaming scenarios, animated characters were incorporated to the scene. We also added textual UI elements on top of animated characters to make the algorithms more robust to elements without associated depth or motion vector information. Footnote 1: [https://assetstore.unity.com/](https://assetstore.unity.com/) Commercial baselines.In this dataset, we have also included images upscaled by commercial solutions integrated into Unity on the same frames used for evaluation. At the time of the dataset collection, these included Nvidia's DLSS 2.2 and AMD's FSR 1.22, which can serve as reference baselines to assess the performance of new algorithms. Footnote 2: We do not compare against FSR 1.2 in this paper as we focus on multi-frame supersampling approaches. We believe that releasing this dataset will be beneficial for many, as it provides a time-saving alternative to extracting synchronized LR-HR pairs from a game engine, with additional modalities such as depth and motion vectors, and properties like jittering or mipmap biaising. This dataset was primarily created to advance the development of super-resolution algorithms for gaming applications, but we believe that it can also be useful for other tasks, such as optical flow estimation. ## 4 Proposed algorithm Our proposed neural algorithm for gaming super-sampling consists of two components: a warping module for re-projecting historical data to the current frame, and a neural network for reconstructing the current image at target resolution. The reconstruction neural network blends a candidate image with the output image from the previous timestep using subpixel accurate motion vectors for motion compensation. Figures 4 and 5 provide an overview of the solution and a visualization of data instances at various steps of the algorithm, respectively. ### Warping module As seen in the top-right diagram of Figure 4, the motion compensation in our proposed algorithm is divided into three steps, which are described in detail below. Jitter compensation.We remove the viewport jittering contribution to the motion vector values. This is achieved Figure 5: Visualization of data instances used at different steps of the algorithm (_from left to right_): the previous frame’s high-resolution output, the current low-resolution render, the motion vectors before and after pre-processing, the re-projected output from the previous frame, the new high-resolution color candidate for the current frame, the blending mask \(\alpha\), and the final output for the current frame. Note that a low value of \(\alpha\) (dark) means that the color from the previous output is retained; a high value of \(\alpha\) (bright) means that the color from the previous timestep is discarded in favour of the new candidate color. Figure 6: Visualization of the blending mask (8\(\times\)8 crops) over consecutive timesteps along with the corresponding subpixel-shifted sampling locations (red dots) on a surface that does not need history rejection for \(2\times\) (first row) and \(4\times\) upscaling (second row). When the previous samples are still relevant for the current frame, the model tends to only update the pixels at the sampling location. 
by adding the jitter offset at frame \(t-1\) and subtracting the jitter offset at frame \(t\) from the motion vector at frame \(t\): \[MV_{t}=MV_{t}+J_{t-1}-J_{t} \tag{1}\] Depth-informed dilation. This step modifies the motion vectors to reduce aliasing of foreground objects in re-projected images. This is achieved by producing a high-resolution block-based motion vector grid, where each block contains the motion vector value of the frontmost (i.e. lowest depth value) pixel within the block. In our experiments, we use a block size of \(8\times 8\) at high resolution. Similar ideas have been used in [29, 61, 19]. Re-projection. The preprocessed motion vectors are used to perform a bilinear warp and realign the previous timestep's high-resolution color images and neural features to the current frame. A space-to-depth operation is then applied to map the warping outputs to the input resolution, as in FRVSR [49]. ### Neural network Our neural network architecture is similar to the efficient single-frame super-resolution architectures of [15, 5]. We use 3x3 Conv-ReLU blocks and a relatively small number of layers and channels. The output is mapped to high resolution using a depth-to-space operation. We however modify the architecture to make it suitable for multi-frame inputs and jittering: Additional inputs and outputs. In addition to color information \(C_{t}\), the neural network \(F\) takes as inputs depth \(D_{t}\), the jitter offset \(J_{t}\), the previous color output \(Y_{t-1}\) and features \(f_{t-1}\) re-aligned by the warper \(W\). \begin{table} \begin{tabular}{|c|c c c|c c c|} \cline{2-7} \multicolumn{1}{c|}{} & **Ours-S** & \multicolumn{3}{c|}{**Ours-M**} & \multicolumn{3}{c|}{**Ours-L**} \\ \hline Scaling factor & \(2\times\) & \(2\times\) & \(3\times\) & \(4\times\) & \(2\times\) & \(3\times\) & \(4\times\) \\ \hline MV dilation & 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.21 \\ \hline Warping & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 \\ \hline Neural network & 0.72 & 1.17 & 0.81 & 0.70 & 3.16 & 1.74 & 1.26 \\ \hline **Total** & **1.08** & **1.53** & **1.17** & **1.06** & **3.52** & **2.10** & **1.62** \\ \hline \end{tabular} \begin{tabular}{|c|c c c|c c|} \cline{2-7} \multicolumn{1}{c|}{} & **Xiao _et al._** \\ \hline \(2\times\) & \(3\times\) & \(4\times\) \\ \hline - & - & - \\ \hline 0.92 & 0.92 & 0.92 \\ \hline 13.01 & 12.92 & 12.88 \\ \hline **13.93** & **13.84** & **13.80** \\ \hline \end{tabular} \end{table} Table 1: Profiling results (in milliseconds) of our \(2\times\), \(3\times\) or \(4\times\) architectures vs Xiao _et al._ on an RTX 3090 at 1080p target resolution. These timings were all obtained using Nvidia’s TensorRT [12] for the neural network execution, using FP16 precision. Figure 7: Super-resolution results (\(2\times\) and \(3\times\)) by our algorithm vs bicubic upscaling, Xiao _et al._’s approach [59] and DLSS 2.2 [36].
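A compact PyTorch sketch of the three warping steps above is given next. The motion-vector unit convention (high-resolution pixels), the divisibility of the \(8\times 8\) block by the scaling factor, and the final \(2\times\) space-to-depth are simplifying assumptions rather than details taken from this description:

```python
import torch
import torch.nn.functional as F

def jitter_compensate(mv, j_prev, j_cur):
    """Eq. (1): remove the viewport-jitter contribution from the motion vectors."""
    return mv + (j_prev - j_cur).view(1, 2, 1, 1)               # mv: (B, 2, H, W)

def depth_informed_dilation(mv, depth, scale=2, hr_block=8):
    """Give every 8x8 high-resolution block the motion vector of its front-most pixel."""
    lr_block = hr_block // scale                                 # assumes hr_block % scale == 0
    B, _, H, W = depth.shape
    d = F.pixel_unshuffle(depth, lr_block)                       # (B, lr_block^2, h, w)
    idx = d.argmin(dim=1, keepdim=True)                          # front-most (lowest depth) pixel
    mv_b = F.pixel_unshuffle(mv, lr_block).view(B, 2, lr_block ** 2, H // lr_block, W // lr_block)
    idx = idx.unsqueeze(1).expand(-1, 2, -1, -1, -1)
    picked = torch.gather(mv_b, 2, idx).squeeze(2)               # (B, 2, h, w)
    return F.interpolate(picked, scale_factor=hr_block, mode="nearest")

def reproject(prev_hr, mv_hr, scale=2):
    """Bilinear warp of the previous HR output, then space-to-depth back to input resolution."""
    B, _, H, W = prev_hr.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float() + mv_hr.permute(0, 2, 3, 1)
    grid = 2.0 * grid / torch.tensor([W - 1.0, H - 1.0]) - 1.0   # normalize to [-1, 1]
    warped = F.grid_sample(prev_hr, grid, mode="bilinear", align_corners=True)
    return F.pixel_unshuffle(warped, scale)
```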
It returns a pixel-wise blending mask \(\alpha\) obtained using a sigmoid activation, a high-resolution candidate color image \(\widetilde{Y_{t}}\), and recurrent features \(f_{t}\) for the next timestep: \[\alpha,\widetilde{Y_{t}},f_{t}=F(C_{t},D_{t},J_{t},W(Y_{t-1}),W(f_{t-1})) \tag{2}\] Blending. The candidate color image returned by the neural net is combined with the previous output using \(\alpha\): \[Y_{t}=\alpha*\widetilde{Y_{t}}+(1-\alpha)*W(Y_{t-1}) \tag{3}\] Figure 6 illustrates how the blending mask evolves over consecutive timesteps and shows that the model tends to use the candidate pixels located at the current sampling location but retains the previous samples everywhere else (\(\alpha\approx 0\)). The neural net is also able to identify and discard samples (\(\alpha\approx 1\)) from the re-projected color image that are outdated due to appearance changes or dis-occlusion (see Figure 5). Jitter-conditioned convolutions. To facilitate alignment of the low-resolution color input, which is sub-pixel shifted due to jittering, we predict the kernel weights of the first and last convolution modules using an MLP conditioned on the jitter offset \(J_{t}\). This is different from kernel prediction networks commonly used in denoising tasks [55, 64, 41] or burst image super-resolution [58, 10], where a separate kernel is predicted for each pixel: we only predict one kernel for the entire frame, using a two-dimensional vector (i.e. the jitter offset for the current frame) as the only conditioning variable. At inference time, the jittering sequence is known in advance, so we pre-compute the kernel weights for each jitter offset and re-load the corresponding kernel whenever a new jittered frame is generated. This allows the neural network to more accurately realign the subpixel-shifted input data to the target resolution grid, with no additional computational overhead. The concurrent work of [21] used a similar technique for temporal anti-aliasing. Comparison to previous work. Xiao _et al_.'s neural architecture consists of a feature extraction network that processes the current low-resolution frame, an upsampling module, a warping module that recursively realigns a rolling buffer of four upsampled feature maps, a feature reweighting network for history rejection, and a U-Net-style [47] reconstruction network to produce the final output. In comparison, our approach better leverages viewport jittering and has notable advantages in terms of speed and memory. First, all convolutional layers in our architecture run at the input resolution. Second, our historical data only consists of \(4\) high-resolution channels compared to \(48\) for Xiao _et al_., whose larger history buffer results in a larger memory footprint and a higher latency for the re-alignment step. ## 5 Implementation details We experiment with three variants of our models: _Ours-S (f16-m1)_, _Ours-M (f32-m3)_ and _Ours-L (f64-m5)_, where _fX-mY_ means that the architecture has \(Y\) intermediate conv layers and \(X\) feature channels. We adjust the number of recurrent features produced at low-resolution based on the scaling factor to end up with a single channel of features at high-resolution after depth-to-space. To predict the kernels of the first and last convs, we use a 7-layer MLP with 2048 hidden features and ReLU activations. Since the same fixed jittering sequence is used for the entire dataset, we also tried optimizing a set of 16 kernels, but it resulted in worse performance (0.1 dB PSNR drop on test scenes).
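To illustrate the jitter-conditioned convolutions and the blending step, here is a simplified PyTorch sketch; the MLP width and depth and the predicted bias term are illustrative choices, not the exact configuration described above (which uses a 7-layer MLP with 2048 hidden features):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JitterConditionedConv(nn.Module):
    """3x3 convolution whose single kernel (one per frame, not per pixel) is
    predicted by an MLP from the 2-D jitter offset of the current frame."""

    def __init__(self, in_ch, out_ch, k=3, hidden=256):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_ch * in_ch * k * k + out_ch),  # kernel weights + bias (assumed)
        )

    def forward(self, x, jitter):                                # jitter: tensor of shape (2,)
        params = self.mlp(jitter)
        n_w = self.out_ch * self.in_ch * self.k ** 2
        weight = params[:n_w].view(self.out_ch, self.in_ch, self.k, self.k)
        bias = params[n_w:]
        return F.conv2d(x, weight, bias, padding=self.k // 2)

def blend(candidate_hr, warped_prev_hr, alpha):
    """Eq. (3): per-pixel blend of the new candidate with the re-projected history."""
    return alpha * candidate_hr + (1.0 - alpha) * warped_prev_hr
```

Since the jitter sequence is known in advance, the MLP can be evaluated once per offset in the 16-long cycle and the resulting kernels cached, matching the zero-overhead inference scheme described above.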
We use mini-batches of eight 16-frame clips with spatial resolution \(264\times 264\) (at high resolution), an L1 loss, and train for 500k iterations using the Adam optimizer [30] with an initial learning rate of \(1e-4\), decaying the learning rate by a factor 2 after 200k and 400k iterations. We optimize the models on \(80\%\) of the segments from each training scene and use the rest for validation. ## 6 Results analysis ### Speed-accuracy tradeoff compared to existing solutions Table 2 reports the average PSNR, SSIM and LPIPS scores obtained by our network, DLSS 2.2 and our implementation of Xiao _et al_.'s solution [59] on the test scenes. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Upscaling} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Metric} \\ \cline{3-5} & & PSNR & SSIM & LPIPS \\ \hline \multirow{5}{*}{\(2\times\)} & Bicubic & 29.51 & 0.8672 & 0.219 \\ \cline{2-5} & DLSS 2 & 30.21 & 0.8816 & 0.187 \\ \cline{2-5} & Xiao _et al_. & 31.89 & 0.9075 & 0.140 \\ \cline{2-5} & Ours-S (f16-m1) & 31.18 & 0.8941 & 0.160 \\ \cline{2-5} & Ours-M (f32-m3) & 31.80 & 0.9044 & 0.140 \\ \cline{2-5} & Ours-L (f64-m5) & **32.21** & **0.9115** & **0.134** \\ \hline \multirow{5}{*}{\(3\times\)} & Bicubic & 27.61 & 0.8034 & 0.322 \\ \cline{2-5} & Xiao _et al_. & 30.24 & 0.8729 & 0.200 \\ \cline{2-5} & Ours-M (f32-m3) & 30.23 & 0.8655 & 0.203 \\ \cline{2-5} & Ours-L (f64-m5) & **30.67** & **0.8747** & **0.187** \\ \hline \multirow{5}{*}{\(4\times\)} & Bicubic & 26.42 & 0.7535 & 0.391 \\ \cline{2-5} & Xiao _et al_. & 29.02 & 0.8364 & 0.259 \\ \cline{2-5} & Ours-M (f32-m3) & 29.06 & 0.8305 & 0.258 \\ \cline{1-1} \cline{2-5} & Ours-L (f64-m5) & **29.42** & **0.8403** & **0.238** \\ \hline \end{tabular} \end{table} Table 2: PSNR, SSIM and LPIPS scores for our model, DLSS 2.2 [36] and our implementation of Xiao _et al_. [59] for \(2\times\), \(3\times\) and \(4\times\) upscaling. For a more fine-grained analysis, a per-scene breakdown of PSNR and SSIM scores is available in the supplementary material. Our small model outperforms DLSS, while our large architecture is slightly better than Xiao _et al_. These PSNR, SSIM and LPIPS improvements also manifest in visual quality improvements, as seen in Figure 7. We observe that DLSS 2.2 produces more ghosting artifacts (visible on the lantern in the first row or on the barrel in the third row) than the other approaches. The benefits from leveraging jittered samples are particularly visible in the reconstruction of static scenes (see Figure 8). Table 1 demonstrates that our larger model generates 1080p images in \(3.52\) ms through \(2\times\) upscaling on an Nvidia RTX 3090, representing a \(4\times\) improvement compared to the architecture proposed by Xiao _et al_. [59] while maintaining the same level of accuracy. Our small architecture runs in \(1.08\) ms for the same workload. Additionally, our architecture scales better to larger magnification factors, with our \(4\times\) architecture offering an \(8.5\times\) speedup compared to Xiao _et al_. [59] for the same level of accuracy. We believe these timings could be improved using optimized CUDA kernels for the reprojection-related operations. ### Ablation studies In this section, we ablate individual components from our \(2\times\) medium-sized model, _Ours-M_, to illustrate the impact of each component on visual quality and stability. 
Depth-informed motion vector dilation. While removing motion vector dilation improves PSNR and LPIPS scores (see Table 3), we found this step beneficial for the reconstruction of thin objects. This is visible in Figure 10, where we illustrate the effect of depth-informed dilation on the motion vectors, resulting in a warped image with less ghosting and aliasing artifacts, leading to a better reconstruction. Reconstruction quality and temporal stability on static scenes. We evaluate the quality and temporal stability of model outputs on a section of the AbandonedSchool scene where the camera is stationary. We report the average PSNR and pixel-wise standard deviation on these frames in Table 3, in addition to the average PSNR on the entire test set. We observe that: Footnote 3: Segment 0001, from frame 275 to 292 * The single-frame variant of our architecture poorly reconstructs fine-grained details and is not temporally stable. * Jittering is key to properly reconstruct static scenes: without it, the average PSNR on static scenes drops significantly. The benefits from leveraging jittered samples are also visible in Figure 8. * Blending improves temporal stability. Without it, the pixel-wise standard deviation doubles (from 0.54 to 1.10) and we generally observe considerably more flickering artifacts with this variant. * Temporal stability benefits from more (targeted) training data. The pixel-wise standard deviation increases significantly when the last 5 scenes are not used, because only those scenes contain static segments. Figure 8: Visual impact of jittering on reconstructions of static scenes using our algorithm vs non-jittered alternatives. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Model variants} & \multicolumn{2}{c|}{Entire test set} & \multicolumn{2}{c|}{Static frames} \\ \cline{2-5} & PSNR & LPIPS & PSNR & Pixel Std \\ \hline \hline Baseline (f32-m3) & 31.80 & 0.140 & 37.38 & 0.54 \\ \hline (-) MV dilation & 31.97 & 0.136 & 37.47 & 0.54 \\ \hline (-) blending & 31.80 & 0.144 & 36.86 & 1.10 \\ \hline (-) jitter & 31.61 & 0.153 & 35.06 & 0.03 \\ \hline (-) MVs, (+) RAFT & 31.09 & 0.169 & 35.77 & 1.21 \\ \hline (-) warping & 30.69 & 0.183 & 37.35 & 0.53 \\ \hline (-) multiple frames & 30.56 & 0.192 & 33.48 & 1.96 \\ \hline (-) first jitter conv & 31.68 & 0.145 & 37.40 & 0.49 \\ \hline (-) last jitter conv & 31.50 & 0.148 & 37.38 & 0.56 \\ \hline (-) jitter conv & 31.26 & 0.156 & 37.20 & 0.56 \\ \hline \hline (-) training scenes & 31.74 & 0.144 & 37.42 & 0.57 \\ \hline (-) training scenes & 31.78 & 0.142 & 37.14 & 0.76 \\ \hline (-) training scenes & 31.56 & 0.152 & 37.11 & 0.91 \\ \hline \hline (+) perceptual loss & 31.72 & 0.125 & 37.40 & 0.54 \\ \hline \end{tabular} \end{table} Table 3: Ablation study. We report the average PSNR and LPIPS scores on the entire test set, as well as PSNRs and average pixel-wise standard deviation on a static segment from the Abandoned-School scene. On the benefits of jitter-conditioned convolutions. Table 3 shows the benefits of using jitter-conditioned kernels in the first and last convolution modules. Without these, we observe a 0.54 dB PSNR drop. Table 3 also suggests that the last layer is the one that benefits the most from using a jitter-conditioned kernel.
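The temporal-stability numbers in this ablation correspond to a one-line metric; in the sketch below, the averaging order (standard deviation over time, then mean over pixels and channels) and the 8-bit intensity scale are assumptions made so that the result is comparable to the 0.03-1.96 range reported in Table 3:

```python
import torch

def pixelwise_temporal_std(frames: torch.Tensor) -> float:
    """Average per-pixel standard deviation over a static clip.

    `frames` is a (T, C, H, W) tensor of model outputs (assumed to be in 8-bit units)
    for a segment where the camera is stationary."""
    return frames.float().std(dim=0).mean().item()
```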
On the importance of accurate motion compensation. To quantify the importance of accurate motion compensation, we trained our architecture both without motion compensation and on top of motion vectors estimated using RAFT [54] with weights pre-trained on Sintel [7]. In both cases, the results show a significant PSNR drop. When the camera remains static, the variant without motion compensation works well in terms of reconstruction and temporal stability (as seen in Table 3), but the quality drops to the level of a single-frame architecture when the camera moves (see the first two rows of Figure 9). Replacing the L1 loss with a perceptual loss. We find that the loss used in [59] (a weighted combination of SSIM and a VGG-based perceptual loss) improves the sharpness and perceived quality of high-frequency textures (most visible in the "deer" crop from Figure 9) at the cost of slightly more temporal inconsistencies. Quantitatively, this loss improves LPIPS scores significantly (from \(0.140\) to \(0.125\) for Ours-M) with minor PSNR and SSIM differences. ## 7 Conclusion and future work In this work, we present a novel neural supersampling approach that is \(4\times\) more efficient than previously published work by Xiao _et al_. [59] while maintaining the same level of accuracy. We propose a new dataset, QRISP, specifically designed for the research and development of super-resolution algorithms for gaming applications. When trained on the proposed dataset, our algorithm outperforms DLSS 2.2 [36] in terms of visual quality. We believe that QRISP fills a gap in the dataset landscape and can serve as a valuable resource to advance the state-of-the-art in super-resolution techniques for gaming content. In future work, we plan to investigate quantizing the neural network component of our approach as this has the potential to make the algorithm even more efficient [42, 4, 5, 45]. We also plan to explore whether more sophisticated losses can address the blurriness that sometimes arises in the reconstruction of severely under-sampled high-frequency textures (the grass and trees behind the deer in Figure 9). Figure 10: Reconstruction of thin objects, with and without depth-informed motion vector dilation. Figure 9: Ablation study using our \(2\times\) medium-sized architecture. We ablate the following components (from left to right): all components except the reconstruction network which we run on a single low-resolution frame, warping, motion vectors (which we replace with estimated optical flow obtained using RAFT [54]), jitter-conditioned convolutions, and the L1 loss which we replace with the loss defined in [59].
2306.09573
Reevaluation of Stark-induced transition polarizabilities in cesium
Extracting electroweak observables from experiments on atomic parity violation (APV) using the Stark interference technique requires accurate knowledge of transition polarizabilities. In cesium, the focus of our paper, the $6S_{1/2}\rightarrow{7S_{1/2}}$ APV amplitude is deduced from the measured ratio of the APV amplitude to the vector transition polarizability, $\beta$. This ratio was measured with a $0.35\%$ uncertainty by the Boulder group [Science 275, 1759 (1997)]. Currently, there is a sizable discrepancy in different determinations of $\beta$ critically limiting the interpretation of the APV measurement. The most recent value [Phys. Rev. Lett. 123, 073002 (2019)] of $\beta=27.139(42)\, \mathrm{a.u.}$ was deduced from a semi-empirical sum-over-state determination of the scalar transition polarizability $\alpha$ and the measured $\alpha/\beta$ ratio [Phys. Rev. A 55, 1007 (1997)]. This value of $\beta$, however, differs by $\sim 0.7\%$ or $2.8\sigma$ from the previous determination of $\beta=26.957(51)$ by [Phys. Rev. A 62, 052101 (2000)] based on the measured ratio $M1/\beta$ of the magnetic-dipole $6S_{1/2}\rightarrow{7S_{1/2}}$ matrix element to $\beta$. Here, we revise the determination of $\beta$ by [Phys. Rev. Lett. 123, 073002 (2019)], using a more consistent and more theoretically complete treatment of contributions from the excited intermediate states in the sum-over-state $\alpha/\beta$ method. Our result of $\beta=26.887(38)\, \mathrm{a.u.}$ resolves the tension between the $\alpha/\beta$ and $M1/\beta$ approaches. We recommend the value of $\beta=26.912(30)$ obtained by averaging our result and that of [Phys. Rev. A 62, 052101 (2000)].
H. B. Tran Tan, D. Xiao, A. Derevianko
2023-06-16T01:29:53Z
http://arxiv.org/abs/2306.09573v2
# Reevaluation of Stark-induced transition polarizabilities in cesium ###### Abstract Extracting electroweak observables from experiments on atomic parity violation (APV) using the Stark interference technique requires accurate knowledge of transition polarizabilities. In cesium, the focus of our paper, the \(6S_{1/2}\to 7S_{1/2}\) APV amplitude is deduced from the measured ratio of the APV amplitude to the vector transition polarizability, \(\beta\). This ratio was measured with a 0.35% uncertainty by the Boulder group [Science **275**, 1759 (1997)]. Currently, there is a sizable discrepancy in different determinations of \(\beta\) critically limiting the interpretation of the APV measurement. The most recent value [Phys. Rev. Lett. **123**, 073002 (2019)] of \(\beta=27.139(42)\) a.u. was deduced from a semi-empirical sum-over-state determination of the scalar transition polarizability \(\alpha\) and the measured \(\alpha/\beta\) ratio [Phys. Rev. A **55**, 1007 (1997)]. This value of \(\beta\), however, differs by \(\sim 0.7\%\) or \(2.8\sigma\) from the previous determination of \(\beta=26.957(51)\) by [Phys. Rev. A **62**, 052101 (2000)] based on the measured ratio \(M1/\beta\) of the magnetic-dipole \(6S_{1/2}\to 7S_{1/2}\) matrix element to \(\beta\). Here, we revise the determination of \(\beta\) by [Phys. Rev. Lett. **123**, 073002 (2019)], using a more consistent and more theoretically complete treatment of contributions from the excited intermediate states in the sum-over-state \(\alpha/\beta\) method. Our result of \(\beta=26.887(38)\) a.u. resolves the tension between the \(\alpha/\beta\) and \(M1/\beta\) approaches. We recommend the value of \(\beta=26.912(30)\) obtained by averaging our result and that of [Phys. Rev. A **62**, 052101 (2000)]. ## I Introduction Atomic parity violation (APV) plays an important role in probing the electroweak sector of the standard model (SM) of elementary particles at low energy. The information derived from table-top APV experiments is both complementary to and in competition with that from large-scale particle colliders (see, e.g., the review [1] and references therein). To date, the 1997 Boulder experiment [2] searching for APV in \({}^{133}\)Cs remains the most accurate. A substantial body of work has been devoted to the interpretation of and the extraction of electroweak observables from the Boulder results. In its setup, the Boulder experiment [2] employed the \({}^{133}\)Cs \(6S_{1/2}\to 7S_{1/2}\) transition, whose \(E1\) amplitude nominally vanishes due to the parity selection rule. However, parity nonconserving (PNC) weak interactions between the atomic nucleus and electrons admix small components of \(P_{1/2}\) states into the nominal \(S_{1/2}\) states, thus opening the \(E1\) channel. Using the parity-mixed multi-electron states \(|6S^{\prime}_{1/2}\rangle\) and \(|7S^{\prime}_{1/2}\rangle\) and the hyperfine basis (see Eq. (4) below), the APV transition amplitude may be written as \[A_{fi}^{\rm PNC} = \langle 7S^{\prime}_{1/2},\,F_{f}M_{f}|-\mathbf{\mathcal{E}}_{L}\cdot \mathbf{D}|6S^{\prime}_{1/2},\,F_{i}M_{i}\rangle \tag{1}\] \[= i{\rm Im}(E1_{\rm PNC})\mathbf{\mathcal{E}}_{L}\cdot\langle F_{f}M_ {F_{f}}|\mathbf{\sigma}|F_{i}M_{F_{i}}\rangle\,,\] where \(\mathbf{\mathcal{E}}_{L}\) is the laser electric field driving the \(E1\) transition, \(\mathbf{D}\) is the electric dipole operator, and \(\mathbf{\sigma}\) is the Pauli matrix. 
Due to the smallness of \(E1_{\rm PNC}\), which is on the order of \(\sim 10^{-11}\) in atomic units, measuring the PNC transition amplitude \(A_{fi}^{\rm PNC}\) directly is a formidable challenge. To overcome this difficulty, it was suggested that one uses the Stark-interference technique [3; 4], which relies on the mixing of states of opposite parities due to an externally applied electric field. The transition rate \(R\) between the parity-mixed states then includes contributions from the Stark-induced \(E1\), magnetic-dipole \(M1\), and the PNC-induced amplitudes [5] \[R\propto|A_{fi}^{\rm Stark}+A_{fi}^{M1}+A_{fi}^{\rm PNC}|^{2}\,. \tag{2}\] Upon expansion, the right-hand side of Eq. (2) yields the Stark-PNC interference term, \(2{\rm Re}[A_{fi}^{\rm Stark}(A_{fi}^{\rm PNC})^{*}]\), whose sign is subject to the handedness of the experiment measuring \(R\). Thus, the PNC amplitude \(A_{fi}^{\rm PNC}\) can be extracted from the Stark-PNC interference term by measuring the changes in \(R\) under parity reversals. Based on the Stark-interference technique, the Boulder group [5] reported the following values \[\frac{{\rm Im}(E1_{\rm PNC})}{\beta}=\begin{cases}-1.6349(80)\,{\rm mV/cm}\\ \text{for}\,\,\,6S_{1/2},\,F_{i}=4\to 7S_{1/2},\,F_{f}=3\,,\\ -1.5576(77)\,{\rm mV/cm}\\ \text{for}\,\,\,6S_{1/2},\,F_{i}=3\to 7S_{1/2},\,F_{f}=4\,,\end{cases} \tag{3}\] where \(\beta\) is the atomic vector polarizability. A weighted average of the two values in Eq. (3) yields the nuclear-spin-independent observable, i.e., the nuclear weak charge, while their difference determines nuclear-spin-dependent effects, e.g., the nuclear anapole moment. For the extraction of these quantities, knowledge of the vector transition polarizability \(\beta\) is essential and substantial attention [6; 7; 8; 9; 10; 11] has been paid over the years to determining its value. Since 2000, the most accurate value of \(\beta\) has been determined based upon combining a semi-empirical calculation of the hyperfine-induced magnetic-dipole \(6S_{1/2}\to 7S_{1/2}\) transition amplitude \(M1\)[12] with a measurement of the ratio \(M1/\beta\)[11]. Another approach to estimating \(\beta\) combines a calculation of the scalar polarizability \(\alpha\) with the measurement of the ratio \(\alpha/\beta\)[13]. The latest most accurate determination of \(\beta\) was published by the Purdue group [10] who adopted the most accurate value of \(\alpha/\beta=9.905(11)\)[13] and used the sum-over-state (SoS) method to calculate \(\alpha\). Their calculation of \(\alpha\) were carried out using experimentally and theoretically determined matrix elements and energies. Although the uncertainties of the \(\alpha/\beta\)[10; 13] and \(M1/\beta\)[11; 12] approaches are comparable, both approximately at the level of \(0.2\%\), their central values differ by \(\sim 0.7\%\) or \(2.7\sigma\). This difference critically undermines the accuracy of extracting electroweak observables from the Boulder APV measurement. Recently, our theory group performed the most sophisticated to date _ab initio_ calculations of the \(E1\) transition matrix elements in Cs [14]. Here, we use these newly determined \(E1\) matrix elements and a SoS approach to reevaluate the scalar and vector polarizabilities \(\alpha\) and \(\beta\). We show that the updated value of \(\beta\) agrees well with that obtained from the \(M1/\beta\) method of Refs. [11; 12], thus reconciling the two alternative approaches (see Fig. 1). The paper is organized as follows. In Sec. 
II, we provide a review and derivation for the second-order transition polarizabilities \(\alpha\) and \(\beta\). In Sec. III, we detail the numerical methods employed for the computation of these quantities. In Sec. IV, we present our numerical values and error estimates and provide a comparison with previous results. Unless stated otherwise, atomic units are used throughout. ## II Stark-induced \(E1\) transitions and transition polarizabilities In the presence of a DC electric field, the initial and final \(S\) states of Cs admix states of opposite parities, thus enabling the otherwise forbidden \(E1\) transition between the \(6S_{1/2}\) and \(7S_{1/2}\) states. In this section, we rederive the conventional results for the Stark-induced \(E1\) transition amplitudes. The reader may refer to the original paper by Bouchiat and Bouchiat [4] for an alternative derivation. We start by introducing the hyperfine basis \[|n\,(IJ)FM_{F}\rangle=\sum_{M_{f}M_{I}}C^{FM_{F}}_{JMJM_{I}}|n\,JM_{J}\rangle| IM_{I}\rangle\,, \tag{4}\] whose members are formed by coupling electronic states \(|n\,JM_{J}\rangle\) of angular momentum \(\mathbf{J}\) and nuclear states \(|IM_{I}\rangle\) of spin \(\mathbf{I}\) to form states of definite total angular momentum \(\mathbf{F}=\mathbf{I}+\mathbf{J}\). Here, \(M_{F}\), \(M_{J}\), and \(M_{I}\) are the magnetic quantum numbers, \(n\) stands for the remaining quantum numbers, such as the principal quantum number of the electronic state, and \(C^{FM_{F}}_{JMJM_{I}}\) is the conventional Clebsch-Gordan coefficients. In the hyperfine basis, Eq. (4), the initial and final states involved in the \(i\to f\) transition are \[|i\rangle \equiv |n_{i}(IJ_{i})F_{i}M_{i}\rangle\,, \tag{5a}\] \[|f\rangle \equiv |n_{f}(IJ_{f})F_{f}M_{f}\rangle\,. \tag{5b}\] In the presence of an externally applied static electric field \(\mathbf{\mathcal{E}}_{S}\), the initial and final states acquire the admixtures \[|\delta i\rangle = -\sum_{a\neq i}|a\rangle\frac{\mathbf{\mathcal{E}}_{S}\cdot\mathbf{D}_{ ai}}{\Delta E_{ia}}\,, \tag{6a}\] \[|\delta f\rangle = -\sum_{a\neq f}|a\rangle\frac{\mathbf{\mathcal{E}}_{S}\cdot\mathbf{D}_{ af}}{\Delta E_{fa}}\,, \tag{6b}\] where \(\Delta E_{ab}\equiv E_{a}-E_{b}\) and \(\mathbf{D}_{ab}\equiv\langle a|\mathbf{D}|b\rangle\) is the electric dipole matrix element. If a laser is now applied, it can drive the transition \(i\to f\), whose Stark-induced \(E1\) transition amplitude is given by \[A_{fi}=-\langle\delta f|\mathbf{\mathcal{E}}_{L}\cdot\mathbf{D}|i\rangle-\langle f| \mathbf{\mathcal{E}}_{L}\cdot\mathbf{D}|\delta i\rangle=\mathcal{E}_{L}\mathcal{E}_{ S}a_{fi}\,, \tag{7}\] where \(\mathbf{\mathcal{E}}_{L}\) is the laser electric field. In the last step of Eq. (7), we have factored out the amplitudes of the electric fields and defined the Stark-induced transition polarizability \[a_{fi} \equiv \sum_{a\neq f}\frac{(\mathbf{\hat{\varepsilon}}\cdot\mathbf{D}_{fa})(\bm {\hat{\varepsilon}}\cdot\mathbf{D}_{ai})}{\Delta E_{fa}} \tag{8}\] \[+ \sum_{a\neq i}\frac{(\mathbf{\hat{\varepsilon}}\cdot\mathbf{D}_{fa})(\bm {\hat{\varepsilon}}\cdot\mathbf{D}_{ai})}{\Delta E_{ia}}\,.\] Note that \(a_{fi}\) still depends on the polarization vectors \(\hat{\mathbf{e}}\equiv\mathbf{\mathcal{E}}_{S}/\mathcal{E}_{S}\) and \(\hat{\mathbf{e}}\equiv\mathbf{\mathcal{E}}_{L}/\mathcal{E}_{L}\) of the DC and laser fields. Figure 1: Comparison of our value for the vector transition polarizability \(\beta\) with previous results [6; 7; 8; 9; 10; 11; 12]. 
The previous determinations of \(\beta\) are identified by the initial three letters of the first author’s last name and the abbreviated publication year. The left panel presents results from the sum-over-state approach, the middle panel those from the \(M1/\beta\) determination, and the right panel shows our recommended value for \(\beta\) obtained by taking a weighted average of our result and that of Ref. [12]. The expression for \(a_{fi}\) may be cast into a form more convenient for angular reduction. To this end, one uses the recoupling identity [15] \[(R^{(k_{1})}\cdot S^{(k_{1})})(U^{(k_{2})}\cdot V^{(k_{2})})=\sum_{Q }(-1)^{Q-k_{1}-k_{2}}\] \[\times\{R^{(k_{1})}\otimes U^{(k_{2})}\}^{(Q)}\cdot\{S^{(k_{1})} \otimes V^{(k_{2})}\}^{(Q)}\,, \tag{9}\] where the operators \(P^{(k_{1})}\), \(Q^{(k_{1})}\), \(R^{(k_{2})}\), and \(S^{(k_{2})}\) are irreducible tensor operators (ITOs) of ranks \(k_{1}\) and \(k_{2}\). In Eq. (9), a scalar product of two rank-\(k\) ITOs is understood as the following sum over their spherical components \[P^{(k)}\cdot Q^{(k)}=\sum_{q=-k}^{k}(-1)^{q}P^{(k)}_{q}Q^{(k)}\,, \tag{10}\] and a compound ITO of rank \(Q\) is defined as \[\{P^{(k_{1})}\otimes R^{(k_{2})}\}^{(Q)}_{q}=\sum_{q_{1}q_{2}}C^{Qq}_{k_{1}q_ {1}k_{2}q_{2}}P^{(k_{1})}_{q_{1}}R^{(k_{2})}_{q_{2}}\,, \tag{11}\] where \(q_{1}\) and \(q_{2}\) label the spherical basis components of the ITOs. The possible values of \(Q\) are limited by the triangular selection rule, i.e., \(|k_{1}-k_{2}|\leq Q\leq k_{1}+k_{2}\). In our case of the electric dipole couplings, the polarization and dipole operators in Eq. (9) are ITOs of rank 1. As a result, one has \[a_{fi} =\sum_{Q=0}^{2}(-1)^{Q}\{\hat{\mathbf{\varepsilon}}\otimes\hat{\mathbf{ \varepsilon}}\}^{(Q)}\cdot\left(\sum_{a\neq f}\frac{\{\mathbf{D}_{fa}\otimes\mathbf{D} _{ai}\}^{(Q)}}{\Delta E_{fa}}\right.\] \[\left.+(-1)^{Q}\cdot\sum_{a\neq i}\frac{\{\mathbf{D}_{fa}\otimes\mathbf{ D}_{ai}\}^{(Q)}}{\Delta E_{ia}}\right)\,, \tag{12}\] where we have used \(\{\hat{\mathbf{\varepsilon}}\otimes\hat{\mathbf{\varepsilon}}\}^{(Q)}_{q}=(-1)^{Q} \{\hat{\mathbf{\varepsilon}}\otimes\hat{\mathbf{\varepsilon}}\}^{(Q)}_{q}\). The term in Eq. (12) with \(Q=0\) corresponds to the scalar, that with \(Q=1\) to the vector, and the one with \(Q=2\) to the tensor (quadrupole) contributions to the transition polarizability. To simplify Eq. (12) further, one may introduce the effective ITOs, \[a[k]^{(Q)}_{q}\equiv\{\mathbf{D}\otimes R_{k}\mathbf{D}\}^{(Q)}_{q}\,, \tag{13}\] with the resolvent operator \(R_{k}\equiv(E_{k}-H_{0})^{-1}\), where \(H_{0}\) stands for the unperturbed atomic Hamiltonian. Since \(R_{k}\) has the spectral resolution \[R_{k}=(E_{k}-H_{0})^{-1}=\sum_{a\neq k}\Delta E_{ka}^{-1}|a\rangle\langle a|\,, \tag{14}\] and is a scalar (so that the combination \(R_{k}\mathbf{D}\) remains a rank-1 ITO), Eq. (12) may be written as \[a_{fi} =\sum_{Q=0}^{2}(-1)^{Q}\{\hat{\mathbf{\varepsilon}}\otimes\hat{\mathbf{ \varepsilon}}\}^{(Q)}\] \[\quad\cdot\left(\langle f|a[f]^{(Q)}|i\rangle+(-1)^{Q}\langle f|a[ i]^{(Q)}|i\rangle\right)\,. 
\tag{15}\] which, upon applying the Wigner-Eckart theorem, further simplifies to \[a_{fi} =\sum_{Q=0}^{2}w_{Q}(\hat{\mathbf{\varepsilon}},\hat{\mathbf{e}})\] \[\times\left[\langle f||a[f]^{(Q)}||i\rangle+(-1)^{Q}\langle f||a [i]^{(Q)}||i\rangle\right]\,, \tag{16}\] where the multipolar polarization weights \(w_{Q}(\hat{\mathbf{\varepsilon}},\hat{\mathbf{e}})\) are defined as \[w_{Q}(\hat{\mathbf{\varepsilon}},\hat{\mathbf{e}}) =(-1)^{Q}\sum_{q}(-1)^{q+F_{f}-M_{f}}\] \[\times\begin{pmatrix}F_{f}&Q&F_{i}\\ -M_{f}&-q&M_{i}\end{pmatrix}(\hat{\mathbf{\varepsilon}}\otimes\hat{\mathbf{e}})^{(Q)}_ {q}\,, \tag{17}\] where \(\begin{pmatrix}F_{f}&Q&F_{i}\\ -M_{f}&-q&M_{i}\end{pmatrix}\) is the \(3j\) symbol. Finally, by summing over magnetic quantum numbers, we obtain \[\langle f||a[f]^{(Q)}||i\rangle =(-1)^{F_{i}+I-J_{i}}[F_{f},Q,F_{i}]^{1/2}\] \[\times\begin{cases}Q&J_{f}&J_{i}\\ I&F_{i}&F_{f}\end{cases}\sum_{a_{a}J_{a}}\begin{cases}Q&J_{i}&J_{f}\\ J_{a}&1&1\end{cases}\] \[\times\frac{\langle n_{f}J_{f}||D||n_{a}J_{a}\rangle\langle n_{a} J_{a}||D||n_{i}J_{i}\rangle}{E_{n_{f}J_{f}}-E_{n_{a}J_{a}}}\,, \tag{18}\] where \([J_{1},J_{2},\ldots,J_{n}]\equiv(2J_{1}+1)(2J_{2}+1)\ldots(2J_{n}+1)\). The reduced matrix elements \(\langle f||a[i]^{(Q)}||i\rangle\) are given by the same formula, but with \(E_{n_{i}J_{i}}\) replacing \(E_{n_{f}J_{f}}\) in the energy denominator. We point out that due to the \(6j\) symbols in Eq. (18), the term with \(Q=2\) vanishes for \(J_{f}=J_{i}=1/2\), in particular for the transition \(6S_{1/2}\to 7S_{1/2}\) of interest, as expected. Conventionally, the Stark-induced transition polarizability \(a_{fi}\) is expressed as a linear combination of the second-order scalar and vector polarizabilities [4], \(\alpha\) and \(\beta\) \[a_{fi} =\alpha\,(\hat{\mathbf{e}}\cdot\hat{\mathbf{\varepsilon}})\delta_{F_{f}F_ {i}}\delta_{M_{f}M_{i}}\] \[+i\beta\,(\hat{\mathbf{e}}\times\hat{\mathbf{\varepsilon}})\cdot\langle F _{f}M_{f}|\mathbf{\sigma}|F_{i}M_{i}\rangle\,. \tag{19}\] These two terms map into the \(Q=0\) and \(Q=1\) contributions in Eq. (12), respectively. In other words, \[a_{fi} =-\sqrt{3[F_{f}]}w_{0}(\hat{\mathbf{\varepsilon}},\hat{\mathbf{e}})\alpha\] \[-\sqrt{2}\langle F_{f}||\sigma||F_{i}\rangle w_{1}(\hat{\mathbf{ \varepsilon}},\hat{\mathbf{e}})\beta\,, \tag{20}\] where, for the \(S_{1/2}\) states, the reduced matrix element \(\langle F_{f}||\sigma||F_{i}\rangle\) in the hyperfine basis (4) is given by \[\langle F_{f}||\sigma||F_{i}\rangle =\sqrt{6}(-1)^{I+F_{i}-1/2}\] \[\times\sqrt{[F_{f},F_{i}]}\begin{cases}1/2&F_{f}&I\\ F_{i}&1/2&1\end{cases}\,, \tag{21}\] where we have used \(\langle S=1/2||\sigma||S=1/2\rangle=\sqrt{6}\). In accordance with Eq. 
(18), one may then write \[\alpha= -\frac{\langle f||a[f]^{(0)}||i\rangle+\langle f||a[i]^{(0)}||i \rangle}{\sqrt{3(2F_{f}+1)}}\,, \tag{22a}\] \[\beta= -\frac{\langle f||a[f]^{(1)}||i\rangle-\langle f||a[i]^{(1)}||i \rangle}{\sqrt{2}\langle F_{f}||\sigma||F_{i}\rangle}\,, \tag{22b}\] or, as explicit sums over intermediate states, \[\alpha =\delta_{JfJi}\sqrt{\frac{1}{6}}\sum_{n_{a}J_{a}}\frac{(-1)^{J_{a} -J_{i}}}{\sqrt{3(2J_{i}+1)}}\] \[\times\langle n_{f}f_{f}||D||n_{a}J_{a}\rangle\langle n_{a}J_{a} ||D||n_{i}J_{i}\rangle\] \[\times\left(\frac{1}{E_{n_{f}J_{f}}-E_{n_{a}J_{a}}}+\frac{1}{E_{ n_{i}J_{i}}-E_{n_{a}J_{a}}}\right)\,, \tag{23a}\] \[\beta =-\frac{1}{2}\sum_{n_{a}J_{a}}\begin{cases}1&J_{i}&J_{f}\\ J_{a}&1&1\end{cases}\] \[\times\langle n_{f}f_{f}||D||n_{a}J_{a}\rangle\langle n_{a}J_{a}||D ||n_{i}J_{i}\rangle\] \[\times\left(\frac{1}{E_{n_{f}J_{f}}-E_{n_{a}J_{a}}}-\frac{1}{E_{ n_{i}J_{i}}-E_{n_{a}J_{a}}}\right)\,. \tag{23b}\] These equations recover the conventional expressions in the literature, see, e.g., formulae in Ref. [16] specialized for the initial and final states of the \(S_{1/2}\) character. In this case, the \(E1\) selection rules fix the intermediate states to the \(P_{1/2}\) and \(P_{3/2}\) angular characters. ## III Evaluation of the transition polarizabilities In the last section, we derived the second order transition polarizabilities \(\alpha\) and \(\beta\). In this section, we present the numerical methods with which these polarizabilities are calculated. Our approach is a blend of relativistic many-body methods of atomic structure and high-precision experimental values for atomic level energies. Since the Cs atom has 55 electrons, its electronic structure is relatively simple: it has a single valence electron outside the [Xe]-like closed-shell core. This simplicity greatly facilitates accounting for many-body effects due to the residual electron-electron interaction (correlation). In what follows, we describe several approximations of increasing complexity through which the correlation contributions to \(\alpha\) and \(\beta\) are computed. The lowest-order approximation in the electron-electron interaction is the mean-field Dirac-Hartree-Fock (DHF) method, wherein each electron experiences an "averaged" influence from all other electrons (and of course the Coulomb interaction with the nucleus). Within the DHF approach, we use the "frozen-core" approximation, where atomic orbitals in the [Xe]-like closed-shell core are computed self-consistently, and the valence orbitals are determined afterward in the resulting \(V^{N-1}\) DHF potential of the core. We point out that even at this lowest-order DHF level, the intermediate states involved in calculating \(\alpha\) and \(\beta\), Eqs. (23), span a countable yet infinite set of bound states and an innumerable set of states in the continuum. Since this Hilbert space is infinitely large, direct numerical summations, while possible, require different numerical implementations for various many-body methods. An elegant way to handle this issue is the \(B\)-spline approach popularized by the Notre Dame group [17; 18; 19]. This approach generates a _finite_ and numerically complete basis set that has been proven useful in evaluating otherwise infinite sums. In this approach, the set of eigenfunctions is a linear combination of \(B\)-spline functions covering a radial grid extending from the origin to \(R_{\text{max}}\), the radius of an artificially imposed spherical cavity. 
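The sums that this basis set is ultimately used to evaluate, Eqs. (23a) and (23b), translate directly into a short routine. The sketch below implements them for \(S_{1/2}\to S_{1/2}\) transitions; the intermediate-state matrix elements and energies are placeholders for illustration only, not the CCSDpTvT values and experimental energies used later:

```python
from sympy import Rational, sqrt
from sympy.physics.wigner import wigner_6j

def alpha_beta_sos(intermediates, E_i, E_f, J_i=Rational(1, 2), J_f=Rational(1, 2)):
    """Sum-over-states scalar and vector transition polarizabilities, Eqs. (23a)-(23b).

    `intermediates` is a list of tuples (J_a, <f||D||a>, <a||D||i>, E_a) in atomic units."""
    alpha = beta = 0
    for J_a, D_fa, D_ai, E_a in intermediates:
        prod = D_fa * D_ai
        if J_f == J_i:                                   # delta_{J_f J_i} factor of Eq. (23a)
            alpha += (sqrt(Rational(1, 6)) * (-1) ** int(J_a - J_i) / sqrt(3 * (2 * J_i + 1))
                      * prod * (1 / (E_f - E_a) + 1 / (E_i - E_a)))
        beta += (-Rational(1, 2) * wigner_6j(1, J_i, J_f, J_a, 1, 1)
                 * prod * (1 / (E_f - E_a) - 1 / (E_i - E_a)))
    return float(alpha), float(beta)

# Placeholder intermediate states (one fictitious P_1/2 and one P_3/2 level), for shape only
demo = [(Rational(1, 2), 4.2, 4.5, -0.120), (Rational(3, 2), 6.5, 6.3, -0.115)]
print(alpha_beta_sos(demo, E_i=-0.143, E_f=-0.059))
```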
The Notre Dame approach is further refined by employing the dual-kinetic balance \(B\)-spline basis set [20] which helps mitigate the issue of spurious states and improve the numerical quality of orbitals both near and far away from the nucleus. The low-\(n\) orbitals from a \(B\)-spline finite basis set closely resemble those obtained with the conventional finite-difference techniques with a sufficiently large radial grid extent. We refer to these low-\(n\) orbitals as "physical" states. As \(n\) increases, this mapping deteriorates, so higher-\(n\) basis orbitals often differ substantially from their finite-difference counterparts; we refer to such states as "nonphysical" states. The value of \(n\) separating the physical and nonphysical parts of the pseudospectrum primarily depends on \(R_{\text{max}}\) and to some extent on the number of basis functions. The dependence on the cavity's radius is easily understood by recalling that low-\(n\) orbitals decays exponentially with increasing distance from the nucleus (origin) so they cannot "know about" the existence of a cavity of sufficiently large radius. In contrast, high-\(n\) orbitals have their maxima at larger distances and therefore are much more susceptible to the cavity's presence. Our \(B\)-spline basis set contains \(N=60\) basis functions of order \(k=9\) per partial wave generated in a cavity of radius \(R_{\text{max}}=250\,\text{a.u.}\). These parameters are chosen so that the fractional differences in the DHF eigenenergies between the basis set and the finite-difference approach for physical states (\(n^{\prime}\leq 12\)) are within \(0.015\%\). Similarly, the basis-set values of the \(E1\) matrix elements involving physical states differ from their finite-difference counterparts by less than \(0.1\%\). A detailed discussion of the proper mapping of the finite basis set orbitals to the physical states may be found in Ref. [14]. With the finite basis set, one may further facilitate the numerical evaluations of \(\alpha\) and \(\beta\) by splitting the summations in Eqs. (23a) and (23b) into the "core-valence" ("cv"), "main", and "tail" contributions \[\alpha =\alpha_{\text{cv}}+\alpha_{\text{main}}+\alpha_{\text{tail}}\,, \tag{24a}\] \[\beta =\beta_{\text{cv}}+\beta_{\text{main}}+\beta_{\text{tail}}\,, \tag{24b}\] where the cv terms correspond to summations over \(2\leq n_{a}\leq 5\), the main terms to summations over \(6\leq n_{a}\leq 12\), and the tail terms to summations over \(13\leq n_{a}\leq\infty\), respectively. The cv term comes from the core particle-hole intermediate states with excitations to the valence orbital blocked by the Pauli exclusion principle [21]. The infinity in \(13\leq n_{a}\leq\infty\) corresponds to the maximum number of basis set orbitals of a given angular character. We disregard the summation over Dirac negative-energy states, as their contribution in the length-gauge for dipole operators is suppressed by \(\alpha_{\rm fs}^{4}\), where \(\alpha_{\rm fs}\approx 1/137\) is the fine-structure constant. We have chosen the boundary \(n_{a}=12\) between the main and tail terms with the convention of the earlier work, Ref. [10], in mind. Since we have carefully chosen our finite basis set so that that \(B\)-spline single-electron orbitals with \(n_{a}\leq 12\) coincide with their finite-difference counterparts, the intermediate many-body states \(|n_{a}J_{a}\rangle\) in \(\alpha_{\rm main}\) and \(\beta_{\rm main}\) map into physical states. 
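For illustration, the snippet below constructs a finite set of \(B\)-spline functions with the parameters quoted above (order \(k=9\), \(N=60\) functions per partial wave, cavity radius \(R_{\text{max}}=250\) a.u.); the uniform, clamped knot vector is a simplification, since an actual atomic-structure calculation uses an exponential radial grid and the dual-kinetic-balance construction:

```python
import numpy as np
from scipy.interpolate import BSpline

R_MAX, N_SPLINES, ORDER = 250.0, 60, 9     # cavity radius (a.u.), basis size, spline order
DEGREE = ORDER - 1

# Clamped, uniform knot vector (a simplification; real grids are dense near the origin).
interior = np.linspace(0.0, R_MAX, N_SPLINES - DEGREE + 1)
knots = np.concatenate([np.zeros(DEGREE), interior, np.full(DEGREE, R_MAX)])

def basis_function(i: int) -> BSpline:
    """Return the i-th B-spline basis function on [0, R_MAX]."""
    coeffs = np.zeros(N_SPLINES)
    coeffs[i] = 1.0
    return BSpline(knots, coeffs, DEGREE)

r = np.linspace(0.0, R_MAX, 2001)
print(len(knots), basis_function(5)(r).max())   # 69 knots; each basis function is bounded by 1
```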
The next-level approximation is the Brueckner orbitals (BO) method which incorporates certain many-body effects beyond the DHF treatment. BOs qualitatively describe the phenomenon where the valence electron charge causes the atomic core to become polarized, thus inducing a dipole and higher-rank multipolar moments within the core. Consequently, the redistributed charges within the core attract the valence electron. Compared to the DHF approximation, the BO method improves the theory-experiment agreement for valence electron removal energies. In our work, the BO basis set is obtained by rotating the DHF set using the second-order self-energy operator, see Ref. [14] for further details. A further improvement upon the DHF and BO methods is the random phase approximation (RPA), which is a linear response theory implemented within the mean-field framework [22; 23]. The primary function of RPA is to account for the screening of externally applied fields by the core electrons. The main advantages of the RPA formalism are that RPA is an all-order method and the RPA transition amplitudes are gauge-independent. For more details about our finite-basis-set implementation of RPA, the reader is referred to Ref. [14]. The RPA(BO) approach incorporates both the core polarization and the core screening effects. The quality of the RPA(DHF) and RPA(BO) dipole matrix elements is substantially improved over the DHF or BO methods, see, again Ref. [14]. To proceed beyond RPA(DHF) and RPA(BO), we employ the all-order relativistic many-body coupled-cluster (CC) approach, which systematically accounts for correlation contributions at each level of approximation. In our recent work [14], \(E1\) matrix elements between the \(6,7S_{1/2}\) and \(nP_{1/2,3/2}\) states for \(6\leq n\leq 12\) were computed using the CCSDpTvT method. This method incorporates single (S), double (D), and triple (T) excitation from the reference DHF state [14] in the CC formalism. The "pTvT" qualifier in CCSDpTvT refers to a perturbative treatment of core triples and a full treatment of valence triples. In addition to an accurate treatment of the many-body effects, the CCSDpTvT \(E1\) matrix elements values include scaling, dressing, Breit, and QED corrections [14]. These CCSDpTvT values are the most complete theoretical determinations of the \(E1\) matrix elements in Cs to date. The results are complete through the fifth order of many-body perturbation theory and include some chains of topologically-similar diagrams to all orders. As such, the CCSDpTvT method is the most theoretically complete applied to correlation effects in Cs so far. Since the finite basis set used in Ref. [14] is identical to that employed in this work, we identify the CCSDpTvT many-body states with \(6\leq n\leq 12\) with the physical states and use the CCSDpTvT matrix elements to compute the main contribution to transition polarizabilities. ## IV Numerical results and discussions We have provided an overview of the numerical approaches employed in our calculations of the transition polarizabilities \(\alpha\) and \(\beta\). In what follows, we present our numerical results and estimates of uncertainties. In Table 1, we compile our numerical results for \(\alpha\) and \(\beta\). In addition to the DHF, BO, RPA(DHF), RPA(BO), and CCSDpTvT results, we list our values obtained from CC calculations of varying complexity. 
In particular, in the SD approximation, only linear singles and doubles are included, the CCSD approximation additionally incorporates nonlinear effects, the CCSDvT approximation includes full valence triples on top of CCSD, and finally CCSDpTvT(scaled) indicates a CCSDpTvT value rescaled using experimental values for the removal energies. See Ref. [14] for further details. The final values for \(\alpha\) and \(\beta\) are obtained by adding to the scaled CCSDpTvT values the Breit, QED, and basis extrapolation contributions to the \(E1\) matrix elements, as mentioned in Sec. III. Note that the different CC approximations only apply to the \(E1\) matrix elements in Eqs. (23). For the energy denominators, we have used the DHF, BO, RPA(DHF), RPA(BO) values in the corresponding approximations and experimental values for all CC approximations. Note also that the CC approximations were only used to compute the main terms, as mentioned in Sec. III. The cv and tail terms are only calculated up to RPA(BO), since (i) their contributions are much smaller than those of the main terms, (ii) full CCSDpTvT calculations are expensive, and (iii) the disparity between the high-\(n\) states in the tail terms and their physical counterparts is significant. The semi-empirical result for \(\beta\) is obtained by dividing our theoretically determined result for \(\alpha\) by the experimentally measured ratio \(\alpha/\beta=9.905(11)\)[13] (see below for further details on this point). As shown in Table 1, the BO correction has a larger impact on \(\beta\) than on \(\alpha\), differing by 9.6% from the DHF value of \(\beta\), while being only 2.6% away from the DHF value for \(\alpha\). In contrast, the RPA contribution appears to be much more important for \(\alpha\) than \(\beta\), with the RPA(DHF) and RPA(BO) values for \(\alpha\) differing from the DHF and BO values by around 20%, whereas the RPA(DHF) and RPA(BO) values for \(\beta\) are only 0.1-0.5% away from the corresponding DHF and BO values. SD shifts \(\alpha\) by 0.35% away from its RPA(BO) value while the SD change for \(\beta\) is at 2.2%. CCSD moves the SD value for \(\alpha\) by 2.5% and that for \(\beta\) by 1%. Adding valence triples amounts to a 4.9% shifts for \(\alpha\) and a 0.6% shift for \(\beta\) while perturbative core triples give rise to a 0.07% shift for \(\alpha\) and a 0.2% shift for \(\beta\). Semi-empirical scaling of removal energies changes \(\alpha\) by 0.2% and \(\beta\) by 0.4%, while Breit, QED, and basis extrapolation corrections are at the level of 0.03% for \(\alpha\) and 0.8% for \(\beta\). In Table 2, the cv, main, and tail contributions to \(\alpha\) and \(\beta\) in different approximations are presented explicitly. This allows us to determine the central values for our computations and estimate our uncertainties. The uncertainties in \(\alpha_{\rm main}\) and \(\beta_{\rm main}\) may be estimated by considering the convergence patterns of these terms across various approximations. Indeed, Fig. 2 shows the diminishing of contributions from terms of higher and higher order in many-body perturbation theory: the RPA contributions are large, the additional effects of nonlinear core singles and doubles and valence triples, although significantly smaller, are still substantial, whereas additional core triples and scaling effects are generally small. As a result of this observation, we estimate our uncertainty \(\sigma_{CC}\) as half the difference between the CCSDpTvT and scaled CCSDpTvT values. 
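Combined in quadrature with the Breit, QED, and basis-extrapolation uncertainty defined just below, this prescription reproduces the main-term uncertainties quoted in Table 2, as the short check illustrates:

```python
import math

def main_uncertainty(ccsdptvt: float, scaled: float, final: float) -> float:
    """sigma_CC = |CCSDpTvT - scaled|/2 and sigma_(Breit+QED+basis) = |final - scaled|/2,
    added in quadrature."""
    return math.hypot(abs(ccsdptvt - scaled) / 2.0, abs(final - scaled) / 2.0)

# Values taken from Table 2 (atomic units)
print(main_uncertainty(-265.93, -266.31, -266.39))  # alpha_main: ~0.19 (quoted as 0.20)
print(main_uncertainty(27.297, 27.200, 26.996))     # beta_main:  ~0.113 (quoted as 0.113)
```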
This uncertainty represents missing contributions from higher-order CC diagrams. The uncertainty \(\sigma_{\rm Breit+QED+basis}\) from the Breit, QED, and basis extrapolation contributions are assumed, conservatively, to be half the difference between the final and scaled CCSDpTvT values. The total uncertainties \(\sigma_{\rm main}\) in \(\alpha_{\rm main}\) and \(\beta_{\rm main}\) are obtained by adding \(\sigma_{CC}\) and \(\sigma_{\rm Breit+QED+basis}\) in quadrature. The contributions and uncertainties of the tail terms may be estimated by considering how much the DHF, BO, RPA(DHF), and RPA(BO) values for \(\alpha_{\rm main}\) and \(\beta_{\rm main}\) differ from the final CCSDpTvT results of these main terms. We observe that the RPA(DHF) and RPA(BO) approximations generally give better agreement with the final values, as to be expected since RPA is known to be responsible for a large portion of the electron correlation effects. As a result, we assume that the contributions from the tail terms are the average of the corresponding RPA(DHF) and RPA(BO) values, and that the uncertainties \(\sigma_{\rm tail}\) are half of the corresponding RPA(DHF) and RPA(BO) differences. Finally, since the cv terms are the same in the RPA(DHF) and RPA(BO) approaches, we take these values as our estimates for \(\alpha_{\rm cv}\) and \(\beta_{\rm cv}\). The uncertainties \(\sigma_{\rm cv}\) in these contributions are assumed to be half the corresponding BO and RPA(BO) differences. The total uncertainties in our evaluations of \(\alpha\) and \(\beta\) are obtained by adding \(\sigma_{\rm cv}\), \(\sigma_{\rm main}\), and \(\sigma_{\rm tail}\) in quadrature. In Table 3, the main term of \(\alpha\) is further broken down into contributions from intermediate states \(n_{a}P_{J}\) with different \(n_{a}\). This facilitates a detailed comparison between the result of this work and that of Ref. [10]. We first remind the reader that Ref. [10] estimated the contri \begin{table} \begin{tabular}{l r r r r r} & \(\alpha_{\rm main}\) & \(\alpha_{\rm cv}\) & \(\alpha_{\rm tail}\) & \(\beta_{\rm main}\) & \(\beta_{\rm cv}\) & \(\beta_{\rm tail}\) \\ \hline DHF & \(-348.24\) & 0.20 & \(-0.46\) & 29.221 & 0.001 & 0.056 \\ BO & \(-338.80\) & 0.21 & \(-0.94\) & 26.379 & 0.002 & 0.103 \\ RPA(DHF) & \(-276.33\) & 0.40 & \(-0.24\) & 29.309 & 0.003 & 0.006 \\ RPA(BO) & \(-273.69\) & 0.40 & \(-0.39\) & 26.319 & 0.003 & 0.042 \\ SD & \(-272.81\) & & & 26.907 & \\ CCSDpTvT & \(-279.63\) & & & 27.187 & \\ CCSDpTvT & \(-266.12\) & & & 27.343 & \\ CCSDpTvT & \(-265.93\) & & & 27.297 & \\ CCSDpTvT(scaled) & \(-266.31\) & & & 27.200 & \\ \hline Final & \(-266.39\) & 0.40 & \(-0.32\) & 26.996 & 0.003 & 0.024 \\ Uncertainty & 0.20 & 0.10 & 0.09 & 0.113 & 0.001 & 0.018 \\ \end{tabular} \end{table} Table 2: The behaviors of the core-valence, main, and tail contributions to \(\alpha\) and \(\beta\) across several approximations employed in this work. See the caption of Table 1 for an explanation of the notation. All values are given in atomic units. 
\begin{table} \begin{tabular}{l r r} & \(\alpha\) & \(\beta\) \\ \hline \multicolumn{3}{c}{This work} \\ DHF & \(-348.50\) & 29.278 \\ BO & \(-339.53\) & 26.483 \\ RPA(DHF) & \(-276.17\) & 29.318 \\ RPA(BO) & \(-273.68\) & 26.364 \\ SD & \(-272.73\) & 26.934 \\ CCSD & \(-279.55\) & 27.214 \\ CCSDvT & \(-266.04\) & 27.370 \\ CCSDpTvT & \(-265.85\) & 27.324 \\ CCSDpTvT (scaled) & \(-266.23\) & 27.227 \\ \hline Final & \(-266.31(23)\) & 27.023(114) \\ **Semi-empirical (\(\alpha/\beta\))** & & **26.887(38)** \\ \hline \multicolumn{3}{c}{Other works} \\ Sah20 [24] (Sum over states \(\alpha\)) & \(-268.65(27)\) & 27.12(4) \\ Toh19 [10] (Sum over states \(\alpha\)) & \(-268.81(30)\) & 27.139(42) \\ Dzu09 [9] (Sum over states \(\alpha\)) & 27.15(11) \\ Vas02 [8] (Sum over states \(\alpha\)) & \(-269.7(1.1)\) & 27.22(11) \\ Dzu00 [12] (\(M1\) calculation) & & 26.957(51) \\ Ben99 [25] (\(M1/\beta\) experiment) & & 27.024(80) \\ Saf99 [7] (Sum over states \(\alpha\)) & \(-268.6(2.2)\) & 27.11(22) \\ Saf99 [7] (Sum over states \(\beta\)) & & 27.16 \\ Dzu97 [6] (Sum over states \(\alpha\)) & \(-269.0(1.3)\) & 27.15(13) \\ Blu92 [16] (Sum over states \(\beta\)) & & 27.0(2) \\ \end{tabular} \end{table} Table 1: Numerical results for the scalar and vector \(6S_{1/2}\to 7S_{1/2}\) transition polarizabilities in \({}^{133}\)Cs in the Dirac-Hartree-Fock (DHF) approximation, the Brueckner orbitals (BO) approximation, the random-phase approximation (RPA) implemented on a DHF basis set, RPA implemented on a BO basis set, the coupled-cluster (CC) approximation with only linear singles and doubles (SD), the CC approximation with nonlinear treatment singles and doubles (CCSD), the CC approximation with linear and nonlinear singles and doubles, perturbative core triples, and valence triples (CCSDpTvT). The CCSDpTvT(scaled) result is obtained by using experimental results for removal energies to rescale single and double amplitudes. The final result is obtained by adding to CCSDpTvT(scaled) the Breit, QED, and basis extrapolation contributions. The semi-empirical result for \(\beta\) (in bold) is obtained by combining the theoretically determined result for \(\alpha\) with the experimentally measured ratio \(\alpha/\beta=9.905(11)\)[13]. All values are given in atomic units. butions from \(6,7P_{J}\) by using experimental values for the \(E1\) matrix elements between \(6,7S_{1/2}\) and these \(P\) states. In contrast, we estimate the contributions from \(6,7P_{J}\) by using theoretical CCSDpTvT values for the matrix elements from Ref. [14]. For \(6,7P_{1/2}\), the two approaches agree quite well, reflecting the fact that the theoretical CCSDpTvT matrix elements \(\langle 6,7S_{1/2}||D||6,7P_{1/2}\rangle\) from Ref. [14] are in good agreement with experiments. On the other hand, our estimates for the contributions from \(6,7P_{3/2}\) disagree quite substantially with those from Ref. [10], due to tensions between theoretical values for \(\langle 6,7S_{1/2}||D||6,7P_{3/2}\rangle\) from Ref. [14] and experimental results. Another noticeable feature of Table 3 is the significant difference between our values for the contributions from \(n_{a}P_{J}\) with \(n_{a}=8,\ldots,12\) and those of Ref. [10]. This discrepancy is due to the fact that Ref. [10] used for matrix elements \(\langle 6,7S_{1/2}||D||n_{a}P_{J}\rangle\) (\(n_{a}=8,\ldots,12\)) theoretical values from Ref. [26], which computed them in the SDpT approximation. In Table 3, we present our SD values for the contributions with \(n_{a}=8,\ldots,12\). 
One observes that our SD values generally agree with those used by Ref. [10], with the small deviations coming from the pT contributions and the fact that Ref. [26] used a different basis from ours, with less accurate mapping to the physical states. We next point out that Ref. [10] estimated the tail contribution by first computing \(\alpha_{\rm tail}\) in the DHF approximation, then rescaling this DHF result based on the fact that the DHF values for contributions from \(n_{a}=8,\ldots,12\) are \(\sim 30\%\) higher than the more accurate SDpT values. In this work, we adopt a slightly different method mentioned earlier, where we estimate \(\alpha_{\rm tail}\) by averaging the Figure 2: Convergence patterns for the main contributions \(\alpha_{\rm main}\) and \(\beta_{\rm main}\) to the second order scalar atomic polarizabilities with increasing complexity of the approximations for electron correlation effects. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \(n_{a}\) & Toh19 [10] & SD & Final & Difference \\ \hline & \(6\) & \(-32.54\) & \(-32.44\) & \(9.94[-2]\) \\ & \(7\) & \(-37.35\) & \(-36.84\) & \(5.04[-1]\) \\ & \(8\) & \(-5.46[-2]\) & \(-5.500[-2]\) & \(-4.551[-1]\) & \(8.92[-2]\) \\ \(n_{a}P_{1/2}\) & \(9\) & \(-7.99[-2]\) & \(-7.824[-2]\) & \(-5.611[-2]\) & \(2.41[-2]\) \\ & \(10\) & \(-2.30[-2]\) & \(-2.239[-2]\) & \(-1.374[-2]\) & \(9.48[-3]\) \\ & \(11\) & \(-9.31[-3]\) & \(-9.036[-3]\) & \(-4.839[-3]\) & \(4.38[-3]\) \\ & \(12\) & \(-4.61[-3]\) & \(-4.472[-3]\) & \(-2.007[-3]\) & \(2.81[-3]\) \\ \hline & \(6\) & \(-92.93\) & \(-92.68\) & \(2.55[-1]\) \\ & \(7\) & \(-102.1\) & \(-101.1\) & \(1.00\) \\ & \(8\) & \(-2.43\) & \(-2.461\) & \(-2.215\) & \(2.13[-1]\) \\ \(n_{a}P_{3/2}\) & \(9\) & \(-4.69[-1]\) & \(-4.685[-1]\) & \(-4.042[-1]\) & \(6.51[-2]\) \\ & \(10\) & \(-1.65[-1]\) & \(-1.650[-1]\) & \(-1.372[-1]\) & \(2.81[-2]\) \\ & \(11\) & \(-7.79[-2]\) & \(-7.774[-2]\) & \(-6.375[-2]\) & \(1.41[-2]\) \\ & \(12\) & \(-4.34[-2]\) & \(-4.329[-2]\) & \(-3.489[-2]\) & \(8.53[-3]\) \\ \hline Main (6, 7) & \(-264.86\) & \(-263.00\) & \(1.86\) \\ Main (8–12) & \(-3.847\) & \(-3.387\) & \(0.46\) \\ Core-valence & \(0.2\) & \(0.40\) & \(0.20\) \\ Tail & \(-0.30\) & \(-0.32\) & \(-0.02\) \\ \hline Total & \(-268.81\) & \(-266.31\) & \(2.50\) \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of individual contributions to the scalar transition polarizability \(\alpha\) from intermediate states \(n_{a}P_{J}\) with \(n_{a}=6,\ldots,12\), as well as core-valence and tail terms, as computed by using the matrix elements provided by Ref. [10] and by us. The notation \(x[y]\) stands for \(x\times 10^{y}\). See the caption of Table 1 for an explanation of other notations. All values are given in atomic units. corresponding RPA(DHF) and RPA(BO) values. This approach stems from the observation that for individual contributions to \(\alpha_{\rm main}\) from \(n_{a}=8,\ldots,12\), the average of our RPA(DHF) and RPA(BO) values agree well with the final CCSDpTvT results. Reassuringly, our final value for \(\alpha_{\rm tail}\) is in good agreement with that of Ref. [10]. We note also that while our DHF value for \(\alpha_{\rm cv}\) agrees with that of Ref. [10], we choose to estimate this term using the RPA(DHF) and RPA(BO) methods, since these are more complete theoretical treatments. From Table 3, the origin of the difference between our estimate for \(\alpha\) and that of Ref. [10] is also clear. 
Out of the total disagreement of 2.50 a.u., 1.86 (74%) comes from the disagreement between experimental and theoretical values for \(\langle 6,7S_{1/2}|D||6,7P_{J}\rangle\), 0.46 (18%) originates from our use of the CCSDpTvT instead of SDpT values for the main contributions with \(n_{a}=8,\ldots,12\), and the remaining 0.20 (8%) comes from the cv contribution. We close by noting that, as may be observed from the lower panel of Fig. 2, the computation of \(\beta_{\rm main}\) does not converge as well with increasingly complex approximations as that for \(\alpha_{\rm main}\) (upper panel of Fig. 2). Indeed, whereas the final uncertainty in \(\alpha_{\rm main}\) is at 0.075%, the final uncertainty in \(\beta_{\rm main}\) is about six times worse, at 0.42%. This may be understood by noting that in \(\beta\), contributions from the \(nP_{3/2}\) intermediate states add with an opposite sign to those from \(nP_{1/2}\), whereas in \(\alpha\), contributions from \(nP_{1/2}\) and \(nP_{3/2}\) add with the same sign, due to the prefactor \((-1)^{J_{a}-J_{i}}\) in Eq. (23a). Since the \(nP_{1/2}\) and \(nP_{3/2}\) states are degenerate in the nonrelativistic limit, \(\beta\) is nonzero solely due to relativistic effects and is thus suppressed compared to \(\alpha\). This may also be understood from the observation that the matrix element of a rank-1 tensor (the vector polarizability) between the \(L=0\) states (the \(S\) states in the nonrelativistic limit) vanishes due to the angular selection rules, while the same matrix element between the \(S_{1/2}\) (\(J=1/2\)) states does not. The cancellation of terms and the resulting suppression of \(\beta\) render SoS computations of the vector polarizability less reliable than those for \(\alpha\). An improved evaluation of \(\beta\) involves, as in previous works, combining our theoretically determined value of \(\alpha=-266.30(21)\) with the experimentally measured ratio [13]\(\alpha/\beta=9.905(11)\) to obtain \(\beta=26.887(38)\). This semi-empirical value (in bold) for \(\beta\) is also presented in Table 1. It differs from the value of \(\beta=27.139(42)\) of Ref. [10] by 0.94% or 4.4\(\sigma\) while is only 0.26% or 1.1\(\sigma\) away from the \(M1/\beta\) value of \(\beta=26.957(51)\) of Ref. [12]. A comparison between our new value for \(\beta\) with previous results is presented in Fig. 1. We conclude that our determination of \(\beta\) brings the two alternative approaches (\(\alpha/\beta\) and \(M1/\beta\)) into an essential agreement. Finally, a weighted average of our value for \(\beta\) and that of Ref. [12] results in \[\beta=26.912(30)\,.\] This is the most accurate determination of the vector transition polarizability in Cs to date. Since these two values (ours and that of Ref. [12]) were obtained using different methods, potential cross-correlation effects are anticipated to be suppressed when taking the weighted average. Note that taking weighted average over all the values in Fig. 1 would be incorrect, as all the values on the left panel are statistically correlated. ## Acknowledgements We thank D. Elliott for a discussion. This work was supported in part by the U.S. National Science Foundation grants PHY-1912465 and PHY-2207546, by the Sara Louise Hartman endowed professorship in Physics, and by the Center for Fundamental Physics at Northwestern University.
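As a numerical cross-check of the weighted average quoted above, the short Python sketch below combines the semi-empirical value 26.887(38) with the \(M1/\beta\) value 26.957(51) of Ref. [12] by inverse-variance weighting; the script is purely illustrative and not part of the original analysis.

```python
def weighted_average(values, sigmas):
    """Inverse-variance weighted mean and its standard uncertainty."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return mean, sigma

# beta from this work (semi-empirical alpha/beta route) and from Ref. [12] (M1 route)
mean, sigma = weighted_average([26.887, 26.957], [0.038, 0.051])
print(f"beta = {mean:.3f}({sigma * 1000:.0f})")  # beta = 26.912(30)
```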
2304.07327
OpenAssistant Conversations -- Democratizing Large Language Model Alignment
Aligning large language models (LLMs) with human preferences has proven to drastically improve usability and has driven rapid adoption as demonstrated by ChatGPT. Alignment techniques such as supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) greatly reduce the required skill and domain knowledge to effectively harness the capabilities of LLMs, increasing their accessibility and utility across various domains. However, state-of-the-art alignment techniques like RLHF rely on high-quality human feedback data, which is expensive to create and often remains proprietary. In an effort to democratize research on large-scale alignment, we release OpenAssistant Conversations, a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 complete and fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers. Models trained on OpenAssistant Conversations show consistent improvements on standard benchmarks over respective base models. We release our code and data under a fully permissive licence.
Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, Alexander Mattick
2023-04-14T18:01:29Z
http://arxiv.org/abs/2304.07327v2
# OpenAssistant Conversations - Democratizing Large Language Model Alignment ###### Abstract Aligning large language models (LLMs) with human preferences has proven to drastically improve usability and has driven rapid adoption as demonstrated by ChatGPT. Alignment techniques such as supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) greatly reduce the required skill and domain knowledge to effectively harness the capabilities of LLMs, increasing their accessibility and utility across various domains. However, state-of-the-art alignment techniques like RLHF rely on high-quality human feedback data, which is expensive to create and often remains proprietary. In an effort to democratize research on large-scale alignment, we release OpenAssistant Conversations, a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages, annotated with 461,292 quality ratings. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.2 To demonstrate the OpenAssistant Conversations dataset's effectiveness, we present OpenAssistant, the first fully open-source large-scale instruction-tuned model to be trained on human data. A preference study revealed that OpenAssistant replies are comparably preferred to GPT-3.5-turbo (ChatGPT) with a relative winrate of 48.3% vs. 51.7% respectively. We release our code3 and data4 under fully permissive licenses. ## 1 Introduction Artificial intelligence (AI), particularly in the field of natural language processing, has witnessed rapid progress in recent years. Major advancements are primarily driven by a straightforward formula: take a simple transformer-based architecture, increase the number of parameters by enlarging the specified depth and width, and finally, significantly scale the training corpus. Although models have for some time exhibited an extraordinary, super-human ability to fit the training data and generalize based on their trained objective [1; 2], their adoption among the general public has until recently been slow. This can be mainly attributed to misalignment between the model's predictions and the final intended usage. The alignment of AI systems with human values, intentions, and preferences is a vital and intricate challenge within the AI research domain. This refers to the process of ensuring that AI systems can not only successfully optimize the provided surrogate training objectives, but also that their predictions are in line with their intended purpose and adhere to ethical and safety standards provided by humans. One possible solution is assistant-style fine-tuning of language models, which has recently emerged as a promising approach to making large language models more in line with human preferences by generating more desirable outputs [3] and thus making them more useful. A notable instance of such an assistant-style model is ChatGPT, which has gained unprecedented user growth due to its remarkable capabilities demonstrated in a wide range of fields, but also to its ease of use for the end user. Aligning the model's predictions is in this case accomplished by introducing human-generated examples of intended usage and using reinforcement learning from human feedback (RLHF) [4; 5]. In RLHF, the human acts as a teacher and provides feedback in the form of rewards or penalties. In more detail, Ouyang et al. [4] proposed a three-stage procedure to align language models.
* Collect human-generated demonstrations of desired behaviour and train a supervised fine-tuned (SFT) model. * Train a reward model (RM) on human-annotated rankings for different model outputs. * Use the RM as a reward function and fine-tune the SFT model to maximize the reward generated by its responses. This is achieved using the PPO algorithm [6]. It becomes apparent that the benefits of all the aforementioned stages are predominantly determined by the quality of the data used [7]. Despite this, availability of large-scale human feedback datasets for the open research community remains scarce. Most openly accessible datasets consist of synthetic instruction data automatically generated by querying language models [8; 9; 10; 11; 12]. Unfortunately, these datasets are limited with respect to their complexity, creativity, and quality, as they rely on a pre-specified list of possible instruction types. Without sufficiently broad and high-quality data, even models with substantial size and pre-training would prove inadequate for building capable, helpful, and harmless AI assistants. Research in this area has predominantly been confined to a select few research labs with access to the required resources to engage in large-scale training and data collection. This monopolization of access to quality data undermines the potential for inclusive and diverse research endeavours, particularly in relation to alignment challenges, which arguably constitute some of the most crucial research areas of our time. In an effort to democratize research on aligning large language models, we introduce and release the OpenAssistant Conversations dataset. This dataset is the culmination of an extensive open- and crowd-sourcing initiative, and its release to the research community seeks to promote more inclusive research in this highly-influential domain. We provide a comprehensive analysis of the dataset, assessing ethical implications and safety considerations. We also fine-tune and release several assistant and preference models to further advance open access and research in this area. This transparency allows for iterative improvements on the released artifacts, fostering a more collaborative and inclusive research environment. Our belief is that our work makes a noteworthy contribution towards creating a research landscape that is more inclusive and democratized, thereby providing opportunities to researchers from diverse backgrounds. In the following sections, we delve into the intricacies of the OpenAssistant Conversations dataset and discuss its implications for the alignment of large language models and for society at large. ## 2 Data Format The basic data structure is a _Conversation Tree (CT)_ with nodes representing written messages in a conversation. A CT's root node represents an initial prompt, given by the prompter. To avoid confusion, we call the roles of the conversation _prompter_ and _assistant_. This allows us to reserve the term _user_ for the human contributors. Both the prompter and assistant roles can be fulfilled by either a human user or a machine. Every tree node is labelled by its role, and can have multiple children of the opposite role, each of which represents a separate next step in the conversation. A path from the root to any node in the tree (including the root itself) represents a valid conversation with prompter and assistant taking turns and is called a _thread_.
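To make the structure above concrete, the following minimal Python sketch models a message node with a role, children of the opposite role, an optional rank, and labels, and enumerates all threads (root-to-node paths). The field names and helper functions are illustrative assumptions, not the schema of the released dataset.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MessageNode:
    role: str                                      # "prompter" or "assistant"
    text: str
    rank: Optional[int] = None                     # only meaningful for assistant replies
    labels: dict = field(default_factory=dict)     # e.g. {"quality": 4, "spam": False}
    children: List["MessageNode"] = field(default_factory=list)

    def reply(self, text: str, **kwargs) -> "MessageNode":
        """Attach a child message with the opposite role and return it."""
        other = "assistant" if self.role == "prompter" else "prompter"
        child = MessageNode(role=other, text=text, **kwargs)
        self.children.append(child)
        return child

def threads(root: MessageNode):
    """Yield every root-to-node path; each path is a valid conversation thread."""
    stack = [(root, [root])]
    while stack:
        node, path = stack.pop()
        yield path
        for child in node.children:
            stack.append((child, path + [child]))

# Tiny example tree: one prompt with two ranked assistant replies.
root = MessageNode(role="prompter", text="How do conversation trees work?")
root.reply("Each node is a message; replies branch the tree.", rank=0)
root.reply("A thread is any path from the root to a node.", rank=1)
print(len(list(threads(root))))   # 3 threads: the prompt alone plus two single-reply threads
```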
Tree nodes are annotated with additional data such as user-provided labels, and with metadata such as collection timestamp and indicated language. Each _assistant_ node further has an associated rank which orders it relative to the other replies to the same parent prompt, according to user preferences. ## 3 Data Collection The OpenAssistant Conversations dataset is a comprehensive collection of conversational data that was obtained through a crowdsourcing effort involving more than 13,000 volunteers. The data was collected using a web-app interface5, which facilitated the process by dividing it into five separate steps: _prompting_, _labelling prompts_, _adding reply messages as prompter or assistant_, _labelling replies_, and _ranking assistant replies_. The dataset was curated with content moderation and spam filtering as key components of the annotation pipeline, ensuring high quality and safety standards. Footnote 5: Reachable at [https://open-assistant.io/](https://open-assistant.io/) Volunteers completed over 625,000 tasks in total, resulting in the collection of over 10,000 fully annotated and filtered Conversation Trees. We hope the resulting dataset will be an important resource for researchers studying natural language processing and machine learning, as it allows for the development and testing of new algorithms and models for conversational AI. By providing such a large and diverse dataset, the OpenAssistant Conversations dataset opens up new avenues of research in the field, enabling researchers to explore the complexities of human language and interactions in ways that were not possible before [13]. Example User Interface (UI) displays of the data collection platform can be found in Appendix B. In the following sections, we provide more details regarding the various aspects of the data collection pipeline. ### Single-Step Collection The process of data collection in this study is structured to be both efficient and effective by breaking the work down into single units and advancing multiple conversation trees one step at a time. This approach minimizes data loss due to user attrition and ensures that every unit of work is captured for utilization. Figure 1: An example Conversation Tree (CT) of depth 4 containing 12 messages. Any path from the root prompt to a node is a valid thread. The users are presented with a range of task types, either by choice or through random sampling (weighted according to current requirements). The task types include creating prompts, replying as an assistant, replying as a prompter, labeling prompts or replies, and ranking prompter or assistant replies. Create a prompt. Users are required to write an initial prompt that forms the root of a new conversation tree. As this task is highly popular among users, a lottery system is employed to manage the selection of new prompts, with only a fixed number of prompts being chosen for continuation at any given moment. This method serves to regulate the influx of new prompts and maintain a balanced distribution of tasks. Reply as assistant. Replying as an assistant is a more labor-intensive task that requires users to carefully consider their responses and often engage in external research to provide a helpful and relevant answer to the prompter's request. This task type, despite its demanding nature, has been reported to be the most enjoyable by many users due to the diverse array of topics covered.
To account for the increased effort required for this task, a reward system has been implemented to incentivize users to participate. See Figure 8 for a UI preview. Reply as prompter. The task of replying as a prompter, on the other hand, does not impose strict quality requirements but instead emphasizes the importance of diversity to accommodate various use-cases. Examples of prompter replies may include asking for clarification, modifying the original intent, posing a follow-up question, or changing the direction of the conversation altogether. Label a prompt or reply. Users are presented with a message from the database along with the preceding conversation thread (if available) and are asked to categorize the message according to three dimensions: spam detection, guideline adherence, and quality. For spam detection, users assess whether the message is unsuitable for inclusion in the dataset, such as instances of obvious spam or trolling. Messages flagged as spam by multiple users are automatically removed from the dataset. Guideline adherence is evaluated through a set of labels that determines whether the contribution aligns with the established guidelines (see Figure 6). These labels encompass the message being in a language other than the specified one, containing personally identifiable information, hate speech, sexual content, or being deemed inappropriate. Messages labelled in this manner are subsequently reviewed by human moderators. Quality labels require users to rate the message on a five-point Likert scale across dimensions such as quality, creativity, humorousness, politeness, and harmlessness. These labels are stored for later analysis and application. Notably, users can voluntarily assign these labels (as well as spam & guideline adherence labels) to any message within the system, even as part of another task, as an additional contribution. Rank assistant replies. Users are presented with two or more responses to the same parent message and asked to rank them in order of preference. This allows for a comparative analysis of the various responses and helps identify the most effective and engaging replies (Figure 7). In summary, this data collection methodology effectively divides work into single units, minimizes data loss due to user attrition, and captures valuable information for future analysis and application. By offering users a diverse range of task types, the study encourages active participation and ensures the collection of rich and varied data for a comprehensive understanding of the subject. ### Message Tree State Machine The tree state machine serves as a systematic approach to managing the progression of message trees throughout the data collection process. This method ensures that each tree undergoes a series of states until it reaches completion, beginning with the creation of new trees by randomly sampling from the pool of initial prompts. The various states that a message tree passes through include the _initial prompt review state_, _growing state_, and _end state_, as well as the _aborted low-grade state_ for trees that are deemed unsuitable for inclusion in the dataset. Upon the creation of a new tree, it enters the _initial prompt review state_, where multiple users are tasked with providing labels to assess its quality and suitability. This state plays a crucial role in identifying any potential issues with the initial prompt, such as spam or content that violates the established guidelines.
If the provided labels indicate that the tree contains spam or unsuitable content, it is transitioned to the _aborted low-grade state_ and subsequently removed from the dataset. Conversely, if the tree passes the _initial prompt review state_, it proceeds to the _growing state_. The _growing state_ involves the continuous issuance of tasks to users, such as providing replies, labels, and rankings, to facilitate the development and expansion of the conversation tree. This state is essential for collecting diverse and rich data, as it allows for the accumulation of multiple interactions and the exploration of various conversation paths, given the same initial prompt. The _growing state_ continues until the tree reaches its _end state_, which is defined by a maximum number of messages or other predetermined criteria. Parameters within the data collection platform govern the behaviour of the tree state machine, such as the average number of messages collected for each parent message or the maximum tree depth. These parameters enable researchers to fine-tune the data collection process according to their specific research goals and requirements, ensuring a more targeted and efficient approach to gathering data. Parameters varied during the collection of the dataset. Current settings can be found in Appendix F. In summary, the tree state machine is a structured and systematic method for managing the progression of message trees during the data collection process. By guiding each tree through a series of states, from initial prompt review to growing and reaching its _end state_, the tree state machine ensures the collection of high-quality, diverse, and relevant data. Additionally, the inclusion of platform parameters allows for the customization of the data collection process to align with specific research objectives, further enhancing the effectiveness and utility of this approach. ### Ranking Merging Reinforcement learning from human feedback (RLHF) [14; 15] comprises a set of techniques that all aim to optimize the output distribution of a language model using the preference structure provided by human rankers. To get a preference structure that is well aligned to users, we cannot just rely on the opinions of individual rankers, due to the high variance in human preferences. Since our objective is to collect data for a generally capable digital assistant, every ranking of possible responses is performed by K independent rankers (see Section 3.1). Once this is done, we need to fuse these K individual opinions into one consensus opinion usable in training preference models. We perform this preference fusion by treating it as a ranked-voting problem, whose objective is to maintain the preferences as faithfully as possible. The method chosen for this is known as "ranked pairs" or "Tideman's method" [16]. Simplified, this method creates a sorted list of "winners" according to the strength of the preference of one element over the others. The way the preference strength is measured is by considering all preference pairs in the input votes: For example, if the votes are two times \(A>B>C\) and one time \(B>A>C\), this would mean that the pair \(A>B\) exists two times, while \(A>C\) exists three times. The method then sorts the winners by winning strength, i.e. here \(A>C\) would happen before \(A>B\), and constructs a directed graph using the preferences, i.e. \(A>C\) would become an edge \(A\to C\).
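A minimal, self-contained sketch of this ranked-pairs fusion is given below; it also includes the edge-locking, cycle-skipping, and source-removal steps described in the next sentences. The function name and the tie-breaking rule for equally strong pairs are simplifications made for illustration, not the project's exact implementation.

```python
from itertools import combinations
from collections import defaultdict

def ranked_pairs(rankings):
    """Fuse several individual rankings (lists, best item first) into one consensus ranking."""
    items = sorted({x for r in rankings for x in r})
    # Count how often each ordered pair (a preferred over b) occurs across the rankings.
    pair_counts = defaultdict(int)
    for r in rankings:
        for i, a in enumerate(r):
            for b in r[i + 1:]:
                pair_counts[(a, b)] += 1
    # Keep only winning pairs (a beats b more often than b beats a), strongest first.
    majorities = []
    for a, b in combinations(items, 2):
        if pair_counts[(a, b)] > pair_counts[(b, a)]:
            majorities.append((pair_counts[(a, b)], a, b))
        elif pair_counts[(b, a)] > pair_counts[(a, b)]:
            majorities.append((pair_counts[(b, a)], b, a))
    majorities.sort(reverse=True)          # ties broken arbitrarily for simplicity
    graph = defaultdict(set)               # a -> items that a has been locked in as preferred over

    def reaches(src, dst):
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(graph[node])
        return False

    for _, a, b in majorities:
        if not reaches(b, a):              # lock edge a -> b only if it creates no cycle
            graph[a].add(b)
    # Repeatedly peel off "source" items (not beaten by any remaining item).
    remaining, order = set(items), []
    while remaining:
        sources = sorted(x for x in remaining
                         if not any(x in graph[y] for y in remaining if y != x))
        order.append(sources[0])
        remaining.remove(sources[0])
    return order

# Example from the text: two votes A>B>C and one vote B>A>C.
print(ranked_pairs([["A", "B", "C"], ["A", "B", "C"], ["B", "A", "C"]]))  # ['A', 'B', 'C']
```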
Edges are added one by one according to their weight (higher first), as long as no cycle is produced. If the edge would produce a cycle, it is skipped as the corresponding dominant preferences are already in the graph. The resulting directed acyclic graph can be turned into a preference structure by recursively removing source nodes and adding them to the back of a preference list, since source nodes are, by construction, not preferred over any other item left in the graph. In practice, one can speed up the construction by not explicitly constructing the preference graph and fusing graph construction and destruction. ### Contributor Guidelines To achieve a high degree of quality and consistency across a wide range of contributors, we issue clear and detailed guidelines. A full copy of these guidelines at the present time can be found in Appendix A. Our guidelines follow three main goals: 1. Clarify the meanings, scales, and criteria for assigning labels and rankings during the labelling and ranking tasks, 2. Make assistant responses polite, helpful, concise, friendly, and safety-aware, and 3. Instruct prompts and prompter replies to explore a diverse and challenging set of inputs to the assistant role. In particular, the guidelines establish a framework for safely interacting with an automated assistant by drawing inspiration from the concept of _informed consent_. Rather than categorically denying large classes of requests, we aim to provide the prompter with useful feedback, for example drawing special attention to dangerous activities, elaborating on weaknesses of automated assistants, such as hallucinations, and discouraging and denying requests asking for illegal or highly inappropriate content. In our validation experiments in training assistant models based on OpenAssistant Conversations, we observe a high degree of consistency of the trained models' outputs with our given guidelines. Although guideline adherence is already high in our models after training, our approach is completely compatible with deploying additional safety measures during inference, such as secondary models to filter or modify ill-suited user input. ### Quality Control & Content Moderation We take a multi-pronged approach to quality assurance, with the main pillars being a system of reward points & leaderboards, and manual review of flagged content by human moderators. This maximizes the quality of contributions while effectively utilizing the limited time of the volunteer moderators. In an effort to demonstrate progress and achievement to users, and to encourage high-quality contributions, our system allocates points for the completion of tasks. These points contribute to various leaderboards, including daily, weekly, monthly, and all-time rankings. A level system also exists, wherein higher point accumulation results in elevated levels, reflecting veteran status and engagement. In the future, this system could potentially be developed further to facilitate preferential access to more engaging tasks or similar perks. The distribution of points is contingent upon task type, as certain tasks require greater effort, such as the _reply as assistant_ task (compared to the _create a prompt_ task). A significant portion of points is deferred and reliant on interactions with other users. For instance, a user's assistant reply may gather many additional points if it is subsequently deemed non-spam and highly ranked by other users.
Inversely, points may be reduced or lost for answers that are labeled as spam or down-voted by consensus of other users. Within the moderator section of the website, an alternative leaderboard, designated the _Trollboard_, is exhibited. This leaderboard assesses users based on an aggregate of negative labels, reports, and down-votes received for their contributions. This approach enables human moderators to proactively scrutinize potentially misbehaving users in a comprehensive manner. The Trollboard has proven to be an effective tool in addressing the numerical disparity between users and moderators, maximizing the collective efforts of contributors to identify undesirable contributions. Users further have the option to report messages to moderators for manual review, either via the platform, or directly via communication on a community chat server. Moderators have the ability to delete individual messages, or all messages of a given user, at their own discretion. Deleted messages are retained, but marked as deleted and not exported for training. ## 4 Dataset Composition We release several variants of the OpenAssistant Conversations dataset representing various levels of filtering. The full dataset consists of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages, annotated with 461,292 quality ratings. This includes 8,576 synthetic messages, leaving 152,867 human-submitted messages. Of the 66,497 total conversation trees, we consider 10,968 complete, meaning the full number of messages has been collected and the moderation process for these trees has been concluded. These completed trees contain 92,365 messages. The set of categories for which Likert-scale human labels are collected is Creativity, Quality, Humor, Helpfulness, Violence, and Rudeness. The set of categories for which binary human labels are collected is Language Mismatch, Not Appropriate, Personally Identifiable Information, Hate Speech, and Sexual Content. We additionally release the rank of each assistant message compared to other assistant messages submitted for the same prompt, computed from the preference rankings of several human annotators. Of the 161,443 total messages, 69,614 are assistant replies and 91,829 are user prompts. Related to this, 52,159 conversation trees consist of only a single initial user prompt which has not yet received any assistant replies. The dataset is dominated by English and Spanish messages as illustrated in Figure 2. The prominence of English is expected as a result of the community around OpenAssistant originating in the English-speaking open-source machine learning community. The high quantity of Spanish messages can be attributed to the publicity given to OpenAssistant by prominent figures in the Spanish machine learning community. Figure 3 illustrates how a small number of power users contributed a significant proportion of the dataset. This must be taken into account when considering possible biases in the data. Although significant effort went into preventing responses directly copied from other sources, it is possible that some users utilised automated techniques to enter data and this should also be kept in mind. Figure 2: Relative share of the most frequent languages in the dataset. Figure 3: Distribution of Messages. ## 5 User Demographics and Satisfaction To gain a deeper understanding of the contributors' demographics, a Google Form survey was sent out as an announcement on the project's Discord channel.
The method of recruiting via Discord is biased towards users who are present on the platform and have been active around the time of the announcement. Therefore, we intend to send out e-mails to registered users in the future. Fluency in English can also affect the willingness to participate in the survey. Translations of the survey are planned in languages with higher representations, such as Spanish, to ensure that responses from monolingual users are not missed. The survey consists of 3 parts with questions on demographic information, personal motivation and user satisfaction. Since prompts are received from all over the world in multiple languages, we wanted to make sure that demographic questions, such as levels of completed education [17] are in fact reliable to everyone. We have omitted questions on ethnicity and nationality. Instead, we asked questions about English proficiency, country of origin and the language for which most of their contributions were made. At the time of the release of this paper, a total of 226 participants have answered the survey, with the overwhelming majority, 201 being male and only 10 of the respondents female. Only 5 of our respondents self-identified as non-binary / other, and 10 preferred not to answer. Despite this homogeneity, the respondents do differ in their level of education and motivation10 for contribution. They understand and use artificial intelligence at different levels9 and have different use cases for the technology11. People were in general very happy to have contributed to the project, with 94.25% either agreeing or strongly agreeing with the statement "Overall, I'm glad I have contributed to OpenAssistant.".2 For about 40%, this has been their very first time contributing to a community project. ## 6 Experimental Validation ### Instruction Tuning To evaluate and demonstrate the effectiveness of the OpenAssistant Conversations dataset, we focus on the development and evaluation of fine-tuned language models based on Pythia [2] and LLaMA [1]. Pythia is a state-of-the-art language model with a permissive open-source license, while LLaMA is a powerful language model with a bespoke non-commercial license. Figure 4: Demography of 226 respondents We release a suite of fine-tuned language models, including instruction-tuned Pythia-12B, LLaMA-13B, and LLaMA-30B, which represents our largest model to date. In order to assess the performance of these models, we evaluate the performance of the Pythia-12B model. We have chosen to focus our analysis on this model due to its open-source nature, which makes it widely accessible and applicable to a diverse range of applications. To evaluate the performance of Pythia-12B, we conducted a user preference study comparing its output to that of OpenAI's gpt-3.5-turbo model. As of the time of writing, this study has garnered 348 submissions, amounting to a total of 7042 comparisons. After excluding ties, which account for 16.4% of the total comparisons, we found that Pythia-12B has a win rate of 48.3% (95% confidence interval of \(\pm\) 1.28%, \(N=5,889\)) against gpt-3.5-turbo. This result implies that the answers generated by Pythia-12B are 93.5% as preferable as those produced by gpt-3.5-turbo, indicating that our fine-tuned Pythia model is a strong competitor in the realm of large-scale language models. For more details on the user preference study, we refer to Appendix E. 
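For reference, the quoted uncertainty can be reproduced with a standard normal-approximation confidence interval for a binomial proportion; the short sketch below assumes this is how the interval was computed, which the text does not state explicitly.

```python
import math

win_rate, n = 0.483, 5889       # Pythia-12B win rate (ties excluded) and number of comparisons
se = math.sqrt(win_rate * (1 - win_rate) / n)
half_width = 1.96 * se          # 95% normal-approximation interval
print(f"95% CI half-width: {100 * half_width:.2f} percentage points")        # ~1.28
# ~93.4 with these rounded inputs; the paper quotes 93.5%, presumably from unrounded win rates.
print(f"relative preference vs. gpt-3.5-turbo: {100 * win_rate / (1 - win_rate):.1f}%")
```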
### Preference Modelling In addition to the instruction-tuned models, we also release trained reward models based on Pythia-1.4B and Pythia-12B. These models have been trained on OpenAssistant data using [4] and [18] as guidelines. The utilization of reward models trained on real-world data allows for more accurate and adaptive responses to user input, which is essential for the development of effective and user-friendly AI assistants. We plan to release LLaMA-30B models trained on Reinforcement Learning with Human Feedback (RLHF), as this approach has the potential to yield significant improvements in model performance and adaptability. However, the development and training of RLHF-based models are still ongoing, and further effort is required to ensure the successful integration of this training methodology. ### Spam and Toxicity In the pursuit of understanding the concordance between human and automated toxicity detection, we employ toxicity detection methods based on Detoxify [19] to obtain automated ratings for six distinct categories, classifying whether a message is toxic, obscene, threatening, insulting, attacking a certain identity or sexually explicit. We limit our analysis to those languages that are supported by the toxicity detection method, covering English, Spanish, Russian, French, and Italian. It is worth noting that these languages represent the majority of messages (over 83%). Using automated toxicity ratings, we are able to systematically assess the correlation between these ratings and human-assigned toxicity labels (hate speech, not appropriate, and sexual content). Based on a sample of 115,153 messages, we compute the correlation between automatic and human-annotated toxicity labels, which is visualized in Figure 5. This analysis provides valuable insights into the efficacy of automated toxicity detection in comparison to human judgment. We see a strong correlation between human and automatic labels in at least one element of each row and column of the correlation matrix, suggesting strong agreement between human annotators and off-the-shelf toxicity detection models. These results serve to validate the capabilities and show the limitations of AI-driven toxicity detection and may inform future work in this area. In addition to analyzing the correlation between human-assigned toxicity labels and automated ratings, we extend the application of the Detoxify model to assess the efficacy of the moderation process for the same languages described earlier. To facilitate this analysis, we define two categories of messages: _deleted_ messages, which encompass those that either failed to pass the community moderation process or were subsequently manually removed by moderators, and _retained_ messages, which successfully made it through to the dataset. In order to provide a comprehensive evaluation of the moderation process, we calculated average values for each of the six Detoxify categories for both _deleted_ and _retained_ messages. The values obtained for this analysis are based on a sample of 74,781 messages. It is important to note that we excluded messages in trees that were still incomplete at the time of export, as these messages may still be subject to removal by the moderation process. Our analysis, presented in Table 1, shows that the values for all six toxicity categories are markedly higher for _deleted_ messages compared to _retained_ messages.
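Per-category averages of this kind can be computed with a simple scoring loop along the following lines. This is an illustrative sketch only: it assumes the detoxify package's multilingual model and a hypothetical deleted/retained flag per message; neither the exact model variant nor the aggregation code used for Table 1 is specified in the text.

```python
from collections import defaultdict
from detoxify import Detoxify

# messages: list of (text, deleted) pairs; `deleted` marks messages removed by moderation.
messages = [
    ("Thanks, that explanation was really helpful!", False),
    ("You are all idiots and this project is garbage.", True),
]

model = Detoxify("multilingual")          # covers EN, ES, RU, FR, IT, among others
sums = defaultdict(lambda: defaultdict(float))
counts = defaultdict(int)

for text, deleted in messages:
    group = "deleted" if deleted else "retained"
    scores = model.predict(text)          # dict mapping category name -> score in [0, 1]
    counts[group] += 1
    for category, value in scores.items():
        sums[group][category] += float(value)

for group in sums:
    averages = {c: sums[group][c] / counts[group] for c in sums[group]}
    print(group, {c: round(100 * v, 3) for c, v in averages.items()})  # percentages, as in Table 1
```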
This significant difference demonstrates the effectiveness of the moderation processes in place, as messages removed from the dataset are on average rated as significantly more toxic by the Detoxify model than messages allowed to remain in the dataset. We note that while _deleted_ messages are rated as more toxic than _retained_ messages by the Detoxify model across all categories, the average toxicity values for these messages are still small. This implies toxicity ratings from models like Detoxify alone are not sufficient to determine when messages are unsuitable for inclusion in the dataset. Reasons for deleting non-toxic messages may include a lack of factual accuracy, or poor grammar. Additionally, messages which are children of deleted messages must themselves be deleted even if they appear to be acceptable in isolation. ## 7 Limitations In this section, we discuss the limitations of the dataset and the corresponding implications for the use of the large language models (LLMs) that we train on it. The limitations of our dataset arise mainly from the subjective and cultural biases of the annotators, the uneven distribution of contributions among users, and the possibility of unsafe content. We emphasize that our models should be employed for academic research purposes only and that researchers should exercise caution in evaluating safety and bias when applying these models to downstream tasks. **Subjective and Cultural Biases.** The open nature of our project introduces a unique set of challenges when it comes to controlling for biases within the dataset. Annotators from diverse backgrounds contribute to the dataset, with demographics that are simultaneously heterogeneous and homogeneous. Contributors come from all around the world and have varied interests, but they tend to share certain characteristics such as age and gender. Specifically, 89.1% of the annotators identify as male, with a median age of 26. This demographic profile may inadvertently introduce biases in the dataset, as it is bound to reflect the values, perspectives, and interests of the annotators. **Uneven Distribution of Contributions.** Although the dataset benefits from the contributions of a large number of users, their participation levels differ significantly. More engaged users contribute \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & Toxicity & Obscene & Threat & Insult & Identity Attack & Explicit & N \\ State & & & & & & & \\ \hline Deleted & 4.625\% & 1.965\% & 0.411\% & 2.085\% & 0.651\% & 1.39\% & 3422 \\ Retained & 0.988\% & 0.574\% & 0.102\% & 0.715\% & 0.121\% & 0.177\% & 71359 \\ \hline \hline \end{tabular} \end{table} Table 1: Detoxify outputs across six categories of toxicity, comparing _deleted_ and _retained_ messages. Figure 5: Correlation between human labels and Detoxify outputs for all messages in Detoxify-supported languages. a greater number of annotations, which leads to an overrepresentation of their values and interests in the dataset. Consequently, the dataset may not adequately capture the diverse perspectives that a more balanced distribution of contributions could have provided. **Possibility of Unsafe Content.** While we have implemented measures to detect and remove harmful messages from the dataset, our system is not infallible. It is possible that the dataset still contains unsafe content. We believe that the open nature of the project allows for data filtering to be conducted in a transparent manner, ultimately converging on the highest possible standards. 
Nevertheless, the presence of unsafe content in the dataset raises concerns about the safety of the LLMs trained on it. Given the limitations discussed above, we advocate for the use of our LLMs in academic research contexts only. We strongly encourage researchers to thoroughly investigate the safety and bias of the models before employing them in downstream tasks. It is important to recognize that the released models may exhibit unsafe behavior and are likely susceptible to prompt injection attacks. The alignment of LLMs is a crucial aspect of AI research, and we hope that our contributions can help advance the field of AI alignment. However, it is important to acknowledge that current alignment techniques are not perfect and can even exacerbate certain biases [20]. As such, researchers should exercise caution when using these models and be cognizant of their limitations. We stress the importance of using these models for academic research purposes only and urge researchers to carefully consider the safety and bias implications when applying these models to downstream tasks. Additionally, it is essential to continue refining alignment techniques and advancing the field of AI alignment in order to mitigate these limitations and develop more reliable and robust LLMs. ## 8 Safety and Ethical Implications We presented the OpenAssistant Conversations dataset, an outcome of a crowd-sourcing initiative aimed at promoting research in the area of alignment in LLMs. We recognize that sufficiently powerful language models can have a significant impact on society [21], and therefore we believe it is essential to promote transparency and ethical considerations in their development and deployment. These models are often prone to generating inaccurate information about people, places, or facts, a phenomenon commonly known as 'hallucinations' [22, 23]. LLMs can also produce toxic or hateful content and fail to follow provided user-constraints [24]. Additionally, these models tend to incorporate biases present in their training data, leading to unfair and discriminatory outputs [25]. While methods such as RLHF can mitigate some of these shortcomings, they may exacerbate others [26, 20]. We hope that alignment can fix some of these issues [4], but it is important to acknowledge that achieving alignment is a complex and ongoing challenge. Our team has put in significant effort to ensure that the community has access to an open-source high-quality dataset free of unethical or harmful responses. We believe that creating a safe and respectful environment for our users is paramount, and we encourage them to generate prompts and replies that are not only polite, but also creative and detailed. To ensure the quality of our dataset, we have established strict contributor guidelines that all users must follow. These guidelines are designed to prevent harmful content from being added to our dataset, and to encourage contributors to generate high-quality responses. Previous sections and the contributor guidelines in Appendix A, provide detailed information. Overall, our goal is to create a dataset that is both useful and safe for future research. We believe that it is essential to conduct alignment research at an appropriate pace relative to improving general capabilities. By releasing the OpenAssistant Conversations dataset, we hope to facilitate further research in this area. 
## Acknowledgments and Disclosure of Funding Our greatest thanks go to the many volunteer contributors, of human data, code, moderation, documentation, and community organization. Absent of any financial incentives, this project is a stunning and unprecedented display of global cooperation of humans for the purpose of advancing and democratizing AI research. In addition, several organizations have contributed to this project with resources: Redmond AI provided training compute. Stability AI and Hugging Face provided inference compute. We thank Olivier Dehaene at Hugging Face for close collaboration and personal support. Weights & Biases provided their full MLOps solution to the entire team. LAION provided legal input and acts as the website addressee. We thank Luke Thomas Kaiser for running evaluations on model bias.
2308.00945
Reward Shaping for Building Trustworthy Robots in Sequential Human-Robot Interaction
Trust-aware human-robot interaction (HRI) has received increasing research attention, as trust has been shown to be a crucial factor for effective HRI. Research in trust-aware HRI discovered a dilemma -- maximizing task rewards often leads to decreased human trust, while maximizing human trust would compromise task performance. In this work, we address this dilemma by formulating the HRI process as a two-player Markov game and utilizing the reward-shaping technique to improve human trust while limiting performance loss. Specifically, we show that when the shaping reward is potential-based, the performance loss can be bounded by the potential functions evaluated at the final states of the Markov game. We apply the proposed framework to the experience-based trust model, resulting in a linear program that can be efficiently solved and deployed in real-world applications. We evaluate the proposed framework in a simulation scenario where a human-robot team performs a search-and-rescue mission. The results demonstrate that the proposed framework successfully modifies the robot's optimal policy, enabling it to increase human trust at a minimal task performance cost.
Yaohui Guo, X. Jessie Yang, Cong Shi
2023-08-02T04:57:45Z
http://arxiv.org/abs/2308.00945v1
# Reward Shaping for Building Trustworthy Robots in Sequential Human-Robot Interaction ###### Abstract Trust-aware human-robot interaction (HRI) has received increasing research attention, as trust has been shown to be a crucial factor for effective HRI. Research in trust-aware HRI discovered a dilemma -- maximizing task rewards often leads to decreased human trust, while maximizing human trust would compromise task performance. In this work, we address this dilemma by formulating the HRI process as a two-player Markov game and utilizing the reward-shaping technique to improve human trust while limiting performance loss. Specifically, we show that when the shaping reward is potential-based, the performance loss can be bounded by the potential functions evaluated at the final states of the Markov game. We apply the proposed framework to the experience-based trust model, resulting in a linear program that can be efficiently solved and deployed in real-world applications. We evaluate the proposed framework in a simulation scenario where a human-robot team performs a search-and-rescue mission. The results demonstrate that the proposed framework successfully modifies the robot's optimal policy, enabling it to increase human trust at a minimal task performance cost. ## I Introduction Human trust plays a crucial role in human-robot interaction (HRI) as it mediates the human's reliance on the robot, thus directly affecting the effectiveness of the human-robot team [1, 2, 3]. As a result, researchers have proposed _trust-aware_ human-robot planning [4], which equips a robot with the ability to estimate and anticipate human trust and enables it to strategically plan its actions to foster better cooperation, improve teamwork, and ultimately enhance the overall performance of the human-robot team. Trust-aware HRI explicitly consider human trust in the robot's decision-making processes [4, 5, 6]. Chen et al. [4] modeled the sequential HRI process as a partially observable Markov decision process (POMDP) and incorporated human trust into the state transition function, allowing the robot to optimize its objectives in the interaction process. However, the authors discovered a dilemma -- maximizing task rewards often leads to decreased human trust, while maximizing human trust would compromise task performance. Guo et al. further investigated the dilemma and showed that the robot could intentionally "deceive" its human partner in order to maximize its rewards [5]. To address this issue, they proposed adding a "trust-seeking" term to the reward function to encourage the robot to increase human trust and avoid deception. However, it remains unclear how to design such terms. The problem of balancing the robot's total task reward and human trust is fundamentally a bi-objective optimization problem, as illustrated in figure 1. It has been shown in previous studies that there does not always exist a unique dominant policy for the robot - a policy that can maximize the robot's total reward and human trust at the same time [5, 4]. A prevalent concept in bi-objective optimization involves determining the Pareto front, which represents an optimal balance between conflicting objectives, thereby allowing the system designer to select a suitable policy from this set. However, computing the Pareto front in an MDP can be computationally heavy, and it requires expert knowledge to choose an optimal one from the Pareto front. 
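For illustration, the non-dominated set mentioned here can be extracted from a finite collection of policy values with a few lines of code; the toy numbers below are placeholders, not results from the paper.

```python
def pareto_front(points):
    """Return the non-dominated (task reward, trust reward) pairs: no other point is at
    least as good in both objectives while being a different point."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Toy values V^pi = (total task reward, total trust reward) for a handful of policies.
values = [(10.0, 2.0), (8.0, 5.0), (9.0, 4.0), (6.0, 6.0), (7.0, 3.0)]
print(sorted(pareto_front(values)))   # [(6.0, 6.0), (8.0, 5.0), (9.0, 4.0), (10.0, 2.0)]
```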
Instead, we address the reward design problem in trust-aware sequential HRI by reward-shaping, a method used in reinforcement learning to guide the learning process of an agent by modifying its reward function. We model the sequential HRI process as a two-player Markov game and aim to design a shaping reward that can guide the robot to gain human trust while guaranteeing small performance Fig. 1: This figure illustrates the idea of improving human trust through reward shaping. The horizontal axis is the total task reward while the vertical axis is the total trust-related reward. The shaded area stands for the collection of policies of our interest. A point \(\mathbf{V}^{\pi}\in\mathbb{R}^{2}\) indicates the value a policy \(\pi\) can earn during the interaction process. The policy \(\pi^{*}\) is the optimal policy for earning the task reward as its value attains the maximum along the total task reward axis; policy \(\pi^{t*}\) is the optimal policy for gaining human trust. The bold line is the Pareto front, i.e., the set of policies that yield the best possible trade-offs between human trust and total task rewards. Our goal is to find a policy \(\pi^{\prime}\) such that it seeks to improve human trust at a small cost of task reward. loss. Specifically, we prove that if the shaping reward is a carefully designed potential-based function, the performance loss can be bounded by the potential function evaluated at the final states of the Markov game. We show that applying the proposed framework to the experience-based trust model results in a linear program that can be efficiently solved. To evaluate the proposed method, we simulated a scenario where a human-robot team performs a search-and-rescue (SAR) mission. The results demonstrate that our method successfully modifies the robot's optimal policy, enabling it to increase human trust while satisfying the constraint on performance loss. The contributions of this work include: * Proposing a computational framework for balancing task performance and human trust in trust-aware sequential HRI. * Developing a novel reward-shaping method for designing the reward term with a theoretical guarantee on the performance loss. The remainder of this paper is organized as follows: we first review relevant literature in section II. In section III, we formulate the trust-aware human-robot interaction problem as a two-player Markov game. In section IV, we present the reward-shaping method for designing the trust-seeking term. In section V, we present the simulation scenario used to evaluate the proposed framework. Finally, in section VI, we discuss the results and summarize the work. ## II Related Work In this work, we investigate the problem of reward design for trustworthy robots in trust-aware sequential HRI. Our method is developed upon previous research, including studies on human trust in robots and trust-aware decision-making, as well as reward shaping. We review the relevant literature in this section. It is worth noting that trust has been widely studied in different areas and given different definitions. In particular, in this work, we use the definition given by Lee and See [1], which highlights the uncertainty in HRI: "trust is the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability". ### _Computational Trust Model in HRI_ Previous literature attempted to understand the development and evolution of human trust in an HRI process. 
Empirical studies have been conducted to understand the temporal dynamics of trust when a person interacts with autonomy repeatedly [7, 8, 9, 10, 11, 12, 13]. Three major properties that characterize how a person's trust in autonomy changes due to moment-to-moment interactions with autonomy are summarized in [8, 14], namely _continuity_, _negativity bias_, and _stabilization_. In addition, several computational trust models have been developed, including [8, 15, 16, 17, 18]. Xu and Dudek proposed the online probabilistic trust inference model (OPTIMo) [15], which employs Bayesian networks to estimate human trust based on the autonomous agent's performance and human behavioral signals. Soh et al. [16] proposed a Bayesian model that combines Gaussian processes and recurrent neural networks to predict trust levels for different tasks. Based on the three properties identified in empirical studies, Guo and Yang [8] modeled trust as a Beta random variable, parameterized by positive and negative interaction experiences a person has with a robot. In their following work, they extended the model to the multi-human-multi-robot case by introducing trust propagation between agents [18]. For a detailed review of the computational models, see [3]. ### _Trust-aware Planning_ Endowed with a trust prediction model, a robot can predict how human trust changes due to moment-to-moment interactions and in turn plan its actions accordingly. Existing studies in trust-aware decision-making modeled HRI processes as Markov decision processes (MDPs). Chen et al. [4] proposed the trust-POMDP to let a robot actively estimate and exploit its human teammate's trust. Their human-subject study showed that purely optimizing task performance in a human-robot team may lead to decreased human trust. Losey and Sadigh [19] modeled human-robot interaction as a two-player POMDP where the human does not know the robot's objective. They proposed 4 ways for the robot to formulate the human's perception of the robot's objective and showed the robot will be more communicative if it assumes the human trusts the robot and thus increases the human's involvement. ### _Reward Shaping in Trust-aware HRI_ As pure task-driven rewards lead to low human trust, Chen et al. [4] showed that pure trust-seeking rewards would correct such issues but lead to suboptimal task performance. Guo et al. [5] further examined the robot's optimal policies under different human behavior assumptions and suggested that adding a decaying trust-seeking term to the reward function would encourage the robot to seek high human trust during the initial interaction and maintain high human trust throughout the entire process. The idea of adding an additional term to the task reward is called _reward shaping_, which is a technique initially developed to accelerate the learning speed of an agent in reinforcement learning. Ng et al. [20] demonstrated that in an infinite-horizon MDP, the optimal policy with respect to the shaped reward remains optimal in the original model, provided that the shaping reward can be expressed as a particular combination of potential functions. In the following section, we leverage this potential-based reward-shaping approach to devise a trust reward that yields minimal performance degradation. ## III Problem Formulation In this section, we formulate the sequential HRI problem as a two-player Markov game. In addition, we present the experience-based trust dynamics model, which will be later used in the proposed framework. 
### _Sequential HRI as a Two-player Game_ We consider the scenario where a human \(h\) and a robot \(r\) work as a team collaboratively for \(N\) rounds, as shown in figure 2. We formulate this process as a finite-horizon two-player Markov game \(M=\langle\mathcal{S},\mathcal{A}^{h},\mathcal{A}^{r},T,R^{h},R^{r}\rangle\), where \(\mathcal{S}\) is the set of environment states, \(\mathcal{A}^{h}\) and \(\mathcal{A}^{r}\) are the sets of actions available to the human and the robot, \(T\) is a Markov kernel from \(\mathcal{S}\times\mathcal{A}^{h}\times\mathcal{A}^{r}\) to \(\mathcal{S}\) (\(\sigma\)-algebra omitted) specifying the transitional probability of the process, and \(R^{h}\) and \(R^{r}\) are the reward functions of the two agents. At round \(n\), these two agents observe the current state \(s_{n}\in\mathcal{S}\) of the environment and then take actions. The robot selects an action \(a_{n}^{r}=\pi(s_{n})\in\mathcal{A}^{r}\) according to its policy \(\pi\). Then, the human observes the robot's action and takes action \(a_{n}^{h}=f\left(s_{n},a_{n}^{r}\right)\in\mathcal{A}^{h}\) according to his or her policy \(f\). Their actions transition the environment to a new state \(s_{n+1}\) according to the probability \(T\left(\cdot|s_{n},a_{n}^{h},a_{n}^{r}\right)\), and give the human and the robot rewards \(x_{n}^{h}=R^{h}\left(a_{n}^{h},a_{n}^{r},s_{n},s_{n+1}\right)\) and \(x_{n}^{r}=R^{r}\left(a_{n}^{h},a_{n}^{r},s_{n},s_{n+1}\right)\) respectively. The game starts from the initial state \(s_{1}\) and terminates after \(N\) steps at state \(s_{N+1}\), where the latter depends on the policies \(\pi\) and \(f\). The goal of the robot is to maximize the expected discounted total payoff \[J_{M}(s_{1})=\mathbb{E}\left[\sum_{n=1}^{N}\gamma^{n-1}x_{n}^{r}\right]. \tag{1}\] The optimal policy, possibly not unique, is a policy \(\pi^{*}\) that maximizes \(J_{M}(s_{1})\), i.e., \(\pi^{*}=\arg\max_{\pi\in\Pi}J_{M}(s_{1})\), where \(\Pi\) is the set of available policies. To analyze the proposed model, we define the value function. Given any policy \(\pi\) at round \(n\), we define the value function \(V_{n}^{\pi}:\mathcal{S}\rightarrow\mathbb{R}\) as the expected discounted reward by the end of round \(N\), i.e., \[V_{n}^{\pi}(s)=\mathbb{E}_{\pi}\left[\sum_{i=n}^{N}\gamma^{i-n}x_{i}^{r}\right], \tag{2}\] where \(x_{i}\) is the reward received by the robot during round \(i\) by following the policy \(\pi\) from state \(s\) and thereafter, and the expectation is over the state-transitions taken upon executing \(\pi\). Our approach assumes that the human agent plays a supervisory role by observing the robot's actions before deciding on their own action. This mirrors real-world scenarios where humans can intervene in robots' operations. The cases where the human and robot agents act simultaneously is a special setting where the human policy \(f\) is independent of the robot's current action \(a_{n}^{r}\), i.e., \(f\left(s_{n},a_{n}^{r}\right)=f^{\prime}(s_{n})\) for some \(f^{\prime}:\mathcal{S}\rightarrow\mathcal{A}^{h}\). Furthermore, we assume the game is Markovian. Non-Markovian games can also be formulated in this framework through Markovian embedding, i.e., we can include more variables in the state space to make the transitional probability and the policies Markovian. Finally, given our assumption that the policy \(f\) of the human is stationary, the game is deemed Markovian from the robot's perspective. This allows us to treat the game as an MDP for analysis purposes. 
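The interaction protocol and the objective in Eq. (1) can be made concrete with a small Monte-Carlo sketch. The toy state space, policies, transition, and reward below are illustrative placeholders, not the search-and-rescue scenario used later in the paper.

```python
import random

GAMMA, N = 0.9, 5                 # discount factor and horizon (illustrative values)

def robot_policy(s):              # pi: state -> robot action
    return "recommend" if s < 3 else "wait"

def human_policy(s, a_r):         # f: (state, robot action) -> human action
    return "accept" if a_r == "recommend" else "inspect"

def transition(s, a_h, a_r):      # stochastic next state on a bounded toy state space
    return min(s + (1 if a_h == "accept" else 0) + random.choice([0, 1]), 5)

def robot_reward(a_h, a_r, s, s_next):
    return float(s_next - s)      # toy task reward: progress made this round

def estimate_return(s1, episodes=10_000):
    """Monte-Carlo estimate of J_M(s1) = E[sum_n gamma^(n-1) x_n^r]."""
    total = 0.0
    for _ in range(episodes):
        s, ret = s1, 0.0
        for n in range(1, N + 1):
            a_r = robot_policy(s)            # the robot acts first
            a_h = human_policy(s, a_r)       # the human observes a_r, then acts
            s_next = transition(s, a_h, a_r)
            ret += GAMMA ** (n - 1) * robot_reward(a_h, a_r, s, s_next)
            s = s_next
        total += ret
    return total / episodes

print(estimate_return(s1=0))
```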
### _Experience-based Trust Dynamics_ To model human trust dynamics, we utilize the experience-based trust model developed in [8]. This model is general enough to be applied to various human-robot interaction (HRI) settings since it only requires the robot's performance as input. In the model, trust \(t_{n}\) before the \(n\)th interaction is defined as a random variable that follows a Beta distribution (\(t_{n}\sim\mathrm{Beta}(\alpha_{n},\beta_{n})\)). The two positive shape parameters, \(\alpha_{n}\) and \(\beta_{n}\), both greater than or equal to 1, represent the cumulative positive and negative interaction experience the human has had with the robot, and they are updated by \[(\alpha_{n+1},\beta_{n+1})=(\alpha_{n}+w^{s}p_{n},\beta_{n}+w^{f}(1-p_{n})), \tag{3}\] where \(w^{s}p_{n}\) and \(w^{f}(1-p_{n})\) are the experience gains from the robot's success and failure, and parameters \(w^{s}\) and \(w^{f}\) determine the unit gains. Here, \(p_{n}\in[0,1]\) represents the robot's performance measure on the \(n\)th task. ## IV Reward Shaping for Trustworthy Policy Previous research has shown that the robot may exhibit manipulative behavior to achieve better task rewards at the cost of losing human trust. To prevent the robot from engaging in this manipulative behavior, we introduce a trust reward function \(R^{t}\) such that, at the end of round \(n\), the robot receives the composite reward \(R:=R^{t}+R^{r}\) instead of \(R^{r}\). Such a reward \(R^{t}\) shapes the behavior of the learning agent in a Markov process and is therefore called a _shaping reward_ in the literature. By introducing the shaping reward \(R^{t}\), the original Markov game \(M=\langle\mathcal{S},\mathcal{A}^{h},\mathcal{A}^{r},T,R^{h},R^{r}\rangle\) is transformed into a new game \(M^{\prime}=\langle\mathcal{S},\mathcal{A}^{h},\mathcal{A}^{r},T,R^{h},R\rangle\), where \(R\) is the composite reward. Fig. 2: A general sequential HRI framework. Our research question is how to design the shaping reward \(R^{t}\) to promote trust while minimizing the loss of task performance. In particular, let \(\pi^{\prime*}=\arg\max_{\pi\in\Pi}J_{M^{\prime}}(s_{1})\) be an optimal policy in \(M^{\prime}\). Then the performance loss that the robot suffers amounts to \(V_{1}^{\pi^{*}}(s_{1})-V_{1}^{\pi^{\prime*}}(s_{1})\). Our objective is to select a proper \(R^{t}\) to increase human trust while limiting the performance loss to some \(\epsilon>0\). ### _Bounding the Performance Loss_ Ng et al. showed that, for an infinite-horizon MDP, the optimal policy in \(M^{\prime}\) is also optimal in \(M\) if \(R^{t}\) is a potential-based shaping function [20]. A shaping reward \(R^{t}\) is said to be _potential-based_ if there exists some real-valued function \(\Phi:\mathcal{S}\rightarrow\mathbb{R}\) such that for all \(s\in\mathcal{S}-\{s_{1}\},a^{r}\in\mathcal{A}^{r},a^{h}\in\mathcal{A}^{h},s^{\prime}\in\mathcal{S}\), \[R^{t}\left(a^{r},a^{h},s,s^{\prime}\right)=\gamma\Phi\left(s^{\prime}\right)-\Phi(s).\] This result identifies the shaping rewards that ensure zero performance loss, i.e., \(V_{1}^{\pi^{*}}(s_{1})-V_{1}^{\pi^{\prime*}}(s_{1})=0\), for an infinite-horizon MDP. However, our scenario differs in two respects: first, we have a finite horizon; second, we allow some sacrifice in task performance to improve human trust. Despite these differences, we are inspired by the proof technique in [20] and propose that a carefully designed potential-based shaping reward \(R^{t}\) can guarantee a small performance loss in our setting.
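As a quick illustration of the two ingredients just introduced, the sketch below implements the trust update of Eq. (3) together with a potential-based shaping reward built on the trust state, and checks numerically that the discounted sum of such shaping rewards telescopes to \(\gamma^{N}\Phi(s_{N+1})-\Phi(s_{1})\) along an arbitrary trajectory. The potential \(\phi\) and the parameter values here are hypothetical choices, not the paper's.

```python
import numpy as np

def trust_update(alpha, beta, p, w_s=1.0, w_f=2.0):
    """Experience-based trust update, Eq. (3); p in [0, 1] is the robot's performance.
    The unit gains w_s, w_f are illustrative values."""
    return alpha + w_s * p, beta + w_f * (1.0 - p)

def phi(alpha, beta):
    """A hypothetical potential on the trust state (any real-valued function would do)."""
    return alpha / (alpha + beta)          # expected trust under Beta(alpha, beta)

gamma, N = 0.9, 8
rng = np.random.default_rng(1)
alpha, beta = 2.0, 1.0                     # initial trust state s_1
phi_start, shaped_sum = phi(alpha, beta), 0.0
for n in range(N):
    p = rng.random()                       # an arbitrary performance sequence
    alpha_next, beta_next = trust_update(alpha, beta, p)
    # potential-based shaping reward: R^t = gamma * Phi(s') - Phi(s)
    shaped_sum += gamma ** n * (gamma * phi(alpha_next, beta_next) - phi(alpha, beta))
    alpha, beta = alpha_next, beta_next
# discounted shaping rewards collapse to gamma^N * Phi(s_{N+1}) - Phi(s_1)
print(np.isclose(shaped_sum, gamma ** N * phi(alpha, beta) - phi_start))   # True
```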
**Theorem 1**.: _Let \(M=\langle\mathcal{S},\mathcal{A}^{h},\mathcal{A}^{r},T,R^{h},R^{r}\rangle\) and \(M^{\prime}=\langle\mathcal{S},\mathcal{A}^{h},\mathcal{A}^{r},T,R^{h},R\rangle\) be two \(N\)-horizon Markov games with \(R=R^{r}+R^{t}\). Let \(\pi^{*}=\arg\max_{\pi\in\Pi}J_{M}(s_{1})\) and \(\pi^{\prime*}=\arg\max_{\pi\in\Pi}J_{M^{\prime}}(s_{1})\) be two optimal policies w.r.t. \(M\) and \(M^{\prime}\) respectively and \(\epsilon\) be a positive number. Suppose there exists some real-valued function \(\Phi:\mathcal{S}\rightarrow\mathbb{R}\) such that_ \[R^{t}\left(a^{r},a^{h},s,s^{\prime}\right)=\gamma\Phi\left(s^{\prime}\right)- \Phi(s) \tag{4}\] _for all \(s,s^{\prime}\in\mathcal{S}\). Then_ \[V_{1}^{\pi^{*}}(s_{1})-V_{1}^{\pi^{\prime*}}(s_{1})\leqslant\epsilon \tag{5}\] _if_ \[\mathbb{E}_{\pi^{\prime*}}[\Phi(s_{N+1})]-\mathbb{E}_{\pi^{*}}[\Phi(s_{N+1})] \leqslant\gamma^{-N}\epsilon, \tag{6}\] _where \(s_{N+1}\) is the final state of the Markov process when starting from state \(s_{1}\) and following the corresponding policy for \(N\) rounds._ Proof.: Let \[x_{n}^{t}=R^{t}\left(a_{n}^{h},a_{n}^{r},s_{n},s_{n+1}\right) \tag{7}\] and define the value function of \(M^{\prime}\) similar to Eq. (2) as \[V_{n}^{\prime\pi}(s)=\mathbb{E}_{\pi}\left[\sum_{i=n}^{N}\gamma^{i-n}\left(x_{ i}^{r}+x_{i}^{t}\right)\right]. \tag{8}\] By Eq. (2) and Eq. (8), we have \[V_{1}^{\prime\pi}(s_{1})-V_{1}^{\pi}(s_{1})=\mathbb{E}_{\pi}\left[\sum_{n=1}^{ N}\gamma^{n-1}x_{n}^{t}\right]. \tag{9}\] We can express the second term on the right-hand side by Eqs. (4) and (7) as \[\mathbb{E}_{\pi}\left[\sum_{n=1}^{N}\gamma^{n-1}x_{n}^{t}\right] \tag{10}\] \[= \mathbb{E}_{\pi}\left[\sum_{n=1}^{N}\gamma^{n-1}R^{t}\left(a_{n}^{ r},a_{n}^{h},s_{n},s_{n+1}\right)\right]\] \[= \mathbb{E}_{\pi}\left[\sum_{n=1}^{N}\gamma^{n}\Phi(s_{n+1})-\sum_ {n=1}^{N}\gamma^{n-1}\Phi(s_{n})\right]\] \[= \gamma^{N}\mathbb{E}_{\pi}[\Phi(s_{N+1})]-\Phi(s_{1}).\] By combining Eqs. (9) and (10) we obtain \[V_{1}^{\prime\pi}(s_{1})-V_{1}^{\pi}(s_{1})=\gamma^{N}\mathbb{E}_{\pi}[\Phi(s_ {N+1})]-\Phi(s_{1}). \tag{11}\] Now we can bound the performance loss when executing \(\pi^{\prime*}\) instead of \(\pi^{*}\) on \(M\): \[V_{1}^{\pi^{*}}(s_{1})-V_{1}^{\pi^{\prime*}}(s_{1})\] \[= V_{1}^{\prime\pi^{*}}(s_{1})-V_{1}^{\prime\pi^{\prime*}}(s_{1})\] \[+\gamma^{N}(\mathbb{E}_{\pi^{\prime*}}[\Phi(s_{N+1})]-\mathbb{E}_{ \pi^{*}}[\Phi(s_{N+1})])\] \[\leqslant \epsilon,\] where the first equality follows from Eq. (11), the first inequality holds because \(\pi^{\prime*}\) is the optimal policy for \(M^{\prime}\), and the last inequality follows from the hypothesis Eq. (6). Theorem 1 allows us to bound the performance loss by the value of the potential function on the terminal states of the game. It directly implies the following corollary: **Corollary 1**.: _Let \(M\) and \(M^{\prime}\) be two Markov games that satisfy the conditions in theorem 1, \(\pi^{*}\) and \(\pi^{\prime*}\) be two optimal policies respectively, and \(R^{t}\) be a potential-based shaping reward. Let \(\mathcal{S}_{s_{1}}^{\pi}(N+1)\) be the set of states that are reachable from state \(s_{1}\) when following policy \(\pi\) after \(N\) rounds. 
Then_ \[V_{1}^{\pi^{*}}(s_{1})-V_{1}^{\pi^{\prime*}}(s_{1})\leqslant\epsilon \tag{12}\] _if_ \[\max\Phi\left(\mathcal{S}_{s_{1}}^{\pi^{\prime*}}(N+1)\right)-\min\Phi\left(\mathcal{S}_{s_{1}}^{\pi^{*}}(N+1)\right)\leqslant\gamma^{-N}\epsilon.\] Proof.: Clearly, \[\mathbb{E}_{\pi^{\prime*}}[\Phi(s_{N+1})]-\mathbb{E}_{\pi^{*}}[\Phi(s_{N+1})]\] \[\leqslant \max\Phi\left(\mathcal{S}_{s_{1}}^{\pi^{\prime*}}(N+1)\right)-\min\Phi\left(\mathcal{S}_{s_{1}}^{\pi^{*}}(N+1)\right)\] \[\leqslant \gamma^{-N}\epsilon.\] The result follows from theorem 1. Compared with theorem 1, the condition in corollary 1 is easier to verify. ### _Trust-seeking Shaping Reward via Experience-based Trust Dynamics_ In this section, we apply corollary 1 and the experience-based trust model to design a shaping reward for encouraging trust-seeking behavior. Recall that, in the experience-based trust model, human trust is represented as an experience tuple \((\alpha,\beta)\). Suppose we have a Markov game \(M=\langle\mathcal{S},\mathcal{A}^{h},\mathcal{A}^{r},T,R^{h},R^{r}\rangle\) where the human trust \((\alpha,\beta)\) can be extracted from the state variable \(s\) as \((\alpha,\beta)=g(s)\). We define a potential-based trust reward as \[\begin{split} R^{t}\left(a^{r},a^{h},s,s^{\prime}\right)& =\gamma\phi\left(g\left(s^{\prime}\right)\right)-\phi(g(s))\\ &=\gamma\phi\left(\alpha^{\prime},\beta^{\prime}\right)-\phi(\alpha,\beta),\end{split} \tag{13}\] with \(\phi\) to be determined. Here the actual potential function is \(\Phi:=\phi\circ g\). Let \(R=R^{r}+R^{t}\) be the new reward and \(M^{\prime}=\langle\mathcal{S},\mathcal{A}^{h},\mathcal{A}^{r},T,R^{h},R\rangle\) be the transformed Markov game. We will apply corollary 1 to bound the performance loss. For any policy \(\pi\), \[\Phi\left(\mathcal{S}_{s_{1}}^{\pi}(N+1)\right)=\phi\left(g\left(\mathcal{S}_{s_{1}}^{\pi}(N+1)\right)\right)=\phi\left(\mathcal{E}_{s_{1}}^{\pi}(N+1)\right), \tag{14}\] where \(\mathcal{E}_{s_{1}}^{\pi}(N+1)\) is the collection of reachable trust states at round \(N+1\). Based on Eq. (3), trust \((\alpha_{N+1},\beta_{N+1})\) at round \(N+1\) is \[\Big{(}\alpha_{1}+w^{s}\sum_{n=1}^{N}p_{n},\beta_{1}+w^{f}\left(N-\sum_{n=1}^{N}p_{n}\right)\Big{)},\] where \(p_{1},p_{2},\ldots,p_{N}\) are the robot's performance values during the interaction. Since \(\sum_{n=1}^{N}p_{n}\in[0,N]\), \((\alpha_{N+1},\beta_{N+1})\) lies on the line \[l_{N+1}=\{(\alpha_{1}+w^{s}t,\beta_{1}+w^{f}(N-t))\ |\ t\in[0,N]\}, \tag{15}\] which indicates that \[\mathcal{E}_{s_{1}}^{\pi}(N+1)\subseteq l_{N+1}. \tag{16}\] Therefore, by Eqs. (14) and (16), it suffices to have \[\max\phi(l_{N+1})-\min\phi(l_{N+1})\leqslant\gamma^{-N}\epsilon \tag{17}\] for the condition in corollary 1 to hold, where \(l_{N+1}\) is given in Eq. (15). Eq. (17) constrains the choices of \(\phi\) such that the performance loss is within \(\epsilon\). In addition, we should design the function \(\phi\) such that the shaping reward \(R^{t}\) optimizes human trust. For example, if we want to increase human trust, we should reward the robot if the future state has higher trust and penalize the robot otherwise. By Eq.
(13), one way to achieve this is to add the following constraints: \[\begin{split}\gamma\phi\left(\alpha^{\prime},\beta^{\prime} \right)-\phi(\alpha,\beta)\geqslant 0,\,\mathrm{if}\ \frac{\alpha^{\prime}}{\alpha^{\prime}+\beta^{\prime}}\geqslant\frac{\alpha}{ \alpha+\beta};\\ \gamma\phi\left(\alpha^{\prime},\beta^{\prime}\right)-\phi( \alpha,\beta)<0,\,\mathrm{otherwise}.\end{split} \tag{18}\] Here \(\frac{\alpha^{\prime}}{\alpha^{\prime}+\beta^{\prime}}\geqslant\frac{\alpha}{ \alpha+\beta}\) indicates trust state \((\alpha^{\prime},\beta^{\prime})\) has higher expected trust compared to \((\alpha,\beta)\), since we assume human trust follows the Beta distribution \(\mathrm{Beta}(\alpha,\beta)\). Another example is trust calibration. If our goal is to calibrate human trust around a point \(t^{*}\), we can let \(R^{t}\) to reward the robot if human trust is moving towards \(t^{*}\) by forcing \(\gamma\phi\left(\alpha^{\prime},\beta^{\prime}\right)-\phi(\alpha,\beta) \geqslant 0\,\mathrm{if}\left|\frac{\alpha^{\prime}}{\beta^{\prime}+\alpha^{ \prime}}-t^{*}\right|\leqslant\left|\frac{\alpha}{\beta+\alpha}-t^{*}\right|\) and \(\gamma\phi\left(\alpha^{\prime},\beta^{\prime}\right)-\phi(\alpha,\beta)<0\) otherwise. ## V Case Study To assess our framework's effectiveness, we simulate a search-and-rescue (SAR) mission, comparing the robot's optimal policies, both with and without a shaping reward. ### _The SAR Mission_ The SAR mission was inspired by the work of Wang et al. [21], where a human and a robot work together to search multiple sites in a town for potential hazards. At each site, the robot enters first to scan for threats and then advises the human whether to wear protective gear before entering. However, wearing the heavy gear is time-consuming, and if there is no threat, it wastes valuable time. On the other hand, if the human chooses not to wear the protective gear and there is a threat, they risk injury. The objective is to complete the mission as quickly as possible while minimizing the human's health loss. We assume that the human-robot team starts to search from site \(1\) until site \(N\). At site \(n\), a threat indicator \(\eta_{n}\) is drawn from a Bernoulli distribution \(\mathrm{Bern}(d_{n})\). There is a threat in site \(n\) if \(\eta_{n}=1\) and no threat otherwise. The danger level \(d_{n}\) is drawn from the uniform distribution \(\mathrm{U}[0,1]\). The human-robot team does not know \(\eta_{n}\) or \(d_{n}\). Instead, prior to the start of the mission, the team is provided with an estimation \(d_{n}^{h}\) of \(d_{n}\). Before entering site \(n\), the robot will analyze the site based on its sensory input and reach a more accurate estimation \(d_{n}^{r}\) of \(d_{n}\). \(d_{k}^{h}\) and \(d_{k}^{r}\) follow Beta distribution \(\mathrm{Beta}(\kappa^{h}d_{n},\kappa^{h}(1-d_{n}))\) and \(\mathrm{Beta}(\kappa^{r}d_{n},\kappa^{r}(1-d_{n}))\) respectively. We assume that \(\kappa^{r}>\kappa^{h}\geqslant 1\) such that the robot has a more accurate assessment of \(d_{n}\) compared with the human. 
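The generative model just described can be sampled directly. In the sketch below the concentration parameters \(\kappa^{h}\) and \(\kappa^{r}\) are illustrative values chosen only to satisfy \(\kappa^{r}>\kappa^{h}\geqslant 1\).

```python
import numpy as np

def sample_sar_sites(N, kappa_h=2.0, kappa_r=20.0, seed=0):
    """Sample the SAR probability model for N sites (kappa_h, kappa_r are example values)."""
    rng = np.random.default_rng(seed)
    d = rng.uniform(0.0, 1.0, size=N)                    # true danger levels d_n ~ U[0, 1]
    eta = rng.binomial(1, d)                             # threat indicators eta_n ~ Bern(d_n)
    d_h = rng.beta(kappa_h * d, kappa_h * (1.0 - d))     # human's prior estimate of d_n
    d_r = rng.beta(kappa_r * d, kappa_r * (1.0 - d))     # robot's on-site estimate of d_n
    return d, eta, d_h, d_r

d, eta, d_h, d_r = sample_sar_sites(5)
```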
We summarize the probability model as follows: \[\begin{split}\text{Threat indicator}&\eta_{n}\overset{ \mathrm{i.i.d.}}{\sim}\mathrm{Bern}(d_{n})\\ \text{Danger level}& d_{n}\overset{\mathrm{i.i.d.}}{\sim} \mathrm{U}[0,1]\\ \text{Human's estimation of }d_{n}& d_{n}^{h}\overset{ \mathrm{i.i.d.}}{\sim}\mathrm{Beta}(\kappa^{h}d_{n},\kappa^{h}(1-d_{n}))\\ \text{Robot's estimation of }d_{n}& d_{n}^{r}\overset{\mathrm{i.i.d.}}{\sim} \mathrm{Beta}(\kappa^{r}d_{n},\kappa^{r}(1-d_{n}))\\ \end{split}\] We formulate the SAR mission as a two-player Markov game \(M=\langle\mathcal{S},\mathcal{A}^{h},\mathcal{A}^{r},T,R^{h},R^{r}\rangle\) introduced in section III-A. The state space \(\mathcal{S}=[1,\infty)^{2}\) comprises all possible experience pairs \((\alpha,\beta)\) that represent the human's trust. The initial state is \(s_{1}=(\alpha_{1},\beta_{1})\). The robot's action set, denoted as \(\mathcal{A}^{r}=\{0,1\}\), includes two options: recommending wearing and recommending not wearing the protective gear, represented by \(a^{r}=1\) and \(a^{r}=0\), respectively. Similarly, the human's action set, denoted as \(\mathcal{A}^{h}=\{0,1\}\), includes two options: wearing or not wearing protective gear, represented by \(a^{h}=1\) and \(a^{h}=0\), respectively. We define the robot's performance at site \(n\) as \(p_{n}=\mathbf{1}\left\{a_{n}^{r}=\eta_{n}\right\}\), which evaluates to 1 if the robot's recommendation agrees with the presence of the threat (\(a_{n}^{r}=\eta_{n}\)) and 0 otherwise. The state \((\alpha_{n},\beta_{n})\) at site \(n\) transitions to \[(\alpha_{n+1},\beta_{n+1})=(\alpha_{n}+w^{s}p_{n},\beta_{n}+w^{f}(1-p_{n}))\] at site \(n+1\) as specified in the experience-based trust model. We assume the robot has already learned the parameters \(w^{s}\) and \(w^{f}\) from its previous interactions with the human. We also assume that the robot and the human share the same task reward, i.e., \(R^{h}=R^{r}=-w^{\text{H}}\Delta^{\text{H}}-w^{\text{T}}\Delta^{\text{T}}\), where \(\Delta^{\text{H}}\) and \(\Delta^{\text{T}}\) are the time cost and the health cost for the human-robot team and \(w^{\text{H}}\) and \(w^{\text{T}}\) are the corresponding weights. The values of \(\Delta^{\text{H}}\) and \(\Delta^{\text{T}}\) are given in table I, and \(w^{\text{H}}\) and \(w^{\text{T}}\) are set to 1 and 0.2. The discount factor \(\gamma\) is set to 0.9. We assume that the human follows the reverse-psychology policy as introduced in [5], where the human will likely comply with the robot's recommendation when human trust is high and will do the opposite when trust is low. Specifically, we have \[\Pr\left(a_{n}^{h}=a_{n}^{r}\right)=\frac{\alpha_{n}}{\alpha_{n}+\beta_{n}}\text { and }\Pr\left(a_{n}^{h}\neq a_{n}^{r}\right)=\frac{\beta_{n}}{\alpha_{n}+\beta_{n}},\] where \(\frac{\alpha_{n}}{\alpha_{n}+\beta_{n}}\) is the expected human trust since the human trust follows the beta distribution \(\text{Beta}(\alpha_{n},\beta_{n})\). Given the model conditions above, we can calculate the probabilities of all four cases listed in table I, for different actions \(a^{r}\). This enables us to determine the expected immediate reward for each \(a^{r}\) and thus apply the value iteration method to derive an optimal policy for the robot. ### _Shaping the Reward_ Suppose that we are interested in increasing human trust during the interaction while limiting the performance loss by \(\epsilon\). We apply the reward-shaping technique developed in section IV to achieve this goal. 
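Before adding the shaping term, the value-iteration step described above for the unshaped game \(M\) can be sketched as a backward induction over the reachable trust states. The cost entries below stand in for table I (which is not reproduced in the text), the per-site danger estimates beyond \(d_{1}^{r}=0.06\) are made up, and using \(d_{n}^{r}\) as the robot's planning belief about the threat is an assumption of this sketch.

```python
from functools import lru_cache

# Illustrative stand-ins for table I: (time cost, health cost) for each (a_h, eta)
COST = {(1, 1): (10.0, 0.0),   # wears gear, threat present
        (1, 0): (10.0, 0.0),   # wears gear, no threat (time wasted)
        (0, 1): (2.0, 50.0),   # no gear, threat present (health lost)
        (0, 0): (2.0, 0.0)}    # no gear, no threat
W_TIME, W_HEALTH = 1.0, 0.2    # reward weights (the text sets the two weights to 1 and 0.2)
GAMMA, W_S, W_F = 0.9, 1.0, 2.0            # discount and (assumed) trust-gain parameters
D_R = [0.06, 0.4, 0.7, 0.2, 0.5]           # robot's danger estimates; only d_1^r = 0.06 is from the text
N = len(D_R)

def task_reward(a_h, eta):
    dt, dh = COST[(a_h, eta)]
    return -(W_TIME * dt + W_HEALTH * dh)  # shared task reward R^h = R^r

@lru_cache(maxsize=None)
def value(n, alpha, beta):
    """Optimal robot value at site n with trust state (alpha, beta), by backward induction."""
    if n == N:
        return 0.0
    trust = alpha / (alpha + beta)         # expected trust = compliance probability
    best = float("-inf")
    for a_r in (0, 1):                     # robot recommendation
        q = 0.0
        for comply, p_c in ((1, trust), (0, 1.0 - trust)):      # reverse-psychology human
            a_h = a_r if comply else 1 - a_r
            for eta, p_e in ((1, D_R[n]), (0, 1.0 - D_R[n])):   # robot's belief about the threat
                p_n = 1.0 if a_r == eta else 0.0                # performance p_n = 1{a_r = eta}
                a2, b2 = alpha + W_S * p_n, beta + W_F * (1.0 - p_n)
                q += p_c * p_e * (task_reward(a_h, eta) + GAMMA * value(n + 1, a2, b2))
        best = max(best, q)
    return best

print(value(0, 2.0, 1.0))                  # value at an example initial trust state (alpha_1, beta_1)
```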
We define the shaping reward as \(R^{t}\left(\alpha,\beta,\alpha^{\prime},\beta^{\prime}\right)=\gamma\Phi\left( \alpha^{\prime},\beta^{\prime}\right)-\Phi(\alpha,\beta)\), where the function \(\Phi:\mathcal{S}\rightarrow\mathbb{R}\) is to be determined. We observe that, from a state \((\alpha,\beta)\), the next state can either be \(\left(\alpha^{\uparrow},\beta^{\uparrow}\right)=\left(\alpha+w^{s},\beta\right)\), where the expectation of trust increases, or \(\left(\alpha^{\downarrow},\beta^{\downarrow}\right)=\left(\alpha,\beta+w^{f}\right)\), where the expectation of trust decreases. Instead of using the generic method in Eq. (18), we can incentivize the trust-seeking behavior by maximizing the reward difference between the two future states, i.e., maximizing the value \(R^{t}\left(\alpha,\beta,\alpha^{\uparrow},\beta^{\uparrow}\right)-R^{t} \left(\alpha,\beta,\alpha^{\downarrow},\beta^{\downarrow}\right)\). Combining with the performance loss constraint in Eq. (17), we obtain the following optimization problem \[\begin{split}\max&\;R^{t}\left(\alpha,\beta,\alpha^{ \uparrow},\beta^{\uparrow}\right)-R^{t}\left(\alpha,\beta,\alpha^{\downarrow}, \beta^{\downarrow}\right)\\ \text{s.t.}&\;\max\Phi(l_{N+1})-\min\Phi(l_{N+1}) \leqslant\gamma^{-N}\epsilon,\end{split} \tag{19}\] where \(l_{N+1}\) is defined in Eq. (15). We choose a linear potential function: \(\Phi(\alpha,\beta)=a\alpha+b\beta\) with \(a,b\in\mathbb{R}\). With some algebraic manipulation, (19) becomes a clean linear program: \[\begin{split}\max&\;aw^{s}-bw^{f}\\ \text{s.t.}&\;\frac{-\gamma^{-N}\epsilon}{N} \leqslant aw^{s}-bw^{f}\leqslant\frac{\gamma^{-N}\epsilon}{N}.\end{split} \tag{20}\] As the above program is underdetermined, we enforce an extra constraint that \(b=0\) and then obtain the optimal solution \((a,b)=(\frac{\gamma^{-N}\epsilon}{Nw^{s}},0)\). Therefore, the optimal potential function is \(\Phi(\alpha,\beta)=\frac{\gamma^{-N}\epsilon}{Nw^{s}}\alpha\), and the shaping reward is \[R^{t}\left(\alpha,\beta,\alpha^{\prime},\beta^{\prime}\right)=\frac{\gamma^{ -N}\epsilon}{Nw^{s}}\left(\gamma\alpha^{\prime}-\alpha\right). \tag{21}\] ### _Simulation Results_ Let \(M\) be the original Markov game without the shaping reward and \(M^{\prime}\) be the one with shaping reward \(R^{t}\). We solve \(M^{\prime}\) with 4 different values of \(\epsilon\) and, in figure 3, plot the optimal action \(\pi^{\prime\ast}(\alpha_{1},\beta_{1})\) in \(M^{\prime}\), the value function \(V_{1}^{\prime\pi^{\prime\ast}}(\alpha_{1},\beta_{1})\) of \(\pi^{\prime\ast}\) in \(M^{\prime}\) (defined in Eq. (8)), and the value function \(V_{1}^{\pi^{\prime\ast}}(\alpha_{1},\beta_{1})\) of \(\pi^{\prime\ast}\) in \(M\) (defined in Eq. (2)), for various values of \((\alpha_{1},\beta_{1})\). In the optimal action plots, the black area corresponds to the states where the optimal action of the robot is not recommending the human to wear protective gear, i.e., \(a_{1}^{r}=0\), while the white area corresponds to recommending to wear the gear, i.e., \(a_{1}^{r}=1\). At the first site, the robot's perceived danger level \(d_{1}^{r}\) is 0.06, which indicates that the "trustworthy" action is to recommend the human not to wear the gear. In figure (a)a, we set \(\epsilon=0\) so there is no shaping reward. The optimal action is reversed around the 45-degree line, which means the robot will reverse its action when the human trust crosses a threshold. This manipulative behavior is consistent with the finding in [5]. 
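For reference, the closed-form shaping reward of Eq. (21) reduces to a one-line helper; a sketch follows, with parameter names taken from the case study. Adding its output to the task reward at every round inside a backward induction like the one sketched earlier produces the shaped game \(M^{\prime}\).

```python
def shaping_reward(alpha, alpha_next, N, epsilon, gamma=0.9, w_s=1.0):
    """Trust-seeking shaping reward of Eq. (21):
    R^t = gamma^{-N} * epsilon / (N * w^s) * (gamma * alpha' - alpha)."""
    return gamma ** (-N) * epsilon / (N * w_s) * (gamma * alpha_next - alpha)

# e.g. inside the backward induction sketched above, replace task_reward(a_h, eta) with
# task_reward(a_h, eta) + shaping_reward(alpha, a2, N, epsilon)
```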
In figure (b)b, we set \(\epsilon=30\) to allow reward shaping at the cost of a moderate amount of performance loss. We can observe that the black area grows larger compared with that of figure (a)a, which implies the trust reward guided the robot to gain human trust by recommending not to wear the gear. In figure (d)d, we set a large value of 300 for \(\epsilon\), and the black area is the largest among all the settings, indicating that the robot will choose the "righteous" action in most cases. Moreover, a comparison across \(V_{1}^{\pi^{\prime\ast}}\) with different \(\epsilon\) values shows that the task reward loss caused by the shaping reward is within \(\epsilon\), and initial states with higher trust have a higher value in \(M\) when \(\epsilon\) is set to be higher. This shows the effectiveness of the proposed reward-shaping method. Finally, we notice an intriguing behavior wherein, as \(\epsilon\) increases, the value function \(V_{1}^{\prime\pi^{\prime\ast}}\) exhibits higher values at states characterized by lower trust. For instance, given \(\epsilon=300\), states denoted by low \(\alpha_{1}\) and high \(\beta_{1}\) (low-trust states), display greater values in contrast to those states with high trust. This observation may be attributed to two factors. Firstly, with an increment in \(\epsilon\), the shaping reward \(R^{t}\) magnitude also escalates, as is evidenced by Eq. (21). As a result, the pattern of \(V_{1}^{\prime\pi^{\prime\ast}}\) is largely dictated by the pattern of \(\sum_{n}R^{t}\). Secondly, it appears that the algorithm identifies low-trust states as having a greater potential for earning \(R^{t}\), particularly when compared to higher-trust states. In conclusion, the simulation results show that our reward-shaping method successfully guides the robot to actively gain human trust, overcoming the manipulative behavior in the pure performance-driven setting. ## VI Conclusion In this work, we proposed a framework to balance task reward and human trust. We formulated the problem as a reward-shaping problem and proposed a novel framework to solve it. We evaluated the proposed framework in a simulation scenario where a human-robot team performs a search-and-rescue mission. The results showed that the proposed framework successfully modifies the robot's optimal policy, enabling it to increase human trust with a minimal task performance cost. However, the work should be viewed in light of the following limitations. First, we only provide a sufficient condition to guarantee small performance loss. A necessary condition is in need to complete the theory. Second, we used linear potential functions for designing the shaping reward. The effectiveness of other forms of potential functions can be investigated in future research.
2306.16453
Autonomous Distribution of Programmable Multiqubit Entanglement in a Dual-Rail Quantum Network
We propose and analyze a scalable and fully autonomous scheme for preparing spatially distributed multiqubit entangled states in a dual-rail waveguide QED setup. In this approach, arrays of qubits located along two separated waveguides are illuminated by correlated photons from the output of a nondegenerate parametric amplifier. These photons drive the qubits into different classes of pure entangled steady states, for which the degree of multipartite entanglement can be conveniently adjusted by the chosen pattern of local qubit-photon detunings. Numerical simulations for moderate-sized networks show that the preparation time for these complex multiqubit states increases at most linearly with the system size and that one may benefit from an additional speedup in the limit of a large amplifier bandwidth. Therefore, this scheme offers an intriguing new route for distributing ready-to-use multipartite entangled states across large quantum networks, without requiring any precise pulse control and relying on a single Gaussian entanglement source only.
Joan Agustí, Xin H. H. Zhang, Yuri Minoguchi, Peter Rabl
2023-06-28T18:00:02Z
http://arxiv.org/abs/2306.16453v2
# Autonomous distribution of programmable multi-qubit entanglement in a dual-rail quantum network ###### Abstract We propose and analyze a scalable and fully autonomous scheme for preparing spatially distributed multi-qubit entangled states in a dual-rail waveguide QED setup. In this approach, arrays of qubits located along two separated waveguides are illuminated by correlated photons from the output of a non-degenerate parametric amplifier. These photons drive the qubits into different classes of pure entangled steady states, for which the degree of multipartite entanglement can be conveniently adjusted by the chosen pattern of local qubit-photon detunings. Numerical simulations for moderate-sized networks show that the preparation time for these complex multi-qubit states increases at most linearly with the system size and that one may benefit from an additional speedup in the limit of a large amplifier bandwidth. Therefore, this scheme offers an intriguing new route for distributing ready-to-use multipartite entangled states across large quantum networks, without requiring any precise pulse control and relying on a single Gaussian entanglement source only. As quantum computing and quantum communication systems with an increasing number of coherently integrated components become technologically available, a growing demand for efficient schemes to transfer quantum states or distribute entanglement across different parts of such networks will arise [1; 2; 3; 4]. While basic protocols to do so are well-known and have already been successfully implemented in a variety of platforms [5; 6; 7; 8; 9; 10; 11; 12; 13; 14], it is envisioned that in future quantum devices, entanglement must be generated and interchanged among many thousands of qubits within a limited coherence time. It is currently considered unlikely that a simple serial application of existing protocols can meet these demands, which motivates the continued search for alternative quantum communication strategies that are fast, parallelizable, and, ideally, require a minimal amount of classical control. In this Letter, we describe a fully autonomous entanglement distribution scheme, which exploits an intriguing physical effect, namely the formation of multipartite entangled stationary states in a cascaded dual-rail quantum network. Specifically, we consider a configuration as shown in Fig. 1, where spatially separated qubits located along two photonic waveguides are illuminated by the correlated output of a non-degenerate parametric amplifier [15]. Previously, it has already been proposed to use broadband squeezed reservoirs for generating bipartite entanglement between separated qubit pairs [16; 17; 18; 19; 20; 21; 22] or, for specific arrangements, between qubits along a 1D channel [23; 24] or in coupled arrays [25; 26]. Here we show, first of all, that this concept can be generalized to produce, under ideal conditions, an arbitrary number of maximally entangled qubit pairs over large distances. Moreover, we find that the entanglement shared between different sets of qubits can be adjusted by simply changing the local qubit-photon detunings. This provides a convenient way to 'program' different classes of multipartite entangled states without the need for any time-dependent control or additional non-local operations. 
To evaluate the scalability of this approach, we simulate the formation of these multipartite entangled states under more realistic conditions, taking in particular a finite bandwidth of the squeezing source into account. We find that the maximal number of entangled qubit pairs, \(N_{\rm ent}\), remains rather robust under the influence of experimental imperfections and that the total preparation time, \(T_{\rm prep}\sim N_{\rm ent}\), scales at most linearly with the system size, independently of the complexity of the prepared state. In the limit of a large amplifier bandwidth, the intrinsic parallelization of the preparation scheme can be exploited to further reduce \(T_{\rm prep}\), which shifts the technological requirements for scalability from the control of Figure 1: Sketch of a dual-rail quantum network, where qubits along two separated waveguides are driven by the correlated output of a non-degenerate parametric amplifier and relax into a pure steady state \(|\psi_{0}(r,\vec{\delta}_{A},P)\rangle\). As shown in the inset, the qubits in waveguide \(A\) (\(B\)) are detuned from the central photon frequency \(\omega_{A}\) (\(\omega_{B}\)) by \(\delta_{A,i}\) (\(\delta_{B,i}\)) and the qubit-waveguide coupling is assumed to be fully directional. See text for more details. many qubits to the optimization of a single Gaussian squeezing source. This can be advantageous for many applications in optical, microwave, or hybrid [27; 28; 29; 30] quantum networks, where such photonic devices are currently developed [31; 32; 33; 34; 35; 36; 37]. _Model._--We consider a dual-rail quantum network as depicted in Fig. 1, where two sets of qubits \(\eta=A,B\) are coupled to two separate photonic channels. The waveguides are connected to a common non-degenerate parametric amplifier, which we model by a two-mode squeezing interaction (\(\hbar=1\)) \(H_{\chi}=ig(a_{A}^{\dagger}a_{B}^{\dagger}-a_{A}a_{B})\) for two local modes with bosonic annihilation operators \(a_{A}\) and \(a_{B}\). These photons then decay into the respective waveguides with rate \(\kappa\) and drive the qubits into a correlated state. For the following analysis, we assume that the qubit-waveguide coupling is fully directional [38; 39; 40] and label the qubits by the index \(i=1,\ldots,N\) along the direction of propagation. Such conditions can be realized by using circulators [41; 42; 43; 44; 45; 46], chiral waveguides [40], or other types of directional couplers [47; 48; 49; 50]. We first focus on the limit of a broadband amplifier, \(\kappa\rightarrow\infty\), in which case the dynamics of the photons can be adiabatically eliminated to obtain an effective master equation (see [51] for more details) \[\dot{\rho}_{\rm q}=-i[H_{\rm casc},\rho_{\rm q}]+\sum_{\eta=A,B}\gamma{\cal D }[J_{\eta}]\rho_{\rm q} \tag{1}\] for the reduced qubit density operator \(\rho_{\rm q}\). Here \(\gamma\) denotes the decay rate of each individual qubit and \({\cal D}[C]\rho=C\rho C^{\dagger}-\{C^{\dagger}C,\rho\}/2\). In Eq. (1) we have already rewritten the underlying directional qubit-qubit interactions in terms of a coherent Hamiltonian evolution with \[H_{\rm casc}=\sum_{\eta,i}\frac{\delta_{\eta,i}}{2}\sigma_{\eta,i}^{z}+i\frac{ \gamma}{2}\sum_{\eta,j>i}\left(\sigma_{\eta,i}^{+}\sigma_{\eta,j}^{-}-{\rm H. 
c.}\right), \tag{2}\] and purely dissipative processes with collective jump operators \[J_{A}=\cosh(r)L_{A}-\sinh(r)L_{B}^{\dagger}, \tag{3}\] \[J_{B}=\cosh(r)L_{B}-\sinh(r)L_{A}^{\dagger}, \tag{4}\] where \(L_{\eta}=\sum_{i=1}^{N}\sigma_{\eta,i}^{-}\). In this broadband limit, the system is thus fully determined by the squeezing parameter \(r=2\tanh^{-1}(2g/\kappa)\), characterizing the degree of two-mode squeezing of the photon source, and the two sets of qubit detunings, \(\vec{\delta}_{\eta=A,B}=(\delta_{\eta,1},\delta_{\eta,2},\ldots,\delta_{\eta,N})\). _Steady states._--Equation (1) describes an open quantum many-body system with competing coherent and dissipative processes, which in general drive the qubits into a highly mixed steady state. However, in the following, we show that there exist specific conditions under which the steady state of the network, \(\rho_{\rm q}^{0}=|\psi_{0}\rangle\langle\psi_{0}|\), is not only pure but also exhibits different degrees of multipartite entanglement that can be controlled by the local detunings \(\delta_{\eta,i}\). We start our analysis by considering the simplest case of a single pair of qubits (\(N=1\)) and \(\delta_{A,1}=\delta_{B,1}=0\), as originally discussed in Ref. [16]. In this case, one can explicitly show that the unique steady state of Eq. (1) is \(|\psi_{0}\rangle=|\Phi_{1,1}^{+}\rangle\), where \[|\Phi_{i,j}^{+}\rangle=\frac{\cosh(r)|0_{A,i}\rangle|0_{B,j}\rangle+\sinh(r) |1_{A,i}\rangle|1_{B,j}\rangle}{\sqrt{\cosh(2r)}} \tag{5}\] approaches a maximally entangled Bell state for \(r\gg 1\). This state satisfies the dark-state conditions \(J_{\eta}|\psi_{0}\rangle=0\) and \(H_{\rm casc}|\psi_{0}\rangle=0\), which implies that once the qubits have reached the steady state, they completely decouple from the squeezed photonic bath. Consequently, they no longer affect successive qubits along the waveguide. Importantly, this observation remains true even for finite detunings satisfying \(\delta_{A,1}+\delta_{B,1}=0\), which then allows us to systematically identify also more complex multi-qubit steady states by proceeding in two steps. First, we set \(\vec{\delta}_{B}=-\vec{\delta}_{A}\), such that, according to the argument from above, qubits with the same index decouple pairwise from the photonic reservoir. The network then relaxes into the pure steady state \(|\psi_{0}\rangle=|\Phi_{\parallel}\rangle\), where \[|\Phi_{\parallel}\rangle=\bigotimes_{i=1}^{N}|\Phi_{i,i}^{+}\rangle \tag{6}\] is the product of \(N\) consecutive Bell pairs of the type given in Eq. (5). Interestingly, this result is independent of the total number of qubit pairs, similar to what has been found for coupled spin chains [26] or discrete cavity arrays [25]. In the second step, we make use of the form-invariance of the cascaded master equation in Eq. (1) under unitary transformations of the type [38] \[U_{i,i+1}=e^{i\theta_{i,i+1}(\vec{s}_{B,i}+\vec{s}_{B,i+1})^{2}}, \tag{7}\] where \(\vec{s}_{\mu}=(\sigma_{\mu}^{x},\sigma_{\mu}^{y},\sigma_{\mu}^{z})/2\) and the mixing angle satisfies \(\tan(\theta_{i,i+1})=(\vec{\delta}_{B,i}-\vec{\delta}_{B,i+1})/\gamma\). Under these transformations, one finds that \(U_{i,i+1}J_{\eta}U_{i,i+1}^{\dagger}=J_{\eta}\) and \[U_{i,i+1}H_{\rm casc}(\vec{\delta}_{A},\vec{\delta}_{B})U_{i,i+1}^{\dagger}\ =\ H_{\rm casc}(\vec{\delta}_{A},P_{i,i+1}\vec{\delta}_{B}), \tag{8}\] where the permutation \(P_{i,i+1}\) exchanges \(\delta_{B,i}\) and \(\delta_{B,i+1}\). 
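A minimal numerical check of the \(N=1\) dark state, assuming QuTiP is available: the sketch builds the jump operators of Eqs. (3)-(4) for a single qubit pair with zero detunings and confirms that the steady state of the master equation (1) coincides with the Bell-like pair of Eq. (5).

```python
import numpy as np
import qutip as qt

r, gamma = 1.0, 1.0
g0, e1 = qt.basis(2, 0), qt.basis(2, 1)           # |0> and |1>
sm = g0 * e1.dag()                                 # lowering operator |0><1|
sm_A = qt.tensor(sm, qt.qeye(2))                   # sigma^-_{A,1}
sm_B = qt.tensor(qt.qeye(2), sm)                   # sigma^-_{B,1}

# Collective jump operators of Eqs. (3)-(4) for N = 1 (L_eta = sigma^-_{eta,1}):
J_A = np.cosh(r) * sm_A - np.sinh(r) * sm_B.dag()
J_B = np.cosh(r) * sm_B - np.sinh(r) * sm_A.dag()
H = 0 * qt.tensor(qt.qeye(2), qt.qeye(2))          # H_casc vanishes for N = 1, delta = 0

rho_ss = qt.steadystate(H, [np.sqrt(gamma) * J_A, np.sqrt(gamma) * J_B])

# Target state of Eq. (5): (cosh r |00> + sinh r |11>) / sqrt(cosh 2r)
target = (np.cosh(r) * qt.tensor(g0, g0) + np.sinh(r) * qt.tensor(e1, e1)).unit()
print(qt.fidelity(rho_ss, target))                 # ~1, i.e. |Phi^+_{1,1}> is the steady state
```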
In other words, given a pure steady state \(|\psi_{0}\rangle\) for a certain detuning pattern \(\vec{\delta}_{B}\), the state \(|\psi_{0}^{\prime}\rangle=U_{i,i+1}|\psi_{0}\rangle\) is a pure steady state of the same network with a permuted pattern of detunings, \(\vec{\delta}_{B}=P_{i,i+1}\vec{\delta}_{B}\). This form-invariance now allows us to construct a large family of multipartite entangled steady states, which are parametrized by (i) the squeezing parameter \(r\), (ii) the set of detunings \(\vec{\delta}_{A}\) for qubits in waveguide \(A\) and (iii) a permutation \(P\) that fixes the detunings in waveguide \(B\) to be \(\vec{\delta}_{B}=-P\vec{\delta}_{A}\). By decomposing \(P=\prod_{\sigma}P_{i,i,\sigma_{\mu}+1}\) into a product of nearest-neighbor transpositions, we can start with the state in Eq. (6) and then use the relation below Eq. (8) to derive an explicit expression for the corresponding steady state, \[|\psi_{0}(r,\vec{\delta}_{A},P)\rangle=\prod_{\sigma}U_{i_{\sigma},i_{\sigma}+1} |\Phi_{\parallel}\rangle. \tag{9}\] Importantly, this is also the unique steady state of the network, as discussed in more detail in [51]. A graphical illustration of Eq. (9) is presented in Fig. 2 (a). _Entanglement._--To investigate the entanglement properties of the family of states in Eq. (9), we start with the case \(N=2\) and choose the only nontrivial permutation \(P=P_{1,2}\). We obtain \[|\psi_{0}\rangle=\frac{\gamma|\Phi_{1,1}^{+}\rangle|\Phi_{2,2}^{+}\rangle+i \Delta|\Phi_{1,2}^{+}\rangle|\Phi_{2,1}^{+}\rangle}{\sqrt{\gamma^{2}+\Delta^{2 }}}, \tag{10}\] where \(\Delta=\delta_{A,1}-\delta_{A,2}\). In Fig. 2 (b) and (c) we visualize the entanglement structure of this state in terms of the concurrences \(\mathcal{C}_{ij}\equiv\mathcal{C}(\rho_{A,i|B,j})\)[55; 56] of the reduced bipartite qubit states, \(\rho_{A,i|B,j}\). For \(\Delta=0\), we find that for parallel pairs \(\mathcal{C}_{ii}\simeq 1\) already for moderate values of \(r\gtrsim 1\), consistent with the state \(|\Phi_{\parallel}\rangle\). For \(|\Delta|\gg\gamma\) the same is true for diagonal pairs, i.e., \(\mathcal{C}_{12}=\mathcal{C}_{21}\simeq 1\). For all intermediate parameters, the state is a genuine four-partite entangled state [57], and belongs to the set of locally maximally entanglable states [58] for \(r\gg 1\). For a larger number of qubits, we can use the entanglement entropy \(\mathcal{S}(\rho_{r})=-\mathrm{Tr}\{\rho_{r}\ln\rho_{r}\}\) for a reduced state \(\rho_{r}\) to study the entanglement between different bi-partitions of the network. First of all, this analysis shows that \(\mathcal{S}_{A}\equiv\mathcal{S}(\rho_{A})=-N\ln\left[x^{x}(1-x)^{(1-x)}\right]\), where \(x=\cosh^{2}(r)/\cosh(2r)\), only depends on the squeezing parameter \(r\). This can be understood from the fact the unitaries \(U_{i,i+1}\) only act within subsystem \(B\). Thus, with respect to this partition, the states in Eq. (9) can be understood as generalized 'rainbow states' [26; 59; 60] with a volume-law entanglement \(\mathcal{S}(\rho_{A})\simeq N\ln 2\) for \(r\gtrsim 1\). In contrast, for partitions along the chain, the entanglement entropy \(\mathcal{S}_{n}=\mathcal{S}(\rho_{[1,\ldots,n]})\) depends not only on the chosen permutation \(P\), but also on the pattern of detunings \(\vec{\delta}_{A}\). This is illustrated in Fig. 2 (d) and (e), where we consider as an example the detunings \(\delta_{A,i}=(i-1)\Delta\) and the reversed order, \(\delta_{B,i}=-P_{\mathrm{rev}}\delta_{A,i}=-\delta_{A,N+1-i}\), in waveguide \(B\). 
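The concurrence pattern of the \(N=2\) state in Eq. (10) described above can be reproduced with a short script (again assuming QuTiP). The printed values should show \(\mathcal{C}_{11}\) approaching \(\tanh(2r)\) for \(\Delta=0\) and \(\mathcal{C}_{12}\) approaching the same value for \(\Delta\gg\gamma\), in line with Fig. 2 (b) and (c).

```python
import numpy as np
import qutip as qt

def psi0_four_qubits(r, gam, Delta):
    """Four-qubit state of Eq. (10); qubit ordering is (A1, A2, B1, B2)."""
    w = [np.cosh(r), np.sinh(r)]                     # unnormalized pair amplitudes of Eq. (5)

    def pair_product(cross):
        # product of two Bell-like pairs: A1-B1 & A2-B2, or A1-B2 & A2-B1 when cross=True
        terms = []
        for a in (0, 1):
            for b in (0, 1):
                bits_B = (b, a) if cross else (a, b)     # bits carried by (B1, B2)
                kets = [qt.basis(2, a), qt.basis(2, b),
                        qt.basis(2, bits_B[0]), qt.basis(2, bits_B[1])]
                terms.append(w[a] * w[b] * qt.tensor(kets))
        return sum(terms[1:], terms[0]).unit()

    return (gam * pair_product(False) + 1j * Delta * pair_product(True)).unit()

r, gam = 1.0, 1.0
for Delta in (0.0, 1.0, 10.0):
    psi = psi0_four_qubits(r, gam, Delta)
    c_parallel = qt.concurrence(psi.ptrace([0, 2]))   # C_11: pair (A1, B1)
    c_crossed = qt.concurrence(psi.ptrace([0, 3]))    # C_12: pair (A1, B2)
    print(f"Delta/gamma = {Delta:5.1f}:  C_11 = {c_parallel:.3f},  C_12 = {c_crossed:.3f}")
```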
For \(\Delta\gg\gamma\) the unitaries in Eq. (9) correspond to approximate SWAP operations and \(\mathcal{S}_{n}\simeq 2n\ln 2\). Instead, for \(\Delta\lesssim\gamma\), the entangling unitaries \(U_{i,i+1}\approx\sqrt{\mathrm{SWAP}}\) generate more multipartite entanglement across the whole chain, which reduces the block-entanglement \(\mathcal{S}_{n}\) correspondingly. In general, different choices for \(\vec{\delta}_{A}\) and \(P\) can be used to define certain blocks of qubits that are entangled among each other, independently of their physical location. _Preparation time._--So far we have shown that a single two-mode squeezing source is in principle enough to entangle an arbitrary number of qubits. However, for practical applications, we must still evaluate the time \(T_{\mathrm{prep}}\) that it takes to prepare this state. To do so we first continue with the analysis of the ideal qubit master equation in Eq. (1) and study the relaxation dynamics toward the steady state \(|\psi_{0}\rangle\), assuming that at \(t=0\) all qubits are initialized in state \(|0\rangle\). In Fig. 3 this evolution is shown in (a) for the bipartite entangled state \(|\Phi_{\parallel}\rangle\) with \(\vec{\delta}_{A}=0\) and in (b) for the multipartite entangled state considered in Fig. 2 (e). In the bipartite case, we observe a successive, pairwise formation of Bell states with a total time \(T_{\mathrm{prep}}\sim N\). Interestingly, already for \(\delta_{A,i}=0\), this preparation time is faster than a sequential preparation of \(N\) independent Bell pairs, i.e., \(T_{\mathrm{prep}}(N)<NT_{\mathrm{prep}}(N=1)\). For detuned qubits the preparation time decreases further and \(T_{\mathrm{prep}}(N)\simeq T_{\mathrm{prep}}(N=1)\) for \(\Delta\gtrsim\gamma\), i.e., all pairs are prepared in parallel. For multipartite entangled states, where the differences \(|\delta_{A,i}-\delta_{A,j}|\) are necessarily small, a full parallelization is not possible, but even in this case we obtain an intrinsic advantage compared to a sequential distribution of entanglement, followed by local gates. Note that for the same detunings \(\vec{\delta}_{A}\), the relaxation time \(T_{\text{prep}}\) is independent of the permutation \(P\). Figure 2: (a) Graphical illustration of Eq. (9). Starting from \(\vec{\delta}_{B}=-\vec{\delta}_{A}\), each transposition required to obtain the final detuning pattern corresponds to a unitary operation \(U_{i,i+1}\) applied to the state \(|\Phi_{\parallel}\rangle\). (b) Bipartite entanglement expressed in terms of the concurrences \(\mathcal{C}_{ij}\) for the four-qubit state in Eq. (10) as a function of \(r\), and in (c) as a function of \(\Delta\) for \(r=1\). (d) Sketch of the detuning pattern for the family of multipartite states described in the text and different partitions for evaluating the entanglement entropy. (e) Entanglement entropy \(\mathcal{S}_{n}\) as a function of \(n\), for different detunings \(\Delta\) and \(r=1\). _Scalability._--All the results so far have been derived within the infinite-bandwidth approximation, which underlies Eq. (1) and assumes that correlated photons are available at arbitrary detunings. Obviously, this assumption must break down when \(\delta_{\text{max}}=\max\{|\delta_{A,i}|\}\gtrsim\kappa\), but even for \(\delta_{A,i}=0\) it has been shown that any finite \(\kappa\) limits the transferable entanglement [22].
Therefore, to provide physically meaningful predictions about the scalability of the current scheme it is necessary to go beyond the assumption of a Markovian squeezed reservoir [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26] and take finite-bandwidth effects into account. To do so we now simulate the dynamics of the state of the full network, \(\rho\), as described by the cascaded quantum master equation [51] \[\dot{\rho}= -i[H_{\chi},\rho]+\sum_{\eta}\kappa\mathcal{D}[a_{\eta}]\rho\] \[\sum_{\eta,i}\left(-i\frac{\delta_{\eta,i}}{2}[\sigma_{\eta,i}^{ z},\rho]+\gamma\mathcal{D}[\sigma_{\eta,i}^{-}]\rho+\frac{\gamma_{\phi}}{2} \mathcal{D}[\sigma_{\eta,i}^{z}]\rho\right)\] \[+\sum_{\eta,i}\sqrt{\kappa\gamma}\mathcal{T}[a_{\eta},\sigma_{ \eta,i}^{-}]\rho+\sum_{\eta,j>i}\gamma\mathcal{T}[\sigma_{\eta,i}^{-},\sigma_ {\eta,j}^{-}]\rho. \tag{11}\] Here we have already included a finite dephasing rate \(\gamma_{\phi}\) for each qubit and introduced the superoperator \(\mathcal{T}[O_{1},O_{2}]\rho=[O_{1}\rho,O_{2}^{\dagger}]+[O_{2},\rho O_{1}^{ \dagger}]\) to model directional interactions between all nodes along the same waveguide. In Fig. 4 (a) we plot the steady-state concurrences \(\mathcal{C}_{ii}\) for the case \(\delta_{A,i}=0\) and different ratios \(\beta=\kappa/\gamma\). We see that a finite bandwidth \(\kappa\) reduces the maximal amount of entanglement for the first pair [22] and also results in a gradual decay of the entanglement along the chain. By using a linear extrapolation, \(N_{\text{ent}}=\mathcal{C}_{11}/(\mathcal{C}_{11}-\mathcal{C}_{22})\), we can use these finite-size simulations to extract the maximal number of pairs that can be entangled for a given \(\beta\) and dephasing rate \(\gamma_{\phi}\). These results are summarized in Fig. 4 (b). We see that for otherwise ideal conditions, rather large numbers of \(N_{\text{ent}}\sim 10-100\) can be entangled for moderate \(\beta\), while the presence of dephasing or other imperfections sets additional limits on \(N_{\text{ent}}\). Note that these results are for \(\delta_{A,i}=0\), where the formation of the steady state is the slowest. Thus, these results represent approximate upper bounds for \(N_{\text{ent}}\) also for all other classes of multipartite entangled states. Additional plots for \(N_{\text{ent}}\) under different conditions are presented in [51]. Finally, let us return to the observed speedup for far-detuned qubits, but taking a finite amplifier bandwidth into account. In Fig. 4 (c) we investigate, first of all, the dependence of \(\mathcal{C}_{11}\) on the detuning \(\delta_{A,1}=\Delta\). As expected, this plot shows a significant decay of the entanglement for \(\Delta/\kappa>1\), from which we also deduce that \(\delta_{\text{max}}<\kappa\) must be satisfied in the multi-qubit case. Since for a parallel preparation with \(T_{\text{prep}}(N)\sim const.\) we require \(\delta_{\text{max}}\approx\gamma N\), we conclude that the number of pairs that can be entangled in parallel, \(N_{\parallel}\approx N_{\text{ent}}\), is actually comparable to the total number of entangled pairs for \(\vec{\delta}_{A}=0\). As a minimal illustration of this behavior, we consider in Fig. 4 (d) the example of \(N=4\) pairs with \(\delta_{A,i}=\Delta(i-1)\). We plot the concurrence of the last pair, \(\mathcal{C}_{44}\), for a fixed dephasing rate \(\gamma_{\phi}\) and increasing detuning \(\Delta\). 
Up to \(\Delta\sim\kappa\), entanglement increases due to a reduced preparation time, while for larger detunings finite-bandwidth effects set in and degrade the entanglement again. Figure 3: (a) Relaxation into a bipartite entangled state for \(\vec{\delta}_{A}=0\) and (b) into a multipartite entangled state for \(\vec{\delta}_{B}=-P_{\text{rev}}\vec{\delta}_{A}\) and \(\Delta=\gamma/5\). In both cases \(N=5\). (c) Scaling of the preparation time \(T_{\text{prep}}\) for different ratios \(\Delta/\gamma\), where \(\delta_{A,i}=\Delta(i-1)\) and \(\vec{\delta}_{B}=-\vec{\delta}_{A}\). We define \(T_{\text{prep}}\) via the condition \((1-\mu(T_{\text{prep}}))/N=0.001\), where \(\mu=\text{Tr}[\rho_{\text{q}}^{2}]\) is the purity. For the examples in (a) and (b), \(T_{\text{prep}}\) is indicated by the dashed vertical line. In all plots \(r=1\). _Conclusions._--In summary, we have presented a fully autonomous scheme for distributing entanglement among two distant sets of qubits. Within the same setup, states with varying degrees of bi- and multipartite entanglement can be prepared by adjusting the squeezing strength and the local qubit detunings, while retaining a preparation time that scales at most linearly with \(N\). Compared to related autonomous protocols discussed for single waveguides [23; 24; 38; 39] or locally coupled chains [25; 26], the use of a propagating two-mode entangled source offers the possibility to entangle qubits that are arbitrarily far apart [51] and a systematic way to parallelize the scheme by increasing the bandwidth of the amplifier. This makes this approach very attractive for long-distance entanglement distribution schemes with long-lived spins or narrow-bandwidth optical emitters, but also for local area quantum networks [61; 62; 63], where multiple nodes can be simultaneously entangled with a limited amount of control. _Note added._ During the completion of our manuscript, we became aware of a related but independent work on the stabilization of entangled states in two driven spin chains coupled by a waveguide [64]. _Acknowledgements._--We thank Aashish Clerk, Matthias Englbrecht, Tristan Kraft, Barbara Kraus, and Kirill Fedorov for many stimulating discussions. This work was supported by the European Union's Horizon 2020 research and innovation program under grant agreement No. 899354 (SuperQuLAN) and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)-522216022. Most of the computational results presented were obtained using the CLIP cluster [65]. This research is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus.
2304.00448
Approximation in weighted holomorphic Besov spaces in C^n
We study certain weighted Bergman and weighted Besov spaces of holomorphic functions in the polydisk and in the unit ball. We seek Mergelyan-type conditions on the non-radial weight function to guarantee that the dilations of a given function tend to the same function in norm; in particular, we seek conditions on the non-radial weights to ensure that the analytic polynomials are dense in the space.
Ali Abkar
2023-04-02T05:00:31Z
http://arxiv.org/abs/2304.00448v1
# Approximation in weighted holomorphic Besov spaces in \(\mathbb{C}^{n}\) # Approximation in weighted holomorphic Besov spaces in \(\mathbb{C}^{n}\) Ali Abkar Department of Pure Mathematics, Faculty of Science, Imam Khomeini International University, Qazvin 34149, Iran Email: [email protected] **Abstract.** We study certain weighted Bergman and weighted Besov spaces of holomorphic functions in the polydisk and in the unit ball. We seek Mergelyan-type conditions on the non-radial weight function to guarantee that the dilations of a given function tend to the same function in norm; in particular, we seek conditions on the non-radial weights to ensure that the analytic polynomials are dense in the space. **Keywords**: Bergman space, analytic Besov space, dilation, non-radial weight, angular weight **MSC2020**: 46E15, 46E20, 30H25, 32A36 ## 1 Introduction In the study of Banach spaces of analytic functions on sub-domains of \(\mathbb{C}^{n}\), an important question is to know whether the polynomials are dense in the space. In the complex plane, the first result on the approximation by polynomials in the space of \(p\)-th power area integrable functions on simply connected domains was obtained by Torsten Carleman in his 1923 paper [6]. Indeed, Carleman approximated functions in the Bergman space of the domain, but at the time the phrase "Bergman space" was not yet coined. Carleman's result was then extended to more general regions by O.J. Farrell (see [7], [8]). Farrell proved that for each \(p\)-th power area integrable function \(f\), there is a sequence of polynomials \(p_{n}\) such that \(p_{n}\to f\) in norm. We should remark that the Taylor polynomials of a given function do not necessarily approximate the function in norm (see [13], and [16]). Instead, one tries to show that the dilations \(f_{r}(z)=f(rz)\) for \(0\leq r<1\), converge to \(f\) in norm. It is easily understood that the dilations \(f_{r}\) that are defined on bigger domains are more well-behaved and tractable than the function \(f\) itself. In this paper we are concerned with approximation in weighted Bergman and weighted Besov spaces in several complex variables. We mean by the polydisk the subset \[\mathbb{D}^{n}=\{z=(z_{1},...,z_{n})\in\mathbb{C}^{n}:|z_{j}|<1,\ 1\leq j\leq n\}\] of the \(n\)-dimensional complex space, and by the unit ball the set \[\mathbb{B}_{n}=\{z=(z_{1},...,z_{n})\in\mathbb{C}^{n}:|z|^{2}=|z_{1}|^{2}+ \cdots+|z_{n}|^{2}<1\}.\] Let \(\mathrm{Hol}(\mathbb{D}^{n})\) denote the space of holomorphic functions on \(\mathbb{D}^{n}\). For \(0<p<\infty\), the Bergman space on the polydisk \(\mathbb{D}^{n}\) is defined as \[A_{\alpha}^{p}(\mathbb{D}^{n})=\mathrm{Hol}(\mathbb{D}^{n})\cap L^{p}(\mathbb{ D}^{n},dV_{\alpha})\] where \(dV_{\alpha}(z)=dA_{\alpha}(z_{1})\cdots dA_{\alpha}(z_{n})\), and \[dA_{\alpha}(z_{k})=(1-|z_{k}|^{2})^{\alpha}dx_{k}\,dy_{k}.\] In general, \(\alpha\) is bigger than \(-1\), but in this paper we shall assume that \(\alpha\geq 0\). If \(\alpha=0\), we write \(dV(z)=dA(z_{1})\cdots dA(z_{n})\). Let \(w\) be a positive function on \(\mathbb{D}^{n}\). For \(0<p<\infty\), the weighted Bergman space \(A_{w}^{p}(\mathbb{D}^{n},dV_{\alpha})\) consists of holomorphic functions \(f\) on the polydisk such that the integral \[\|f\|_{A_{w}^{p}(\mathbb{D}^{n},dV_{\alpha})}^{p}=\int_{\mathbb{D}^{n}}|f(z)|^ {p}w(z)dV_{\alpha}(z)\] is finite. In a similar fashion, let \(\mathrm{Hol}(\mathbb{B}_{n})\) denote the space of holomorphic functions on the unit ball. 
The Bergman space on the unit ball is defined as \[A^{p}(\mathbb{B}_{n})=\mathrm{Hol}(\mathbb{B}_{n})\cap L^{p}(\mathbb{B}_{n},dv)\] where \(dv\) is the normalized volume measure on \(\mathbb{B}_{n}\). Indeed, the weighted Bergman space \(A_{w}^{p}(\mathbb{B}_{n},dv)\) consists of holomorphic functions \(f\) in the unit ball such that \[\|f\|_{A_{w}^{p}(\mathbb{B}_{n},dv)}^{p}=\int_{\mathbb{B}_{n}}|f(z)|^{p}w(z)dv(z)<\infty.\] Let \(0<r<1\) and \(z=(z_{1},...,z_{n})\). We mean by \(rz=(rz_{1},...,rz_{n})\) and \[\frac{z}{r}=\left(\frac{z_{1}}{r},...,\frac{z_{n}}{r}\right).\] In this paper, we seek conditions on the weight function \(w\) (defined on \(\mathbb{D}^{n}\) or \(\mathbb{B}_{n}\)) to guarantee that the dilations \(f_{r}\) converge to \(f\) in norm. Recall that if the weight function is radial, the problem is well-known; the difficulty arises when we consider non-radial weights. Since \(f_{r}\) are holomorphic on a bigger domain, they can be approximated by polynomials so that \(f\) can be approximated by polynomials too (provided that the weighted space contains the polynomials). We shall consider weights that satisfy the following Mergelyan-type condition: there is a non-negative integer \(k\) and an \(r_{0}\in(0,1)\) such that \[r^{k}w\left(\frac{z}{r}\right)\leq Cw(z),\ \ \ \ r_{0}\leq r<1,\,|z|<r. \tag{1}\] This condition on the weight function is universal; it does not depend on the underlying region. In the next section we shall provide examples of non-radial weights satisfying this condition. We will prove some approximation theorems both for Bergman spaces and for Besov spaces. We should remark that the definition of Besov spaces of holomorphic functions of several variables is slightly tricky; we shall present two different definitions for these spaces. The first definition, which appears in Section §3, is taken from [17]. The second definition, based on the notion of radial derivatives, is discussed in Section §4. To the best of our knowledge, the latter definition is new and we were unable to trace it in the literature. Finally, in section 5, we introduce the concept of angular weights on the polydisk. These are weights that depend only on the arguments of \(z_{1},...,z_{n}\). Similar approximation results are established for angular weights. To the best of the author's knowledge, the definition of angular weights is somewhat new and has not yet been discussed in the literature. Polynomial approximation has many applications in operator theory of function spaces; see the papers [5], [11]. Although approximation theory for non-radial weights has been discussed in earlier papers like [9], [10], there are still many problems to be settled; see the comprehensive account on this topic in [14]. We close this section by mentioning several results in the one-variable case, see for instance [1], [2], [3], [4]. ## 2 The Bergman spaces It is a classical theorem that if the weight function on the unit disk is radial, that is \(w(z)=w(|z|)\), then the polynomials are dense in the weighted Bergman space \(A^{2}_{w}(\mathbb{D},dA)\). This was proved by S. Mergelyan [12] under the assumption that \[\int_{0}^{1}rw(r)dr<\infty.\] The same result is of course true for the Bergman spaces \(A^{p}_{w}(\mathbb{D},dA)\), for \(0<p<\infty\).
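Before turning to non-radial weights, condition (1) is easy to probe numerically for a candidate weight. The sketch below samples points with \(|z_{j}|<r\) (interpreting the condition coordinatewise on the polydisk, which is an assumption of this sketch) and checks the inequality for the Gaussian weight of Example 2.4(b) with \(k=0\) and \(C=1\); this is a sanity check, not a proof.

```python
import numpy as np

def check_condition_1(w, k, n=2, r0=0.6, C=1.0, trials=20_000, seed=0):
    """Numerically probe condition (1): r^k * w(z/r) <= C * w(z) for r0 <= r < 1, |z_j| < r."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(trials):
        r = rng.uniform(r0, 1.0)
        rad = r * np.sqrt(rng.uniform(0.0, 1.0, n))      # |z_j| < r, area-uniform per coordinate
        ang = rng.uniform(0.0, 2.0 * np.pi, n)
        z = rad * np.exp(1j * ang)
        worst = max(worst, r ** k * w(z / r) - C * w(z))
    return worst <= 1e-12                                # True if no violation was found

gaussian = lambda z: np.exp(-2.0 * np.sum(np.abs(z) ** 2))   # Example 2.4(b) with beta = 2
print(check_condition_1(gaussian, k=0))                       # True
```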
We assume now that \(w\) is an arbitrary weight function (not necessarily radial) for which \[\int_{\mathbb{D}}w(z)dA(z)<\infty.\] We shall see that under some mild condition on \(w\), the polynomials are dense in the weighted Bergman spaces \(A^{p}(\mathbb{D},d\mu_{\alpha})\). We begin with the following theorem. **Theorem 2.1**: _Let \(0<p<\infty\) and \(d\mu_{\alpha}(z)=w(z)dV_{\alpha}(z)\) be a finite positive measure on the polydisk such that \(w\) satisfies the condition (1). Then the polynomials are dense in the weighted Bergman space \(A^{p}(\mathbb{D}^{n},d\mu_{\alpha})\)._ _Proof_. Let \(f\in A^{p}(\mathbb{D}^{n},d\mu_{\alpha})\), and consider the dilatations \(f_{r}(z)=f(rz)\) for \(0\leq r<1\), and \(z\in\mathbb{D}^{n}\). By a change of variables, we see that for non-negative \(\alpha\) and big enough \(r\) (\(r\geq r_{0}\), see the condition (1) above) we have \[\int_{\mathbb{D}^{n}}|f_{r}(z)|^{p}d\mu_{\alpha}(z) =\frac{1}{r^{2n+k}}\int_{r\mathbb{D}\times\cdots\times r\mathbb{ D}}|f(z)|^{p}r^{k}w\left(\frac{z}{r}\right)\prod_{k=1}^{n}\Big{(}\frac{r^{2}-|z_{k} |^{2}}{r^{2}}\Big{)}^{\alpha}dV(z)\] \[\leq\frac{C}{r^{2n+k}}\int_{r\mathbb{D}\times\cdots\times r \mathbb{D}}|f(z)|^{p}w(z)dV(z)<\infty.\] We now use the dominated convergence theorem to conclude that \[\limsup_{r\to 1^{-}}\int_{\mathbb{D}^{n}}|f_{r}|^{p}d\mu_{\alpha}(z)\leq\int_{ \mathbb{D}^{n}}|f|^{p}d\mu_{\alpha}(z).\] This inequality together with Fatou's lemma implies that for each measurable subset \(E\subseteq\mathbb{D}^{n}\), \[\lim_{r\to 1^{-}}\int_{E}|f_{r}|^{p}d\mu_{\alpha}(z)=\int_{E}|f|^{p}d\mu_{ \alpha}(z). \tag{2}\] By the continuity property of integral on its domain, there is a \(\delta>0\) such that \[\mu(E)<\delta\implies\int_{E}|f|^{p}d\mu_{\alpha}(z)<\epsilon.\] It follows from Egorov's theorem that there is a subset \(E\subset\mathbb{D}^{n}\) such that \(\mu_{\alpha}(\mathbb{D}^{n}\setminus E)<\delta\) and \(f_{r}\to f\) uniformly on \(E\). We now write \[\int_{\mathbb{D}^{n}}|f_{r}-f|^{p}d\mu_{\alpha} =\int_{E}|f_{r}-f|^{p}d\mu_{\alpha}+\int_{\mathbb{D}\setminus E}|f _{r}-f|^{p}d\mu_{\alpha}\] \[\leq\int_{E}|f_{r}-f|^{p}d\mu_{\alpha}+2^{p}\int_{\mathbb{D}^{n} \setminus E}(|f_{r}|^{p}+|f|^{p})d\mu_{\alpha}. \tag{3}\] Due to the uniform convergence of \(f_{r}\) to \(f\) on \(E\), the first integral on the right-hand side of (3) tends to zero as \(r\to 1^{-}\). As for the second integral, we use (2) to obtain \[0\leq\liminf_{r\to 1^{-}}\int_{\mathbb{D}^{n}}|f_{r}-f|^{p}d\mu_{ \alpha}(z) \leq\limsup_{r\to 1^{-}}\int_{\mathbb{D}^{n}}|f_{r}-f|^{p}d\mu_{ \alpha}(z)\] \[\leq 2^{p+1}\int_{\mathbb{D}^{n}\setminus E}|f|^{p}d\mu_{\alpha}\] \[\leq 2^{p+1}\epsilon,\] from which it follows that \[\lim_{r\to 1^{-}}\int_{\mathbb{D}^{n}}|f_{r}-f|^{p}d\mu_{\alpha}(z)=0. \tag{4}\] Given \(\epsilon>0\), there is \(0<r_{0}<1\) such that \(\|f-f_{r_{0}}\|_{A^{p}(\mathbb{D}^{n},d\mu_{\alpha})}<\epsilon\). Since \(f_{r_{0}}\) is analytic in a bigger domain than the unit polydisk, we may use Mergelyan's theorem to approximate \(f_{r_{0}}\) uniformly on \(\mathbb{D}^{n}\) by a polynomial \(Q\). Since \(\mu_{\alpha}\) is a finite positive measure, we obtain \[\|f_{r_{0}}-Q\|_{A^{p}(\mathbb{D}^{n},d\mu_{\alpha})}^{p} =\int_{\mathbb{D}^{n}}|f_{r_{0}}-Q|^{p}d\mu_{\alpha}\] \[\leq\epsilon^{p}\int_{\mathbb{D}^{n}}d\mu_{\alpha}(z):=\epsilon^ {p}C,\] from which it follows that \[\|f-Q\|_{A^{p}(\mathbb{D}^{n},d\mu_{\alpha})}\leq\epsilon(1+C^{1/p}).\] This proves the theorem. 
\(\Box\) **Remark 2.2**: _The condition (1) can be weakened in the following way: there is a non-negative function \(g\) such that for sufficiently large \(r\) we have \(r^{k}w(z/r)\leq g(z)\) and_ \[\int_{\mathbb{D}^{n}}|f(z)|^{p}g(z)dV_{\alpha}(z)<\infty.\] _Again, we use the dominated convergence theorem to obtain_ \[\limsup_{r\to 1^{-}}\int_{E}|f_{r}|^{p}d\mu_{\alpha}(z)\leq\int_{E}|f|^{p}d\mu_{\alpha}(z).\] **Theorem 2.3**: _Let \(0<p<\infty\) and \(d\mu_{\alpha}(z)=w(z)dV_{\alpha}(z)\) be a finite positive measure on the polydisk such that for some non-negative integer \(k\), the function_ \[r\mapsto r^{k}w\left(\frac{z}{r}\right),\quad|z|<r,\] _is increasing in \(r\). Then the polynomials are dense in \(A^{p}(\mathbb{D}^{n},wdV_{\alpha})\)._ Proof.: It suffices to show that each function \(f\in A^{p}(\mathbb{D}^{n},d\mu_{\alpha})\) satisfies \[\limsup_{r\to 1^{-}}\int_{\mathbb{D}^{n}}|f_{r}|^{p}d\mu_{\alpha}(z)\leq\int_{\mathbb{D}^{n}}|f|^{p}d\mu_{\alpha}(z).\] To do this, we make a change of variables to get \[\int_{\mathbb{D}^{n}}|f_{r}(z)|^{p}d\mu_{\alpha}(z)=\frac{1}{r^{2n+k}}\int_{r\mathbb{D}\times\cdots\times r\mathbb{D}}|f(z)|^{p}r^{k}w\left(\frac{z}{r}\right)\prod_{j=1}^{n}\Big(\frac{r^{2}-|z_{j}|^{2}}{r^{2}}\Big)^{\alpha}dV(z).\] Note that the functions \[r\mapsto r^{k}w\left(\frac{z}{r}\right),\quad r\mapsto\Big(\frac{r^{2}-|z_{j}|^{2}}{r^{2}}\Big)^{\alpha}\] are increasing in \(r\) (by the assumption and the fact that \(\alpha\) is non-negative), so that the monotone convergence theorem applies to yield \[\limsup_{r\to 1^{-}}\int_{\mathbb{D}^{n}}|f_{r}(z)|^{p}d\mu_{\alpha}(z)=\int_{\mathbb{D}^{n}}|f(z)|^{p}d\mu_{\alpha}(z).\] Now, we use an argument as in the proof of Theorem 2.1 to get the result. \(\Box\) **Example 2.4**: _(a). Let \(\alpha>-1\) and \(w(z)=\prod_{k=1}^{n}(\alpha+1)(1-|z_{k}|^{2})^{\alpha}\). Then_ \[r^{k}w\left(\frac{z}{r}\right)=\prod_{k=1}^{n}(\alpha+1)r^{k-2\alpha}(r^{2}-|z_{k}|^{2})^{\alpha}\leq w(z),\] _provided that \(k\geq 2\alpha+1\). Now Theorem 2.1 can be applied. (b). It is easily seen that the Gaussian weight_ \[w(z)=e^{-\beta|z|^{2}}=e^{-\beta(|z_{1}|^{2}+\cdots+|z_{n}|^{2})},\quad\beta>0,\] _satisfies the condition of the above theorem for \(k=0\), since for \(0<r<1\),_ \[w\left(\frac{z}{r}\right) =e^{\frac{-\beta}{r^{2}}(|z_{1}|^{2}+\cdots+|z_{n}|^{2})}\] \[\leq e^{-\beta(|z_{1}|^{2}+\cdots+|z_{n}|^{2})}\] \[=w(z).\] _The same is true for the non-radial weight_ \[w(z)=e^{-\beta(|x_{1}|^{2}+\cdots+|x_{n}|^{2})},\quad\beta>0,\] _where \(z=(z_{1},...,z_{n})\), \(z_{k}=x_{k}+iy_{k},\,1\leq k\leq n\). Note that in some instances the function \(r\mapsto w(z/r)\) is not increasing, but we may find some \(k\) for which \(r\mapsto r^{k}w(z/r)\) is increasing for \(r\) bigger than \(|z|\). For example, if we take \(w(z)=\exp(|z|)\), then_ \[\frac{d}{dr}w\left(\frac{z}{r}\right)=-\frac{|z|}{r^{2}}\exp\left(\frac{|z|}{r}\right)<0\] _while_ \[\frac{d}{dr}\left[rw(\frac{z}{r})\right]=\left(1-\frac{|z|}{r}\right)\exp\left(\frac{|z|}{r}\right)>0,\quad|z|<r.\] _Note also that \(r^{2}e^{|z|/r}\) is increasing for \(r>1/2\) since_ \[\frac{d}{dr}(r^{2}e^{|z|/r})=2re^{|z|/r}+r^{2}\Big(\frac{-|z|}{r^{2}}e^{|z|/r}\Big)=(2r-|z|)e^{|z|/r}>0,\quad r>1/2,\ |z|<1.\] ## 3 The Besov spaces In the unit disk, the weighted Dirichlet space \(\mathcal{D}^{p}_{w}\), \(1<p<\infty\), consists of analytic functions in the unit disk for which \[\|f\|_{\mathcal{D}^{p}_{w}}=\left(|f(0)|^{p}+\int_{\mathbb{D}}|f^{\prime}(z)|^{p}w(z)dA(z)\right)^{1/p}\] is finite.
These spaces are particular cases of certain Banach spaces of analytic functions, namely, the weighted analytic Besov spaces. The weighted analytic Besov space \(\mathcal{B}^{p}_{w}\) consists of analytic functions \(f\) in the unit disk for which the integral \[\int_{\mathbb{D}}(1-|z|^{2})^{p-2}|f^{\prime}(z)|^{p}w(z)dA(z)\] is finite. The norm of a function in the weighted Besov space is given by \[\|f\|_{\mathcal{B}^{p}_{w}}=\left(|f(0)|^{p}+\int_{\mathbb{D}}(1-|z|^{2})^{p-2}|f^{\prime}(z)|^{p}w(z)dA(z)\right)^{1/p}.\] In higher dimensions, the definition of Besov spaces is more subtle. Let \(dv(z)\) be the normalized volume measure in the unit ball \(\mathbb{B}_{n}\) of \(\mathbb{C}^{n}\), and let \[d\tau(z)=\frac{dv(z)}{(1-|z|^{2})^{n+1}}\] be the Möbius invariant measure in \(\mathbb{B}_{n}\). For \(0<p<\infty\), the Besov space \(\mathcal{B}^{p}\) consists of all holomorphic functions \(f\) on \(\mathbb{B}_{n}\) such that \[(1-|z|^{2})^{N}\frac{\partial^{m}f}{\partial z^{m}}(z)\in L^{p}(\mathbb{B}_{n},d\tau),\quad|m|=N,\] where \(N\) is any fixed positive integer satisfying \(pN>n\). According to [17, Theorem 6.1], the definition of \(\mathcal{B}^{p}\) is independent of the choice of \(N\). The "norm" is defined by \[\|f\|^{p}_{\mathcal{B}^{p}}=\sum_{|m|\leq N-1}\left|\frac{\partial^{m}f}{\partial z^{m}}(0)\right|^{p}+\sum_{|m|=N}\int_{\mathbb{B}_{n}}\left|(1-|z|^{2})^{N}\frac{\partial^{m}f}{\partial z^{m}}(z)\right|^{p}d\tau(z).\] When \(p=2\), \(\mathcal{B}^{2}\) plays the role of the Dirichlet space and is denoted by \(\mathcal{D}^{2}\). **Theorem 3.1**: _Let \(d\mu(z)=w(z)d\tau(z)\) be a finite positive measure on the unit ball such that \(w\) satisfies the condition (1). Then the polynomials are dense in \(\mathcal{D}^{2}_{w}(\mathbb{B}_{n},d\mu)\)._ _Proof_.
As for approximation, it is enough to work with the following expression for norm (the constant term does not play any role in approximation): \[\|f\|^{2}_{\mathcal{D}^{2}_{w}}=\sum_{|m|=N}\int_{\mathbb{B}_{n}}\left|(1-|z|^ {2})^{N}\frac{\partial^{m}f}{\partial z^{m}}(z)\right|^{2}w(z)d\tau(z).\] Note that \[\|f_{r}\|^{2}_{\mathcal{D}^{2}_{w}}=\sum_{|m|=N}\int_{\mathbb{B}_{n}}\left|(1- |z|^{2})^{N}\frac{\partial^{m}f_{r}}{\partial z^{m}}(z)\right|^{2}w(z)d\tau(z).\] We now replace \(z\) by \(z/r\) to obtain \[\|f_{r}\|_{\mathcal{D}^{2}_{w}}^{2} =\sum_{|m|=N}\frac{1}{r^{k}}\int_{r\mathbb{B}_{n}}\left|\left(\frac{ r^{2}-|z|^{2}}{r^{2}}\right)^{N}r^{|m|}\frac{\partial^{m}f}{\partial z^{m}}(z) \right|^{2}r^{k}w\left(\frac{z}{r}\right)\frac{r^{2}}{(r^{2}-|z|^{2})^{n+1}}dv(z)\] \[\leq C\frac{r^{2+2|m|}}{r^{k+4N}}\sum_{|m|=N}\int_{r\mathbb{B}_{n} }\left|\left(1-|z|^{2}\right)^{N}\frac{\partial^{m}f}{\partial z^{m}}(z) \right|^{2}w(z)\frac{dv(z)}{(r^{2}-|z|^{2})^{n+1}}\] \[=C\frac{r^{2+2|m|}}{r^{k+4N}}\sum_{|m|=N}\int_{\mathbb{B}_{n}} \chi(r\mathbb{B}_{n})(z)\left|\left(1-|z|^{2}\right)^{N}\frac{\partial^{m}f}{ \partial z^{m}}(z)\right|^{2}w(z)\frac{dv(z)}{(r^{2}-|z|^{2})^{n+1}}\] \[\leq C\frac{r^{2+2|m|}}{r^{k+4N}}\sum_{|m|=N}\int_{\mathbb{B}_{n} }\left|\left(1-|z|^{2}\right)^{N}\frac{\partial^{m}f}{\partial z^{m}}(z) \right|^{2}w(z)\frac{dv(z)}{(r^{2}-|z|^{2})^{n+1}}.\] Since the function \[r\mapsto\frac{1}{(r^{2}-|z|^{2})^{n+1}},\quad|z|<r,\] is monotone decreasing in \(r\), the last integral converges ar \(r\to 1^{-}\), so that a version of the dominated convergence theorem can be applied to the first integral: \[\limsup_{r\to 1^{-}}\|f_{r}\|_{\mathcal{D}^{2}_{w}}^{2} =\limsup_{r\to 1^{-}}\sum_{|m|=N}\frac{1}{r^{k}}\int_{r \mathbb{B}_{n}}\left|\left(\frac{r^{2}-|z|^{2}}{r^{2}}\right)^{N}r^{|m|}\frac {\partial^{m}f}{\partial z^{m}}(z)\right|^{2}r^{k}w\left(\frac{z}{r}\right) \frac{r^{2}}{(r^{2}-|z|^{2})^{n+1}}dv(z)\] \[=\sum_{|m|=N}\int_{\mathbb{B}_{n}}\left|(1-|z|^{2})^{N}\frac{ \partial^{m}f}{\partial z^{m}}(z)\right|^{2}w(z)\frac{dv(z)}{(1-|z|^{2})^{n+1}}\] \[=\|f\|_{\mathcal{D}^{2}_{w}}^{2}.\] The rest of the proof goes in the same way as in the proof of Theorem 2.1; we just replace \(f\) and \(f_{r}\), by \(\frac{\partial^{m}f}{\partial z^{m}}\) and \(\frac{\partial^{m}f_{r}}{\partial z^{m}}\), respectively, to obtain \[\lim_{r\to 1^{-}}\|f_{r}-f\|_{\mathcal{D}^{2}_{w}}^{2}=\lim_{r\to 1^{-}}\sum_{|m|=N} \int_{\mathbb{B}_{n}}\left|(1-|z|^{2})^{N}\left(\frac{\partial^{m}f}{\partial z ^{m}}(z)-\frac{\partial^{m}f_{r}}{\partial z^{m}}(z)\right)\right|^{2}w(z)d \tau(z)=0.\] On the other hand, each dilation \(f_{r}\) can be approximated by polynomials (see [17, Theorem 6.2]), therefore, \(f\) can be approximated by polynomials. \(\Box\) We now state a similar statement for the analytic Besov spaces. **Theorem 3.2**: _Let \(d\mu(z)=w(z)d\tau(z)\) be a finite positive measure on the unit ball such that \(w\) satisfies the condition (1). Then the polynomials are dense in \(\mathcal{B}^{p}_{w}(\mathbb{B}_{n},d\mu)\)._ _Proof_. 
We begin by inspecting the expression \[\|f_{r}\|_{\mathcal{B}^{p}_{w}}^{p}=\sum_{|m|=N}\int_{\mathbb{B}_{n}}\left|(1- |z|^{2})^{N}\frac{\partial^{m}f_{r}}{\partial z^{m}}(z)\right|^{p}w(z)d\tau(z).\] Similar to the preceding case, we have \[\|f_{r}\|_{\mathcal{B}^{p}_{w}}^{p} =\sum_{|m|=N}\frac{1}{r^{k}}\int_{r\mathbb{B}_{n}}\left|\left( \frac{r^{2}-|z|^{2}}{r^{2}}\right)^{N}r^{|m|}\frac{\partial^{m}f}{\partial z^{ m}}(z)\right|^{p}r^{k}w\left(\frac{z}{r}\right)\frac{r^{2}}{(r^{2}-|z|^{2})^{n+1}}dv(z)\] \[\leq C\frac{r^{2+p|m|}}{r^{k+2pN}}\sum_{|m|=N}\int_{\mathbb{B}_{n }}\left|(1-|z|^{2})^{N}\frac{\partial^{m}f}{\partial z^{m}}(z)\right|^{p}w(z) \frac{dv(z)}{(r^{2}-|z|^{2})^{n+1}}.\] Again, this argument shows that \[\limsup_{r\to 1^{-}}\|f_{r}\|^{p}_{{\cal B}^{p}_{w}}\leq\|f\|^{p}_{{\cal B}^{p}_{w}},\] from which the result follows. \(\Box\) ## 4 Besov spaces: alternative definition Another approach in the study of Besov spaces in several complex variables is to use the concept of radial derivative. Let \(f\) be a holomorphic function in the unit ball \({\mathbb{B}}_{n}\). The radial derivative of \(f\) is defined by \[Rf(z)=\sum_{k=1}^{n}z_{k}\frac{\partial f}{\partial z_{k}}(z).\] According to [17], if \[f(z)=\sum_{k=0}^{\infty}f_{k}(z) \tag{5}\] is the homogenous expansion of \(f\) (each \(f_{k}\) is a polynomial of degree \(k\) in \(z_{1}\),...,\(z_{n}\)), then \[Rf(z)=\sum_{k=1}^{\infty}kf_{k}(z).\] The weighted analytic Besov space \({\bf B}^{p}_{w}({\mathbb{D}}^{n})\) consists of analytic functions \(f\) on a neighborhood of polydisk for which the integral \[\int_{{\mathbb{D}}^{n}}\Big{(}\prod_{j=1}^{n}(1-|z_{j}|^{2})^{p-2}\Big{)}|Rf( z)|^{p}w(z)dV(z)\] is finite. The norm of a function in \({\bf B}^{p}_{w}({\mathbb{D}}^{n})\) is given by \[\|f\|^{p}_{{\bf B}^{p}_{w}({\mathbb{D}}^{n})}=|f(0)|^{p}+\int_{{\mathbb{D}}^{n }}\Big{(}\prod_{j=1}^{n}(1-|z_{j}|^{2})^{p-2}\Big{)}|Rf(z)|^{p}w(z)dV(z).\] **Theorem 4.1**: _Let \(2\leq p<\infty\) and \(d\mu(z)=w(z)dV(z)\) be a finite positive measure on the polydisk such that \(w\) satisfies the condition (1). Then the polynomials are dense in the weighted analytic Besov space \({\bf B}^{p}_{w}({\mathbb{D}}^{n})\)._ _Proof_. 
As we pointed out earlier, the constant term in the definition of norm does not play any role in approximation, so that we just look at \[\|f_{r}\|^{p}_{{\bf B}^{p}_{w}({\mathbb{D}}^{n})}=\int_{{\mathbb{D}}^{n}}\Big{(} \prod_{j=1}^{n}(1-|z_{j}|^{2})^{p-2}\Big{)}|Rf_{r}(z)|^{p}w(z)dV(z).\] We first note that by (5) \[f_{r}(z)=f(rz)=\sum_{m=0}^{\infty}f_{m}(rz),\] from which it follows that \[(Rf_{r})(z)=\sum_{m=0}^{\infty}mf_{m}(rz).\] Replacing \(z\) by \(z/r\) we obtain \[\|f_{r}\|_{\mathbf{B}_{w}^{p}(\mathbb{D}^{n})}^{p} =\int_{\mathbb{D}^{n}}\Big{[}\prod_{j=1}^{n}(1-|z_{j}|^{2})^{p-2} \Big{]}|Rf_{r}(z)|^{p}w(z)dV(z)\] \[=\frac{1}{r^{k+2n}}\int_{r\mathbb{D}\times\cdots\times r\mathbb{ D}}\Big{[}\prod_{j=1}^{n}\Big{(}\frac{r^{2}-|z_{j}|^{2}}{r^{2}}\Big{)}^{p-2} \Big{]}|Rf(z)|^{p}r^{k}w(\frac{z}{r})dV(z)\] \[\leq\frac{C}{r^{k+2n}}\int_{r\mathbb{D}\times\cdots\times r \mathbb{D}}\Big{[}\prod_{j=1}^{n}\Big{(}\frac{r^{2}-|z_{j}|^{2}}{r^{2}}\Big{)} ^{p-2}\Big{]}|Rf(z)|^{p}w(z)dV(z).\] Since \(p-2\geq 0\), each of the functions \[r\mapsto\Big{(}\frac{r^{2}-|z_{j}|^{2}}{r^{2}}\Big{)}^{p-2}\] is increasing in \(r\), so that we can apply the generalized version of dominated convergence theorem to get \[\lim_{r\to 1^{-1}}\|f_{r}\|_{\mathbf{B}_{w}^{p}(\mathbb{D}^{n})}^{p}=\|f\|_{ \mathbf{B}_{w}^{p}(\mathbb{D}^{n})}^{p}.\] This latter implies that the dilations \(f_{r}\) tend to \(f\) in norm; \[\|f-f_{r}\|_{\mathbf{B}_{w}^{p}(\mathbb{D}^{n})}^{p}=0,\] which leads to the desired result. \(\Box\) ## 5 Angular weights on polydisk In contrast to radial weights defined on the unit disk, we may consider weights that depend only on the argument of \(z\); that is, \(w(re^{i\theta})=w(\theta)\). We may call such weights _angular weights_. For example, given \(\alpha>0\), \[w(z)=w(re^{i\theta})=(4\pi^{2}-\theta^{2})^{\alpha},\quad 0\leq\theta<2\pi,\] is an angular weight on the unit disk. Similarly, a positive continuous function \(w\) on the polydisk \(\mathbb{D}^{n}\) is said to be angular if \[w(r_{1}e^{i\theta_{1}},...,r_{n}e^{i\theta_{n}})=w(\theta_{1},...,\theta_{n}),\] meaning that \(w\) just depends on the arguments of the components of \(z\in\mathbb{D}^{n}\). **Some notations** For simplicity, given an \(n\)-tuple \(\mathbf{r}=(r_{1},...,r_{n})\), we write \(d\mathbf{r}=dr_{1}\cdots dr_{n}\), and for a given weight function \(w\), we write \(w(\mathbf{r})\) instead of \(\omega(r_{1},...,r_{n})\). In the same manner, \(w(\theta)\) is a short form for \(w(\theta_{1},...,\theta_{n})\), and \(d\theta\) stands for \(d\theta_{1}\cdots d\theta_{n}\). The distinguished boundary of \(\mathbb{D}^{n}\) is denoted by \(\mathbb{T}^{n}\), and \(|m|=m_{1}+\cdots+m_{n}\) where \(m=(m_{1},...,m_{n})\) is a multi-index consisting of non-negative integers. **Proposition 5.1**: _Let \(w\) be an angular weight function on the polydisk that satisfies_ \[\int_{\mathbb{T}^{n}}w(\theta)d\theta<\infty.\] _Then the Taylor polynomials of each function in \(A_{w}^{2}(\mathbb{D}^{n},dV_{\alpha})\) tend to the same function in norm. In particular, the polynomials are dense in \(A_{w}^{2}(\mathbb{D}^{n},dV_{\alpha})\)._ _Proof_. Let \(m=(m_{1},...,m_{n})\) be a multi-index, and \[f(z)=\sum_{|m|\geq 0}a_{m}z^{m}=\sum_{|m|\geq 0}a_{m_{1},...,m_{n}}z_{1}^{m_{1}} \cdots z_{n}^{m_{n}}\] be a function in \(A_{w}^{2}(\mathbb{D}^{n},dV_{\alpha})\). Let \(I^{n}\) stand for the \(n\) times Cartesian product of the unit interval \([0,1]\). 
Then using polar coordinates we have \[\|f\|_{A_{w}^{2}(\mathbb{D}^{n},dV_{\alpha})}^{2} =\int_{I^{n}}\int_{\mathbb{T}^{n}}\sum_{|m|\geq 0}|a_{m}|^{2}\prod_{k=1}^{n}\left(r_{k}^{2m_{k}}(1-r_{k}^{2})^{\alpha}r_{k}dr_{k}\right)\,w(\theta)d\theta\] \[=\Big(\int_{\mathbb{T}^{n}}w(\theta)d\theta\Big)\sum_{|m|\geq 0}\frac{|a_{m}|^{2}\,\Gamma(m_{1}+1)\cdots\Gamma(m_{n}+1)(\Gamma(\alpha+1))^{n}}{2^{n}\Gamma(m_{1}+\alpha+1)\cdots\Gamma(m_{n}+\alpha+1)}.\] This implies that for \(p_{k}(z)=\sum_{0\leq|m|\leq k}a_{m}z^{m}\) we have \[\|f-p_{k}\|_{A_{w}^{2}(\mathbb{D}^{n},dV_{\alpha})}^{2}=\Big(\int_{\mathbb{T}^{n}}w(\theta)d\theta\Big)\sum_{|m|\geq k+1}\frac{|a_{m}|^{2}\,\Gamma(m_{1}+1)\cdots\Gamma(m_{n}+1)(\Gamma(\alpha+1))^{n}}{2^{n}\Gamma(m_{1}+\alpha+1)\cdots\Gamma(m_{n}+\alpha+1)}.\] Now, we let \(k\) tend to infinity, so that \(p_{k}\to f\) in \(A_{w}^{2}(\mathbb{D}^{n},dV_{\alpha})\). \(\Box\) This result can be generalized in the following way. **Theorem 5.2**: _Let \(w\) be an angular weight function on the polydisk that satisfies_ \[\int_{\mathbb{T}^{n}}w(\theta)d\theta<\infty.\] _Then the polynomials are dense in \(A_{w}^{p}(\mathbb{D}^{n},dV_{\alpha})\), \(0<p<\infty\)._ _Proof_. Since \(w\) is angular, we see that \(w(z)=w(\frac{z}{r})\), so that \[\|f_{r}\|_{A_{w}^{p}(\mathbb{D}^{n},dV_{\alpha})}^{p} =\int_{\mathbb{D}^{n}}|f_{r}(z)|^{p}w(z)\prod_{k=1}^{n}(1-|z_{k}|^{2})^{\alpha}dV(z)\] \[=\frac{1}{r^{2n}}\int_{r\mathbb{D}^{n}}|f(z)|^{p}w(z)\prod_{k=1}^{n}\left(\frac{r^{2}-|z_{k}|^{2}}{r^{2}}\right)^{\alpha}dV(z).\] Again, we use \(\alpha\geq 0\) and the fact that the function \[r\mapsto\Big(\frac{r^{2}-|z_{k}|^{2}}{r^{2}}\Big)^{\alpha},\quad|z_{k}|\leq r,\] is increasing in \(r\) to obtain \[\limsup_{r\to 1^{-}}\int_{\mathbb{D}^{n}}|f_{r}(z)|^{p}w(z)dV_{\alpha}(z)=\int_{\mathbb{D}^{n}}|f(z)|^{p}w(z)dV_{\alpha}(z).\] As in the proof of Theorem 2.1, this equality leads to \[\|f_{r}-f\|_{A^{p}_{w}(\mathbb{D}^{n},dV_{\alpha})}\to 0,\quad r\to 1^{-},\] from which the result follows. \(\Box\) We now consider weights that are products of a radial and an angular weight. **Proposition 5.3**: _Let \(w(r_{1}e^{i\theta_{1}},...,r_{n}e^{i\theta_{n}})=\omega(r_{1},...,r_{n})\nu(\theta_{1},...,\theta_{n}),\) where \(\omega\) and \(\nu\) are two weights on \(\mathbb{D}^{n}\) satisfying_ \[\int_{I^{n}}\Big[\prod_{k=1}^{n}(1-r_{k}^{2})^{\alpha}r_{k}\Big]\omega({\bf r})d{\bf r}<\infty,\quad\int_{\mathbb{T}^{n}}\nu(\theta)d\theta<\infty.\] _Then the Taylor polynomials of each function in \(A^{2}_{w}(\mathbb{D}^{n},dV_{\alpha})\) tend to the same function in norm.
In particular, the polynomials are dense in \(A^{2}_{w}(\mathbb{D}^{n},dV_{\alpha})\)._ _Proof._ A computation shows that \[\|f\|^{2}_{A^{2}_{w}(\mathbb{D}^{n},dV_{\alpha})} =\int_{I^{n}}\int_{\mathbb{T}^{n}}\sum_{|m|\geq 0}|a_{m}|^{2}\prod_{k=1}^{n}\Big(r_{k}^{2m_{k}}(1-r_{k}^{2})^{\alpha}r_{k}dr_{k}\Big)\,\omega({\bf r})\nu(\theta)d{\bf r}d\theta\] \[=\Big(\int_{\mathbb{T}^{n}}\nu(\theta)d\theta\Big)\sum_{|m|\geq 0}|a_{m}|^{2}\int_{I^{n}}\Big[\prod_{k=1}^{n}r_{k}^{2m_{k}}(1-r_{k}^{2})^{\alpha}r_{k}\Big]\omega({\bf r})d{\bf r}\] \[=\Big(\int_{\mathbb{T}^{n}}\nu(\theta)d\theta\Big)\sum_{|m|\geq 0}|a_{m}|^{2}\gamma_{m},\] where \[\gamma_{m} =\int_{I^{n}}\Big[\prod_{k=1}^{n}r_{k}^{2m_{k}}(1-r_{k}^{2})^{\alpha}r_{k}\Big]\omega({\bf r})d{\bf r}\] \[\leq\int_{I^{n}}\Big[\prod_{k=1}^{n}(1-r_{k}^{2})^{\alpha}r_{k}\Big]\omega({\bf r})d{\bf r}<\infty.\] Now, for \(p_{k}(z)=\sum_{0\leq|m|\leq k}a_{m}z^{m}\) we have \[\|f-p_{k}\|^{2}_{A^{2}_{w}(\mathbb{D}^{n},dV_{\alpha})}=\Big(\int_{\mathbb{T}^{n}}\nu(\theta)d\theta\Big)\sum_{|m|\geq k+1}|a_{m}|^{2}\gamma_{m},\] which tends to zero as \(k\to\infty\). \(\Box\) **Theorem 5.4**: _Let \(w(r_{1}e^{i\theta_{1}},...,r_{n}e^{i\theta_{n}})=\omega(r_{1},...,r_{n})\nu(\theta_{1},...,\theta_{n}),\) where \(\omega\) and \(\nu\) are two weights on \(\mathbb{D}^{n}\) satisfying_ \[\int_{I^{n}}\Big[\prod_{k=1}^{n}(1-r_{k}^{2})^{\alpha}r_{k}\Big]\omega({\bf r})d{\bf r}<\infty,\quad\int_{\mathbb{T}^{n}}\nu(\theta)d\theta<\infty,\] _and \(s^{k}\omega(r/s)\leq C\omega(r)\) for some integer \(k\geq 0\). Then the polynomials are dense in the weighted Bergman space \(A^{p}_{w}(\mathbb{D}^{n},dV_{\alpha}),\)\(0<p<\infty\)._ Proof.: Again, we see that for \(z=(r_{1}e^{i\theta_{1}},...,r_{n}e^{i\theta_{n}})\), \[\|f_{s}\|_{A^{p}_{w}(\mathbb{D}^{n},dV_{\alpha})}^{p} =\int_{\mathbb{D}^{n}}|f_{s}(z)|^{p}\omega(r)\nu(\theta)\prod_{j=1}^{n}(1-|z_{j}|^{2})^{\alpha}dv(z)\] \[=\frac{1}{s^{2n+k}}\int_{s\mathbb{D}^{n}}|f(z)|^{p}s^{k}\omega(r/s)\prod_{j=1}^{n}\left(\frac{s^{2}-|z_{j}|^{2}}{s^{2}}\right)^{\alpha}\nu(\theta)dv(z)\] \[\leq\frac{C}{s^{2n+k}}\int_{s\mathbb{D}^{n}}|f(z)|^{p}\omega(r)\prod_{j=1}^{n}\left(\frac{s^{2}-|z_{j}|^{2}}{s^{2}}\right)^{\alpha}\nu(\theta)dv(z).\] Using the fact that for \(\alpha\geq 0\) and \(j\) fixed, \[s\mapsto\Big(\frac{s^{2}-|z_{j}|^{2}}{s^{2}}\Big)^{\alpha},\quad|z_{j}|\leq s,\] is increasing in \(s\), we obtain \[\limsup_{s\to 1^{-}}\|f_{s}\|_{A^{p}_{w}(\mathbb{D}^{n},dV_{\alpha})}^{p}\leq\|f\|_{A^{p}_{w}(\mathbb{D}^{n},dV_{\alpha})}^{p}.\] We now use the method introduced in the proof of Theorem 2.1 to get \[\|f_{s}-f\|_{A^{p}_{w}(\mathbb{D}^{n},dV_{\alpha})}\to 0,\quad s\to 1^{-},\] which in turn leads to the desired result. \(\Box\) ## 6 Declarations #### Ethical approval Not applicable #### Competing interests The author declares no competing interests. #### Author contributions Not applicable #### Funding Not applicable #### Availability of data and materials Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.
2307.01206
Confidence Ranking for CTR Prediction
Model evolution and constant availability of data are two common phenomena in large-scale real-world machine learning applications, e.g. ads and recommendation systems. To adapt, the real-world system typically retrain with all available data and online learn with recently available data to update the models periodically with the goal of better serving performance. In this paper, we propose a novel framework, named Confidence Ranking, which designs the optimization objective as a ranking function with two different models. Our confidence ranking loss allows direct optimization of the logits output for different convex surrogate functions of metrics, e.g. AUC and Accuracy depending on the target task and dataset. Armed with our proposed methods, our experiments show that the introduction of confidence ranking loss can outperform all baselines on the CTR prediction tasks of public and industrial datasets. This framework has been deployed in the advertisement system of JD.com to serve the main traffic in the fine-rank stage.
Jian Zhu, Congcong Liu, Pei Wang, Xiwei Zhao, Zhangang Lin, Jingping Shao
2023-06-28T07:31:00Z
http://arxiv.org/abs/2307.01206v1
# Confidence Ranking for CTR Prediction ###### Abstract. Model evolution and constant availability of data are two common phenomena in large-scale real-world machine learning applications, e.g. ads and recommendation systems. To adapt, real-world systems typically _retrain_ with all available data and _online learn_ with recently available data to update the models periodically with the goal of better serving performance. However, if model and data evolution results in a vastly different training manner, it may induce a negative impact on the online A/B platform. In this paper, we propose a novel framework, named _Confidence Ranking_, which designs the optimization objective as a ranking function with two different models. Our confidence ranking loss allows direct optimization of the logit outputs for different convex surrogate functions of metrics, e.g. _AUC_ and _Accuracy_, depending on the target task and dataset. Armed with our proposed methods, our experiments show that the confidence ranking loss can outperform all baselines on the CTR prediction tasks of public and industrial datasets. This framework has been deployed in the ad system of JD.com to serve the main traffic in the fine-rank stage. Click-Through Rate Prediction; Loss function; Deep learning + Footnote †: journal: Information systems – Retrieval models and ranking; Online advertising; Recommender systems.
We provide theoretical and experimental analysis of our proposed confidence ranking loss, showing its superiority over distillation. ## 2. Methods ### Preliminaries Formally, as illustrated in Figure 1, a CTR prediction pipeline in a real-world system can be split into three parts: **(1)** Offline training: given an old dataset \(\mathcal{D}_{old}\) that consists of \(T\) consecutive days of data with corresponding input samples \(x\) and ground-truth labels \(y\), we build a machine learning model \(f\) with parameters \(\theta\) and optimize it in a sequential manner. We denote the output logit by \(h(x;\theta)\), the ground truth by \(y\), and the corresponding probability by \(f(x;\theta)\triangleq\text{sigmoid}(h(x;\theta))\).
The goal of the first part is to optimize \(f\) on \(\mathcal{D}_{old}\) with cross-entropy optimization: \[\operatorname*{arg\,min}_{\theta}\mathcal{L}_{old},\quad\text{where}\quad\mathcal{L}_{old}\triangleq\mathbb{E}_{(x,y)\sim\mathcal{D}_{old}}[\ell(y,f(x;\theta))] \tag{1}\] **(2)** Online serving: after the loss \(\mathcal{L}_{old}\) converges to a value smaller than \(\delta\), where \(\delta\) is specified by cross-validation on \(\mathcal{D}_{old}\), we deploy the machine learning model \(f_{0}\) on the real-world system to serve the newly arriving dataset \(\mathcal{D}_{new}\), where the ground truth \(y_{new}\) is unknown until the user clicks the item or leaves the browser. Up to now, we have described the pipeline of real-world deployed applications. **(3)** Online learning: as new data arrive, we retrain the previously deployed model \(f_{old}\) in order to overcome the inconsistency between \(\mathcal{D}_{old}\) and \(\mathcal{D}_{new}\) before online serving with the new model \(f_{new}\). Benefiting from the mitigation of distributional drift, the strategy of online learning with recent data can efficiently improve generalization in the next serving stage, under the assumption that recent activities reflect the users' evolving interests. Despite the effectiveness of this pipeline, the strategy does not take the previous model's outputs as auxiliary information, which covers the underlying relationship between prediction and ground truth. It also remains unknown how a vastly different training manner induced by model and data evolution impacts online performance. As we open the question of how to learn better in both stages of our system, we would ideally like to define the concept "better" as the difference between the metric scores generated by \(f\) and \(f_{online}\), where \(f\) and \(f_{online}\) are the models that we currently train and deploy on the online serving platform, respectively. Given a metric function \(\mathcal{M}(y,\hat{y})\), where \(y\) and \(\hat{y}\) are the ground truth labels and predicted values, we then seek to minimize the classification risk for \(f\), subject to the constraint that the metric score generated by \(f\) should be better than that of \(f_{online}\): \[\operatorname*{arg\,min}_{\theta}\mathcal{L}\quad s.t.\quad C(f)>0 \tag{2}\] where \(C(f)\triangleq\mathcal{M}(y,f(x))-\mathcal{M}(y,f_{online}(x))\). In this way, we can design algorithms for the mini-batch training strategy in both stages, as shown in Figure 1, with the extra online model predictions. However, directly optimizing the metric score is not differentiable for deep models. For the purpose of designing a tractable algorithm, we will instead work with a softer notion of "better", which evaluates the divergence of their metric scores. ### Confidence Ranking Metric scores are typically calculated with the 0-1 loss. For example, the accuracy and AUC of a mini-batch of samples can be defined by \(acc=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}[y_{i}=\hat{y}_{i}]\) and \(auc=\frac{1}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m}\mathbb{I}[\hat{y}_{i}^{+}>\hat{y}_{j}^{-}]\), respectively, where \(\mathbb{I}\) is the indicator function. To ease optimization, researchers resort to surrogate score functions when maximizing accuracy and AUC (Chen et al., 2017). Thus, we can devise confidence-ranking losses that directly employ previously learned knowledge. **Confidence Ranking (CR) for Accuracy.** As we want to achieve better accuracy, the expected metric objective can be defined as \(C_{acc}(f)=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}[y_{i}f(x_{i})>y_{i}f_{online}(x_{i})]\).
As we want to maximize this objective, we only need to introduce a surrogate loss function that ranks the point-wise model outputs, which can be defined as: \[\ell_{CR}(f)\triangleq\mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\phi_{y}(f(x)-f_{online}(x))\right] \tag{3}\] where we only consider scoring functions \(\phi_{y}\) that are _strictly proper_ (Caktor et al., 2017), e.g. the logistic rank loss \(\phi_{y}(u,v)=\log(1+e^{-(u-v)})\) and the square loss \(\phi_{y}(u,v)=(1-(u-v))^{2}\). For simplicity, in this work we only consider the logistic loss function, which can be defined as: \[\ell_{CR}=\frac{1}{n}\sum_{i=1}^{n}y_{i}\log(1+e^{-(u-v)})+(1-y_{i})\log(1+e^{(u-v)}) \tag{4}\] where \(u\) and \(v\) are \(h(x_{i})\) and \(h_{online}(x_{i})\), respectively. **Relational Confidence Ranking (RCR) for AUC:** The point-wise loss that ranks the output of the current model against that of the online deployed one ensures that the network gradually performs better. To further improve the bipartite ranking performance of binary classification, we follow (Chen et al., 2017) in optimizing bipartite ranking performance. As we want to achieve better bipartite ranking performance, the expected metric objective can be defined as \(C_{auc}(f)=\frac{1}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m}\mathbb{I}[(f(x_{i}^{+})-f(x_{j}^{-}))>(f_{online}(x_{i}^{+})-f_{online}(x_{j}^{-}))]\) where \(x_{i}^{+}\) and \(x_{j}^{-}\) are the \(i\)-th positive sample and the \(j\)-th negative sample. Thus, we define the _relational confidence ranking_ risk as: \[\ell_{RCR}(f)\triangleq\mathbb{E}_{\{x^{+},x^{-}\}\sim\{\mathcal{P}^{+},\mathcal{P}^{-}\}}\left[\phi(d_{f}(x^{+},x^{-})-d_{f_{online}}(x^{+},x^{-}))\right] \tag{5}\] where the function \(d_{f}(x,z)=f(x)-f(z)\) measures the distance between the outputs for the samples \(x\) and \(z\), and \(\mathcal{P}^{+}\) and \(\mathcal{P}^{-}\) are the positive and negative classes, respectively. Similar to the point-wise confidence-ranking loss, we select the logistic loss function as our scoring function: \[\ell_{RCR}=\frac{1}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m}\log(1+e^{-(u-v)}) \tag{6}\] where \(u\) and \(v\) are \(d_{h}(x_{i}^{+},x_{j}^{-})\) and \(d_{h_{online}}(x_{i}^{+},x_{j}^{-})\), respectively. Figure 1. Brief system overview of the CTR prediction pipeline. **Training with Confidence Ranking.** During training, the confidence ranking loss functions, including the proposed point-wise CR loss and the relational RCR loss, can be used either alone or together with task-specific loss functions, e.g. cross-entropy for classification. Therefore, the final objective is defined as: \[\ell_{\text{ce}}+\lambda_{CR}\,\ell_{CR}+\lambda_{RCR}\,\ell_{RCR} \tag{7}\] where \(\ell_{\text{ce}}\) is the cross-entropy loss for CTR prediction, \(\ell_{CR}\) and \(\ell_{RCR}\) are the point-wise and relational confidence ranking losses, respectively, and \(\lambda_{CR}\) and \(\lambda_{RCR}\) are tunable hyperparameters that control the two loss terms. For sampling tuples of positive/negative samples in the proposed relational confidence ranking loss, we simply use all possible pairs in a given mini-batch.
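The three loss terms above map directly onto code. The following PyTorch sketch is an illustrative re-implementation of Eqs. (4), (6) and (7) written for this text, not the authors' released code; it assumes the current model's logits, the online model's logits and float labels in {0, 1} are given as 1-D tensors, and realizes the all-pairs sampling of the relational term by broadcasting.

```python
import torch
import torch.nn.functional as F

def cr_loss(logits, online_logits, labels):
    """Point-wise Confidence Ranking loss, Eq. (4)."""
    diff = logits - online_logits.detach()      # u - v
    pos = F.softplus(-diff)                      # log(1 + e^{-(u-v)}), pushes u above v
    neg = F.softplus(diff)                       # log(1 + e^{+(u-v)}), pushes u below v
    return (labels * pos + (1.0 - labels) * neg).mean()

def rcr_loss(logits, online_logits, labels):
    """Relational Confidence Ranking loss, Eq. (6), over all pos/neg pairs in the batch."""
    pos, neg = labels > 0.5, labels < 0.5
    if pos.sum() == 0 or neg.sum() == 0:         # no positive/negative pair in this batch
        return logits.new_zeros(())
    d_f = logits[pos].unsqueeze(1) - logits[neg].unsqueeze(0)
    d_online = (online_logits[pos].unsqueeze(1) - online_logits[neg].unsqueeze(0)).detach()
    return F.softplus(-(d_f - d_online)).mean()

def confidence_ranking_objective(logits, online_logits, labels, lam_cr=0.4, lam_rcr=0.5):
    """Final objective, Eq. (7): cross-entropy plus the two ranking terms."""
    ce = F.binary_cross_entropy_with_logits(logits, labels)
    return (ce + lam_cr * cr_loss(logits, online_logits, labels)
               + lam_rcr * rcr_loss(logits, online_logits, labels))
```

The `.detach()` calls reflect that the online model only supplies reference scores and receives no gradient, which matches how the deployed model is used in the pipeline; the default weights 0.4 and 0.5 follow the hyperparameter ranges reported in Section 3 and are only placeholders.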
**Theorem 1**.: _(**Bias-Variance bound for confidence ranking**) Pick any convex loss \(\ell\). Suppose we have a teacher model \(p^{t}\) with corresponding empirical confidence ranking risk \(\widehat{R}(f)=\frac{1}{N}\sum_{n=1}^{N}y(x_{n})\ell(f(x_{n})-f_{t}(x_{n}))\) and population risk \(R(f)=\mathbb{E}_{x}\left[p^{*}(x)\ell(f(x))\right]\), where \(f_{t}(x_{n})\) is the teacher output. For any predictor \(f\colon X\to\mathbb{R}^{L}\),_ \[\mathbb{E}\left[(\widehat{R}(f)-R(f))^{2}\right]\leq\mathbb{E}\left[(R(f_{t}))^{2}\right] \tag{8}\] We have stated a statistical perspective on confidence ranking, resting on the observation that confidence ranking offers a bound that always approximates the Bayes probabilities, based on the performance of the teacher model. However, this bound is not tightly characterized for deep learning architectures and may be loose and unstable in real-world applications, especially for the logistic confidence ranking loss. We note that a comprehensive bound for confidence ranking would require specifying the necessary conditions. Nonetheless, this qualitative bound still holds under most conditions in practice. ## 3. Experiments on CTR prediction We evaluate our methods on the Industrial, Avazu and Avito datasets with various controllable settings for CTR prediction. Our core algorithm is easy to implement on various machine learning platforms. For the industrial dataset, we develop it with TensorFlow, while for the public datasets we conduct experiments with a PyTorch implementation. All of our experiments are conducted on one P40 GPU for the public datasets and 8 A100 GPUs for the industrial dataset, respectively. **Datasets.** We perform our experiments on three datasets under two training settings. The industrial search ads dataset contains 59 numerical and categorical feature fields. All of the field data are discretized and transformed into sparse anonymous features. This dataset has more than ten billion instances ranging over one month, with hundreds of millions of active users and items. Avazu is a display recommendation dataset released on Kaggle that contains 40428967 samples with 22 feature fields. Avito is also released as an ads click dataset on Kaggle, containing 190107687 samples but only 16 feature fields. We construct the public datasets by splitting them into training/validation/test sets by timestamp, where the samples of the last day are used for testing, the penultimate day's data for validation, and the rest for training. To split the industrial dataset, we use the traffic samples of the previous 15 days as the training set and the last day as the test set. We summarize the statistics of the datasets in Table 1. **Experimental setup.** In real-world applications, the predictions naturally influence the impressions of items (i.e. items with high confidence are more prone to be exposed to users), and training for multiple passes would cause severe _over-fitting_ issues and much more computation cost. Thus we adopt two settings to evaluate our methods. The details of the configurations are summarized as follows: (1) **One-Pass Setting**: we adopt a one-pass training strategy to imitate _online learning_ in a cyclic serving-and-training process with constant daily and minute-level data; a simplified sketch of this cycle is given below. For the industrial dataset, this is the common setting for evaluating the performance of our methods. However, the public datasets only contain item and user features, without any information about the deployed model. To overcome this, we first train a one-pass model on the \(T\)-day training set and then launch the cyclic serving-and-training process with the following \(\widehat{T}\) days of data (e.g. serve on the \((T+1)\)-th day's data to get predictions \(f_{online}(x)\), then train with our method). We note that our experiments on the industrial dataset only adopt the one-pass training strategy. (2) **Standard Setting:** standard supervised learning, in which all of our experiments train on the training set for multiple epochs until the validation loss converges.
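For concreteness, the cyclic serving-and-training process of the one-pass setting can be written as a short loop. The snippet below is a toy, self-contained illustration prepared for this text (synthetic data, a linear model, one gradient step per "day"), not the production pipeline; it reuses `confidence_ranking_objective` from the previous sketch and only shows where the deployed model's logits enter the Eq. (7) objective.

```python
import copy
import torch

def make_day(n=2048, dim=16):
    # Synthetic stand-in for one day of traffic: features and 0/1 click labels.
    x = torch.randn(n, dim)
    y = (torch.rand(n) < torch.sigmoid(x[:, 0])).float()
    return x, y

def train_one_pass(model, day, online_model=None, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = day
    logits = model(x).squeeze(-1)
    if online_model is None:
        # Plain cross-entropy, as in the offline (re)training stage.
        loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    else:
        # Confidence Ranking objective, Eq. (7), with the deployed model's logits.
        with torch.no_grad():
            online_logits = online_model(x).squeeze(-1)
        loss = confidence_ranking_objective(logits, online_logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

model = torch.nn.Linear(16, 1)
for _ in range(15):                               # offline training on past days
    train_one_pass(model, make_day())
for _ in range(5):                                # cyclic serving and training
    online_model = copy.deepcopy(model).eval()    # "deploy" yesterday's model
    train_one_pass(model, make_day(), online_model=online_model)
```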
In the standard setting, we use the outputs of the previous epoch as the supervised signal for our proposed method. **Baselines.** Although the main motivation of our work is to utilize the confidence of the online deployed model on target items, we still include several baselines under standard supervised learning and one-pass learning in order to benchmark state-of-the-art results. The simplest approach is (1) ERM: we train our networks with the binary cross-entropy loss under the two settings. For the majority of real-world recommendation and ads systems, CTR prediction models are preferred to be trained with ERM; (2) various commonly adopted CTR prediction network architectures designed for recommendation and ads systems: DNN, PNN [11], DCN [13], DeepFM [5]; (3) SC [1] integrates a self-correction module into CTR prediction networks. Trained together with ERM, it achieves state-of-the-art results on multiple CTR prediction datasets with minimal computation cost; (4) Knowledge distillation methods: since dark knowledge can induce useful gradients for model compression, we also adapt KD (Chen et al., 2017) and RKD (Chen et al., 2018) to our experimental setting. In this paper, we modify the feature-based RKD to a logit-based method that aligns the inter-sample distances of the logit outputs of the base model and the current model. For all loss functions we tune the loss balance term \(\lambda\) in the range from 0.1 to 2.0 and select the best results in our experiments. The best \(\lambda_{CR}\) and \(\lambda_{RCR}\) lie in [0.4, 0.5] for the public datasets and in [0.5, 1.0] for the industrial dataset. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline Methods & Avazu & Avito & Avazu\({}^{*}\) & Avito\({}^{*}\) & Industrial\({}^{*}\) \\ \hline DNN & 75.05 & 77.71 & 74.32 & 77.50 & 75.92 \\ DCN & 74.99 & 77.66 & 74.30 & 77.58 & 75.99 \\ PNN & 75.06 & 77.80 & 74.49 & 77.57 & n/a \\ DeepFM & 75.24 & 77.73 & 74.69 & 77.50 & 76.02 \\ \hline DeepFM+KD & 75.41 & 78.01 & 74.83 & 77.54 & 76.18 \\ DeepFM+RKD\({}_{I}\) & 75.34 & 77.83 & 74.90 & 77.58 & 76.10 \\ DeepFM+SC & 75.36 & 77.78 & 74.85 & 77.53 & 76.14 \\ \hline DeepFM+CR & 75.63 & 78.33 & 74.98 & 77.70 & 76.25 \\ DeepFM+RCR & 75.59 & 78.59 & 75.05 & **77.73** & 76.20 \\ DeepFM+Both & **75.66** & **78.62** & **75.14** & 77.70 & **76.32** \\ \hline \hline \end{tabular} \end{table} Table 2. AUC(%) of test-set performance on the Avazu, Avito and Industrial datasets with various backbones and training strategies. \({}^{*}\) denotes one-pass learning. The results are averaged over 3 runs. Std \(\leq\) 0.1%. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Datasets & Users & Items & Fields & Feature size & Instances \\ \hline Avazu & N/A & N/A & 22 & 2018012 & 40428967 \\ Avito & 3163597 & 28529 & 16 & 3419165 & 190107687 \\ Industrial & N/A & N/A & 59 & N/A & 12 Billion \\ \hline \hline \end{tabular} \end{table} Table 1. The statistics of the CTR prediction datasets **Main Results.** Table 2 compares the test-set AUC of our method on the Click-Through-Rate prediction task. In Table 2, we first investigate the improvement brought by different feature interaction methods. We observe that PNN achieves the best performance, with a marginal improvement, in the standard supervised learning setting but falls behind DeepFM and DCN in the one-pass setting. For convenience, we adopt DeepFM as the backbone for our experiments. We can observe that the proposed method CR outperforms all baselines no matter which setting is adopted.
For standard supervised learning, it is also striking to see that on Avazu and Avito our proposed CR and RCR can both outperform the baselines by a large margin after training with multiple epochs. We note that a 0.1% AUC improvement on Avazu and Avito is significant. For one-pass learning, we still observe that our proposed methods outperform the backbone model, but the margin is smaller than in the standard setting, because one-pass learning may not completely fit the two public datasets. For the industrial dataset, we carefully tune our proposed method with DeepFM due to its succinct implementation. Not surprisingly, it works well there too. Compared to vanilla distillation, our methods improve the AUC by 0.25/0.61/0.31/0.16/0.14%, respectively. **Inter-class Margin Visualization.** In Figure 2, we show how the sample margin and the prediction means of positive and negative samples vary over time. The relational confidence ranking loss outperforms all the other methods by a large margin. We can observe that RCR both decreases the negative mean and increases the positive mean in Figures 2(b) and 2(c), leading to the best bipartite ranking performance among all baseline methods in Table 2. We find that CR decreases both the negative and positive means, resulting in a marginal improvement in the sample margin. We argue this is because CTR prediction datasets are usually dominated by negative samples and our loss function tends to depress the negative predictions. In Figure 2, we also find that KD and RKD\({}_{I}\) give smoother curves than our methods, which may constrain the model's learning ability. ## 4. Online A/B Experiments Our architecture comprises two parts: (1) in the online ad serving platform, we additionally collect the outputs \(y_{online}\) of \(f(x;\theta_{online})\), the online deployed predictions that directly decide which items will be exposed, into our training data; (2) we impose our proposed ranking-based loss to encourage the network to learn better than the online deployed model in both the _retraining_ and _online-learning_ stages. **Online A/B results.** In addition to the offline experiments, we conduct online experiments on the A/B platform from 2022-8-15 to 2022-8-19. Our online A/B test splits active users into two groups. The first group is served by the recommendation results generated by the current main model, while the second is served by the Confidence Ranking results. As shown in Table 3, we observe an average 1.75% improvement in CTR and apply the method to serve the main traffic in our system. ## 5. Conclusion From the perspective of real-world applications, we identify the problem of learning a model that generalizes better in the _retraining_ and _online-learning_ stages than the online deployed model. To address this problem, we propose a loss framework, named Confidence Ranking, which compares the output predictions of different models in order to maximize a surrogate metric score. We extend this method to rank accuracy and AUC in CTR prediction. Our theoretical and experimental analysis shows that our method can effectively improve the results compared to cross-entropy optimization and distillation. \begin{table} \begin{tabular}{c c c c c|c} \hline \hline Day 1 & Day 2 & Day 3 & Day 4 & Day 5 & Average \\ \hline +1.62\% & +1.73\% & +1.90\% & +1.83\% & +1.67\% & 1.75\% \\ \hline \hline \end{tabular} \end{table} Table 3. Results of click-through rate improvement in a 5-day online A/B experiment. Figure 2. Sample margin and prediction means of negative and positive samples on Avazu in the one-pass setting.
2310.15864
Elementary excitations in the hybrid Bose-Fermi system induced by circularly polarized light in a two-dimensional gas of charge carriers with different masses
We developed a theory describing elementary excitations in the Bose-Fermi system induced by circularly polarized light in a two-dimensional (2D) gas of charge carriers with different masses. In such a hybrid system, the Fermi subsystem is a degenerate Fermi gas, whereas the Bose subsystem is a condensate of the light-induced composite bosons consisting of two fermions (electrons or holes) with different effective masses. The interaction of the single-particle excitations and the collective excitations (plasmons) in the Fermi subsystem with the Bogoliubov collective modes (bogolons) in the Bose subsystem is analyzed. The renormalization and damping (lifetime) of the excitations are calculated, and the possibility of their experimental observation is discussed. The developed theory can be applied to describe 2D condensed-matter structures containing charge carriers with different effective masses, including transition metal dichalcogenide monolayers and semiconductor quantum wells.
V. M. Kovalev, M. V. Boev, O. V. Kibis
2023-10-24T14:22:56Z
http://arxiv.org/abs/2310.15864v1
Elementary excitations in the hybrid Bose-Fermi system induced by circularly polarized light in a two-dimensional gas of charge carriers with different masses ###### Abstract We developed a theory describing elementary excitations in the Bose-Fermi system induced by circularly polarized light in a two-dimensional (2D) gas of charge carriers with different masses. In such a hybrid system, the Fermi subsystem is a degenerate Fermi gas, whereas the Bose subsystem is a condensate of the light-induced composite bosons consisting of two fermions (electrons or holes) with different effective masses. The interaction of the single-particle excitations and the collective excitations (plasmons) in the Fermi subsystem with the Bogoliubov collective modes (bogolons) in the Bose subsystem is analyzed. The renormalization and damping (lifetime) of the excitations are calculated, and the possibility of their experimental observation is discussed. The developed theory can be applied to describe 2D condensed-matter structures containing charge carriers with different effective masses, including transition metal dichalcogenide monolayers and semiconductor quantum wells. ## I Introduction All-optical control of electronic properties of condensed-matter structures by a high-frequency off-resonant electromagnetic field, which is based ideologically on the Floquet theory of periodically driven quantum systems ("Floquet engineering"), has become an established research area during last decades [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. Since the off-resonant field cannot be absorbed by electrons, it only dresses them, producing the composite electron-field states with unusual physical properties. Particularly, it has been demonstrated that such a dressing field can crucially modify electronic characteristics of various condensed-matter nanostructures, including semiconductor quantum wells [13], quantum rings [14], quantum dots [15], topological insulators [16; 17; 18; 19], carbon nanotubes [20], graphene and related two-dimensional materials [21; 22; 23; 24; 25; 26; 27], etc. Among many phenomena induced by a dressing field, the Floquet engineering of electron behaviour in various potential reliefs takes observed place. If the field is both strong and high-frequency, the electron dynamics can be described by the effective dressed potential which can be obtained from a "bare" potential by its averaging along the classical electron trajectory under the field over the field period. The most pronounced modification of the potentials takes place in low-dimensional electronic systems. Particularly, the two-dimensional (2D) repulsive Coulomb potential under a circularly polarized dressing field acquires an attractive area in its core [28], what leads to confinement of conduction electrons at repulsive potentials in quantum wells [29]. The same field-induced attraction can manifest itself in the processes of electron-electron interaction. Recently, it was demonstrated theoretically that the circularly polarized irradiation of two-dimensional conducting systems can produce composite bosons consisting of two electrons with different effective masses [28], which are stable due to the Fermi sea of conduction electrons [30]. As a result, an optically induced mixture of paired electrons and normal conduction electrons (the hybrid Bose-Fermi system) appears. 
Since the optically induced hybrid Bose-Fermi system [30] is interesting from viewpoint of possible light-induced superconductivity and superfluidity, the present article is aimed to study elementary excitations there. Physical properties of nanostructures and their response to external perturbations are determined by the spectrum of elementary excitations. Evidently, the type of elementary excitations existing in various physical systems depends on the quantum statistics of initial bare particles filling the system. In the past, only two quantum systems of the Fermi-type were known: The electron gas in metals (or semiconductors) and liquid helium, \({}^{3}\)He. As to the Bose-type liquid, its typical example was \({}^{4}\)He. All these quantum objects have the rich spectra of elementary excitations determining their unique physical properties at low temperatures [31; 32; 33]. Other interesting quantum systems are presented by the mixtures of the Bose and Fermi gases. In such mixtures, new interaction channels appear due to the interactions between bosons and fermions. As an example, a new type of paring between fermions due to the exchange by the excitations of the Bose subsystem may occur, including s-type [34; 35] and p-type [36] Fermi-particles pairing. Historically, all these types of the hybrid Bose-Fermi systems were initially considered to be applied to cold atomic systems [37; 38; 39; 40; 41; 42; 43]. However, the technological achievements in the design and fabrication of nanostructures have recently stimulated intensive theoretical discussions about new physical phenomena in the condensed-matter Bose-Fermi mixtures [44; 45; 46; 47; 48; 49]. Particularly, a possibility of experimental realizations of long-living 2D dipolar exciton systems or 2D exciton-polariton gases opens a way to create the condensed matter Bose-Fermi mixtures, where the Bose subsystem is either an exciton or exciton-polariton gas. Thus, the physics of the hybrid Bose-Fermi systems in low-dimensional structures is the established research area of modern science, which forms the basis for the present study. The specific renormalization of the Coulomb interaction between charged particles by a dressing field [28; 30] opens a way to form the mixture of two subsystems, where the first one is the degenerate Fermi gas of light and heavy normal electrons, whereas the second one is the Bose gas consisting of the bound two-electron composite bosons. At low temperatures, the latter may form the Bose-Einstein condensate (BEC), where composite bosons interact via the short-range potential due to a strong screening of their direct Coulomb interaction by normal electrons. Thus, under the external irradiation by a circularly polarized electromagnetic field (see Fig. 1), light electrons attractively interact with heavy electrons to form the two-electron composite bosons being in the BEC regime, whereas the remaining unpaired electrons form the degenerate Fermi gas. Certainly, it should be kept in mind that BEC in real systems depends on many additional physical factors (see, e.g., Ref. [50]) which should be analyzed carefully for samples planned to be studied experimentally. 
In the present article, we consider the renormalization of physical properties of individual excitations in the Fermi subsystem of unpaired electrons and study the properties of various collective modes in the light-induced Bose-Fermi mixture, including the polaron effect, the quasi-particle lifetime, the renormalization of the collective mode dispersion laws and their damping. The article is organized as follows. In Sec. II, we describe the model under consideration and introduce the Hamiltonian describing the interaction between normal electrons (the Fermi subsystem) and the light-induced Bose subsystem consisting of paired electrons with different masses. In Sec. III, the single-particle and collective modes in the optically induced hybrid Bose-Fermi systems are analyzed. The last two sections contain the conclusion and acknowledgements, whereas Appendix contains derivation of the interaction Hamiltonian for two electrons with different effective masses. ## II Model As it has been noted above, the light-induced Bose-Fermi system may occur in nanostructures containing charge carriers with different effective masses. For definiteness, we consider the electronic system in such a transition metal dichalcogenide material as MoS\({}_{2}\) monolayer which is under active study nowadays, showing unique optical and transport properties [51]. The conduction band of this material consists of the two non-equivalent valleys in the \(K\) and \(K^{\prime}\) points of the Brillouin zone, where each valley contains the two spin-split electron branches corresponding to the heavy electrons with the mass \(m_{h}=0.46m_{0}\) and the light electrons with the mass \(m_{l}=0.43m_{0}\), where \(m_{0}\) is the free electron mass (see Fig. 2). As a consequence, the circularly polarized irradiation of the monolayer may form the Bose subsystem consisting of two electrons with different effective masses [28; 30], which can be considered as composite bosons with the effective mass \(M=m_{h}+m_{l}\) and the charge \(2e\), where \(e=-|e|\) is the electron charge. In the following, we will assume the boson density to be small enough to consider the Bose subsystem as a gas of weakly interacting composite bosons. For the Fermi level plotted in Fig. 2, the Fermi subsystem contains unpaired light and heavy electrons. However, the density of light electrons much exceeds the density of heavy electrons since the ground branch corresponds to light electrons. To simplify the consideration of the light-induced Bose-Fermi mixture, we will neglect the contribution of heavy electrons into the Fermi subsystem and will assume that the Fermi subsystem consists only of light electrons with the mass \(m=m_{l}\). Another simplification of the model is related to the two valley structure of the MoS\({}_{2}\) Brillouin zone. Namely, we will not take into account the intervalley scattering processes because they require the extremely large momenta transfer between interacting particles, whereas all phenomena considered below occur in the long-wavelength limit corresponding to very small momenta. Since the Hamiltonian describing the light-induced electron pairing was analyzed earlier [28; 30] (see Appendix Figure 1: Sketch of the system under consideration: A two-dimensional system containing heavy electrons (large circles) and light electrons (small circles) under irradiation by a circularly polarized electromagnetic wave. 
As a result of the irradiation, the hybrid Bose-Fermi system consisting of composite bosons (paired heavy and light electrons) and the degenerate Fermi gas of normal electrons appears. Figure 2: Structure of the conduction band in MoS\({}_{2}\) monolayer: The electron energy spectrum in the two valleys (\(K\) and \(K^{\prime}\)) consists of the branches of heavy (\(h\)) and light (\(l\)) electrons with the mutually opposite spin orientation (the solid and dashed lines), where \(\mu\) is the Fermi energy. for details), the following analysis is devoted to the Hamiltonian describing the interaction processes in the light-induced Bose-Fermi system. Conventionally, the interactions of charge particles in various 2D structures are described by the two-dimensional Coulomb potential (see, e.g., Refs. [52] and [53]). Therefore, the interaction Hamiltonian for the considered hybrid Bose-Fermi system can be written as a sum of three terms, \(H=H_{BF}+H_{FF}+H_{BB}\), where the term \[H_{BF}=\int_{S}d^{2}\mathbf{r}\int_{S}d^{2}\mathbf{R}\,\hat{n}(\mathbf{r},t)U_ {BF}(\mathbf{r}-\mathbf{R})\hat{N}(\mathbf{R},t), \tag{1}\] describes the interaction between the Fermi and Bose subsystems with the two-dimensional Coulomb potential \[U_{BF}(\mathbf{r}-\mathbf{R})=\frac{2e^{2}}{\epsilon|\mathbf{r}-\mathbf{R}|}, \tag{2}\] the term \[H_{FF}=\frac{1}{2}\int_{S}d^{2}\mathbf{r}\int_{S}d^{2}\mathbf{r}^{\prime}\, \hat{n}(\mathbf{r},t)U_{FF}(\mathbf{r}-\mathbf{r}^{\prime})\hat{n}(\mathbf{r }^{\prime},t), \tag{3}\] describes the interactions of fermions within the Fermi subsystem with the two-dimensional Coulomb potential \[U_{FF}(\mathbf{r}-\mathbf{r}^{\prime})=\frac{e^{2}}{\epsilon|\mathbf{r}- \mathbf{r}^{\prime}|}, \tag{4}\] the term \[H_{BB}=\frac{1}{2}\int_{S}d^{2}\mathbf{R}\int_{S}d^{2}\mathbf{R}^{\prime}\, \hat{N}(\mathbf{R},t)U_{BB}(\mathbf{R}-\mathbf{R}^{\prime})\hat{N}(\mathbf{R} ^{\prime},t), \tag{5}\] describes the interactions of composite bosons within the Bose subsystem screened by normal electrons with the two-dimensional screened Coulomb potential \[U_{BB}(\mathbf{R}-\mathbf{R}^{\prime})=\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2}} \,U_{B}(\mathbf{k})e^{i\mathbf{k}(\mathbf{R}-\mathbf{R}^{\prime})}, \tag{6}\] the Fourier image of the screened potential is \[U_{B}(\mathbf{k})=\frac{8\pi e^{2}}{\epsilon(k+k_{s})} \tag{7}\] \(k_{s}=2/a_{s}\) is the Thomas-Fermi screening wavenumber, \(a_{s}=\epsilon\hbar^{2}/me^{2}\) is the effective screening length, \(\epsilon\) is the effective dielectric constant accounting for all effects of medium, \(\hat{n}(\mathbf{r},t)=\psi^{\dagger}(\mathbf{r},t)\psi(\mathbf{r},t)\) and \(\hat{N}(\mathbf{R},t)=\varphi^{\dagger}(\mathbf{R},t)\varphi(\mathbf{R},t)\) are the density operators of the Fermi and Bose subsystem, respectively, \(\mathbf{R}=(x,y)\) is the plane radius vector of composite boson, \(\mathbf{r}=(x,y)\) is the plain radius vector of normal electron, and \(S\) is the area of the 2D system. We will restrict the following consideration by the case of extremely low temperatures, assuming that composite bosons form BEC, whereas the remaining unpaired electrons form the normal degenerate Fermi gas. In the following, we will also assume that the BEC density \(n_{c}\) is small enough to satisfy the condition \(n_{c}l^{2}\ll 1\), where \(l\) is the boson-boson scattering length. In such a regime, BEC can be described by the standard Bogoliubov theory of weakly interacting Bose gas. 
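Before turning to the Bogoliubov description of the condensate, a short numerical sketch may help fix the scale of the screening quantities entering Eqs. (6)-(7). The CGS-Gaussian units and the value of the effective dielectric constant below are assumptions made purely for illustration.

```python
import numpy as np

# CGS-Gaussian constants
e, hbar, m0 = 4.803e-10, 1.0546e-27, 9.109e-28   # esu, erg*s, g

eps = 4.0            # assumed effective dielectric constant (not quoted in the text)
m   = 0.43 * m0      # light-electron mass of the Fermi subsystem (MoS2)

a_s = eps * hbar**2 / (m * e**2)   # effective screening length
k_s = 2.0 / a_s                    # Thomas-Fermi screening wavenumber

def U_B(k):
    """Screened boson-boson potential of Eq. (7)."""
    return 8.0 * np.pi * e**2 / (eps * (k + k_s))

# note: U_B(0) = 4*pi*hbar^2/m, independent of e and eps, since k_s = 2/a_s
print(f"a_s = {a_s*1e7:.2f} nm, k_s = {k_s:.2e} 1/cm, U_B(0) = {U_B(0.0):.2e} erg cm^2")
```

With these scales fixed, the condensate itself is treated within the standard Bogoliubov framework.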
Within this theory, the Bose operator \(\varphi(\mathbf{R},t)=\varphi_{0}+\delta\varphi(\mathbf{R},t)\) consists of the uniform part describing BEC and the fluctuating part, where \(|\varphi_{0}|^{2}=n_{c}\) is the BEC density. As a result, the interaction Hamiltonian (1) can be rewritten as a sum of the three terms, \[H_{BF}^{(0)}=n_{c}\int_{S}d^{2}\mathbf{r}\int_{S}d^{2}\mathbf{R }\hat{n}(\mathbf{r},t)U_{BF}(\mathbf{r}-\mathbf{R}),\] \[H_{BF}^{(1)}=\sqrt{n_{c}}\int_{S}d^{2}\mathbf{r}\int_{S}d^{2} \mathbf{R}\hat{n}(\mathbf{r},t)U_{BF}(\mathbf{r}-\mathbf{R})\] \[\times\left[\delta\varphi^{*}(\mathbf{R},t)+\delta\varphi( \mathbf{R},t)\right],\] \[H_{BF}^{(2)}=\int_{S}d^{2}\mathbf{r}\int_{S}d^{2}\mathbf{R} \hat{n}(\mathbf{r},t)U_{BF}(\mathbf{r}-\mathbf{R})|\delta\varphi(\mathbf{R}, t)|^{2}. \tag{8}\] where the first term, which describes the shift of the Fermi energy of unpaired electrons, does not affect electronic properties and will be omitted in the following, whereas the second and third terms describe the interaction of the Fermi subsystem with the Bogoliubov excitations of BEC (bogolons). For further developments, it is instructive to introduce the creation and annihilation operators for bogolons [31] via the relations \[\delta\varphi(\mathbf{R},t)=\frac{1}{\sqrt{S}}\sum_{\mathbf{p}}e^ {i\mathbf{p}\mathbf{R}}\left(u_{\mathbf{p}}b_{\mathbf{p}}+v_{\mathbf{p}}b_{- \mathbf{p}}^{\dagger}\right),\] \[\delta\varphi^{*}(\mathbf{R},t)=\frac{1}{\sqrt{S}}\sum_{\mathbf{p }}e^{-i\mathbf{p}\mathbf{R}}\left(u_{\mathbf{p}}^{*}b_{\mathbf{p}}^{\dagger}+v_ {\mathbf{p}}^{*}b_{-\mathbf{p}}\right), \tag{9}\] where \(u_{\mathbf{p}},v_{\mathbf{p}}\) are the standard Bogoliubov coefficients (here and below we use the system of units with \(\hbar=1\) and will restore the Plank constant in the final expressions only). Introducing the healing length \(\zeta=1/2Ms\), the Bogoliubov coefficients read \[u_{\mathbf{p}},v_{\mathbf{p}}=\pm\sqrt{\frac{p^{2}/2M+U_{B}n_{c} }{2\omega_{\mathbf{p}}}}\pm\frac{1}{2}, \tag{10}\] \[\omega_{\mathbf{p}}=sp\sqrt{1+(p\zeta)^{2}},\] where \(\omega_{\mathbf{p}}\) is the bogolon dispersion, the parameter \(U_{B}\equiv U_{B}(\mathbf{k}=0)\) represents the strength of the boson-boson interaction, and \(s=\sqrt{U_{B}n_{c}/M}\) is the bogolon phase velocity. With using the bogolon operators, the interaction terms (8) can be rewritten as \[H_{BF}^{(1)}=\sqrt{\frac{n_{c}}{S}}\sum_{\mathbf{p}}U_{\mathbf{p }}^{BF}\hat{n}_{-\mathbf{p}}\] \[\times\left[(u_{\mathbf{p}}+v_{-\mathbf{p}})b_{\mathbf{p}}+(u_{- \mathbf{p}}+v_{\mathbf{p}})b_{-\mathbf{p}}^{\dagger}\right], \tag{11}\] and \[H_{BF}^{(2)}=\frac{1}{S}\sum_{\mathbf{k},\mathbf{p}}U_{\mathbf{p}}^ {BF}\hat{n}_{-\mathbf{p}}\] \[\times\Big{(}u_{\mathbf{k}-\mathbf{p}}b_{\mathbf{k}-\mathbf{p}}^{ \dagger}+v_{\mathbf{k}-\mathbf{p}}b_{\mathbf{p}-\mathbf{k}}\Big{)}\left(u_{ \mathbf{k}}b_{\mathbf{k}}+v_{\mathbf{k}}b_{-\mathbf{k}}^{\dagger}\right), \tag{12}\] where the term (11) describes the fermion-boson interaction with a single bogolon, whereas the term (12) corresponds to the two-bogolon processes. Mathematically, the interaction \(H_{BF}^{(1)}\) is similar to the conventional electron-phonon interaction in normal electronic systems (the only difference is the Bogoliubov coefficients), whereas the second term \(H_{BF}^{(2)}\) essentially differs from the usual electron-phonon Hamiltonian. Nevertheless, the terms (11) and (12) are of the same order and should be considered simultaneously. 
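The bogolon dispersion and the Bogoliubov weights of Eqs. (9)-(10) are easy to tabulate numerically. The sketch below assumes an illustrative condensate density, takes U_B(0) = 4πℏ²/m_l as it follows from Eq. (7), and reads Eq. (10) with the standard normalisation u_p² − v_p² = 1.

```python
import numpy as np

hbar, m0 = 1.0546e-27, 9.109e-28                 # CGS-Gaussian
m_l, M   = 0.43 * m0, (0.43 + 0.46) * m0         # light-electron and boson masses

n_c  = 1.0e8                                     # cm^-2, assumed condensate density
U_B0 = 4.0 * np.pi * hbar**2 / m_l               # U_B(k=0), from Eq. (7) with k_s = 2/a_s
s    = np.sqrt(U_B0 * n_c / M)                   # bogolon "sound" velocity
zeta = hbar / (2.0 * M * s)                      # healing length

def omega(k):
    """Bogolon dispersion: sound-like for k*zeta << 1, free-particle-like beyond."""
    return s * k * np.sqrt(1.0 + (k * zeta)**2)

def u2_v2(k):
    """Bogoliubov weights with u^2 - v^2 = 1 (my reading of Eq. (10))."""
    eps_k = hbar**2 * k**2 / (2.0 * M)
    x = (eps_k + U_B0 * n_c) / (2.0 * hbar * omega(k))
    return x + 0.5, x - 0.5

print(f"s = {s:.2e} cm/s, zeta = {zeta*1e7:.1f} nm")
for k in [0.01 / zeta, 1.0 / zeta, 10.0 / zeta]:
    u2, v2 = u2_v2(k)
    print(f"k*zeta = {k*zeta:5.2f}: hbar*omega = {hbar*omega(k):.2e} erg, u^2 = {u2:.2f}, v^2 = {v2:.2f}")
```

Only the long-wavelength, sound-like part of this spectrum enters the estimates made below.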
The Feynman diagrams corresponding to the quantum amplitudes of the processes described by the Hamiltonians (11) and (12) are presented in Fig. 3. It should be noted that these processes can be considered separately in the lowest order with respect to the boson-fermion interaction potential \(U_{\mathbf{p}}^{BF}\). Then the correction to the bare fermion energy, \[\xi_{\mathbf{p}}=p^{2}/2m-\mu, \tag{13}\] which appears due to the interactions (11) and (12), is given by the self-energy contribution \(\Sigma(\varepsilon,\mathbf{p})\) to the pole of the fermion Green function \(\mathcal{G}^{-1}(\varepsilon,\mathbf{p})=\varepsilon-\xi_{\mathbf{p}}+i\delta \operatorname{sign}(\xi_{\mathbf{p}})\). As to the renormalized fermion dispersion, it is determined by the equation \(\varepsilon-\xi_{\mathbf{p}}-\Sigma(\varepsilon,\mathbf{p})=0\). The self-energy, \(\Sigma(\varepsilon,\mathbf{p})=\Sigma_{cn}(\varepsilon,\mathbf{p})+\Sigma_{ nn}(\varepsilon,\mathbf{p})\), makes the two contributions in the second order of the fermion-boson interaction potential \(U_{\mathbf{k}}^{BF}\), which are \[\Sigma_{cn}(\varepsilon,\mathbf{p}) =i\sum_{\omega,\mathbf{k}}|U_{k}^{BF}|^{2}\mathcal{G}( \varepsilon-\omega,\mathbf{p}-\mathbf{k})P_{cn}(\omega,\mathbf{k}),\] \[P_{cn}(\omega,\mathbf{k}) =n_{c}\Big{[}G(\omega,\mathbf{k})+\tilde{G}(\omega,\mathbf{k})+F (\omega,\mathbf{k})+\tilde{F}(\omega,\mathbf{k})\Big{]}, \tag{14}\] and \[\Sigma_{nn}(\varepsilon,\mathbf{p})=i\sum_{\omega,\mathbf{k}}|U _{k}^{BF}|^{2}\mathcal{G}(\varepsilon-\omega,\mathbf{p}-\mathbf{k})P_{nn}( \omega,\mathbf{k}),\] \[P_{nn}(\varepsilon,\mathbf{p})=i\sum_{\omega,\mathbf{k}}\Bigl{[} G(\varepsilon,\mathbf{p})G(\varepsilon-\omega,\mathbf{p}-\mathbf{k})\] \[+\tilde{G}(\varepsilon,\mathbf{p})\tilde{G}(\varepsilon-\omega, \mathbf{p}-\mathbf{k})\] \[+F(\varepsilon,\mathbf{p})\tilde{F}(\varepsilon-\omega,\mathbf{p} -\mathbf{k})+\tilde{F}(\varepsilon,\mathbf{p})F(\varepsilon-\omega,\mathbf{p} -\mathbf{k})\Bigr{]}, \tag{15}\] where the Green functions of BEC read \[G(\varepsilon,\mathbf{p}) = \frac{\varepsilon+p^{2}/2M+U_{B}n_{c}}{\varepsilon^{2}-\omega_{ \mathbf{p}}^{2}+i\delta},\] \[F(\varepsilon,\mathbf{p}) = \frac{-U_{B}n_{c}}{\varepsilon^{2}-\omega_{\mathbf{p}}^{2}+i \delta}, \tag{16}\] and \(\tilde{G}(\varepsilon,\mathbf{p})=G(-\varepsilon,-\mathbf{p}),\ \tilde{F}( \varepsilon,\mathbf{p})=F(-\varepsilon,-\mathbf{p})\). Physically, the self-energy \(\Sigma_{cn}(\varepsilon,\mathbf{p})\) describes the excitation of BEC accompanied by transition of a boson to the non-condensed state (see Fig. 4a) and arises from the interaction Hamiltonian (11), whereas the self-energy \(\Sigma_{nn}(\varepsilon,\mathbf{p})\) describes the polarization of non-condensed composite bosons (see Fig. 4b) arisen from the interaction Hamiltonian (12). The self-energy operators (14) and (15) read \[\Sigma_{cn(nn)}(\varepsilon,\mathbf{p})=i\sum_{\omega,\mathbf{k}}\mathcal{G}( \varepsilon-\omega,\mathbf{p}-\mathbf{k})R(\omega,\mathbf{k}), \tag{17}\] where \(R(\omega,\mathbf{k})\) is either \(|U_{k}^{BF}|^{2}P_{cn}(\omega,\mathbf{k})\) or \(|U_{k}^{BF}|^{2}P_{nn}(\omega,\mathbf{k})\). In both cases, \(R(\omega,\mathbf{k})\) is the even function of frequency \(\omega\) and depends on the absolute value of momentum \(\mathbf{k}\). Using this, Eq. (17) can be simplified. 
Namely, using the expression \[\int\limits_{0}^{2\pi}\frac{d\varphi}{2\pi}\frac{1}{a+b\cos\varphi \pm i\delta}\] \[=\frac{\operatorname{sign}(a)\theta[|a|-|b|]}{\sqrt{a^{2}-b^{2}}} \mp i\frac{\theta[|b|-|a|]}{\sqrt{b^{2}-a^{2}}}, \tag{18}\] we arrive at \[\Sigma(\varepsilon,\mathbf{p}) =i\int\limits_{-\infty}^{\infty}\frac{d\omega}{2\pi}\int\limits_{0}^ {\infty}\frac{kdk}{2\pi}R(\omega,\mathbf{k})\Bigl{[}A(\omega,k)-iB(\omega,k) \Bigr{]},\] \[A(\omega,k) =\frac{\operatorname{sign}[\varepsilon+\omega-\xi_{p}-k^{2}/2m]} {\sqrt{(\varepsilon+\omega-\xi_{p}-k^{2}/2m)^{2}-v^{2}k^{2}}},\] \[B(\omega,k) =\frac{\operatorname{sign}[\varepsilon+\omega]}{\sqrt{v^{2}k^{2}-( \varepsilon+\omega-\xi_{p}-k^{2}/2m)^{2}}}. \tag{19}\] In a vicinity of the Fermi level (\(p\approx p_{F}\)) and on the mass shell (\(\varepsilon=\xi_{p}\)), the small contribution of \(k^{2}/2m\) can be Figure 3: Vertex Feynman diagrams describing the amplitudes of the single bogolon (a) and the double bogolon (b) emission by a moving fermion. The solid lines correspond to fermions, the dashed lines correspond to the Bogoliubov excitations, the wavy lines mark the factor \(\sqrt{n_{c}}\), and the circles mark the boson-fermion interaction potential. ignored. Then Eqs. (19) can be written as \[A(\omega,k) = \frac{\text{sign}[\omega]}{\sqrt{\omega^{2}-v_{F}^{2}k^{2}}},\] \[B(\omega,k) = \frac{\text{sign}[\varepsilon+\omega]}{\sqrt{v_{F}^{2}k^{2}-\omega^ {2}}}. \tag{20}\] Correspondingly, Eq. (17) yields \[\Sigma(\varepsilon,\mathbf{p})=\frac{\text{sign}(\varepsilon)}{2\pi^{2}}\int \limits_{0}^{|\varepsilon|}d\omega\int\limits_{\omega/v_{F}}^{\infty}\frac{ kdk}{\sqrt{v_{F}^{2}k^{2}-\omega^{2}}}R(\omega,\mathbf{k}), \tag{21}\] whereas Eqs. (14) and (15) read \[\Sigma_{cn(nn)}(\varepsilon,\mathbf{p})=\frac{\text{sign}( \varepsilon)}{2\pi^{2}}\] \[\times\int\limits_{0}^{|\varepsilon|}d\omega\int\limits_{\omega /v_{F}}^{\infty}\frac{kdk|U_{k}^{BF}|^{2}}{\sqrt{v_{F}^{2}k^{2}-\omega^{2}}}P_ {cn(nn)}(\omega,\mathbf{k}). \tag{22}\] The most interesting case corresponds to the long-wavelength limit (\(k\zeta\ll 1\)), when the Bogoliubov excitations have the linear sound-like dispersion, \(\omega_{k}=sk\). Applying the Debye approximation, we will assume the linear dispersion \(\omega_{k}=sk\) for all wavevectors \(k\). This simplification has a great advantage enabling the analytical treatment of the problems under consideration below. ## III Results and discussion ### Polaron effect The polaron effect consists in the renormalization of the fermion effective mass in a vicinity of the Fermi energy due to the fermion-boson interaction and is described by the real part of the self-energy, \(\text{Re}\,\Sigma(\varepsilon,\mathbf{p})\). The solution of the equation \(\varepsilon-\xi_{\mathbf{p}}-\text{Re}\,\Sigma(\varepsilon,\mathbf{p})=0\) can be found in a vicinity of the Fermi energy by the successive approximation \(\varepsilon=\xi_{\mathbf{p}}+\text{Re}\,\Sigma(\xi_{\mathbf{p}},p_{F})\), where \(p_{F}=\sqrt{2m\mu}\) is the Fermi momentum. Assuming the polaron corrections to be small, the terms \(\text{Re}\,\Sigma_{cn}(\xi_{\mathbf{p}},p_{F})\) and \(\text{Re}\,\Sigma_{nn}(\xi_{\mathbf{p}},p_{F})\) can be treated independently as follows. 
The first term reads \[\text{Re}\,\Sigma_{cn}(\xi_{\mathbf{p}},p_{F})=\frac{\text{sign} (\xi_{\mathbf{p}})}{2\pi^{2}}\] \[\times\int\limits_{0}^{|\xi_{\mathbf{p}}|}d\omega\int\limits_{ \omega/v_{F}}^{\infty}\frac{kdk|U_{k}^{BF}|^{2}}{\sqrt{v_{F}^{2}k^{2}-\omega^ {2}}}\text{Re}\,P_{cn}(\omega,\mathbf{k}),\] \[\text{Re}\,P_{cn}(\omega,\mathbf{k})=n_{c}\frac{k^{2}}{M}\frac{1 }{\omega^{2}-\omega_{k}^{2}}, \tag{23}\] where \(v_{F}=p_{F}/m\) is the Fermi velocity. The integrals in Eq. (23) can be easily evaluated in a vicinity of the Fermi energy, where \(\xi_{\mathbf{p}}\to 0\). Substituting \(\omega=0\) into the integral and taking into account the screened boson-fermion interaction potential \(U_{k}^{BF}=4\pi e^{2}/\epsilon(k+k_{s})\), one can find \(\text{Re}\,\Sigma_{cn}(\xi_{\mathbf{p}},p_{F})=-b_{cn}\xi_{\mathbf{p}}\), where \(b_{cn}\) is described by the expression \[b_{cn}=\frac{n_{c}}{2\pi^{2}Ms^{2}v_{F}}\int\limits_{0}^{\infty} dk|U_{k}^{BF}|^{2}=\frac{n_{c}}{2\pi^{2}Ms^{2}}\frac{(4\pi e^{2})^{2}}{ \epsilon^{2}\hbar v_{F}k_{s}}\] \[=\frac{e^{2}}{\epsilon\pi\hbar v_{F}} \tag{24}\] with the restored Planck constant. The second correction to the fermion effective mass comes from the remaining self-energy part which reads \[\text{Re}\,\Sigma_{nn}(\xi_{\mathbf{p}},p_{F})=\frac{\text{sign} (\xi_{\mathbf{p}})}{2\pi^{2}}\] \[\times\int\limits_{0}^{|\xi_{\mathbf{p}}|}d\omega\int\limits_{ \omega/v_{F}}^{\infty}\frac{kdk|U_{k}^{BF}|^{2}}{\sqrt{v_{F}^{2}k^{2}-\omega^ {2}}}\text{Re}\,P_{nn}(\omega,\mathbf{k}), \tag{25}\] where the polarization operator for non-condensed bosons is \[P_{nn}=-\frac{(Ms)^{2}}{4}\left[\frac{1}{\sqrt{s^{2}k^{2}-\omega^{2}}}+i\frac{ 1}{\sqrt{\omega^{2}-s^{2}k^{2}}}\right]. \tag{26}\] Substituting the real part of Eq. (26) into Eq. (III.2), one can demonstrate that \(\text{Re}\,\Sigma_{nn}(\xi_{\mathbf{p}},p_{F})\propto\xi_{\mathbf{p}}\ln\xi_{ \mathbf{p}}\). Such a logarithmic divergence at \(\xi_{\mathbf{p}}\to 0\) means that the bubble diagrams pictured in Fig. 4b give a large contribution in a vicinity of Fermi energy. Therefore, correct description of the interaction requires the summation of the infinite series of bubble diagrams pictured in Fig. 4c. As a result Figure 4: Self-energy diagrams: (a) excitation of a boson to the non-condensed state by a moving fermion; (b) polarization of non-condensed bosons by a moving fermion; (c) infinite series of bubble diagrams contribution to \(\text{Re}\,\Sigma_{nn}\). The solid lines corresponds to the electron Green functions, the dashed lines correspond to the bogolon Green functions, the wavy lines represent the \(\sqrt{n_{c}}\) factor, the filled circles mark the boson-fermion interaction potential, and the empty circles mark the boson-boson interaction potential. 
of the summation, we arrive at the expression \[\mathrm{Re}\,\Sigma_{nn}(\xi_{\mathbf{p}},p_{F})=\frac{\mathrm{sign}(\xi_{\mathbf{p}})}{2\pi^{2}v_{F}}\] \[\times\int\limits_{0}^{|\xi_{\mathbf{p}}|}d\omega\int\limits_{0}^{\infty}dk|U_{k}^{BF}|^{2}\frac{\mathrm{Re}\,P_{nn}(0,\mathbf{k})}{1-U_{k}^{B}\mathrm{Re}\,P_{nn}(0,\mathbf{k})}, \tag{27}\] which again has the form \(\mathrm{Re}\,\Sigma_{nn}(\xi_{\mathbf{p}},p_{F})=-b_{nn}\xi_{\mathbf{p}}\) with \(b_{nn}=b_{cn}\mathcal{F}(k_{0}/k_{s})\), where \(k_{0}^{2}=2\pi e^{2}(Ms)^{2}/\epsilon s\hbar^{3}\propto\sqrt{n_{c}}\) and the function \[\mathcal{F}(y)=y^{2}\int\limits_{0}^{\infty}\frac{dx}{(x+1)^{2}(x+y^{2})}, \tag{28}\] describes the relationship between the BEC density, \(n_{c}\), and the ratio \(b_{nn}/b_{cn}\) (see Fig. 5). Since the fermion energy reads \(\varepsilon=(1-b_{cn}-b_{nn})\xi_{\mathbf{p}}\), the renormalized fermion effective mass is \[m^{*}=\frac{m}{1-b_{cn}-b_{nn}}\approx m(1+b_{cn}+b_{nn}), \tag{29}\] where the coefficients \(b_{cn}\) and \(b_{nn}\) are defined by Eq. (24) and Fig. 5. It follows from Eq. (29) that the fermion-boson interaction increases the fermion effective mass, \(m^{*}>m\). Particularly, \(m^{*}\approx 1.52m\) for the MoS\({}_{2}\) monolayer with the fermion and boson densities \(n=5\cdot 10^{12}\) cm\({}^{-2}\) and \(n_{c}=10^{8}\) cm\({}^{-2}\), respectively. The polaron renormalization of the effective mass will lead to a decrease of the electron mobility, which can manifest itself in various transport phenomena. It should be noted that the small parameter of the renormalization theory developed above is the ratio \(e^{2}/\epsilon\hbar v_{F}\), where the Fermi velocity \(v_{F}\) can be increased by the gate voltage applied to a monolayer up to the electron density \(\sim 10^{14}\) cm\({}^{-2}\) (see, e.g., Ref. [54]), and the effective dielectric constant \(\epsilon\) can be increased if the monolayer is sandwiched between dielectric materials with large dielectric constants. As a consequence, this parameter can be varied over a broad range to keep the obtained results within the applicability of the renormalization theory. However, even if the calculated fermion mass lies near the border of applicability of the renormalization theory, the obtained results remain useful, at least for semi-qualitative estimations. ### Quasi-particle lifetime The imaginary part of the fermion self-energy, \(\mathrm{Im}\,\Sigma(\xi_{\mathbf{p}},p_{F})=-\Gamma\), defines the quasi-particle damping rate \(\Gamma=1/(2\tau_{e})\), where \(\tau_{e}\) is the quasi-particle lifetime. In the following, we will analyse the damping rate, \[\Gamma=\Gamma_{cn}+\Gamma_{nn}, \tag{30}\] coming from the two contributions to the self-energy.
The first contribution, \[\mathrm{Im}\,\Sigma_{cn}(\xi_{\mathbf{p}},p_{F})=\frac{\mathrm{sign}(\xi_{\mathbf{p}})}{2\pi^{2}}\] \[\times\int\limits_{0}^{|\xi_{\mathbf{p}}|}d\omega\int\limits_{\omega/v_{F}}^{\infty}\frac{kdk|U_{k}^{BF}|^{2}}{\sqrt{v_{F}^{2}k^{2}-\omega^{2}}}\mathrm{Im}\,P_{cn}(\omega,\mathbf{k}), \tag{31}\] \[\mathrm{Im}\,P_{cn}(\omega,\mathbf{k})=-\pi n_{c}\frac{k^{2}}{M}\delta(\omega^{2}-\omega_{k}^{2}), \tag{32}\] yields \[\Gamma_{cn}=\frac{\mathrm{sign}(\xi_{\mathbf{p}})n_{c}(4\pi e^{2})^{2}\theta[v_{F}-s]}{4\pi\epsilon^{2}\hbar^{2}Ms\sqrt{v_{F}^{2}-s^{2}}}\] \[\times\left[\ln\left(\frac{|\xi_{\mathbf{p}}|+\hbar sk_{s}}{\hbar sk_{s}}\right)-\frac{|\xi_{\mathbf{p}}|}{|\xi_{\mathbf{p}}|+\hbar sk_{s}}\right], \tag{33}\] where \(\theta[x]\) is the Heaviside step-function (the Planck constant is restored). The second contribution has the form \[\mathrm{Im}\,\Sigma_{nn}(\xi_{\mathbf{p}},p_{F})=-\frac{\mathrm{sign}(\xi_{\mathbf{p}})(Ms)^{2}}{8\pi^{2}}\] \[\times\int\limits_{0}^{|\xi_{\mathbf{p}}|}d\omega\int\limits_{\omega/v_{F}}^{\omega/s}\frac{kdk|U_{k}^{BF}|^{2}}{\sqrt{v_{F}^{2}k^{2}-\omega^{2}}\sqrt{\omega^{2}-s^{2}k^{2}}}. \tag{34}\] Since \(|U_{k}^{BF}|^{2}\approx|U_{0}^{BF}|^{2}\) for \(|\xi_{\mathbf{p}}|\ll v_{F}k_{s}\), Eq. (34) yields \[\Gamma_{nn}=\xi_{\mathbf{p}}\frac{(Ms)^{2}}{16\pi v_{F}s\hbar^{4}}\left(\frac{4\pi e^{2}}{\epsilon k_{s}}\right)^{2}, \tag{35}\] where the Planck constant is restored. One can see that both contributions are non-zero only if the Fermi velocity \(v_{F}\) exceeds the velocity of the Bogoliubov excitations, \(s\). Physically, this is the condition for bogolon emission by a fermion (a particular case of the Cherenkov effect). Thus, the damping arises from the emission of bogolons which are real (in contrast to the polaron effect discussed above, where the Bogoliubov excitations dressing a fermion are virtual). In the above, we took into account only the bogolon damping arising from the bogolon-fermion interaction, although there is also the bogolon-bogolon interaction channel, which gives an additional contribution to the decay rate known as the Beliaev damping (see, e.g., Ref. [55]). However, the Beliaev damping is \(\sim k^{3}\) in the long-wavelength limit considered above, whereas the damping arising from the bogolon-fermion interaction is \(\sim k\) there. Therefore, the Beliaev damping can be neglected as a first approximation. It should be noted that the quasi-particle description holds only if the damping is weak enough, \(\Gamma/\xi_{\mathbf{p}}\ll 1\). To validate this condition, it should be noted that \(|\xi_{\mathbf{p}}|\ll v_{F}k_{s}\) in a vicinity of the Fermi energy. As a consequence, \(\Gamma_{cn}\propto\xi_{\mathbf{p}}|\xi_{\mathbf{p}}|\) and, therefore, \(\Gamma_{cn}/\xi_{\mathbf{p}}\ll 1\). Thus, the processes corresponding to the boson transfer from the condensate into non-condensed states due to the moving fermion (see Fig. 3a and Fig. 4a) do not destroy the quasi-particle description of the Fermi subsystem. Substituting the MoS\({}_{2}\) monolayer parameters [51] into Eq. (35), one can see that \(\Gamma_{nn}/\xi_{\mathbf{p}}\sim 0.04\) and, therefore, the condition \(\Gamma_{nn}/\xi_{\mathbf{p}}\ll 1\) is also satisfied. It should be noted also that the electron gas viscosity is directly related to the electron-electron scattering time [56].
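To make these estimates concrete, the sketch below checks the Cherenkov condition v_F > s and evaluates the ratio Γ_nn/ξ_p of Eq. (35) for densities of the order quoted above. The two-fold degeneracy assumed for the Fermi wavevector and the dielectric constant (whose value cancels in the ratio) are illustrative assumptions, so the numbers are order-of-magnitude checks only.

```python
import numpy as np

e, hbar, m0 = 4.803e-10, 1.0546e-27, 9.109e-28   # CGS-Gaussian
m_l, M = 0.43 * m0, (0.43 + 0.46) * m0
eps    = 4.0                                     # assumed; cancels in Gamma_nn/xi_p
n, n_c = 5.0e12, 1.0e8                           # cm^-2, fermion / condensate densities

k_F = np.sqrt(2.0 * np.pi * n)                   # assumes two-fold (valley) degeneracy
v_F = hbar * k_F / m_l
s   = np.sqrt(4.0 * np.pi * hbar**2 * n_c / (m_l * M))   # sqrt(U_B(0) n_c / M)

k_s   = 2.0 * m_l * e**2 / (eps * hbar**2)       # Thomas-Fermi wavenumber
ratio = (M * s)**2 / (16.0 * np.pi * v_F * s * hbar**4) \
        * (4.0 * np.pi * e**2 / (eps * k_s))**2  # Gamma_nn / xi_p, Eq. (35)

print(f"v_F = {v_F:.2e} cm/s > s = {s:.2e} cm/s: {v_F > s}  (bogolon emission allowed)")
print(f"Gamma_nn/xi_p ~ {ratio:.3f}  (<< 1, so the quasi-particle picture survives)")
```

These fermion-boson rates can now be compared with the fermion-fermion scattering rate that controls the viscosity.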
In a degenerate 2D electron gas at zero temperature, the inverse electron-electron scattering lifetime, \(\tau_{ee}^{-1}\propto\xi_{\mathbf{p}}^{2}\ln\xi_{\mathbf{p}}\), turns into zero at the Fermi surface (\(\xi_{\mathbf{p}}\to 0\)). In the case of the Bose-Fermi mixture, the unpaired electron lifetime, \(\tau_{e}\), which comes from the electron-boson scattering, also makes the contribution to the viscosity. It follows from Eq. (35) that \(\tau_{e}^{-1}\propto\xi_{\mathbf{p}}\) and it turns into zero more slowly than \(\tau_{ee}^{-1}\) at \(\xi_{\mathbf{p}}\to 0\). This means that the Fermi subsystem viscosity is determined by the fermion-boson scattering processes rather than by the fermion-fermion ones. As a consequence, one can expect that the superfluid Bose subsystem will give the predominant contribution to the inter-subsystem viscosity in comparison with the fermion-fermion interaction. ### Collective modes In the collective modes, the fermion density fluctuations \(\delta n_{\mathbf{k}\omega}\) and the boson density fluctuations \(\delta N_{\mathbf{k}\omega}\) are coupled by the system of equations \[\delta n_{\mathbf{k}\omega} = S_{\mathbf{k}\omega}U_{\mathbf{k}}^{BF}\delta N_{\mathbf{k} \omega}, \tag{36}\] \[\delta N_{\mathbf{k}\omega} = P_{\mathbf{k}\omega}U_{\mathbf{k}}^{BF}\delta n_{\mathbf{k} \omega},\] where \(U_{\mathbf{k}}^{BF}\) is the Fourier transform of the boson-fermion interaction potential \(U_{BF}(\mathbf{r})\), \[S_{\mathbf{k}\omega}=\frac{\Pi_{\mathbf{k}\omega}}{1-U_{\mathbf{k}}^{F}\Pi_{ \mathbf{k}\omega}},\ P_{\mathbf{k}\omega}=\frac{n_{c}k^{2}/M}{(\omega+i\delta )^{2}-\omega_{k}^{2}} \tag{37}\] are the Fermi subsystem response function and the Bose subsystem response function, respectively, which describe the reaction of the subsystems to an external perturbation, \[\Pi_{\mathbf{k}\omega}=-\frac{m}{\pi}\left[1-\frac{|\omega|\theta[\omega^{2} -v_{F}^{2}k^{2}]}{\sqrt{\omega^{2}-v_{F}^{2}k^{2}}}-i\frac{|\omega|\theta[v_{F }^{2}k^{2}-\omega^{2}]}{\sqrt{v_{F}^{2}k^{2}-\omega^{2}}}\right], \tag{38}\] is the Fermi subsystem polarization operator written in the long wavelength limit (\(k\ll mv_{F}\)), and \(\omega_{k}=sk\) is the bogolon dispersion. The poles of the response functions (37) give the dispersions of the corresponding collective modes in the system. Namely, the \(P_{\mathbf{k}\omega}\) pole, \(\omega=\omega_{k}\), defines the Bogoliubov mode, whereas the \(S_{\mathbf{k}\omega}\) pole defines the plasmon mode. It should be noted that the plasmon mode exists only within the frequency domain \(\omega\gg kv_{F}\), where the imaginary part of the polarization operator (38) is \(\mathrm{Im}\,\Pi_{\mathbf{k}\omega}=0\) and its real part can be written as \(\mathrm{Re}\,\Pi_{\mathbf{k}\omega}\approx mv_{F}^{2}k^{2}/2\pi\omega^{2}\). As a result, the denominator of the response function \(S_{\mathbf{k}\omega}\) reads \(1-U_{\mathbf{k}}^{F}Re\,\Pi_{\mathbf{k}\omega}\approx 1-\omega_{p}^{2}/ \omega^{2}\) and has the pole \(\omega=\omega_{p}\), where \(\omega_{p}\equiv v_{F}\sqrt{k_{s}k/2}\) is the plasmon dispersion. The secular equation of the algebraic system (36) yields the dispersion equation describing the interaction between the plasmon and Bogoliubov modes, \[1-U_{\mathbf{k}}^{F}\Pi_{\mathbf{k}\omega}-(U_{\mathbf{k}}^{BF})^{2}\Pi_{ \mathbf{k}\omega}P_{\mathbf{k}\omega}=0, \tag{39}\] which can be rewritten within the domain \(\omega\gg kv_{F}\) as \[(\omega^{2}-\omega_{p}^{2})(\omega^{2}-\omega_{k}^{2})-(\omega_{p}\omega_{k}) ^{2}k_{s}/k=0. 
\tag{40}\] Solving Eq. (40), we arrive at the hybridized plasmon-bogolon modes, \[\omega_{1,2}^{2}=\frac{\omega_{p}^{2}+\omega_{k}^{2}}{2}\pm\frac{1}{2}\sqrt{( \omega_{p}^{2}-\omega_{k}^{2})^{2}+4(\omega_{p}\omega_{k})^{2}\frac{k_{s}}{k}}, \tag{41}\] written in the limit \(k\ll k_{s}\). It should be noted that the mode \(\omega_{2}\) does not exist physically since \(\mathrm{Re}\,\omega_{2}=0\). On the contrary, the hybridized mode \(\omega_{1}\) is not damped since \(\mathrm{Im}\,\Pi_{\mathbf{k}\omega}=0\) for \(\omega>kv_{F}\) and \(\mathrm{Im}\,P_{\mathbf{k}\omega}\propto\delta(\omega^{2}-\omega_{k}^{2})=0\) for \(\omega_{1}\neq\omega_{k}\). The hybridized \(\omega_{1}\) mode is plotted for the cases of \(s<v_{F}\) and \(s>v_{F}\) in Fig. 6. It follows from the plots that the hybridization of the plasmon and Bogoliubov modes is most pronounced if the Bogoliubov mode velocity exceeds the Fermi velocity, i.e. \(s>v_{F}\) (see Fig. 6b). In this case the Bogoliubov mode (line 2) and the plasmon mode (line 3) are crossed and, therefore, their interaction is most effective. In the opposite case, \(s<v_{F}\), the intermode influence is relatively weak since the Bogoliubov mode (line 2) and the plasmon mode (line 3) are widely separated in frequencies (see Fig. 6a). As a consequence, the ultraviolet shift of the hybridized mode \(\omega_{1}\) (line 4) with respect to the bare plasmon dispersion (line 3) for \(s>v_{F}\) (see Fig. 6b) is much larger as compared with the same shift for \(s<v_{F}\) (see Fig. 6a). In the frequency domain \(\omega<v_{F}k\), the bare plasmon does not exist since the real part of the polarization operator, \(\mathrm{Re}\,\Pi_{\mathbf{k}\omega}\), does not depend on frequency. Therefore, only the Bogoliubov mode survives there. However, the Bogoliubov mode experiences damping in the region below the line \(\omega=kv_{F}\) (see the lines 2 and 3 in Fig. 6b). The imaginary correction to the Bogoliubov mode dispersion, which arises from \(\mathrm{Im}\,\Pi_{\mathbf{k}\omega}\neq 0\), can be easily found from the dispersion equation (39) in the limit of \(\omega\ll v_{F}k\). In this limiting case, the fermion polarization operator (38) can be simplified as \[\Pi_{\mathbf{k}\omega}\approx-\frac{m}{\pi}\left[1-i\frac{|\omega|}{v_{F}k}\right]. \tag{42}\] Then the dispersion equation \[1-(U_{\mathbf{k}}^{BF})^{2}S_{\mathbf{k}\omega}\frac{n_{c}k^{2}/M}{\omega^{2}- \omega_{k}^{2}}=0 \tag{43}\] yields the imaginary correction to the frequency, \[\mathrm{Im}\,\omega=(U_{\mathbf{k}}^{BF})^{2}\frac{n_{c}k^{2}}{2 M\omega_{k}}\mathrm{Im}\,S_{\mathbf{k},\omega=\omega_{k}}\] \[=2\pi\frac{\hbar^{2}n_{c}}{mv_{F}Ms}\omega_{k}, \tag{44}\] which describes the Bogoliubov mode damping. Depending on the boson and fermion density values, the damping can be both strong (\(\lim\limits_{k\to 0}\mathrm{Im}\,\omega/\omega_{k}\gg 1\)) and weak (\(\lim\limits_{k\to 0}\mathrm{Im}\,\omega/\omega_{k}\ll 1\)). An estimation for the MoS\({}_{2}\) monolayer with the BEC density \(n_{c}=4\cdot 10^{10}\) cm\({}^{-2}\) and the fermion density \(n=4\cdot 10^{12}\) cm\({}^{-2}\) results in \(\lim\limits_{k\to 0}\mathrm{Im}\,\omega/\omega_{k}=0.035\), what corresponds to the small damping of the Bogoliubov modes. The knowledge of the dispersion laws and the damping of collective modes is the key thing in using the Bose and Fermi systems as active elements of the plasmonics [57]. Since plasmons are accompanied by the electron gas polarization, they are extremely sensitive to external electromagnetic fields. 
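Both the hybridised branch of Eq. (41) and the Bogoliubov damping ratio of Eq. (44) can be evaluated directly. The sketch below uses the densities quoted in the preceding estimate; the dielectric constant and the two-fold degeneracy behind k_F are assumptions, so the output should again be read as an order-of-magnitude check.

```python
import numpy as np

e, hbar, m0 = 4.803e-10, 1.0546e-27, 9.109e-28   # CGS-Gaussian
m_l, M = 0.43 * m0, (0.43 + 0.46) * m0
eps    = 4.0                                     # assumed dielectric constant
n, n_c = 4.0e12, 4.0e10                          # cm^-2, densities quoted above

k_s = 2.0 * m_l * e**2 / (eps * hbar**2)         # Thomas-Fermi wavenumber
v_F = hbar * np.sqrt(2.0 * np.pi * n) / m_l      # two-fold degeneracy assumed
s   = np.sqrt(4.0 * np.pi * hbar**2 * n_c / (m_l * M))

def omega1(k):
    """Upper hybridised plasmon-bogolon branch of Eq. (41), valid for k << k_s."""
    w_p2, w_k2 = 0.5 * v_F**2 * k_s * k, (s * k)**2
    return np.sqrt(0.5 * (w_p2 + w_k2
                          + np.sqrt((w_p2 - w_k2)**2 + 4.0 * w_p2 * w_k2 * k_s / k)))

k = 1.0e5                                        # 1/cm, deep in the long-wavelength limit
print(f"omega_1(k) = {omega1(k):.2e} rad/s vs bare plasmon {v_F*np.sqrt(0.5*k_s*k):.2e} rad/s")

# Eq. (44): damping of the Bogoliubov mode in the region omega < v_F k
damping = 2.0 * np.pi * hbar**2 * n_c / (m_l * v_F * M * s)
print(f"Im(omega)/omega_k ~ {damping:.3f}")
```

For these parameters the shift of the hybridised branch relative to the bare plasmon is small, as expected for s < v_F, and the Bogoliubov mode comes out only weakly damped, of the same order as the value quoted above.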
Therefore, the discussed field-induced effects can be of interest for creating high-performance plasmonic devices and technologies. In the case of light-induced hybrid Bose-Fermi systems, both bare collective excitations (plasmons and the Bogoliubov modes) and their hybrid counterparts can be studied via the well developed pump-probe experimental technique, where the strong pump field produces the light-heavy electron pairs, whereas the relatively weak probe field may excite the hybrid modes. It should be noted that the bare plasmons in conventional systems are sensitive to the electron-impurity scattering which results in the plasmon damping and widening the plasmon resonance. One can expect that the hybridized modes considered above will be less sensitive to this destructive effect since the damping of the Bogoliubov modes due to impurity scattering is weak [58]. It should be noted also that the found structure of the collective modes will be useful to describe the gauge-invariant current response of the Bose-Fermi systems in the superconducting regime [59]. ## IV Conclusion We have developed the theory describing various physical characteristics -- including the dispersion laws and the damping (lifetimes) of both single-particle and collective elementary excitations -- in the hybrid Bose-Fermi system induced by light in the two-dimensional systems containing charge carriers with different effective masses. It is shown, particularly, that the interaction between the Bose and Fermi subsystems leads to increasing effective mass of fermions (the polaron effect), the bogolon emission by a moving fermion (the Cherenkov-like effect), and the hybridization of collective modes in the Fermi subsystem (plasmons) and the Bose subsystem (bogolons). These effects can be observed in various 2D structures containing charge carriers with different effective masses, including MoS\({}_{2}\) monolayers (where the conduction band consists of the spin-split heavy electron subbands and light electron subbands) and hole systems in quantum wells based on semiconductor materials (where the valence band consists of the heavy hole subbands and light hole subbands). ###### Acknowledgements. The reported study was funded by the Russian Science Foundation (project 20-12-00001). Figure 6: Dispersion of the collective modes for the different values of the Bogoliubov phase velocity: (a) \(s<v_{F}\); (b) \(s>v_{F}\). The line 1 corresponds to the dispersion \(\omega=v_{F}k\), the line 2 is the bare Bogoliubov mode dispersion \(\omega_{k}=sk\), the line 3 is the bare plasmon dispersion \(\omega_{p}\), and the line 4 is the hybridized plasmon-bogolon mode \(\omega_{1}\). ## Appendix A The two-electron Hamiltonian Let us consider a 2D structure containing the two electron subbands with the different effective masses \(m_{l}\) and \(m_{h}\) (see Fig. 2), where the energy spectrum of the subbands is \(\varepsilon_{l}({\bf k})=-\Delta_{0}/2+\hbar^{2}k^{2}/2m_{l}\) and \(\varepsilon_{h}({\bf k})=\Delta_{0}/2+\hbar^{2}k^{2}/2m_{h}\), \({\bf k}=(k_{x},k_{y})\) is the momentum of charge carrier in the 2D plane, and \(\Delta_{0}\) is the energy splitting of the subbands at \({\bf k}=0\). In the presence of a circularly polarized electromagnetic wave incident normally to the 2D structure (see Fig. 
1), the Coulomb interaction of two electrons from the subbands \(\varepsilon_{l}({\bf k})\) and \(\varepsilon_{h}({\bf k})\) is described by the Hamiltonian \[\hat{\cal H}=\hat{\cal H}_{l}+\hat{\cal H}_{h}+U({\bf r}_{l}-{\bf r}_{h}), \tag{10}\] where \(\hat{\cal H}_{l,h}=(\hat{\bf p}_{l,h}-e{\bf A}(t)/c)^{2}/2m_{l,h}\) are the Hamiltonians of free electrons irradiated by the wave, \({\bf r}_{l,h}=(x,y)\) are the plane radius vectors of the electrons, \(\hat{\bf p}_{l,h}=-i\hbar\partial/\partial{\bf r}_{l,h}\) are the plane momentum operators of the electrons, \(U({\bf r}_{l}-{\bf r}_{h})=e^{2}/\epsilon|{\bf r}_{l}-{\bf r}_{h}|\) is the two-dimensional Coulomb potential of the electron interaction, \(\epsilon\) is the dielectric constant, \[{\bf A}(t)=(A_{x},A_{y})=[cE_{0}/\omega_{0}](\cos\omega_{0}t,\,\sin\omega_{0}t) \tag{11}\] is the vector potential of the wave, \(E_{0}\) is the electric field amplitude of the wave, and \(\omega_{0}\) is the wave frequency. The Hamiltonian (10) is spinless since the exchange interaction of the considered two electrons is absent due to different masses of them, whereas their direct spin-spin interaction is relativistically small and can be neglected as a first approximation [30]. Taking into account Eq. (11), the Hamiltonian (10) can be rewritten as \[\hat{\cal H} = \frac{\hat{\bf p}_{l}^{2}}{2m_{l}}+\frac{\hat{\bf p}_{h}^{2}}{2m_ {h}}-\frac{e{\bf A}(t)\hat{\bf p}_{l}}{cm_{l}}-\frac{e{\bf A}(t)\hat{\bf p}_{h} }{cm_{h}}+\varepsilon_{0} \tag{12}\] \[+ U({\bf r}_{l}-{\bf r}_{h}),\] where \[\varepsilon_{0}=\frac{e^{2}E_{0}^{2}}{2m_{l}\omega_{0}^{2}}+\frac{e^{2}E_{0}^{ 2}}{2m_{h}\omega_{0}^{2}} \tag{13}\] is the kinetic energy of electron rotation under the circularly polarized field (11). To proceed, let us apply the Kramers-Henneberger unitary transformation, \[\hat{U}(t)=\exp\left\{\frac{i}{\hbar}\int^{t}\left[\frac{e}{m_{l }c}{\bf A}(\tau)\hat{\bf p}_{l}-\frac{e^{2}E_{0}^{2}}{2m_{l}\omega_{0}^{2}} \right]d\tau\right\}\] \[\times\exp\left\{\frac{i}{\hbar}\int^{t}\left[\frac{e}{m_{h}c}{ \bf A}(\tau)\hat{\bf p}_{h}-\frac{e^{2}E_{0}^{2}}{2m_{h}\omega_{0}^{2}}\right] d\tau\right\}. \tag{14}\] Then the transformed Hamiltonian (12) reads \[\hat{\cal H}^{\prime} = \hat{U}^{\dagger}(t)\hat{\cal H}\hat{U}(t)-i\hbar\hat{U}^{\dagger }(t)\partial_{t}\hat{U}(t) \tag{15}\] \[= \frac{\hat{\bf p}_{l}^{2}}{2m_{l}}+\frac{\hat{\bf p}_{h}^{2}}{2m _{h}}+U\big{(}{\bf r}_{l}-{\bf r}_{h}-{\bf r}_{0}(t)\big{)},\] where \[{\bf r}_{0}(t)=(-r_{0}\sin\omega_{0}t,\,r_{0}\cos\omega_{0}t) \tag{16}\] is the vector defining the change of relative position of the two electrons under the field, and \[r_{0}=\frac{|e|E_{0}(m_{h}-m_{l})}{m_{l}m_{h}\omega_{0}^{2}} \tag{17}\] is the length of the vector. It should be noted that the field-induced energy (13) results only in the energy shift of all electronic states by the same energy. Since such a shift does not affect electronic properties, the unitary transformation (14) removes the energy (13) from the Hamiltonian (15). In the center-of-mass system, the two-electron Hamiltonian (15) can be rewritten as \[\hat{\cal H}^{\prime}=\frac{\hat{\bf p}^{\,2}}{2m^{*}}+U\big{(}{\bf r}-{\bf r }_{0}(t)\big{)}, \tag{18}\] where \({\bf r}={\bf r}_{l}-{\bf r}_{h}\) is the radius vector describing the relative motion of electrons, \(\hat{\bf p}=-i\hbar\partial/\partial{\bf r}\) is the momentum operator corresponding to the relative motion, and \(m^{*}=m_{l}m_{h}/(m_{l}+m_{h})\) is the reduced mass of the two-electron system. 
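For orientation, Eq. (17) is easy to evaluate numerically; the field amplitude and frequency used below are purely illustrative choices (not taken from the paper, and not checked against the off-resonance conditions). They only show that r_0 can reach the sub-nanometre to nanometre scale and that it vanishes for m_l = m_h.

```python
import numpy as np

e, m0 = 4.803e-10, 9.109e-28          # esu, g (CGS-Gaussian)
m_l, m_h = 0.43 * m0, 0.46 * m0       # MoS2 masses quoted in Sec. II

E0     = 1.0e4 / 299.79               # assumed field: 10 kV/cm in statvolt/cm
omega0 = 2.0 * np.pi * 1.0e12         # assumed dressing frequency: 1 THz

r0 = abs(e) * E0 * (m_h - m_l) / (m_l * m_h * omega0**2)   # Eq. (17)
print(f"r0 = {r0*1e7:.2f} nm")        # -> 0 if m_l = m_h, as stressed in the text
```

Even a modest field therefore displaces the relative coordinate by a fraction of a nanometre, which sets the scale on which the time-averaged potential discussed below deviates from the bare Coulomb one.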
It should be noted that the Hamiltonian (18) with the periodically time-dependent potential \(U\big{(}{\bf r}-{\bf r}_{0}(t)\big{)}\) is still exact and describes the relative motion of two interacting electrons under the field accurately. Next, let us apply the high-frequency approximation which is well-known in the Floquet theory of periodically driven quantum systems [3; 4; 5; 6]. Namely, the periodically time-dependent potential in the Hamiltonian (18) can be replaced approximately with the time-averaged potential if the field frequency is high enough [28; 29; 30]. Within this approximation, the periodically time-dependent Hamiltonian (18) turns into the effective stationary Hamiltonian \[\hat{\cal H}_{0}=\frac{\hat{\bf p}^{\,2}}{2m^{*}}+U_{0}({\bf r}), \tag{19}\] where the time-averaged potential \[U_{0}({\bf r})=\frac{1}{2\pi}\int_{-\pi}^{\pi}U\big{(}{\bf r}-{\bf r}_{0}(t)\big{)}\,d(\omega_{0}t)\] \[=\left\{\begin{array}{l}(2e^{2}/\pi r_{0})K\left(r/r_{0}\right),\,\,\,r/r_{0}\leq 1\\ (2e^{2}/\pi r)K\left(r_{0}/r\right),\,\,\,r/r_{0}\geq 1\end{array}\right. \tag{20}\] can be treated as the Coulomb potential dressed by the circularly polarized field, and the function \(K(z)\) is the complete elliptic integral of the first kind. Since the dressed potential (20) has a local minimum at \(r=0\) for \(m_{l}\neq m_{h}\), the Schrödinger equation with the Hamiltonian (19) yields the bound two-electron state localized near the minimum (composite boson) [28]. It should be noted that this bound state is quasi-stationary since the potential minimum at \(r=0\) is local. Therefore, a single composite boson has a finite lifetime. However, it has been demonstrated that the Fermi sea of normal electrons stabilizes the boson [30]. Such a stabilization is physically similar to the stabilization of the Cooper pair by the Fermi sea of conduction electrons in the conventional BCS theory of superconductivity. As a consequence, the light-induced composite bosons in the hybrid Bose-Fermi system have an infinite lifetime and the system as a whole is stable [30]. It follows from Eq. (20) that the dressed potential \(U_{0}(\mathbf{r})\) for \(m_{l}=m_{h}\) turns into the bare Coulomb potential, \(U(\mathbf{r})=e^{2}/\epsilon r\), which has no local minima and, therefore, cannot couple interacting electrons. Physically, this follows from the fact that the vector (17) turns into zero if \(m_{l}=m_{h}\). As a consequence, the field does not change the distance between interacting electrons in this case and, correspondingly, does not affect their Coulomb interaction. Therefore, the condition \(m_{l}\neq m_{h}\) is crucial for the effects under consideration. Among 2D structures satisfying this condition, both MoS\({}_{2}\) monolayers (where the conduction band consists of the spin-split heavy electron subbands and light electron subbands) and hole systems in quantum wells based on semiconductor materials (where the valence band consists of the heavy hole subbands and light hole subbands) should be noted. Next, let us discuss interactions in the light-induced hybrid Bose-Fermi system, assuming the boson density to be small enough to consider the composite bosons as weakly interacting independent particles. Since the dressed Coulomb potential (20) turns into the bare Coulomb potential for charged particles with identical masses, the fermion-fermion interaction and the boson-boson interaction can be described by the potentials (4) and (6), respectively.
Since the boson and fermion masses are different, the boson-fermion interaction should, strictly speaking, be described by the dressed Coulomb potential. However, the dressed potential (20) substantially differs from the bare Coulomb potential only for small distances \(r\lesssim r_{0}\), where the length \(r_{0}\), defined by Eq. (17), is the characteristic size of the composite boson [30]. Therefore, the boson-fermion interaction for small boson densities can be described by the bare Coulomb potential defined by Eq. (2).
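As a final illustration, the dressed potential of Eq. (20) can be tabulated numerically to exhibit the local minimum at r = 0. The sketch assumes that K takes the modulus as its argument (hence scipy's `ellipk(z**2)`), reinstates the dielectric constant explicitly as a prefactor, and uses an illustrative value of r_0.

```python
import numpy as np
from scipy.special import ellipk      # complete elliptic integral K(m), m = modulus^2

e   = 4.803e-10                        # esu (CGS-Gaussian)
eps = 4.0                              # assumed effective dielectric constant
r0  = 0.7e-7                           # cm, illustrative field-induced radius (~0.7 nm)

def U0(r):
    """Time-averaged ("dressed") Coulomb potential of Eq. (20)."""
    pref = 2.0 * e**2 / (np.pi * eps)
    if r <= r0:
        return pref / r0 * ellipk((r / r0)**2)
    return pref / r * ellipk((r0 / r)**2)

# finite at the origin, growing towards r ~ r0 (log-divergent there), Coulomb-like beyond:
# the origin is therefore a local minimum that can bind the two unequal-mass electrons
for r in [1e-9, 0.3 * r0, 0.9 * r0, 3.0 * r0]:
    print(f"r/r0 = {r/r0:5.2f}  ->  U0 = {U0(r):.3e} erg")
```

Increasing r_0 widens the region in which the dressed potential deviates from the bare Coulomb one, consistent with the remark above that the deviation is confined to r ≲ r_0.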
2301.03947
Autonomous Strawberry Picking Robotic System (Robofruit)
Challenges in strawberry picking made selective harvesting robotic technology demanding. However, selective harvesting of strawberries is complicated forming a few scientific research questions. Most available solutions only deal with a specific picking scenario, e.g., picking only a single variety of fruit in isolation. Nonetheless, most economically viable (e.g. high-yielding and/or disease-resistant) varieties of strawberry are grown in dense clusters. The current perception technology in such use cases is inefficient. In this work, we developed a novel system capable of harvesting strawberries with several unique features. The features allow the system to deal with very complex picking scenarios, e.g. dense clusters. Our concept of a modular system makes our system reconfigurable to adapt to different picking scenarios. We designed, manufactured, and tested a picking head with 2.5 DOF (2 independent mechanisms and 1 dependent cutting system) capable of removing possible occlusions and harvesting targeted strawberries without contacting fruit flesh to avoid damage and bruising. In addition, we developed a novel perception system to localise strawberries and detect their key points, picking points, and determine their ripeness. For this purpose, we introduced two new datasets. Finally, we tested the system in a commercial strawberry growing field and our research farm with three different strawberry varieties. The results show the effectiveness and reliability of the proposed system. The designed picking head was able to remove occlusions and harvest strawberries effectively. The perception system was able to detect and determine the ripeness of strawberries with 95% accuracy. In total, the system was able to harvest 87% of all detected strawberries with a success rate of 83% for all pluckable fruits. We also discuss a series of open research questions in the discussion section.
Soran Parsa, Bappaditya Debnath, Muhammad Arshad Khan, Amir Ghalamzan E.
2023-01-10T13:02:23Z
http://arxiv.org/abs/2301.03947v1
# Autonomous Strawberry Picking Robotic System ###### Abstract Challenges in strawberry picking made selective harvesting robotic technology very demanding. However, the selective harvesting of strawberries is a complicated robotic task forming a few scientific research questions. Most available solutions only deal with a specific picking scenario, e.g., picking only a single variety of fruit in isolation. Nonetheless, most economically viable (e.g. high-yielding and/or disease-resistant) varieties of strawberry are grown in dense clusters. The current perception technology in such use cases is inefficient. In this work, we developed a novel system capable of harvesting strawberries with several unique features. These features allow the system to deal with very complex picking scenarios, e.g. dense clusters. Our concept of a modular system makes our system reconfigurable to adapt to different picking scenarios. We designed, manufactured, and tested a patented picking head with 2.5 degrees of freedom (two independent mechanisms and one dependent cutting system) capable of removing possible occlusions and harvesting the targeted strawberry without any contact with the fruit flesh to avoid damage and bruising. In addition, we developed a novel perception system to localise strawberries and detect their key points, picking points, and determine their ripeness. For this purpose, we introduced two new datasets. Finally, we tested the system in a commercial strawberry growing field and our research farm with three different strawberry varieties. The results show the effectiveness and reliability of the proposed system. The designed picking head was able to remove occlusions and harvest strawberries effectively. The perception system was able to detect and determine the ripeness of strawberries with 95% accuracy. In total, the system was able to harvest 87% of all detected strawberries with a success rate of 83% for all pluckable fruits. We also discuss a series of open research questions in the discussion section. **keywords:** Selective Harvesting; Robotic manipulation; Computer Vision; Motion planning; Precision; farming; Agricultural robotics Introduction Selective harvesting of crops using robotic technology aims to address the societal and economical challenges of agricultural labour shortages. The industry is yet far from an efficient and practical solution. Many aspects of crop harvesting still remain unsolved (scientific and technological) problems. The dexterity and efficiency of robotic end-effectors are open questions. Most of the available picking heads (i.e. end-effectors) for selective harvesting are capable of performing only two actions: opening the picking head, and closing the picking head. Strawberry is a highly valued crop. While the annual retail value of the strawberries industry is over $17 Billion globally, producers have to spend over $1 Billion for picking (selective harvesting) only (Web, 2020). Factors such as labour shortage, increasing labour costs, and the COVID-19 pandemic are having a negative impact on selective harvesting costs. Therefore, robot-based automated selective harvesting technologies are highly in demand (Duckett et al., 2018). Over the past decade, both private and public entities have extensively invested to develop commercially viable robotic harvesting technology. 
Despite the recent investments and funding (CORDIS, 2020) and (CORDIS, 2015) for harvesting high-value crops, many problems remain unsolved, and they form very interesting scientific questions. Nevertheless, there is not yet a commercially viable robotic technology available for the selective harvesting of strawberries. One of the challenges of a desired robotic solution is the picking head it uses. While human pickers use the sense of touch, active perception, multiple fingers and two arms for picking strawberries, a picking head with a single degree of freedom (DOF) does not appear sufficient. The available picking heads have limited ability to harvest strawberries in a dense cluster where a ripe strawberry to be picked is occluded. The problem is manifold: (1) a ripe strawberry may not be detected, (2) its segmentation, ripeness assessment, and location may not be precise, (3) existing picking heads may not be able to reach a ripe strawberry surrounded by other unripe strawberries. We present a robotic picking system capable of addressing these issues. Moreover, the asymmetric and irregular nature of the stems coming out of the fruit makes it difficult to localise the picking point. Commercially available depth sensors are designed for large objects under controlled lighting conditions. Insufficient quality of depth-sensing technologies makes strawberry picking point localisation on the stem intractable. This is especially true under bright sunlight in farm conditions, where the depth accuracy decreases further. In addition, the depth sensors are designed to work optimally for distances larger than 50 [cm], and their precision drops to 0 for distances below 15 [cm]. However, for picking point localisation we require precise depth-sensing below the 15 [cm] range. This makes the robot perception challenging as some target fruits may be occluded by non-target fruits and leaves. Commercially available depth sensors, e.g., Realsense _D435i_, also make the perception challenging as they are designed for large objects' 3-D perception and controlled lighting conditions. For small fruits under outdoor lighting, the depth maps are not precise. Detecting, segmenting, and localising a ripe fruit to be picked in a complex cluster geometry under outdoor lighting conditions makes strawberry perception a very challenging problem. In this paper, we present a robotic system for automated selective harvesting of strawberries which aims to address some of the challenges preventing large-scale commercial deployment of these systems. (A video of the system can be seen in this link.) We designed, prototyped and field-tested our robotic system benefiting from a novel picking head for robotic selective harvesting. The picking head demonstrated the ability to navigate through the cluster to reach a targeted fruit and harvest it successfully. One feature of the picking head that makes it different from the existing technologies is that it is able to grasp, detach and handle the fruit by its stem without contact with the fruit body. This is important in harvesting and handling soft fruits, e.g. strawberries, to reduce bruising and damage and to increase fruit shelf life. These characteristics can contribute significantly to reducing food waste. Our state-of-the-art perception system proved to be effective in detecting and localising fruit in different environments and lighting conditions. Moreover, localising the picking point on the stem is challenging.
This is due to the nature of the 3D perception devices that work poorly on small objects, or under sunlight conditions. To overcome this challenge we propose a novel Gaussian Process Regression method for picking point error estimation. We propose a modular and configurable approach to developing and integrating the robotic system for selective fruit harvesting. The system was reconfigured based on two different harvesting conditions and tested. Unlike other approaches the robotic harvesting system is designed based on a specific requirement and only work in a specific condition, this proposed system is modular and configurable based on the different varieties and growing condition. The remainder of this paper is organised as follows. Section 2 presents a thorough literature review of the current works. In section 3 and 4 the system architecture and end-effector design are discussed respectively. Section 5 presents the perception system. Finally, the field experiments and results are presented in section 6 and are discussed in section 7. ## 2 Related Works ### Harvesting systems and manipulators Robotic harvesting systems in general are mechanisms that are designed to interact with agricultural crops. A typical robotic harvesting system is equipped with a manipulator usually in form of a serial robotic arm, a custom-designed end-effector for grasping and/or picking the targeted crop, a perception system for detection, and a platform to mount all these sub-components which itself could be an autonomous or semi-autonomous powered mobile platform (Arad et al., 2020). Off-the-shelf robotic arms and manipulators have proven to be functional and reliable and have been employed largely to develop robotic selective harvesting systems. However, custom-designed manipulators have been emphasised greater than using off-the-shelf ones. Among the off-the-shelf manipulators, six degrees of freedom (DOF) were widely used. It has been studied that additional degrees of freedom were added to or removed from the available controllable DOFs in these off-the-shelf manipulators according to the harvesting requirements. Xiong et.al.(Xiong et. al., 2019) used a 6-DOF off-the-shelf manipulator for harvesting strawberries where 1-DOF was kept fixed during operation to meet selective harvesting orientation requirement. In addition to the single-arm manipulator, multiple-arm robots were also utilised for harvesting scenarios to tackle the complexity of selective harvesting. In particular, dual-arm manipulators were developed to work either collaboratively or as standalone units. Sepulveda et.al.(SepuLveda et al., 2020) used a dual robotic arm to cooperatively harvest eggplants with very promising and successful results in dealing with occluded fruit conditions. Zhao et.al.(Zhao et al., 2016) also used a dual-arm robot for tomato harvesting which also operated collaboratively. In this proposed configuration, one arm detaches a tomato from its stem while the other arm grips it. Davidson J et.al.(Davidson et al., 2017) utilised a dual-arm mechanism for an apple harvesting robot in which a six-degree of freedom (DOF) apple picker was assisted by a two-DOF catching mechanism. The second arm catches the picked apple and transfers it to storage. This reduces the harvesting cycle time. A few other dual-arm configurations were also presented in (Armada et al., 2005; Ceres et al., 1998; Xiong et al., 2020b) to speed up the harvesting cycle. The autonomous kiwi harvesting robot(Scarfe et al., 2009) uses four robots in parallel. 
The harvesting robot must reach the varying height, widths, and depths of the targeted crop with respect to the base of the manipulator. Hence, the harvesting platform needs a moving base to increase the limited reachable workspace of a robotic manipulator. In addition to a mobile base that can navigate across fields, manipulators were also mounted on the vertical slide(s) (Zhao et al., 2016; Lehnert et al., 2017)(Bac et al., 2017; Ling et al., 2019; Baeten et al., 2008), horizontal slide(s) (Davidson et al., 2017; Silwal et al., 2017; Van Henten et al., 2002), slanting slide (Armada et al., 2005), or on a scissor lift mechanism (Arad et al., 2020)(Feng et al., 2018) to enhance the reachability of a robotic arm. Moreover, a forklift vehicle was utilised to enable the cutter mechanism to reach the various height and depths to harvest oranges in orchards (Lee and Rosa, 2006). In addition to rigid mechanisms, other mechanisms were also employed for harvesting (Chowdhary et al., 2019). For instance, (Tiefeng et al., 2015) proposed an elephant trunk-inspired mechanism to harvest fruits. Combined soft and rigid mechanisms have been also tested for agroforestry activities which include harvesting as well. (Chowdhary et al., 2019). Motion planning and motion control are also important components of a successful selective harvesting robotic system. Mghames et al. (Mghames et al., 2020) proposed an interactive motion plan to push occluding strawberries away in a cluster to reach a target fruit. Pushing actions are encoded by movement primitives and hand-designed features of pushable obstacles. A set of pushing demonstrations is used to train the motion primitives. Learning from demonstrations (Ragaglia et al., 2018), and imitation learning (Osa et al., 2018) are used in many other contexts. However, their hand-designed features (such as the position of the target and occluding strawberries) may limit the generalisation of such a method. Sanni et al. (Sanni et al., 2022) proposed deep-Movement Primitives (d-MP) that do not need any hand-designed features and directly map visual information into robot movements based on the observed demonstrations. Tafuro et al. (Tafuro et al., 2022) extended d-MP into deep-Probabilistic movement primitives (d-ProMP) in which the model generates a distribution of trajectories given a single image of the scene. To control pushing motions occluded camera views are not sufficient and tactile sensings are necessary. Mandil et al. (Mandil et al., 2022) proposed a data-driven tactile predictive model which is then used in (Nazari et al., 2022) to proactively control manipulation movements to avoid slip. This framework can be adopted to control pushing actions. ### End-effectors An end-effector is a tool attached to the wrist of a robotic manipulator to harvest the fruit, either by grasping or gripping the fruit or its peduncle (attachment), detaching it from the parent plant, and eventually delivering it to the storage. These end-effectors execute the individual actions either simultaneously (eg: gripping and detaching) or sequentially (gripping/grasping followed by detaching) to perform a successful harvesting operation. The end-effector of a selective harvesting robot is the unit that directly and physically interacts with crops. 
Across different end-effector technologies for fruit harvesting, the physical interactions include (i) gripping/grasping the fruit by its peduncle or fruit body (attachment), (ii) detaching from the plant by pulling, twisting, or cutting the peduncle, (iii) facilitating the fruit transport from the detachment location to the storage, and (iv) pushing/parting of fruit in the cluster during detaching action(Xiong et al., 2020) that is a recently studied functionality. Among attachment and detachment actions, some end-effectors perform simultaneous gripping and detachment of the strawberry peduncle using a parallel jaw mechanism (Hayashi et al., 2010; Hayashi et al., 2014). In such a mechanism, one jaw will be shaped in the form of a cutting blade or is provided with a provision to attach detaching blades. One such end-effector design makes use of a suction cup to provide an additional grip by sucking the fruit body to avoid any positional errors during this simultaneous gripping and cutting actions (Hayashi et al., 2010). Another suction-based approach is used by an end-effector which uses a suction head to grip the fruit body and then rotates so that a blade is positioned on the curved opening of the suction head to trim the peduncle(Arima et al., 2004). Instead of using any cutting blades for trimming the peduncle, the end-effector in (Yamamoto et al., 2014) uses a bending action to detach the strawberry after gripping the fruit body with a suction head and two-jaw gripper. In addition, a thermal-based cutting is reported to be used by another end-effector that uses an electrically heated wire on the gripping jaw to cut the peduncle (Feng et al., 2012). In this end-effector, once the fruit body is gripped by the suction cup to position the peduncle between the two jaws (cutting device), the jaws then close and trim the peduncle using the heated wire. The end effector developed by Octinion uses a soft gripper to grip the strawberry fruit body and imposes a rotational motion while pulling the strawberry to detach it from the peduncle (De Preter et al., 2018). The end-effectors above make either a gripping contact with the fruit body or with the peduncle during the harvesting action. But the end-effector reported in (Xiong et al., 2018) doesn't grip the fruit body or the peduncle during the harvesting action. Instead, a combination of three active and passive fingers guides the strawberry into the end-effector housing. Once the fruit reaches the cutting location, scissor-shaped blades cut the peduncle to detach the strawberry. After the detaching operation, the end effector continues to catch/hold the fruit until it is dropped intentionally or safely placed in the designated location by the manipulator. Some end-effectors were reported to have certain finger arrangements to perform the catching action when the harvested fruits were dropped after the fruit detachment action. One such catching provision was provided in the end effector design reported by Arad B et.al. (Arad et al., 2020) for the sweet pepper harvesting robot. It was a soft plastic coated six metallic fingers arrangement just below the cutting blade assembly, that receives the fruit after the detachment. Another catching mechanism was proposed by Davidson J et.al. (Davidson et al., 2017) for the apple harvesting robot. It used a two DOF secondary mechanism with a funnel-like catching end effector which will be moved to the dropping position to catch the apple while the primary picking manipulator detaches and drops the apple. 
This pick and catch approach was determined to be superior to the conventional pick and place approaches as it resulted in a fifty per cent reduction in the harvesting cycle time (Davidson et al., 2017). Considering the different options for gripping and cutting, it is always beneficial to avoid applying any force on the fruit body by the end-effector contact surfaces. Since some fruits are very soft and delicate, there are higher chances of bruising during such operations. Aliasgarian et al. (Aliasgarian et al., 2013) showed strawberry fruits are more damaged when exposed to compression forces on their body. Hence, from the end-effector design point of view, it is recommended to target the peduncle for gripping/cutting actions or to avoid a grip action as demonstrated in (Xiong et al., 2018). ### Perception From traditional computer vision (CV) based to modern state-of-the-art deep neural networks, various methods exist for fruit detection and localisation of picking points. Traditional or classical CV approaches are typically based on geometric, thresholding, colour, and morphology. Thus, similar to other areas of CV, researchers have taken advantage of Deep Learning (DL) methods for performance improvement. Some of the early methods for detecting ripe strawberries relied on colour thresholding in HSI colour map (Rajendra et al., 2009). The authors also used thresholding of diameter for detecting the strawberry stem. The automatic thresholding-based algorithm was shown to be more robust by Zhuang et al. (Zhuang et al., 2019). colour-based segmentation was used by Arefi et al. (Arefi et al., 2011) to segregate the background from the fruit blob. Arefi et al. (Arefi et al., 2011) used colour-based segmentation to remove the background and keep the fruit blob. Instead of directly using colour, colour information can also be used with other features for a more robust approach. 3D-parametric model-fitting was used for the localisation of sweet peppers by Lehner et al. (Lehnert et al., 2017). Tao et al. (Tao and Zhou, 2017) used geometric features with GA-SVM for apple classification. Arefi et al. (Arefi et al., 2011) combined the water-shed algorithm to extract the morphology of tomatoes from colour-thresholded binary images. Zhuang et al. (Zhuang et al., 2019) were able to improve the results obtained by colour segmentation by using iterative-retinax algorithm along with Otsu's thresholding. Similarly, geometry-based algorithms are among the early contributions in this domain. Li et al. (Li et al., 2020) applied morphological operations for litchi harvesting. The connected component algorithm was used by Duran et al. (Durand-Petiteville et al., 2017) to identify strawberry blobs. Moving on to geometry-based approaches, Hayashi et al. (Hayashi et al., 2010) relied on extracted geometric features of strawberries to calculate stem angle w.r.t. to the longitudinal axis. (Tao and Zhou, 2017) used a Fast Point Feature (FPF) histogram, which is a geometric descriptor. The FPF descriptor consists of the parameterised query of the spatial differences between a point and its adjacent area which helps in describing the geometric properties within the K-neighbourhood of the point. While threshold, colour, morphology, and geometry-based methods may provide good performance, they lack generalisation and are prone to noise. This is especially true for fruits like strawberries which are not regular in shape and lack symmetry. To improve generalisation, researchers need to engineer handcrafted features. 
However, with increasing size and variation in datasets handcrafting features become infeasible (O'Mahony et al., 2019). The alternative is to use DL methods which are reviewed next. The most obvious DL-based approach is to use CNNs. Liu et al. (Liu et al., 2018) combined CNN-based fruit detection with depth data to localise the fruit in 3D. CNN-based model for strawberry detection was also used by Lamb et al. (Lamb and Chuah, 2018) where the network was optimised through image tiling, input compression, network compression, and colour masking. Zhang et al. (Zhang et al., 2018) relied on their CNN-based model for tomato classification. Instead of using colour images, Gao et al. (Gao et al., 2020) relied on a spectral features-based CNN model for detecting the ripeness and quality of strawberries. Thermal images have also been used as input to a CNN-based model for bruise detection on pears (Zeng et al., 2020). In recent years DL has been demonstrated to be superior for tasks such as segmentation (He et al., 2017) and key-points detection (Cao et al., 2019). Thus, authors in selective harvesting have begun to adopt some of the DL techniques for fruit perception. Lamb et al. (Lamb and Chuah, 2018) used CNN for strawberry detection by optimising the network through input compression, image tiling, colour masking, and network compression. Liu et al. (Liu et al., 2018) used CNN in combination with depth data to calculate the relative 3-D location of fruit. Similarly, Zhang et al. (Zhang et al., 2018) used CNN for tomato classification. Spectral features with CNN are used for strawberry quality or ripeness detection (Gao et al., 2020). CNN model is also used for pear bruise detection based on thermal images (Zeng et al., 2020). CNNs have revolutionised object detection and recognition, however for pixel-wise tasks such as semantic segmentation, Regional CNNs (RCNNs) is more appropriate. Sa et al. (Sa et al., 2016) relied on a faster RCNN model for bounding box detection of fruit while fusing faster RCNN, RGB, and Infrared (IR) images. More recently, Mask-RCNN (He et al., 2017) has been presented as a better alternative to the original RCNN. In selective harvesting also Mask-RCNN has been shown to provide a higher degree of accuracy while performing a pixel-wise segmentation (Ge et al., 2019). Liu et al. (Liu et al., 2018) relied on both YOLOv3 and mask-RCNN (M-RCNN) for bounding box detection of citrus fruit. M-RCNN with the ResNet-150 as backbone provided better performance than YOLO-v3. Similarly, Perez et al. (Perez-Borrero et al., 2020) relied on M-RCNN for strawberry segmentation for harvesting. Yu et al. (Yu et al., 2019) presented another instance of M-RCNN used for selective harvesting where features from M-RCNN were used to determine the strawberry shapes. Afterwards, a geometrical algorithm was used to localise the strawberry picking point. Researchers have extracted features from R-CNNs and combined them with their own algorithm to improve the localisation of picking points (Ge et al., 2019; Liu et al., 2019). Ge et al. (Ge et al., 2019) first used M-RCNN to determine strawberry pixels then, the extracted strawberry pixels were combined with depth data, and thereafter density-based clustering and Hough transformation were used to develop a richer scene segmentation. Liu et al. (Liu et al., 2019) combined M-RCNN with the logical green operator to come up with a more robust cucumber detection. Ganesh et al. 
(Ganesh et al., 2019) used both HSV and RGB images to enhance the performance of M-RCNN for Orange detection. Yu et al. (Yu et al., 2019) applied M-RCNN to segment strawberry images and then used geometrical features to localise the picking point. On the other hand, Tafuro et al. (Tafuro et al., 2022) argue that localisation of picking points is not feasible by geometrical, statistical, or other such approaches even after fruit segmentation through M-RCNN. Instead, the authors rely on key-point detection normally used for tasks like human pose estimation (Cao et al., 2019), and face pose estimation (Zhang et al., 2014) for localisation of picking points. We rely on the approach by Tafuro et al. for the initial detection and segmentation of strawberries. However, 2D detection is not sufficient for strawberry harvesting in 3D. As discussed earlier, the depth information is not sufficiently accurate owing to sensor inaccuracies and sunlight. Moreover, in the real-world more inaccuracies are introduced by camera calibration errors. Thus, it is not feasible to simply combine the 2D localisation with depth information. Similar, to Ge et al. (Ge et al., 2019), we develop our own algorithm to refine and work around the inaccurate depth information obtained from depth sensors. However, our advantages over Ge et al. (Ge et al., 2019) and other methods based on M-RCNN discussed above are two-fold: 1) We rely on M-RCNN-based key-point detection for picking points which gives us much more robust picking point localisation as compared to handcrafted methods adopted to refine Mask-RCNN output. 2) We compensate for the lack of precise depth information by carefully fine-tuning the end-effector pose by translating the information to two additional cameras at the front. ## 3 Concept, Design, and Features ### System overview To address the current challenges in the autonomous selective harvesting sector an autonomous system for fruit harvesting was designed and developed. The system includes a robotic arm, a novel robotic end-effector designed and manufactured for this research, a comprehensive perception system, a mobile platform; and an integrated control system for controlling the robotic arm and end-effector. Figures 0(a) and 0(b) show the autonomous fruit picking system and its components during field tests in a commercial strawberry growing glasshouse and research strawberry poly tunnels respectively. The robotic arm used in this work is an off-the-shelf arm Franka Emika Panda with a 3 kg payload and 7 degrees of freedom. A block diagram of the system is shown in Figure 2. In this work, a novel universal picking head (UPH) for fruit harvesting was designed and introduced. The design of this picking head was to address the shortcoming of the available solutions, specifically for harvesting in dense clusters. We designed, manufactured, and successfully tested the picking head which has 2.5 degrees of freedom to allow harvesting fruits and manipulating possible occlusions independently. In the next sections, the design of the picking head is discussed in detail. Our comprehensive perception system includes an RGB-D sensor and three RGB cameras, a novel dataset, and state-of-the-art algorithms to detect and localise the fruit and determine its suitability for picking. The RGB-D sensor is an Intel Realsense D435i model which is integrated into the picking head design and works based on eye-to-hand principles. 
This sensor provides an RGB image of the plant and a three-dimensional point cloud used to detect fruit, localise the picking point, and predict the ripeness of the fruit. The RGB cameras located underneath the UPH provide a close-range view where the RGB-D sensor loses its view. These sensors are coupled with a novel Mask-RCNN-based algorithm to form the perception system, which is discussed in detail in Section 5. The robotic arm controller, UPH controller, RGB-D sensor, and RGB cameras are all connected to a laptop using either USB or CAN-to-USB adaptors. The laptop has an Intel Core i7 CPU with 16 GB of RAM and runs Ubuntu 20.04.4 LTS (Focal Fossa). We used ROS Noetic (Robot Operating System) as the middleware to integrate all algorithms, sensors, the robotic arm, and the UPH and to establish communication between them. The system includes a second laptop with a powerful GPU (NVIDIA GeForce RTX 2070 SUPER) to handle the computationally demanding perception tasks. The second laptop is connected to the first laptop via an Ethernet connection and communicates through ROS. The system, including the robot, controllers, and other components, can be powered by a domestic power plug or by a DC power source, e.g. a battery; in our field tests, we used both methods, with the DC power sourced from the mobile robot batteries. The system also includes a container to hold the fruit punnets, which can be replaced manually once it is filled. A rough estimate of the cost of the entire system (including the robotic arm, UPH, sensors, and computing system) is around £25k.

### A configurable and modular system

Full integration of the system allows the robot to continuously detect and harvest ripe strawberries along the table rows. Similar to other agricultural robotic harvesters, all sequences are performed in a static condition, i.e. when a strawberry is detected the mobile platform stops, the system harvests all reachable strawberries, and the mobile platform moves on. The block diagram of the system, including all sub-systems and components, is shown in Figure 2, demonstrating the hardware and software architecture of the system.

Figure 1: a) The system comprises a robotic arm (Franka Emika Panda), the designed picking head, an RGB-D sensor, and a fruit container. The system and its components, including controllers, are mounted on a commercial trolley capable of moving on a rail between the strawberry rows. b) All components of the system were reconfigured and mounted on a commercial mobile robot to be tested in a different strawberry growing field.

One of the features of our system in comparison with previous works is its modular design. All main components, i.e. the robotic arm, UPH, perception, and mobile platform, are independent and could be integrated with other models with minimal development requirements. For instance, it is possible to use a variety of off-the-shelf robotic arms depending on the costs and the harvesting conditions, such as indoor/outdoor operation, task space, etc. More importantly, the system is configurable. In other words, the system can be configured for different growing conditions, such as on-rail operation for glasshouses, a mobile robot for fully autonomous harvesting, or an XY gantry mechanism for vertical farming. For this work, we integrated and tested our system on two mobile platforms in different conditions.
First, the developed system including the robotic arm, UPH, and other components was mounted on Thorvald, a four-wheeled mobile robot made by Saga Robotics as shown in Figure 0(b). In this case, the mobile robot was teleoperated remotely using a joystick. The harvesting test was carried out at strawberry-growing polytunnels at the Riseholme campus of the University of Lincoln. This strawberry-growing facility was established for research purposes. The second layout was a commercial strawberry harvesting trolley mounted on rails which was able to move between strawberry rows. In the current stage of this work, the trolley is operated manually, i.e. a human operator pushed the trolley on the rail. However, the automation of the mobile platform on the rail requires a simple solution that is widely available and out of the scope of this work. This field test was performed at the Dyson glasshouse strawberry growing facility which is a leading commercial strawberry grower in the UK as shown in Figure 0(a). In addition to the hardware modularity, the software and control platform of the system is modular and configurable. The different units and control nodes are shown in Figure 2. These units include sensor data acquisition and processing, UPH control, robotic arm control, central control unit, a detection unit, and a ripeness assessor. For this project, we used two laptops to separate computationally heavy fruit detection which requires a powerful GPU, and robot control which requires high frequency. For both laptops, an image processing node is required to provide the correct format as different third-party packages are running on both laptops. These units could be developed independently and integrated depending on the harvesting condition, used hardware, and other requirements. As most available harvesting technologies were developed for a specific growing condition or strawberry variety, they are commercially unavailable. In addition, while they might have good performance for the condition they were designed for, they perform poorly for different conditions. This is important as the high number of combinations of different varieties and different growing conditions makes it impossible to propose a robotic solution for all of them. Figure 2: System block diagram demonstrating all sub-systems and other components including; the robotic arm, mobile platform, perception, and control system. ### System work-flow and algorithm The flowchart of the system algorithm and workflow is shown in Figure 3. As can be seen, the entire algorithm was implemented in two laptops communicating through Ethernet protocol. The whole system consists of several control loops on different levels, i.e. high-level, mid-level, and low-level. Initially, the main loop triggers the robot control loop and perception loop. The robot goes to the home position (i.e., a suitable configuration in which the robot looks at the tabletop strawberries as shown in Fig. 1) and perception acquires the data, i.e. images, depths, point cloud, from multiple sensors. After pre-processing the received data, the perception algorithm detects all strawberries in the field of view and publishes their coordinates through the berry topic. In addition, the perception classifies all detected berries as "pluckable" or "unpluckable" which is included in the berry topic. 
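To make the hand-off between the perception and control loops concrete, the sketch below illustrates the kind of per-berry record that could be carried on the berry topic. The field names and the plain-Python representation are our own illustrative assumptions, not the actual message definition used on the robot.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class BerryRecord:
    """Illustrative per-berry record published by the perception loop (field names are assumptions)."""
    berry_id: int
    picking_point_cam: Tuple[float, float, float]  # 3-D picking point in the RGB-D camera frame [m]
    key_points_px: List[Tuple[float, float]]       # 2-D key points (e.g. top, bottom, grasp points) in pixels
    pluckable: bool                                # True if classified as ready to be picked
    ripeness: float                                # ripeness score in [0, 1]


def pluckable_berries(detections: List[BerryRecord]) -> List[BerryRecord]:
    """Filter the berries that the control loop is allowed to schedule for picking."""
    return [b for b in detections if b.pluckable]


if __name__ == "__main__":
    detections = [
        BerryRecord(0, (0.02, -0.05, 0.35), [(312.0, 240.0)], True, 0.91),
        BerryRecord(1, (0.10, -0.03, 0.38), [(455.0, 251.0)], False, 0.42),
    ]
    targets = pluckable_berries(detections)
    print(f"{len(targets)} pluckable berry/berries detected")
```

Only when this filtered list is non-empty does the control loop described next take over.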
Receiving an "at least one pluckable berry detected" message, triggers the robot control loop which converts the coordinates of berries from the camera frame to the robot base frame and commands the robot end-effector to move to the pre-grasp pose. As transformed coordinates of the picking points of the Strawberries in the robot base frame always contain some level of error, instead of the robot moving directly to the picking point pose, it goes to the pre-grasp pose. The pre-grasp pose is a point with the actual \(Y\) and \(Z\) coordinate and \(X-d\) in the robot base frame where \(d\) is a predefined distance of the end-effector from the berry. If more than one pluckable berries are detected the scheduling algorithm is triggered to determine the sequence of picking in order to improve the efficiency of the system. In the pre-grasp pose, the targeted berry is detected with at least two of the bottom RGB cameras. Using the detected coordinates of the targeted berry in the bottom RGB cameras, the errors of the picking point coordinates are estimated. Using the estimated errors, the picking point coordinates are corrected and the robot moves to the picking pose. in this stage, the strawberry stem should be located in between the gripping fingers where the cutter is able to cut the stem effectively. In addition, the remaining stem on the berry should not be too long which damages the other fruit in the punnet, or too short where the gripper fingers contact the fruit and bruise it. To confirm that the fruit is in the right place, a cutting confirmation algorithm based on the bottom RGB sensor was developed which is described in detail in Section 3. After the confirmation that the fruit is located in the right place, the cutting command is sent and the stem is cut. In this stage, different conditions and scenarios can lead to unsuccessfully cutting and picking. To improve the efficiency of the system, and avoid redundant movement of the robot arm, a picking confirmation is performed before moving to place the fruit in the punnet. After the cutting action, the robot moves back with a pre-defined distance and performs a picking validation as outlined in Section 3. If the picking is confirmed, the robot goes to punnet pose and places the picked berry in the punnet and the sequence is repeated. Placing poses of the fruit in the punnet are pre-defined points by which it is ensured that the fruits are placed evenly in order in the punnet to avoid bruising them. ### Control and Motion planning As the space between the strawberry plants and the robot arm is very limited and the environment contains a high level of uncertainty, there is a high possibility of collision of the robot arm or the end-effector with different objects. The high possibility of collision and limited task space demands a thorough and comprehensive motion planning approach. The manipulator's motion trajectory, velocity, and acceleration should be rectified from the beginning through all the way to putting the fruit in the punnet, including approaching the fruit, cutting action, holding, and placing. In this work, to preserve a collision-free manipulation and prevent dropping/damaging fruit or equipment we defined a set of \(n\) key points through the trajectory of the end effector motion denoted by \(P_{i}\) where \(i=\{0,1,2,...,n\}\). We assumed that the trajectory of the mobile robot is aligned with the fruit tables, hence, the robot arm's base frame is almost the same distance as the fruit tables. 
In this way, the robot arm's home position always is almost the same distance as the fruit tables. Different motion planning algorithms and trajectory/velocity/acceleration profiles were employed for each segment of movement to ensure collision-free and efficient manipulation. At the beginning of the picking process, the end-effector frame is located at \(P_{0}\), i.e. Home Position. The end-effector frame denoted by \(\mathcal{F}_{ee}:\{O_{ee};x_{ee},y_{ee},z_{ee}\}\) is attached to the middle of the gripper fingers. The end-effector frame coordinates are a defined point that coincides with defined trajectory key points during the robot's movement. When the \(\mathcal{F}_{ee}\) locate at \(P_{0}\), the system is initialised and the picking process is commenced. We exploited the Open Motion Planning Library (OMPL) (Sucan A. et al., 2012) which is a sampling-based motion planning and Pilz Industrial Motion Planner to plan the manipulator's movement. Many planners in OMPL (including the default one) favour the speed of finding a solution path over path quality. A feasible path is smoothed and shortened in a post-processing stage to obtain a path that is closer to optimal. However, there is no guarantee that a global optimum is found or that the same solution is found each time since the algorithms in OMPL are probabilistic. Pilz Industrial Motion Planner provides a trajectory generator to plan standard robot motions like Pint to Point (PTP), Line (LIN), and Circle (CIRC) with the interface of a MoveIt planner. For this work, we used the LIN motion command. LIN motion planner connects the start point and end point with a straight line and generates a linear Cartesian trajectory between the goal and starts poses. The planner uses the Cartesian limits to generate a trapezoidal velocity profile in Cartesian space. This planner generates more accurate movement in Cartesian space with a focus on the end-effector trajectory. Figure 4 shows a schematic diagram of the end effector trajectory \(\zeta(t)\), attached frames, and key points. From \(P_{0}\) to \(P_{1}\), i.e. Grasp Pose, we used OMPL for motion planning as it doesn't require high accuracy movement nor a specific trajectory of movement. We set the speed at the highest level as long as preserves the safety and stability of the movement. For reach-to-grasp movement, \(P_{1}\) to \(P_{2}\), Pilz Industrial Motion Planner was employed to ensure the end effector goes through a defined trajectory to minimise disruption of the objects or possible collision. It is the same case for picking and validation action, from \(P_{2}\) to \(P_{3}\). For placing the strawberry in the punnet OMPL was used, however, the movement acceleration should be calculated carefully to avoid dropping the harvested strawberry. Figure 3: Flowchart of the perception and workflow of the system. The algorithm is implemented in two laptops, one running the ROS nodes/topics and high-level controller and the other a GPU-enabled system for running the strawberry detection and scheduling system. Assuming that the mass of the strawberry to be gripped in the end effector is about 50 grams. This is higher than the average mass value recorded during our field studies (Rajendran Sugathakumary et al., 2022). For peak forces (F\({}_{C}\)) about 22.53 N, coefficient of friction of 0.3, and safety factor of 2, under dynamic conditions, using Equation 1 the end effector would be able to handle a 50 g strawberry for a manipulator acceleration of up to about 50 m/s\({}^{2}\). 
\[F_{c}=\frac{m(g+a)S}{\mu} \tag{1}\]

where \(F_{c}\) is the maximum gripping force (N), \(m\) is the mass to be handled (kg), \(g\) is the acceleration due to gravity (m/s\(^{2}\)), \(a\) is the manipulator acceleration (m/s\(^{2}\)), \(\mu\) is the coefficient of friction, and \(S\) is a factor of safety. As a check, with \(F_{c}=22.53\) N, \(\mu=0.3\), \(S=2\), and \(m=0.05\) kg, Equation 1 gives \(a=F_{c}\mu/(mS)-g\approx 58\) m/s\(^{2}\), which supports the conservative 50 m/s\(^{2}\) figure quoted above.

To place the harvested fruit in the punnet, six key points were defined with respect to the frame attached to the punnet, \(\mathcal{F}_{g}\). These points were selected in a way that distributes the fruits evenly in the punnet and avoids bruising or damaging them. From the placing point back to the home position, OMPL with the highest possible speed was again used to increase the efficiency of the system.

Figure 4: The initial (\(t=0\)) and final state (\(t=T\)) of the end-effector trajectory \(\zeta(t)\) (shown with a green line): \(\mathcal{F}_{e}\) is the robot’s end-effector frame, the frame \(\mathcal{F}_{pg}\) is attached to the pre-grasp and post-grasp points, and a grasping configuration \(\mathcal{F}_{p}\) is shown with a frame attached to the picking point. Also, a frame attached to the container, the final goal of the end-effector, is shown as \(\mathcal{F}_{g}\). All frames are expressed in the inertial global frame \(\mathcal{F}_{r}\).

### Harvesting sequence planning

Selecting a berry as the picking target among many possible pickable fruits is important to increase the success rate and reduce possible occlusions. Picking a free fruit that occludes other pickable fruits not only tends to have a higher success rate but also removes a possible occlusion. There are notable studies on sequence planning for robotic harvesting that proposed near-optimal solutions (Kurtser and Edan, 2020). In this work, for simplicity and to reduce the amount of computation, we implemented and tested two methods to determine the target berry, as described in the following:

_1- Target berry selection using Min/Max:_ The min-max algorithm finds the maximum of the minimum distances among all the bounding boxes of the detected berries (a minimal sketch of this rule is given at the end of this subsection). Considering that there are \(N\) berries detected in the top camera image view, the minimum of the distances between each berry and all other detected berries is calculated as:

\[d_{i}^{min}=\min(d_{i,1},d_{i,2},\ldots,d_{i,j});\ \forall i\neq j \tag{2}\]

Then the maximum of these minimum distances is taken as follows:

\[d_{target}=\max(d_{1}^{min},d_{2}^{min},\ldots,d_{N}^{min}) \tag{3}\]

where \(d_{target}\) corresponds to the most isolated berry in a cluster, which is scheduled to be harvested first. This ensures that the difficult-to-reach berries are harvested later, which helps avoid damaging other berries while trying to reach the difficult ones.

_2- Coordinate-based sorting:_ Although the previous method gives a systematic tool to select the target berry, we noticed that it makes a reasonable success rate harder to achieve. One reason is that a change in the viewpoint of the RGB-D camera can change the targeted berry even though the scene itself has not changed. A more efficient and practical approach is to sort berries based on their coordinates in the image frame, e.g. left-to-right or right-to-left depending on the direction of travel of the mobile platform. In this way, the mobile platform does not have to move forwards and backwards to harvest all berries.
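The following is a minimal sketch of the Min/Max selection rule in Equations 2–3. It assumes the detected berries are represented by the 2-D centres of their bounding boxes in the top camera image; the function name is ours.

```python
import numpy as np


def select_target_berry(centres: np.ndarray) -> int:
    """Min/Max target selection (Eqs. 2-3): return the index of the most isolated berry.

    centres: (N, 2) array of bounding-box centres in image coordinates.
    """
    n = centres.shape[0]
    if n == 1:
        return 0
    # Pairwise Euclidean distances between all detected berries.
    diffs = centres[:, None, :] - centres[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)   # exclude d_{i,i}
    d_min = dists.min(axis=1)         # Eq. 2: distance to each berry's nearest neighbour
    return int(np.argmax(d_min))      # Eq. 3: berry whose nearest neighbour is farthest


if __name__ == "__main__":
    # Example: two clustered berries and one isolated berry; index 2 is picked first.
    centres = np.array([[100.0, 200.0], [120.0, 210.0], [400.0, 220.0]])
    print(select_target_berry(centres))  # -> 2
```

In practice, the same routine can be re-run after each successful pick on the remaining detections, so the schedule adapts as the cluster is thinned out.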
## 4 Universal picking head ### concepts and requirement of a universal picking head Picking fruits in different growing conditions is a challenging problem for selective harvesting technology, as it is difficult to design and build an effective robotic device (known as an end-effector or picking head) that is able to deal with complex picking operations. A human's hand enables dexterous manipulation of fruit with 27 degrees of freedom, and over 80 per cent of the grasping information can be encoded into just 6 Eigen grasps. In contrast, conventional robotic end-effectors are customised for specific applications, such as pick-and-place operations in industrial environments (Jarrasse et al., 2014). Currently, there are two types of picking heads available for robotic harvesting of high-value crops: (i) a picking head having a parallel jaw gripper, which may not be suitable for all types of crops, and (ii) a picking head that has a customised design for picking particular fruit in a very specific picking scenario, which is only suitable for a specific type of crop or method of harvesting. Consequently, the effectiveness of commonly available robotic picking heads is limited, as different robotic picking heads may be needed for different crop types. Some robotic picking heads are used to pick soft fruits such as strawberries. Some of the robotic picking heads that are currently available for picking strawberries are cup-shaped picking heads, which have opening parts that locate the peduncle of a strawberry and position the strawberry in front of cutting scissors in order to harvest the strawberry. The cutting action causes the strawberry to detach from the plant and fall into a punnet for collecting the strawberries. In this example, the picking head does not directly touch the flesh of the strawberry, which minimises bruising. However, as the strawberry falls from a height into the punnet, the harvesting can inadvertently cause damage/bruising to the fruit. Furthermore, fruit placement within the punnet is not controlled, which may result in uneven distribution of the fruit in the punnet (which may also cause damage to fruit that are below other fruit). Similarly, the design of the cup-shaped picking head, and the design of other types of picking heads, may not be suitable for harvesting crops that grow in dense clusters. The present design has therefore identified the need for an improved apparatus for the automatic detection, selection, and harvesting of crops that grow in dense clusters. ### Peduncle gripping and cutting force for end-effector design To develop an end-effector solution that attempts to detach strawberries by targeting the peduncle, it is essential to understand certain physical properties of the peduncle. This includes the estimate of the required cutting force and the gripping force that can be applied to the peduncle. Knowing the cutting force while using a particular cutting blade profile, gives insight into a better selection of actuation systems to provide the required cutting force. Moreover, the practice of using off-the-shelf blades takes away the need of investing effort to design optimum blade profiles. Rather such blades can be directly used or can be custom-made according to the standard profile. If the end-effector is designed in such a way as to use interchangeable blades, replacing the blades would be easier during worn-out situations. 
Hence it is wise to use cutting blades with a standard profile and in an interchangeable configuration in the end-effectors. For this work, a comprehensive study on the gripping and cutting forces of strawberry peduncle was carried out (Rajendran Sugathakumary et al., 2022). This study intended to estimate the limit of the gripping force that can be applied to the strawberry peduncle without crushing it. To understand this force limit, experiments were conducted by applying compression force (analogous to the gripping force) to the peduncle specimens using a Universal Testing Machine (UTM). The peduncles of ripe strawberries of both varieties were selected for preparing the specimens. 15 specimens of each variety were prepared so that the peduncle was 10 mm in length and were trimmed at a distance of 10 mm from the top surface of the ripe strawberry fruit. This specimen measurement can simulate the situation where an end effector grips the peduncle within 10-20 mm from the top surface of the strawberry top surface during harvesting. The specimen diameter varied from 1.40 mm to 2.22 mm for Katrina with a mean and standard deviation of 1.75 mm and 0.24 mm. And for Zara, the diameter varied from 1.43 mm to 2.33 mm with a mean and standard deviation of 1.76 mm and 0.25mm. In addition, we studied the force required to cut the peduncle of a ripe strawberry using a standard blade. And as an extension to this, the variation of this force at different cutting orientations was also studied. The profile of the selected cutting blade was studied using a scanning electron microscope. The blade had a double level cutting edge with a blade angle of 16.6\({}^{0}\) and a thickness of 0.22 mm. We studied the cutting force variation for 0\({}^{0}\),10\({}^{0}\), 20\({}^{0}\), 30\({}^{0}\) inclinations. 15 peduncle samples from both Zara and Katrina varieties were prepared for this study, i.e., 15 samples from each variety for each orientation of cut. These specimens were prepared by trimming the peduncle at a length of 30 mm from the top surface of the ripe strawberry. During the experimental trials for studying the limit of the gripping force, all tested specimens showed a common force profile under compression load. While applying compression load to the specimen, there is a gradual increase in the resistive force to a certain point, and from then it shows a sudden drop. After then, the specimen gets squeezed completely on further application of compression load. It has been noticed that, after the drop in the resistive force, the specimen goes into permanent deformation, and finally leads toward complete squeezing. So this peak force (F\({}_{C}\)) before the drop is considered the point of interest. The trend of this force (F\({}_{C}\)) can be studied to limit the gripping force on the peduncle such that there is a lesser chance of squeezing the peduncle during the gripping action. The squeezing or crushing of the peduncle during the gripping action can result in the detached strawberry falling off from the grip during the harvesting process. Considering the peak forces, we determined that the lowest of these peak forces (F\({}_{C}\)) recorded is about 26.83 N and 22.53 N for Katrina and Zara respectively. This means that at these lowest values of compression force (analogous to gripping force), the respective test specimen went into permanent deformation before squeezing. 
Hence if we allocate a factor of safety of 2 to the lowest of these two values of forces (26.83 N and 22.53 N), the gripping force should be limited to around 10 N. In addition, while analyzing the force values recorded during the cutting trials, again a common force profile has been noticed for all the tested specimens. In the profile, there is an increase in force value during the cutting action but with two sudden drops after two force peaks (F\({}_{P1}\) and F\({}_{P2}\)). After the second peak force, there is a flat profile followed by a sharp rise in force after a point (E). This sharp increase happens when the blade touches the peduncle supports after cutting the peduncle off. So from the force profile for each specimen, the maximum of the two peak forces (F\({}_{P1}\) or F\({}_{P2}\)) is taken as the peak cutting force (F\({}_{P}\)) required for that specimen. From the force values recorded at different cutting orientations, it has been noticed that the mean cutting force shows a relatively lowest value at 30\({}^{0}\) orientation compared to other studied orientations. At 30\({}^{0}\) cutting orientation, the maximum of the peak cutting force (F\({}_{P}\)) recorded is about 7.20 N for Katrina, and 5.80 N for Zara. And hence, with a factor of safety 2, the cutting force requirement can be approximated to 15 N which could be considered sufficient to cut the strawberry peduncle at 30\({}^{0}\) orientation. Also, this force would be sufficient to handle other cutting orientations studied. We exploited these results to optimise the design of the end-effector and increase the harvesting success rate. ### End-effector design The proposed robotic end-effector for fruit harvesting, comprising: a vision system for identifying the location of the ripe fruits on the plant; the first pair of fingers for moving any objects that at least partly occlude the identified ripe fruit on the plant (Separators); the second pair of fingers for gripping a stem of the identified ripe fruit (Grippers); and a cutting mechanism for cutting the stem of the identified ripe fruit when the stem is gripped between the second pair of fingers (Cutter), wherein a portion of the stem that remains attached to the fruit remains gripped by the grippers after the stem has been cut. The design and components of the universal picking head are shown in Figures 4(a) and 4(b). In general, the end-effector described herein benefits from 2.5 degrees of freedom, which is higher than those of available picking heads. This added degree of freedom allows the end-effector to deal with complex picking scenarios where the available picking heads fail. The end-effector benefits from an effective combination of actuation systems and sensors to resolve the limitations of currently available picking heads. The end-effector includes three separate movements (moving objects, gripping a stem, and cutting a stem) that are actuated using two actuators. This is useful as the ability of the end-effector is increased without also significantly increasing the complexity or component count of the device. The actuators and detector design for the picking head are shown in Figure 4(b) The presented techniques advantageously enable ripe fruit to be harvested without bruising or damaging the fruit. These techniques are particularly advantageous for harvesting fruit that grows in dense clusters, such as strawberries. Strawberry is an example - and not limited - fruit that may be harvested using the robotic end-effector of the present techniques. 
More generally, the presented techniques may be used to harvest different types of fruit and vegetable crops, including those which grow individually and those which grow in clusters. The robotic end-effector comprises a first actuation mechanism for controlling the actuation of the first pair of fingers. Thus, a dedicated actuation mechanism is used to control the movement and operation of the first pair of fingers, which provides one degree of freedom for manipulating the cluster and removing possible occlusions independently of gripping and holding the fruit.

Figure 5: a) End-effector overall view and its components. The end-effector comprises an RGB-D sensor mounted on top of it for a wider view, three RGB sensors for close-range viewing, a first pair of fingers (separators) for occlusion removal, a second pair of fingers for gripping and holding the strawberry stem (grippers), and a cutting mechanism for cutting the stem (cutter). b) End-effector internal design and components. It includes two independent actuators controlling the grippers, separators, and cutter, providing 2.5 degrees of freedom. The design separates the actuators from the power transmission mechanisms for better heat dissipation and for isolation from dust and water.

Upon receiving the location of an object that at least partly occludes a fruit, the first actuation mechanism controls the first pair of fingers to push the object away by increasing the separation distance between the first pair of fingers. Thus, the first pair of fingers are closed together when the end-effector is being used to image a plant and identify ripe fruits, and/or when it is moving towards an identified ripe fruit. The first pair of fingers are moved further apart when an object that at least partly occludes a fruit needs to be moved away so that the fruit can be better seen (to determine if it is suitable for harvesting) and/or so that the second pair of fingers can grip the stem of the fruit. The cutting mechanism is located in proximity to the second pair of fingers, such that when the stem of the identified ripe fruit is cut by the cutting mechanism, the second pair of fingers continues to grip the portion of the stem that remains attached to the fruit. In other words, when a cutting operation performed by the cutting mechanism is complete, the fruit is not immediately dropped into a container for collecting the harvested fruit. Instead, the fruit continues to be gripped, via the portion of the stem that is still attached to it, by the second pair of fingers. This is advantageous because the robotic end-effector can be controlled to place the harvested fruit gently in a container. The second pair of fingers release their grip on the portion of the stem that remains attached to the fruit when the end-effector is close to the container. The cutting mechanism may be partially or fully covered or encased for safety reasons, i.e. to avoid any risk of a human operator coming into contact with the cutting mechanism. The second pair of fingers is moved by a second actuation mechanism which also moves the cutting mechanism. As the cutting mechanism is only operated once it is confirmed that the target fruit's stem is located between the second pair of fingers, a single actuation mechanism can advantageously be used to control both the second pair of fingers and the cutting mechanism, thereby reducing complexity and the number of components needed to control the end-effector.
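As a purely illustrative sketch of how two actuators can drive the three motions described above (separators; grippers plus a dependent cutter), the following pseudo-driver encodes the picking sequence. The class, method names, and actuator interface are assumptions made for illustration only, not the firmware used on the UPH.

```python
class _StubActuator:
    """Stand-in for a real motor driver; prints the commanded target."""
    def __init__(self, name):
        self.name = name

    def move_to(self, target):
        print(f"{self.name} -> {target}")


class PickingHeadSketch:
    """Illustrative 2.5-DOF sequence: one actuator drives the separators, the other
    drives the grippers and, at the end of its stroke, the dependent cutter."""

    def __init__(self, separator_actuator, gripper_actuator):
        self.separators = separator_actuator   # occlusion-removal fingers
        self.grippers = gripper_actuator       # stem-gripping fingers + cutter

    def remove_occlusion(self, opening_mm):
        # Spread the separator fingers to push occluding fruits/leaves aside.
        self.separators.move_to(opening_mm)

    def grip_and_cut(self, stem_confirmed):
        # The cutter shares its actuator with the grippers: the blade is only triggered
        # by driving the actuator past the gripping position, and only after the stem
        # is confirmed to sit between the gripper fingers.
        if not stem_confirmed:
            return False
        self.grippers.move_to("grip")   # close on the stem
        self.grippers.move_to("cut")    # continue the stroke to trigger the blade
        return True                     # the stem (and fruit) remain held after the cut

    def release(self):
        # Open the grippers only once the fruit is above the punnet.
        self.grippers.move_to("open")


if __name__ == "__main__":
    head = PickingHeadSketch(_StubActuator("separators"), _StubActuator("grippers"))
    head.remove_occlusion(40)                 # push occluding berries aside
    if head.grip_and_cut(stem_confirmed=True):
        head.release()                        # place the fruit and let go of the stem
```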
The vision system comprises a depth sensor for generating a three-dimensional map of the plant. The vision system uses the three-dimensional map to identify the location of ripe fruits on a plant and any objects that at least partly occlude the identified ripe fruits. The depth sensor is an RGB-D (red-green-blue-depth) camera. Furthermore, The vision system includes three RGB image sensors for capturing images of the fruit/cluster of fruits at the bottom of the end effector in the vicinity of the second pair of fingers to enable a better view in close range. Having the RGB sensors in the vicinity of the second pair of fingers is advantageous because the sensors capture images of the fruit or cluster of fruits at the fruit level, whereas other sensors of the vision system may view the fruit from a different perspective/angle. This also reduces the risk of every sensor of the vision system being occluded during the picking process, i.e. it provides some redundancy in the vision system. The effective configuration of RGB and RGB-D sensors helps to efficiently detect and localise the ripe fruits. In addition, the combined sensory information can be used to estimate the size, weight, and sort quality of the fruits to be picked. Also, they are further used to control fruit picking and occlusion removal actions. ## 5 Perception for selective harvesting robotic system Strawberries are grown in dense clusters and they come in different configurations and wide varieties with varying shapes. Moreover, the asymmetric and irregular nature of the stems coming out of the fruit makes it difficult to localize the picking point. Commercially available depth sensors are designed for large objects under controlled lighting conditions. Insufficient quality of depth-sensing technologies makes strawberry picking point localization on stem intractable. Our depth sensing, namely RealSenhas 435i, which is widely used in robotics, has deteriorated performance under bright sunlight in farm conditions. In addition, it is designed to work best for distances larger than 50 [cm] where the preciseness drops to 0 for distances below 15 [cm]. As picking strawberries by grasping them contributes to their bruising, we considered picking by griping and cutting the fruit stem requiring picking point (PP) localization. However, localizing the picking point in the depth image is challenging because of the low resolution of RealSense 435i, in particular where the distance is below 15 [cm]. Traditional methods rely on the color-based (Arefi et al., 2011), geometry-based, or shape-based (Li et al., 2020) methods. Strawberries are neither very symmetrical nor their orientation is fixed making an accurate assumption, or shape-based prediction about their stem location difficult. Instead, we take advantage of the Deep Learning (DL)-based strawberry segmentation and key-point detection method proposed by Tafuro et al (Tafuro et al., 2022a). We use the key points to understand the orientation of the strawberry and pose the end-effector. The key point also localizes the picking point to a reasonable accuracy. However, due to the very thin cross-section of a strawberry stem, this cannot always exactly localize the stem giving rise to inaccuracies in the depth perception of the picking point. In addition, we address the lack of precise depth sensing and fine-tuning the localization in close proximity by a novel camera configuration (short and medium-distance focused cameras) and a combination of localization for short and medium distances. 
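The Gaussian Process Regression approach to picking-point error estimation mentioned earlier is not detailed in this section, so the snippet below is only a hedged illustration of how such a regressor could map close-range cues (for example, the target berry's pixel offsets in two of the bottom RGB cameras) to a correction of the picking-point coordinate. The feature choice, the toy data, and the scikit-learn usage are our assumptions, not the trained model deployed on the robot.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative (assumed) training data: each row holds the target berry's pixel
# offsets observed in two bottom RGB cameras; each label is the measured error of
# the picking-point estimate along one axis of the robot base frame [m].
X_train = np.array([
    [12.0,  -4.0, 10.5, -3.5],
    [-8.0,   6.0, -7.0,  5.0],
    [ 3.0,   1.0,  2.5,  0.5],
    [20.0, -10.0, 18.0, -9.0],
])
y_train = np.array([0.012, -0.008, 0.002, 0.019])

# Smooth RBF kernel plus a white-noise term to account for sensor/annotation noise.
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# At run time, the predicted error (and its uncertainty) can be subtracted from the
# picking-point coordinate before commanding the final approach.
x_query = np.array([[10.0, -3.0, 9.0, -2.5]])
err_mean, err_std = gpr.predict(x_query, return_std=True)
print(f"predicted picking-point correction: {err_mean[0]:+.4f} m (+/- {err_std[0]:.4f})")
```

The predictive standard deviation is useful here as well: a large uncertainty can be treated as a cue to re-observe the berry rather than commit to the cut.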
The system uses an Intel Realsense D435i depth-sensing camera as the main sensing system. The device is placed at the back and top of the end effector which allows sufficient distance from the cutting action for the depth sensor to be reasonably accurate. In addition to the Realsense camera, there are three RGB cameras placed at the front bottom of the end-effector. At this close range, it is also not feasible to calibrate the RGB cameras as a stereo system. Instead, we carefully find the project of 3D coordinates of the strawberry segmentation into 2D pixel coordinates of the front RGB cameras. Then, based on the coordinates of the strawberry bounding box visible in each image we trained an AI-Based model to make meticulously adjust the position of the end-effector to the final cutting position. In addition to localizing the fruit and picking point, the perception should be able to determine the ripeness, size, and shape of the fruit and determine whether it is suitable for picking or not. we present two novel datasets of strawberries annotated with picking points, key-points (such as the shoulder points, the contact point between the calyx and flesh, and the point on the flesh farthest from the calyx), and the weight and size of the berries. We performed experiments to predict if the fruit is suitable for picking or not. In contrast to the existing works in which classic CV methods are used to determine picking points and suitability for picking, our approach includes SOTA MRCNN models. We collected two datasets to train our models: Dataset-1 is collected at a new _15-acre_ table-top strawberry glasshouse in Carrington, Lincolnshire, which is the latest addition to SOTA Dyson Farming's circular farming system (Dyson, 2022); Dataset-2 has been derived from the Strawberry Digital Images (SDI) (Perez-Borrero et al., 2020). Dataset-1 is a novel dataset that presents strawberry dimensions, weights, suitability for picking, instance segmentation, and key-points for grasping and picking action. The main purpose of this dataset is to facilitate autonomous robotic strawberry picking. For each strawberry, the dataset presents five different key-points: the picking point (PP), the top, and bottom points of the fruit, the left grasping point (LGP), and the right grasping point (LGP). While the PP indicates the position on the stem where the cutting action has to be performed, the left and right grasping points can provide a reference to the end effector for the grasping action. In addition, the dataset also contains annotation for instance segmentation for each of the strawberries. To determine the suitability of strawberries for harvesting each strawberry is labeled as 'pluckable'-ready to be picked- or '"unpluckable"'-not to be picked-. "unpluckable"'-"unpluckable" strawberry include unripe, semi-, and over-ripe or rotten berries. The 'pluckable' category includes strawberries that are nearly ripe and perfectly ripe. The dataset contains a set of 532 strawberry sets. Each set has three colors, depth, and point cloud data of the same strawberry cluster from different distances. The farthest image captures the entire cluster whereas the nearest image focuses on one target strawberry in the cluster. In total, this dataset includes 1588 strawberry images. All the images have been captured with Intel Realsense RGB-D sensor _D435i_. Dataset-2 is an enhancement of the SDI dataset (Perez-Borrero et al., 2020). SDI dataset contains a total of 3100 images. 
### Segmentation, Key-points and Pluckable Detection Our proposed approach uses Detectron-2 (Wu et al., 2019) for segmentation and key-point estimation. The Detectron-2 model is based on MRCNN (He et al., 2017) and has become the standard for instance segmentation. It also has the added capability of key-point detection for human pose estimation. We adapted this key-point detection method integrated within Detectron-2 to estimate the strawberry key-points. The datasets' key-points, segmentation masks, and strawberry categories ('pluckable' and 'unpluckable') are converted to the MSCOCO JSON format (Lin et al., 2015). This MSCOCO JSON is the default format for feeding data into Detectron-2. It is also essential to recalculate the bounding box. Without the key-points, the bounding box aligns to the extremities of the segmentation mask. However, the PP key-point lies outside the segmentation mask of the strawberries and thus outside the bounding box. Because of the nature of MRCNN, a key-point outside the bounding box is not detectable. Thus, the bounding boxes are expanded to accommodate all the key-points (see the sketch below). We performed experiments with three backbone networks for Detectron-2: R50-FPN, X101, and X101-FPN. ResNeXt (Xie et al., 2017) (X101-FPN and X101) is a more recent network that was introduced as an improvement to ResNet-50 (R50-FPN). Section 6.1 discusses the results in detail. ### Perception Setup We use two laptops to control the entire system and run the perception system. The first laptop runs the ROS nodes as shown in Figure 2 (left). The second laptop, with an Nvidia GPU, runs the Detectron node as shown in Figure 2 (right). The vision system consists of four cameras: an Intel RealSense D435i color and depth-sensing camera and three color (RGB) cameras. In robotic perception, depth-sensing cameras are essential for the 3D localization of the target and for generating a point cloud. However, most commercially available depth cameras, including the RealSense D435i, are not suitable for close-proximity sensing (\(\leq 15\ cm\)). Thus, for the depth sensing to be feasible, the RealSense camera is mounted on the back and top of the UPH (Fig. 5a). However, due to the positioning of the RealSense, the berries are occluded by the picking head during close-proximity maneuvering. So, to compensate for the lack of depth sensing and for occlusion during close-proximity maneuvering, three RGB cameras are additionally mounted for fine-tuning the trajectory, cutting confirmation, and picking success validation. Considering that the purpose of these RGB cameras is error reduction rather than primary strawberry detection, these cameras were mounted firmly on the front bottom of the picking head with a 25 mm horizontal distance between them. ### Perception pipeline and approach The berry plants are on both sides of a lane in strawberry farms with a tabletop system. Hence, our robot, similar to human pickers, picks the ripe fruits on the table on one side before picking the fruits on the other side. This simplifies motion planning, as collision is checked for one side of the row of tabletop strawberries. This was achieved using joint constraints in the robot arm planning.
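Returning to the data-preparation step for Detectron-2 described in the segmentation subsection above, the sketch below shows how a COCO-style bounding box can be expanded so that the picking-point key-point on the stem falls inside it; the helper names are ours and the snippet illustrates the idea rather than the authors' exact implementation.

```python
def expand_bbox_to_keypoints(bbox, keypoints, pad=2.0):
    """Grow a COCO [x, y, w, h] box so every labelled key-point (e.g. the picking point,
    which lies on the stem outside the mask-derived box) falls inside it.

    `keypoints` is a flat COCO list [x1, y1, v1, x2, y2, v2, ...]; points with
    visibility v == 0 are unlabelled and ignored. `pad` adds a small pixel margin."""
    x0, y0, w, h = bbox
    x1, y1 = x0 + w, y0 + h
    for kx, ky, v in zip(keypoints[0::3], keypoints[1::3], keypoints[2::3]):
        if v == 0:
            continue
        x0, y0 = min(x0, kx - pad), min(y0, ky - pad)
        x1, y1 = max(x1, kx + pad), max(y1, ky + pad)
    return [x0, y0, x1 - x0, y1 - y0]


def to_coco_annotation(ann_id, image_id, category_id, segmentation, bbox, keypoints):
    """Assemble one MS-COCO style annotation dict for Detectron-2 key-point training."""
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,           # e.g. 1 = 'pluckable', 2 = 'unpluckable'
        "segmentation": segmentation,          # list of polygons
        "bbox": expand_bbox_to_keypoints(bbox, keypoints),
        "keypoints": keypoints,                # [x, y, v] for each of the 5 key-points
        "num_keypoints": sum(1 for v in keypoints[2::3] if v > 0),
        "iscrowd": 0,
    }
```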
Let \(x_{c}^{top}\), \(x_{c}^{left}\), \(x_{c}^{middle}\), and \(x_{c}^{right}\) be the coordinates of a strawberry in the top (Intel RealSense), left, middle, and right color cameras, respectively. Note that \(c\in\{x,y,z\}\) for \(x_{c}^{top}\), i.e. 3D, and \(c\in\{x,y\}\) for \(x_{c}^{left}\), \(x_{c}^{middle}\), and \(x_{c}^{right}\), i.e. 2D. \(\mathcal{P}^{top}\) is the camera projection matrix of the top camera for translating image (pixel) coordinates to camera-link coordinates. It consists of the camera intrinsic (\(K^{top}\)) and extrinsic (\(R^{top}\), \(t^{top}\)) parameters, where \(R\) and \(t\) stand for rotation and translation, respectively. \(T^{top}\) is the camera calibration parameter/matrix which translates the camera-link coordinates to the robot base frame. \(K\) denotes the intrinsic parameters of the left, middle, and right color cameras. \(T^{left}\), \(T^{middle}\), and \(T^{right}\) are the camera calibration parameters/matrices for projection from the robot base frame to the camera coordinates. The sequence of steps for the picking action is shown in Figure 2 and described below: * At the home position, all strawberries are detected in the top (RealSense) camera image frame and scheduled. The depth estimation for the picking point is not reliable for something as thin as the strawberry stem. Thus the 2D segmented strawberry pixels are used as binary masks on the depth image to filter the depth pixels belonging to the strawberry. Further, depth pixels indicating depths less than 20 cm and more than 50 cm are filtered out. The minimum distance of the gripping point from the RealSense is roughly 20 cm, so any strawberry pixel should be more than 20 cm away. Further, the initial position of the robot cannot be more than 50 cm from the berry due to the farm structure. Then we take the average value of the remaining strawberry depth pixels, which gives us a more reliable 3D coordinate \(x_{c}^{top}\) * The scheduled berry coordinate is transformed from the top image or pixel coordinate to the robot base frame using the camera calibration matrix \[X_{c}^{base}=\mathcal{T}_{base}^{top}\mathcal{P}^{top}x_{c}^{top}\] (4) \[where\ \mathcal{P}^{top}=[R^{top},t^{top}]K^{top}\] (5) Basically, the pixel coordinate is first transformed to the camera-link coordinate using the camera intrinsic and extrinsic parameters (Eq. 5), and then the camera-link coordinate is transformed to the robot base frame (Eq. 4). * Using careful calibration, the targeted berry coordinate \(X_{c}^{base}\) in the top camera is back-projected to the bottom left, middle, and right cameras using the camera intrinsics (\(K^{left}\), \(K^{middle}\), \(K^{right}\)) and camera calibration parameters (\(T^{left},T^{middle},T^{right}\)). \[x_{c}^{right^{\prime}}=K^{right}\mathcal{T}_{right}^{base}X_{c}^{base}\] (6) \[x_{c}^{middle^{\prime}}=K^{middle}\mathcal{T}_{middle}^{base}X_{c}^{base}\] (7) \[x_{c}^{left^{\prime}}=K^{left}\mathcal{T}_{left}^{base}X_{c}^{base}\] (8) * This back-projection (Eqs. 6, 7 and 8) helps in establishing a target berry association in the top and bottom cameras at the robot home position based on the image-plane position error. \[\gamma^{left}=x_{c}^{left}-x_{c}^{left^{\prime}}\] (9) \[\gamma^{middle}=x_{c}^{middle}-x_{c}^{middle^{\prime}}\] (10) \[\gamma^{right}=x_{c}^{right}-x_{c}^{right^{\prime}}\] (11) * \(\gamma\) is the error between the projected bounding box and the bounding boxes detected in the bottom cameras. We consider the berry with the lowest \(\gamma\) to be the targeted berry in each camera view. The error arises mainly from inaccuracies in the depth perception of the RealSense camera as well as errors in camera calibration. * From this point on, for any arm movement, this berry association is identified in the bottom cameras for later fine-tuning, grasp alignment, and cutting confirmation. * The arm then moves to a pre-grasp pose at a stand-off distance (D) along the depth axis while aligned to the strawberry picking point in X and Y coordinates (\(X_{x^{\prime}y^{\prime}}^{base}\)). * Once the arm is in the pre-grasp pose, the UPH gripping point is adjusted based on the estimated errors. The goal here is to move the gripper to the picking point (\(X_{z}^{base}\)) based on the initial depth sensing. However, the X and Y alignment is continuously fine-tuned, where the goal is to locate the grasp pose of the strawberry stem in between the gripper fingers. Due to errors in depth sensing and camera calibration, the gripper cutting point does not align perfectly with the strawberry picking point. The novelty of the proposed method is that we back-project the 3D coordinate to 2D coordinates in the front color cameras. By taking the fruit with the lowest \(\gamma^{left}\) and \(\gamma^{right}\) as the targeted fruit, we are able to associate the same berry with the bottom color cameras and fine-tune the X and Y alignment based on more reliable strawberry coordinates in the bottom cameras. This enables the system to align accurately with the strawberries along the X- and Y-axes.
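The localization steps above (Eqs. 4-11) can be summarized in a short numpy sketch; it assumes a pinhole camera model, metric depth aligned to the colour frame, and calibration matrices obtained offline, and it is only an illustration of the pipeline, not the authors' code.

```python
import numpy as np

def berry_point_in_base(depth_m, mask, pixel, K, T_base_cam, near=0.20, far=0.50):
    """Estimate the berry's 3D picking-point coordinate in the robot base frame (cf. Eqs. 4-5).

    depth_m    : HxW depth image in metres, aligned to the colour frame
    mask       : HxW boolean instance mask of the target berry
    pixel      : (u, v) picking-point pixel from the key-point detector
    K          : 3x3 intrinsic matrix of the top camera
    T_base_cam : 4x4 homogeneous transform from camera link to robot base"""
    d = depth_m[mask]
    d = d[(d > near) & (d < far)]        # discard depths outside the plausible 20-50 cm band
    if d.size == 0:
        return None
    z = float(d.mean())                  # averaged berry depth is more reliable than the stem pixel
    u, v = pixel
    x = (u - K[0, 2]) * z / K[0, 0]      # pinhole deprojection
    y = (v - K[1, 2]) * z / K[1, 1]
    return (T_base_cam @ np.array([x, y, z, 1.0]))[:3]

def associate_target(p_base, detections, K, T_cam_base):
    """Back-project the scheduled berry into a bottom camera (cf. Eqs. 6-8) and pick the
    detection with the smallest image-plane error gamma (cf. Eqs. 9-11).

    detections : list of (u, v) bounding-box centres detected in this camera view."""
    p_cam = (T_cam_base @ np.append(p_base, 1.0))[:3]
    uvw = K @ p_cam
    proj = uvw[:2] / uvw[2]                                   # projected pixel coordinates
    gammas = [np.linalg.norm(np.asarray(c) - proj) for c in detections]
    best = int(np.argmin(gammas))                             # targeted berry in this view
    return best, gammas[best]
```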
## 6 Field experiments results In order to validate the integrity of the system and verify its accuracy, field tests were carried out at two different sites. First, experiments were conducted at the Berry Gardens strawberry poly-tunnels at the Riseholme campus of the University of Lincoln in 2021. The second field test was carried out at a commercial strawberry growing glasshouse facility owned by Dyson Farming in Carrington, England, in 2022. Figures 1(a) and 1(b) show the system harvesting at the Dyson glasshouse and Berry Gardens, respectively. As can be seen, at Berry Gardens the harvesting system was mounted on a Thorvald mobile robot, and at the Dyson glasshouse it was mounted on a commercial harvesting trolley. The Berry Gardens poly-tunnel site is a research strawberry farming facility with two main strawberry varieties, Driscoll Zara and Driscoll Katrina. The variety Zara has a longer calyx compared to the Katrina variety, which makes it more complicated for robotic harvesting. The mean diameter for the Katrina variety is 1.75 mm with a standard deviation of 0.24 mm, and for the Zara it is 1.76 mm with a standard deviation of 0.25 mm (Rajendran Sugathakumary et al., 2022). The strawberry variety at the Dyson glasshouse is a commercial variety that is widely grown and available. Figure 6 demonstrates the harvesting sequence, including removing a possible occlusion using the separator fingers. The separator fingers penetrate in between the occluding fruits and, by opening, remove the occlusion and make way for the gripping fingers to grasp the targeted fruit's stem and cut it. In addition, the separator fingers can be used to remove detection occlusion as well, to allow the perception system to detect all possible fruits.
### Segmentation, Key-points and Pluckable Detection Results Table 1 summarises the results for segmentation and key-point detection of strawberries for both datasets with Detectron-2 (Wu et al., 2019). Although the different backbones used in our experiments produce consistent results across the datasets, the ResNeXt-based model performs better than the ResNet-50-based model. The first two column groups of Table 1 show segmentation Average Precision (AP) values for pluckable and unpluckable berries separately. The sub-columns show AP at Intersection over Union (IoU) thresholds of 0.5, 0.7, and 0.9. The standard practice is to consider an IoU threshold of 0.5 (He et al., 2017); however, we also report results up to an IoU threshold of 0.9. Using Dataset-2, our proposed models yield decent AP values for both pluckable and unpluckable strawberries at the IoU threshold of 0.5. However, as shown in Table 1, the 'unpluckable' berries in Dataset-2 significantly outnumber the 'pluckable' berries. This results in better segmentation performance for 'unpluckable' berries. The performance drops significantly for the stricter thresholds of 0.7 and 0.9. Dataset-2 represents berries in very dense clusters and is thus a very challenging dataset that has the potential to further advance the research in selective harvesting. Figure 6: The harvesting sequences to remove a possible occlusion and pick fruit. The separator fingers penetrate in between the occluding fruits and by opening remove the occlusion and make way for gripping fingers to grasp the targeted fruit's stem and cut it. On the other hand, Dataset-1 shows very reliable AP values for pluckable strawberries for both backbones across IoU thresholds. With an IoU threshold of 0.5, Detectron-2 produces AP values of 93.32 (R50-FPN) and 94.19 (X101-FPN), while with a very strict IoU threshold of 0.9 Detectron-2 provides AP values of 83.55 and 88.70 with R50-FPN and X101-FPN, respectively. This shows that the dataset can be used reliably for selective harvesting. For Dataset-1, the performance of our models on 'unpluckable' berries is comparatively less reliable, as there are fewer samples of 'unpluckable' berries in this dataset. However, from a selective harvesting perspective, instance segmentation of 'pluckable' berries is more essential. The results of key-point detection, expressed in terms of AP at different IoU thresholds, are similar to segmentation. At each IoU threshold, we take the average results from 0.5, 0.3, and 0.1 OKS. OKS (Wu et al., 2019) is the standard performance metric used by Detectron-2 (Wu et al., 2019) and MSCOCO (Lin et al., 2015) for key-point detection. While the OKS threshold normally used is 0.5, 0.1 is a stricter threshold. The experimental results show that, similarly to segmentation, the results are consistent across the two backbones, although X101-FPN performs slightly better. Also, the key-point detection for 'pluckable' berries is much better than for 'unpluckable' berries for Dataset-1. The results for Dataset-2, obtained comparing the X101-FPN and X101 networks, provide a good baseline for future research. Figure 7 shows an example of creating a fruit bounding box, key-point detection, and predicting pluckable and unpluckable fruits.
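For reference, the IoU thresholds used in Table 1 compare predicted and ground-truth regions as in the minimal box-IoU sketch below (mask IoU is computed analogously on binary masks); this is standard practice rather than anything specific to our system.

```python
def box_iou(a, b):
    """Intersection over Union of two axis-aligned boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as a true positive at, e.g., the strict 0.9 threshold only if
# box_iou(prediction, ground_truth) >= 0.9 and the class ('pluckable'/'unpluckable') matches.
```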
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Backbone} & \multicolumn{3}{c}{Seg Pluckable} & \multicolumn{3}{c}{Seg 'unpluckable'} & \multicolumn{3}{c}{Key-points Pluckable} & \multicolumn{3}{c}{Key-points 'unpluckable'} \\ & & 0.5 & 0.7 & 0.9 & 0.5 & 0.7 & 0.9 & 0.5 & 0.7 & 0.9 & 0.5 & 0.7 & 0.9 \\ \hline \multirow{2}{*}{Dataset-1} & R50-FPN & 93.32 & 90.97 & 83.55 & 59.46 & 53.61 & 42.91 & 91.27 & 89.10 & 81.90 & 51.36 & 46.20 & 37.30 \\ & X101 & 94.19 & 92.83 & 88.70 & 61.12 & 56.22 & 45.64 & 92.71 & 91.40 & 87.74 & 61.26 & 56.52 & 46.84 \\ \hline \multirow{2}{*}{Dataset-2} & X101-FPN & 71.12 & 64.70 & 43.24 & 76.83 & 74.52 & 68.79 & 64.32 & 58.93 & 39.92 & 73.26 & 71.39 & 66.46 \\ & X101 & 72.12 & 66.84 & 47.86 & 78.09 & 76.65 & 70.30 & 59.29 & 54.40 & 42.12 & 74.67 & 71.45 & 65.30 \\ \hline \hline \end{tabular} \end{table} Table 1: Segmentation and key-point detection results. The sub-columns show AP at Intersection over Union (IoU) thresholds of 0.5, 0.7, and 0.9. Figure 7: We introduce two novel datasets targeted toward the robotic selective harvesting of strawberries. The datasets provide instance segmentation, 'pluckability', key-points, and weight information about the strawberries. ### Gaussian Process Regression for picking point error estimation results During the field experiments, the results of picking point error estimation using the Gaussian Process Regression model were recorded and are shown in Figure 8. It presents the predicted x, y, and z error of the fruit coordinate in the robot base frame as a function of the Euclidean distance of the fruit from the base of the robot. After predicting the errors using the model, the targeted fruit picking point coordinate is corrected and the new coordinate is sent to the system to re-plan the trajectory of the end-effector. The results show that the errors fluctuate around a specific mean value for each axis. The mean error for the x-axis is \(ME_{x}=0.062m\) with a standard deviation of \(\sigma_{x}=0.012\), for the y-axis it is \(ME_{y}=0.009m\) with a standard deviation of \(\sigma_{y}=0.014\), and for the z-axis it is \(ME_{z}=-0.019m\) with a standard deviation of \(\sigma_{z}=0.016\). Figure 8: Predicted x, y, and z error of the fruit coordinate in the robot base frame using the Gaussian Process Regression model.
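A minimal sketch of such a picking-point error model is given below using scikit-learn; the single-feature input (Euclidean distance to the base), the kernel, and the numeric training values are illustrative assumptions chosen to be consistent with the reported mean errors, not the actual training data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Training data: Euclidean distance of the fruit from the robot base (input) and the
# observed picking-point offsets along x, y, z in the base frame (targets).
# The numbers below are illustrative, roughly matching the reported mean errors.
dist = np.array([[0.35], [0.42], [0.47], [0.55], [0.61]])          # [m]
err_xyz = np.array([[0.061, 0.010, -0.018],
                    [0.060, 0.008, -0.020],
                    [0.064, 0.011, -0.021],
                    [0.063, 0.007, -0.017],
                    [0.062, 0.009, -0.019]])                        # [m]

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.1) + WhiteKernel(1e-4),
                               normalize_y=True).fit(dist, err_xyz)

# At run time, predict the expected offset for the scheduled berry and correct the goal pose.
pred_err = gpr.predict(np.array([[0.50]]))
corrected_goal = np.array([0.10, -0.25, 0.40]) - pred_err[0]        # hypothetical target in [m]
```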
### Harvesting analyses We perform a series of experiments where at each attempt we consider all the berries in a series of clusters. This typically means a target range of around 5 to 15 ripe fruits in a cluster, among other not-yet-ripe fruits. Different trials in multiple harvesting sessions were carried out. The data collected from the harvesting trials are presented in Table 3. For this experiment, first all fruits \(N_{a}\) in the harvesting section, including ripe and unripe fruits, were counted. To determine the performance of the fruit ripeness detection of the system, the pluckable fruits \(N_{p}\) were also counted based on human judgment and compared with the system's ripeness detection \(N_{d}\). The ratio of pluckable fruit to all fruits including unripe fruits (\(N_{p}/N_{a}\)) was 0.42. The result of calculating the ripeness detection ratio (\(N_{d}/N_{p}\)) shows 95% accuracy of our ripeness detection model. The trials show that the performance of the robot seemed to be influenced by the position of the fruit in the cluster, which varies significantly from one variety to another. The failure rate of the robot increases if the target fruit is occluded by too many fruits and/or leaves. However, our novel design of the end-effector, equipped with separator fingers, is able to unblock the most common occlusions. The successfully harvested fruits were counted during the harvesting trials. A harvest attempt was considered a successful harvest where a ripe fruit was detected, gripped, cut, and put in the punnet without damage or bruising. Where a fruit dropped midway, was bruised or damaged by cutting or gripping, or was harvested with too long a stem remaining on the fruit, the attempt was considered an unsuccessful harvest. Therefore, the successful harvesting rate \(S_{r}\) was calculated as \(S_{r}=(N_{s}/N_{p})\times 100\), where \(N_{s}\) is the total harvested fruits and \(N_{p}\) is the total pluckable fruit. The results presented in Table 3 show \(S_{r}=83\%\) for all trials. Another important parameter is the success rate on detected fruits, which is calculated as \(SD_{r}=(N_{s}/N_{d})\times 100\), where \(N_{d}\) is the total number of fruits detected by the system's detection model. This parameter shows the performance of the designed end-effector, position error estimation model, and motion planning control system, regardless of whether a fruit is detected or not. The results show \(SD_{r}=87\%\) for all trials. Table 2 shows the performance of our system in comparison to existing approaches that have evaluated their systems. Among other methods, only (Ge et al., 2019) performs close to this work. It is not surprising that (Ge et al., 2019) also relies on MRCNN for strawberry detection. Moreover, they also propose their own algorithm for refining the depth information obtained from the depth sensor. To analyze the unsuccessful harvests, five more parameters were defined and recorded during the field experiments: total attempts \(A_{t}\), cut command failure \(F_{c}\), gripping/cutting failure \(F_{gc}\), picking validation failure \(F_{v}\), and position failure \(F_{p}\). The total attempts is the sum of all robot attempts to harvest all detected fruits in a trial. Cut command failure is when the robot end-effector is successfully positioned at the picking point, but the system fails to detect the fruit and send a cutting command. The gripping/cutting failure parameter covers unsuccessful or partial cutting of the stem and/or unsuccessful gripping, leading to harvesting failure. Picking validation failure happens when the stem of the targeted fruit is gripped and cut successfully, but the system fails to validate the picking and does not put the fruit in the punnet. The position failure parameter shows the failures due to inaccurate localization of the picking point, where the end-effector fails to grasp and/or cut the stem of the targeted fruit. The results show that in total 201 attempts were conducted by the robot to harvest all detected pluckable fruit, which is 1.23 attempts per fruit. It can be seen that there are 66 failed attempts, of which 12% are the result of detection failure, 29% are due to cut command failure, 14% are because of gripping/cutting failure, 9% are the result of picking validation failure, and finally, 26% are the result of position inaccuracy.
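The trial statistics above can be reproduced from the Table 3 totals with a few lines of Python; note that the number of detected fruits is derived here as pluckable minus not-detected (163 - 8 = 155), which is an inference from the table rather than a separately reported value.

```python
def harvest_metrics(n_all, n_pluckable, n_detected, n_success, n_attempts):
    """Aggregate field-trial metrics as defined in the text (inputs from Table 3 totals)."""
    return {
        "pluckable_ratio":    n_pluckable / n_all,        # 163 / 377 ~ 0.43 (reported as 0.42)
        "detection_rate":     n_detected / n_pluckable,   # N_d / N_p ~ 0.95
        "success_rate_Sr":    n_success / n_pluckable,    # N_s / N_p ~ 0.83
        "success_rate_SDr":   n_success / n_detected,     # N_s / N_d ~ 0.87 (success among detected fruit)
        "attempts_per_fruit": n_attempts / n_pluckable,   # 201 / 163 ~ 1.23
    }

# Values from the Table 3 totals; n_detected = 163 - 8 is our derivation.
print(harvest_metrics(n_all=377, n_pluckable=163, n_detected=155, n_success=135, n_attempts=201))
```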
The histogram of the time to pick one fruit successfully is presented in Figure 9. The time of harvest of one fruit was measured from the capture of the fruit image to putting that fruit in the punnet. The average execution time from capturing the image to placing the fruit was 28.2 s. In addition, the total time of a trial was measured, as can be seen in Table 3. This time was measured from starting the robot at the beginning of a trial to the last harvest of that trial. The average time per successful harvest from this measurement is slightly higher than the average picking time. This is due to failed/delayed detections, failed attempts, etc. ## 7 Discussion The results of the field experiments demonstrate the robustness and effectiveness of the new robotic harvesting system. The experiments in two different growing conditions and with three different strawberry varieties show the high adaptability of the system. The modular characteristics of the system, alongside its high ability to be reconfigured for the required conditions, result in distinguished performance. This ability is highly important for generalising the developed technologies to other fruits or growing conditions with minimal changes. The current alternatives have been designed and developed based on specific needs and conditions, which makes it challenging to adapt them to other environments. \begin{table} \begin{tabular}{l|c} \hline Author & Success rate (\%) \\ \hline (Dimeas et al., 2015) & 65 \\ (Hayashi et al., 2010) & 65* \\ (Feng et al., 2012) & 71 \\ (Ge et al., 2019) & 74 \\ This work (\(S_{r}\)) & 83 \\ This work (\(SD_{r}\)) & 87 \\ \hline \end{tabular} \end{table} Table 2: Comparison of picking success rate after the first attempt with other strawberry harvesting systems. *Success of peduncle detection. The vision system and ripeness detection proved to be effective, with just 4.9% of the ripe fruit not detected. From the graph shown in Figure 8 it can be seen that the localisation of the vision system contains significant errors; however, the errors fluctuate around a specific value. Some of these errors could be caused by the system's internal calibration errors. For instance, camera calibration errors, or errors in the transformation from camera coordinates to robot coordinates, can contribute to internal calibration errors. Another source of error could be the different lighting conditions of the experiment environment. Regardless of the source or nature of the errors, the proposed Gaussian Process Regression picking point error estimation method proved to be effective in mitigating the position errors. From Table 3 it can be seen that just 8.4% of the attempts failed due to position error. It is notable that we used a limited dataset to train the model. A more comprehensive dataset with more data points could enhance the performance of the model in different conditions. The cutting confirmation and picking validation methods were also tested during the field experiments. This sub-system improves the efficiency of the whole system by reducing redundant movements of the robot. However, the results show that around 12.9% of all failed harvesting attempts were because of cutting confirmation and picking validation failure. One reason for these failures observed during the field experiments was the lighting condition of the environment. As the strawberry, in general, has a shiny texture that reflects direct sunlight, in some scenarios the direction of the sunlight reflection prevents the RGB sensors from picking up and transmitting the correct colour.
In this situation, the pixels show bright colours instead of red, which leads to classifying the picking as unsuccessful or the cutting command as not sent. Different factors affect the harvesting time of the robotic system. Figure 9 shows that the harvesting time varies from 21 seconds to 35 seconds with an average of 28.2 seconds. Field experiment observations indicate that one reason for the time variation is robot correction movements to adjust to the targeted fruit location and/or to bring the targeted fruit into the field of view of the bottom RGB sensors. Figure 9: The distribution of the time of picking one fruit. The time was measured from the beginning of capturing the image of the targeted fruit using the vision sensor to the end of placing the harvested fruit in the punnet. In some scenarios, the robot carries out multiple adjustment movements, such as moving back, left, right, down, or up, to bring the targeted fruit into the bottom sensors' field of view. These adjustment movements are important to capture the targeted fruit, which is later used for validation, although they might increase the harvesting time slightly. The field experiments demonstrated the effectiveness of the novel design of the picking head. Only 4.4% of all attempts failed because of cutting or gripping failure. This means that the picking head design was effectively capable of a stable grip and a successful cut of the strawberry stem. The picking head is capable of grasping and manipulating the harvested strawberry without contact with the fruit. In contrast, most available technologies handle the fruit by grasping it using grippers or suction cups, or directly drop it into the container after cutting the stem. Our design grasps and handles the fruit by its stem, significantly reducing the possibility of bruising or damaging the fruit. In addition, an ineffective cutting mechanism leads to partial cutting, which damages the plant and increases the possibility of plant disease. We tested the design on different strawberry varieties, which have different stem diameters and strengths. The experiments proved that the cutting mechanism is able to cut the stems of the different varieties effectively. (A video of the system can be seen in this link.) ## 8 Conclusion We designed, prototyped and field tested a novel picking head that can navigate through possible clusters and pick a targeted fruit. The picking head consists of two independent mechanisms for grasping and removing occlusion, providing 2.5 DOF including the cutting mechanism. This novel design allows the system to manipulate occlusions independently of picking actions.
\begin{table} \begin{tabular}{l l l l l l l l l l l} \hline \hline **Trial No.** & **Total fruit** & **Pluckable fruit** & **Not detected** & **Cut comm. failure** & **Grip/Cut failure** & **Picking validation failure** & **Position failure** & **Successful harvest** & **Total trial attempts** & **Total trial time (s)** \\ \hline [MISSING_PAGE_POST] **Total** & **377** & **163** & **8** & **26** & **9** & **6** & **17** & **135** & **201** & **5324 (s)** \\ \hline \hline \end{tabular} \end{table} Table 3: Results of the field experiments. In addition, the picking head design provides contact-free grasping and picking of the fruit. This is highly important to reduce fruit damage or bruising and to reduce the corresponding waste, as strawberries are a very delicate fruit. We developed and proposed a state-of-the-art perception system using an RGB-D sensor and three RGB sensors. To train our vision system, we produced two new datasets from a real strawberry growing farm with features such as key-points, picking points, ripeness, etc. We designed and developed the autonomous harvesting system to be modular and configurable to increase its adaptability to different strawberry varieties and growing conditions. Finally, to test the system we performed field experiments on two different commercial farms with three varieties. The field experiment results show the efficiency and reliability of the system with an 87% success rate. Furthermore, the perception system demonstrated 95% success in detecting ripe fruits.
2305.00761
Effect of depolarizing and quenching collisions on contrast of the coherent population trapping resonance
We investigate the effect of buffer gases on the coherent population trapping resonance induced by a $\sigma$-polarized optical field in $^{87}$Rb atoms. Our experimental results show that inert gases, which depolarize the excited state of the alkali-metal atoms, provide higher contrast than nitrogen that effectively quenches their fluorescence. We also demonstrate that elimination of the spontaneous radiation does not significantly decrease the width at moderate temperatures of an atomic medium. Therefore, a mixture of inert gases can be preferable over a mixture with nitrogen for atomic clocks.
K. M. Sabakar, M. I. Vaskovskaya, D. S. Chuchelov, E. A. Tsygankov, V. V. Vassiliev, S. A. Zibrov, V. L. Velichansky
2023-05-01T10:38:35Z
http://arxiv.org/abs/2305.00761v1
# Effect of depolarizing and quenching collisions on contrast of the coherent population trapping resonance ###### Abstract We investigate the effect of buffer gases on the coherent population trapping resonance induced by a \(\sigma\)-polarized optical field in \({}^{87}\)Rb atoms. Our experimental results show that inert gases, which depolarize the excited state of the alkali-metal atoms, provide higher contrast than nitrogen that effectively quenches their fluorescence. We also demonstrate that elimination of the spontaneous radiation does not significantly decrease the width at moderate temperatures of an atomic medium. Therefore, a mixture of inert gases can be preferable over a mixture with nitrogen for atomic clocks. All-optical interrogation schemes utilizing the effect of coherent population trapping (CPT) [1], progress in miniature vapor-cell production technology and advances in diode lasers [2] have led to the development of chip-scale atomic clocks [3]. Their main advantages over other frequency standards are smaller size and lower power consumption, but they have lower frequency stability. Currently, many research groups are seeking new approaches to improve the long-term frequency stability of such atomic clocks [4; 5; 6; 7]. In the absence of a frequency drift, the stability is proportional to \(1/\sqrt{\tau}\), where \(\tau\) is the averaging time [8]. In this case, a further improvement of the long-term frequency stability can be achieved only by an increase of the short-term stability, which depends on the contrast-to-width ratio of the CPT resonance. In what follows, we call it the quality factor, or Q-factor. The standard approach to reduce the relaxation rate of the ground-state coherence occurring due to collisions of alkali-metal atoms with atomic cell walls is the usage of a buffer gas. It makes the atoms diffuse to the walls at a slower speed than their unperturbed movement with the thermal velocity. The probability to lose the polarization upon collisions with buffer gas particles is less than under interaction with the cell walls. However, an increase of the buffer gas pressure eventually results in the collisional rebroadening of the CPT resonance. These opposite dependencies give the value of buffer gas pressure that provides the minimum width. This value depends on the dimensions and geometry of the cell [9]. The buffer gas induces a temperature-dependent shift of the CPT resonance frequency, which can be suppressed by using a mixture of two gases with linear temperature coefficients of opposite signs [10]. Most often, a mixture of argon and nitrogen is used. It is known that inert gases depolarize the excited state of alkali-metal atoms and tend to equalize the populations of its magnetic sublevels [11; 12; 13; 14]. The equalization of populations should increase the CPT resonance amplitude detected in the \(\sigma^{+}\)-\(\sigma^{+}\) scheme. Indeed, the depolarization reduces the number of atoms pumped to the sublevel 5P\({}_{1/2}\)\(m_{F_{e}}=2\) (we consider the case of \({}^{87}\)Rb atoms). Therefore, a smaller number of atoms is optically pumped to the non-absorbing sublevel \(m_{F_{g}}=2\) of the ground state and more atoms arrive at the working sublevels \(F_{g}=1\), 2, \(m_{F_{g}}=0\) due to spontaneous transitions. Fig. 1 demonstrates the distributions of populations over magnetic sublevels for the case of zero and complete excited-state depolarization.
They were obtained by solving density-matrix equations accounting for all electric-dipole transitions of D\({}_{1}\) line induced by a bichromatic \(\sigma^{+}\)-polarized optical field. Power broadening of the CPT resonance was set to be 3-times greater than relaxation rate of ground-state elements to make difference in populations evident. Details of calculations are given in Appendix. Nitrogen quenches fluorescence of alkali-metal atoms due to the transfer of the excited-state energy to molecular vibrations. This prevents broadening of the CPT resonance induced by the spontaneous radiation, therefore, nitrogen is often considered as a preferable buffer gas for atomic clocks [9; 15]. Transitions from the excited to the ground states during quenching have the same selection rules as spontaneous decay [16; 17], but the effect of nitrogen on the population distribution of the excited state has not been studied in detail and is poorly described in the literature. We assume that molecular gases can to some extent prevent the excited-state depolarization. In this case, nitrogen should reduce the CPT resonance amplitude compared to inert gases while improving its width due to the elimination of the spontaneous radiation. The influence of these two factors on Q-factor is opposite. The goal of this paper is to estimate which of them is more significant. To check this, we compared the contrast and width of the CPT resonance in Ar, Ne, and N\({}_{2}\). ## I Experiment The experimental setup is schematically shown in Fig. 2. We used a single-mode vertical-cavity surface-emitting laser generating at \(\simeq 795\) nm. The DC and RF components of the injection current were fed to the laser via a bias tee. The modulation frequency was close to 3.417 GHz, and the first sidebands of the polychromatic optical field were tuned to transitions \(F_{g}=2\to F_{e}=2\), \(F_{g}=1\to F_{e}=2\) of the \({}^{87}\)Rb D\({}_{1}\) line. The power of the RF field was set to provide the highest amplitudes of the first-order sidebands. A polarizer and a quarter-wave plate were used to form the CPT resonance in the \(\sigma^{+}\)-\(\sigma^{+}\) scheme. The diameter of the laser beam was 3 mm. The laser wavelength was stabilized by a feedback loop that controls the temperature of the laser diode. An atomic cell was placed in a longitudinal magnetic field of 0.02 G to separate the metrological CPT resonance from magneto-sensitive ones at the transitions between sublevels \(m_{F_{g}}=\pm 1\). The temperature of the atomic cell was maintained with an accuracy of 0.01 \({}^{\circ}\)C. The cell, heater, and solenoid were placed in a three-layer \(\mu\)-metal magnetic shield, providing better than 500-fold suppression of the laboratory magnetic field. We have manufactured three sets of cylindrical atomic cells with CO\({}_{2}\) laser-welded windows (8 mm diameter, 15 mm length, 0.7 mm wall thickness) and filled them with isotopically enriched \({}^{87}\)Rb and one of the buffer gases: N\({}_{2}\), Ar, or Ne. The buffer gas pressures are 30, 60, and 90 Torr. We used pinch-off glass welding to seal the stem at a distance of about 20 mm from the cell body so as not to heat it. This ensures that the actual gas pressure inside the cell differs from the pressure in the filling chamber by no more than 1%. Fig. 3 shows metrological and magneto-sensitive CPT resonances obtained in two atomic cells filled with nitrogen and argon at a pressure of 90 Torr. 
The inhomogeneity of the magnetic field did not lead to a noticeable broadening of the magneto-sensitive resonances; thus, we consider their amplitudes to be determined by the populations of the corresponding magnetic sublevels. Experimental conditions are the same for both cells: the temperature and optical field intensity are 65 \({}^{\circ}\)C and 0.3 mW/cm\({}^{2}\). The non-resonant radiation losses in both cells are almost equal. However, some differences in the signals can be seen. First, resonances on sublevels \(F_{g}=1,2,m_{F_{g}}=-1\) (left) and \(F_{g}=1,2,m_{F_{g}}=0\) (central) have noticeably greater amplitudes in Ar than in N\({}_{2}\). Second, the background transmission in nitrogen is higher when the microwave frequency is detuned from the CPT resonance (\(\simeq 1.96\) V in N\({}_{2}\), \(\simeq 1.82\) V in Ar). On the contrary, nitrogen should provide a slightly smaller transmission level due to the lower collisional broadening of the \({}^{87}\)Rb D\({}_{1}\) line [18]. We attribute the features mentioned above to the negative impact of the fluorescence quenching in nitrogen, which reduces the efficiency of the excited-state depolarization and enhances pumping to the non-absorbing sublevel. Figure 1: Energy level structure and electric-dipole transitions to sublevels of hyperfine component \(F_{e}=2\) induced by an optical field with \(\sigma^{+}\) polarization. Columns show distribution of populations over 5S\({}_{1/2}\) and 5P\({}_{1/2}\) states. They were calculated in a model without (a) and with (b) accounting for the depolarization; see Appendix for details. The heights of the columns for the excited state are increased by about five orders of magnitude. Figure 2: The layout of the experimental setup. Figure 3: Metrological (central) and magneto-sensitive CPT resonances obtained in atomic cells filled with nitrogen and argon at a pressure of 90 Torr. The dashed lines serve as a guide for the eye and show the difference in the amplitudes of the resonances. Dependencies of the contrast (the ratio of the resonance amplitude to the transmission level at the resonance peak) of the metrological CPT resonance on the laser field intensity for all pressures are shown in Fig. 4. The difference in contrast between the gases is negligible at intensities below 0.1 mW/cm\({}^{2}\) for all pressures. As the intensity increases, the rate of optical pumping becomes significant and the dependencies begin to diverge. For N\({}_{2}\), the contrast reaches a maximum and then slightly decreases. For Ar and Ne the contrast does not decrease even at the highest available intensity of 1 mW/cm\({}^{2}\). Neon provides the highest contrast, reaching a value of 6.5%, while the maximal contrasts for argon and nitrogen are 5% and 3.6%, respectively. As the pressure increases, the dependencies remain almost the same and the relation \(C_{\rm max}^{\rm Ne}>C_{\rm max}^{\rm Ar}>C_{\rm max}^{\rm N_{2}}\) does not change, but the maximal contrasts decline. This happens due to the growth of the homogeneous broadening, which leads to a decrease in the number of atoms optically pumped into the dark state. The increase in temperature cannot fully compensate the loss in the number of atoms, since the relaxation rate of the ground-state coherence becomes greater due to the spin-exchange mechanism. Therefore, the maximal absorption contrast is achieved at a higher temperature for a greater buffer gas pressure and falls with the growth of the latter. The difference in contrasts between the inert gases is due to the smaller broadening of the \({}^{87}\)Rb D\({}_{1}\) line by Ne [18].
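As a simple illustration of how the contrast and Q-factor are extracted from a measured transmission trace, a minimal sketch is given below; it assumes the first samples of the scan are off-resonance and uses a crude half-maximum width estimate, so it is not the actual data-processing routine.

```python
import numpy as np

def cpt_contrast_and_q(detuning_hz, transmission_v):
    """Estimate the CPT resonance contrast and quality factor from a transmission scan.

    Contrast is defined as the resonance amplitude divided by the transmission level at
    the resonance peak; the Q-factor is the contrast-to-width ratio."""
    baseline = np.median(transmission_v[:10])      # assumes the scan starts off-resonance
    peak_idx = int(np.argmax(transmission_v))
    peak = transmission_v[peak_idx]
    amplitude = peak - baseline
    contrast = amplitude / peak

    half = baseline + amplitude / 2.0
    above = np.where(transmission_v >= half)[0]    # crude full width at half maximum
    fwhm_hz = detuning_hz[above[-1]] - detuning_hz[above[0]]
    return contrast, contrast / fwhm_hz
```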
Similarly to the contrast, the dependencies of the CPT resonance width and Q-factor on the intensity were obtained. From each dependence Q(I) we defined the maximum value Q\({}_{\rm max}\) and plotted the dependence of Q\({}_{\rm max}\) on temperature for each gas and all pressures (Fig. 5). Under the same conditions, the resonance in nitrogen has the smallest width. Nitrogen has an advantage over argon and neon of about 20% and 10% in Q\({}_{\rm max}\) for a pressure of 30 Torr, which is achieved due to the narrower resonance. At higher buffer gas pressures, the advantage of the inert gases in contrast exceeds the advantage of nitrogen in width. As a result, \(Q_{\rm max}^{\rm Ne}/Q_{\rm max}^{\rm N_{2}}\) is close to 1.6 at 60 Torr and to 2 at 90 Torr. Note that neon maximizes the Q-factor at lower temperatures than the other gases at all pressures. Figure 4: Dependencies of the CPT resonance contrast on the laser intensity for different temperatures and pressures of nitrogen, argon, and neon. Figure 5: Dependencies of the maximal (in terms of intensity) quality factor Q\({}_{\rm max}\) of the CPT resonance on the cell temperature for different pressures of nitrogen, argon, and neon. The legend is the same as in Fig. 4. Fig. 6 (a,b) shows dependencies of the CPT resonance width on the optical field intensity for different temperatures in neon and nitrogen. In neon the width decreases with temperature at intensities above 0.4 mW/cm\({}^{2}\). This feature is due to the light narrowing effect [19], which takes place when a sufficiently large number of atoms are coherently trapped in the dark state. In nitrogen, as the temperature increases, the width increases in the same way for all intensities, which indicates the much smaller impact of this effect. Therefore, more atoms reside at the non-absorbing sublevel in N\({}_{2}\) compared to Ne. We have studied the potential benefit of fluorescence quenching in nitrogen for the CPT resonance width. For this, we made three additional cells with the same diameter of 8 mm, but a smaller internal length of 2.5 mm. The decrease of length prevents complete absorption of low-intensity laser radiation at high temperatures and Rb concentrations, when the influence of spontaneous photons on the width should be more evident. The cells were filled with 90 Torr of Ar, Ne, and N\({}_{2}\). The beam was expanded to 6 mm in diameter and the operating radiation intensity was 0.1 mW/cm\({}^{2}\). The measured dependencies of the CPT resonance width on temperature in these cells are shown in Fig. 6(c). As one would expect, the width should grow faster with temperature in the inert gases than in nitrogen due to the spontaneous radiation. However, the dependencies in all gases reveal the same behavior, which is typical for spin-exchange broadening. The contribution of this mechanism to the coherence relaxation is given by [14] \[\Gamma_{se}=\frac{6I+1}{8I+4}\sigma_{se}v_{r}n, \tag{1}\] where \(I\) is the nuclear spin, \(\sigma_{se}\) is the spin-exchange cross-section, \(v_{r}\) is the average relative velocity, and \(n\) is the concentration of the alkali-metal atoms. The dashed line in Fig. 6(c) is the dependence of \(\Gamma_{se}/\pi\) on temperature for \({}^{87}\)Rb plotted for the concentration taken from [20] and the most reliable value of \(\sigma_{se}\), equal to \(1.9\cdot 10^{-14}\) cm\({}^{2}\)[21]. However, we have not observed a noticeable broadening of the CPT resonance in the inert gases at high temperatures compared to nitrogen. A similar result was obtained earlier in [22] for \({}^{133}\)Cs in the temperature range 20-65 \({}^{\circ}\)C. Thus, we do not associate the difference in widths with the quenching effect; it is probably related to the lower diffusion coefficient in nitrogen [23; 24], which determines the rate of Rb coherence relaxation as a result of collisions with the cell walls. Figure 6: Dependencies of the CPT resonance width on the optical field intensity for different temperatures in neon (a) and nitrogen (b) and on the temperature for atomic cells with nitrogen, argon and neon (c). Pressure of all buffer gases is 90 Torr. The dashed line in (c) is the spin-exchange broadening for \({}^{87}\)Rb calculated via Eq. (1).
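For reference, Eq. (1) can be evaluated numerically as in the sketch below; the nuclear spin \(I=3/2\) and the cross-section value are taken from the text, while the Rb number density is left as an input because in practice it is taken from vapour-pressure data [20].

```python
import numpy as np

def spin_exchange_rate(T_kelvin, n_cm3, I=1.5, sigma_se=1.9e-14, m_amu=86.909):
    """Spin-exchange relaxation rate Gamma_se = (6I+1)/(8I+4) * sigma_se * v_r * n (Eq. 1).

    T_kelvin : cell temperature
    n_cm3    : Rb number density in cm^-3 (from vapour-pressure data in practice)
    sigma_se : spin-exchange cross-section in cm^2 (1.9e-14 cm^2 for Rb, as cited)
    m_amu    : atomic mass of 87Rb"""
    kB = 1.380649e-23                   # J/K
    m = m_amu * 1.66053907e-27          # kg
    # mean relative speed of two identical atoms: sqrt(16 kB T / (pi m)), converted to cm/s
    v_r = np.sqrt(16.0 * kB * T_kelvin / (np.pi * m)) * 100.0
    return (6 * I + 1) / (8 * I + 4) * sigma_se * v_r * n_cm3    # in s^-1

# The broadening plotted in Fig. 6(c) corresponds to Gamma_se / pi in Hz, e.g.
# spin_exchange_rate(273.15 + 60, n_cm3=1.5e11) / np.pi  (the density value here is illustrative).
```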
## II Discussion Considering the choice of a buffer gas for CPT-based atomic clocks, we believe that a proper mixture of Ar and Ne is preferable to a mixture containing N\({}_{2}\). When using inert gases, it is possible to achieve a higher resonance contrast due to the depolarization of the \({}^{87}\)Rb excited state. Moreover, the maximal Q-factor of the resonance in neon is achieved at a lower temperature than in nitrogen, which reduces the clock power consumption. Another advantage of the Ar-Ne mixture is the ability to suppress the light shift at higher buffer gas pressures. As we demonstrated in [25], the AC Stark shift of the CPT resonance frequency cannot be eliminated if the homogeneous broadening of the optical lines exceeds a certain value. Since the \({}^{87}\)Rb D\({}_{1}\) line collisional broadening rate for Ne is about 1.5 times smaller than that for N\({}_{2}\)[18], it is possible to obtain the minimal relaxation rate of the coherence and retain the ability to suppress the light shift in miniature atomic cells, which is significant for chip-scale atomic clocks. Finally, the depolarization of the \({}^{87}\)Rb excited state in inert gases leads to a smaller difference in the populations of the ground-state working sublevels due to the repopulation pumping mechanism. Unequal populations are the source of the CPT resonance asymmetry and of the nonlinear dependence of its frequency on the laser field intensity [26]. This hinders the light shift suppression methods based on the laser field intensity modulation (see, for example, [27]). ## III Summary We have demonstrated that argon and neon provide a higher contrast of the CPT resonance than nitrogen in the \(\sigma^{+}\)-\(\sigma^{+}\) scheme. The difference in contrast is significant when the optical pumping rate dominates over the ground-state relaxation. We explain this effect as follows. Quenching of the alkali-metal fluorescence reduces the degree of the excited-state depolarization, which increases the population of the non-absorbing sublevel. As a result, the number of atoms that can be optically pumped to the dark state becomes smaller and the amplitude of the CPT resonance decreases. We have not found a benefit from quenching of the fluorescence for the width of the CPT resonance at temperatures providing the maximal Q-factor. The difference in Q-factor between Ne and N\({}_{2}\) increases with the buffer gas pressure, reaching a factor of 2 at 90 Torr. Hence, a mixture of inert gases can be more advantageous for CPT-based atomic clocks than a mixture with nitrogen. ## IV Acknowledgments The authors receive funding from the Russian Science Foundation (grant No. 19-12-00417).
## V Appendix Here we consider the following model of optical pumping in four-level system with moments \(F=2,1\) in excited and ground states; see Fig. 1. Components of bichromatic optical field \[\mathbf{E}(t)=\mathbf{e}\frac{\mathcal{E}}{2}\left(e^{-i\omega_{e}t}+e^{-i \omega_{b}t}+c.c.\right),\] where \(\mathbf{e}=(\mathbf{e}_{x}+i\mathbf{e}_{y})/\sqrt{2}\), induce transitions between levels \(F_{g}=2\), \(F_{e}=2\) and \(F_{g}=1\), \(F_{e}=2\), respectively, having optical detuning \(\Delta\). The corresponding detuning from \(F_{e}=1\) level is \(\Delta+\omega_{e}\), where \(\omega_{e}\) is the hyperfine splitting of the excited state. Frequency spacing between components is close to hyperfine splitting of the ground state: \(\omega_{b}-\omega_{r}=\omega_{g}+\delta\), where detuning \(\delta\) is much smaller than \(\omega_{g}\). Phenomenological relaxation constant \(\Gamma\) was introduced in equations for optical coherences to account for homogeneous broadening of absorption line occurring under collision of alkali-metal atoms with particles of a buffer gas. We assume that \(\gamma\ll\Gamma\), where \(\gamma\) denotes natural width of the excited state. Rabi frequency \(V=d\mathcal{E}/2\hbar\) contains the reduced dipole matrix element. For simplicity, one phenomenological constant \(\Gamma_{g}\) is used to describe relaxation of the ground-state sublevels. Finally, we do not account for magneto-sensitive CPT resonances and consider only the one between sublevels \(m_{F_{g}}=0\). Initial equations for elements of the density matrix are solved under approximations of low saturation regime and the resonant one for optical field (also known as the rotating-wave approximation), which allows us to obtain the following system for the steady-state regime. Namely, there are equations for populations of the excited state under absence of its depolarization: \[\rho_{-2-2}^{uu}=0, \tag{2a}\] \[\rho_{-1-1}^{uu}=\frac{V^{2}}{6}\frac{\Gamma/(\gamma/2)}{\Delta^{2}+\Gamma^{2 }}\rho_{-2-2}^{22},\] (2b) \[\rho_{-1-1}^{dd}=\frac{V^{2}}{2}\frac{\Gamma/(\gamma/2)}{(\Delta+\omega_{e})^{ 2}+\Gamma^{2}}\rho_{-2-2}^{22},\] (2c) \[\rho_{00}^{uu}=\frac{V^{2}}{4}\frac{\Gamma/(\gamma/2)}{\Delta^{2}+\Gamma^{2}} \left(\rho_{-1-1}^{22}+\frac{1}{3}\rho_{-1-1}^{11}\right),\] (2d) \[\rho_{00}^{dd}=\frac{V^{2}}{4}\frac{\Gamma/(\gamma/2)}{(\Delta+\omega_{e})^{2}+ \Gamma^{2}}\left(\rho_{-1-1}^{22}+\frac{1}{3}\rho_{-1-1}^{11}\right),\] (2e) \[\rho_{11}^{uu}=\frac{V^{2}}{4}\frac{\Gamma/(\gamma/2)}{\Delta^{2}+\Gamma^{2}} \left[\rho_{00}^{22}+\rho_{00}^{11}-2\mathrm{Re}\left(\rho_{00}^{21}\right) \right],\] (2f) \[\rho_{11}^{dd}=\frac{V^{2}}{12}\frac{\Gamma/(\gamma/2)}{(\Delta+\omega_{e})^{2}+ \Gamma^{2}}\left[\rho_{00}^{22}+\rho_{00}^{11}-2\mathrm{Re}\left(\rho_{00}^{2 1}\right)\right],\] (2g) \[\rho_{22}^{uu}=\frac{V^{2}}{2}\frac{\Gamma/(\gamma/2)}{\Delta^{2}+\Gamma^{2}} \left(\frac{1}{3}\rho_{11}^{22}+\rho_{11}^{11}\right), \tag{2h}\] where upper indices "u," "d" of the density matrix elements denote upper-state levels \(F_{e}=2,1\) and indices "2," "1" denote ground-state levels \(F_{g}=2,1\). Lower indices denote \(m_{F}\) value. 
Equations for elements of the ground state are the following: \[\Gamma_{g}\rho_{-2-2}^{22}=-\Gamma\left[\frac{1}{3}\frac{V^{2}}{\Delta^{2}+\Gamma^ {2}}+\frac{V^{2}}{(\Delta+\omega_{e})^{2}+\Gamma^{2}}\right]\rho_{-2-2}^{22}+ \frac{\Gamma_{g}}{8}+\gamma\left(\frac{1}{3}\rho_{-2-2}^{uu}+\frac{1}{6}\rho_{- 1-1}^{uu}+\frac{1}{2}\rho_{-1-1}^{dd}\right), \tag{3a}\] \[\Gamma_{g}\rho_{-1-1}^{22}=-\frac{1}{2}\Gamma\left[\frac{V^{2}}{\Delta^{2}+\Gamma^ {2}}+\Gamma\frac{V^{2}}{(\Delta+\omega_{e})^{2}+\Gamma^{2}}\right]\rho_{-1-1}^ {22}+\frac{\Gamma_{g}}{8}+\gamma\left(\frac{1}{6}\rho_{-2-2}^{uu}+\frac{1}{12 }\rho_{-1-1}^{uu}+\frac{1}{4}\rho_{00}^{uu}+\frac{1}{4}\rho_{-1-1}^{dd}+ \frac{1}{4}\rho_{00}^{dd}\right),\] (3b) \[\Gamma_{g}\rho_{-1-1}^{11}=-\frac{1}{6}\Gamma\left[\frac{V^{2}}{\Delta^{2}+ \Gamma^{2}}+\Gamma\frac{V^{2}}{(\Delta+\omega_{e})^{2}+\Gamma^{2}}\right]\rho_ {-1-1}^{11}+\frac{\Gamma_{g}}{8}+\gamma\left(\frac{1}{2}\rho_{-2-2}^{uu}+ \frac{1}{4}\rho_{-1-1}^{uu}+\frac{1}{12}\rho_{00}^{uu}+\frac{1}{12}\rho_{-1-1} ^{dd}+\frac{1}{12}\rho_{00}^{dd}\right),\] (3c) \[\Gamma_{g}\rho_{00}^{22}=-\frac{1}{2}\Gamma\left[\frac{V^{2}}{ \Delta^{2}+\Gamma^{2}}+\frac{1}{3}\Gamma\frac{V^{2}}{(\Delta+\omega_{e})^{2}+ \Gamma^{2}}\right]\rho_{00}^{22}\] \[+\frac{1}{2}\frac{V^{2}}{\Delta^{2}+\Gamma^{2}}\left[\Delta\cdot \mathrm{Im}(\rho_{00}^{21})+\Gamma\cdot\mathrm{Re}(\rho_{00}^{21})\right]+ \frac{1}{6}\frac{V^{2}}{(\Delta+\omega_{e})^{2}+\Gamma^{2}}\left[(\Delta+ \omega_{e})\cdot\mathrm{Im}(\rho_{00}^{21})+\Gamma\cdot\mathrm{Re}(\rho_{00}^ {21})\right]\] (3d) \[+\frac{\Gamma_{g}}{8}+\gamma\left(\frac{1}{4}\rho_{-1-1}^{uu}+ \frac{1}{4}\rho_{11}^{uu}+\frac{1}{12}\rho_{-1-1}^{dd}+\frac{1}{3}\rho_{00}^{ dd}+\frac{1}{12}\rho_{11}^{dd}\right),\] \[\Gamma_{g}\rho_{00}^{11}=-\frac{1}{2}\Gamma\left[\frac{V^{2}}{ \Delta^{2}+\Gamma^{2}}+\frac{1}{3}\frac{V^{2}}{(\Delta+\omega_{e})^{2}+\Gamma ^{2}}\right]\rho_{00}^{11}\] \[-\frac{1}{2}\frac{V^{2}}{\Delta^{2}+\Gamma^{2}}\left[\Delta\cdot \mathrm{Im}(\rho_{00}^{21})-\Gamma\cdot\mathrm{Re}(\rho_{00}^{21})\right]- \frac{1}{6}\frac{V^{2}}{(\Delta+\omega_{e})^{2}+\Gamma^{2}}\left[(\Delta+ \omega_{e})\cdot\mathrm{Im}(\rho_{00}^{21})-\Gamma\cdot\mathrm{Re}(\rho_{00}^ {21})\right] \tag{3e}\] \[+\frac{\Gamma_{g}}{8}+\gamma\left(\frac{1}{4}\rho_{-1-1}^{uu}+ \frac{1}{3}\rho_{00}^{uu}+\frac{1}{4}\rho_{11}^{uu}+\frac{1}{12}\rho_{-1-1}^{ dd}+\frac{1}{12}\rho_{11}^{dd}\right),\] \[\Gamma_{g}\rho_{11}^{22}=-\frac{1}{3}\Gamma\frac{V^{2}}{\Delta^{2}+\Gamma^{2}} \rho_{11}^{22}+\frac{\Gamma_{g}}{8}+\gamma\left(\frac{1}{6}\rho_{22}^{uu}+ \frac{1}{12}\rho_{11}^{uu}+\frac{1}{4}\rho_{00}^{uu}+\frac{1}{4}\rho_{11}^{dd} +\frac{1}{4}\rho_{00}^{dd}\right), \tag{3f}\] \[\Gamma_{g}\rho_{11}^{11}=-\Gamma\frac{V^{2}}{\Delta^{2}+\Gamma^{2}} \rho_{11}^{11}+\frac{\Gamma_{g}}{8}+\gamma\left(\frac{1}{2}\rho_{22}^{uu}+ \frac{1}{4}\rho_{11}^{uu}+\frac{1}{12}\rho_{00}^{uu}+\frac{1}{12}\rho_{11}^{dd} +\frac{1}{12}\rho_{00}^{dd}\right),\] (3g) \[\Gamma_{g}\rho_{22}^{22}=\frac{\Gamma_{g}}{8}+\gamma\left(\frac{1}{3} \rho_{22}^{uu}+\frac{1}{6}\rho_{11}^{uu}+\frac{1}{2}\rho_{11}^{dd}\right), \tag{3h}\] \[\left\{\delta+i\Gamma_{g}+\frac{i}{2}\Gamma\left[\frac{V^{2}}{\Delta^{2}+ \Gamma^{2}}+\frac{1}{3}\frac{V^{2}}{(\Delta+\omega_{e})^{2}+\Gamma^{2}}\right] \right\}\rho_{00}^{21}= \tag{3i}\] \[\frac{i}{4}\Gamma\left[\frac{V^{2}}{\Delta^{2}+\Gamma^{2}}+\frac{ 1}{3}\frac{V^{2}}{(\Delta+\omega_{e})^{2}+\Gamma^{2}}\right](\rho_{00}^{22}+ \rho_{00}^{11})+\frac{1}{4}\left[\Delta\frac{V^{2}}{\Delta^{2}+\Gamma^{2}}+ 
\frac{1}{3}(\Delta+\omega_{e})\frac{V^{2}}{(\Delta+\omega_{e})^{2}+\Gamma^{2}} \right](\rho_{00}^{22}-\rho_{00}^{11}).\] The light shift of the ground-state microwave transition frequency is neglected in system of equations (3) to simplify calculations. To account for complete depolarization of the excited state, we replaced all its populations with the arithmetic mean: \(\rho_{ii}^{uu},\,\rho_{ii}^{dd}\rightarrow\left(\sum_{i=-2}^{2}\rho_{ii}^{uu}+ \sum_{i=1}^{1}\rho_{ii}^{dd}\right)/8\). Solution for significant rate of optical pumping, \(V^{2}/\Gamma\gg\Gamma_{g}\), demonstrated that physical contrast, which we define here as \(\left[\rho^{ee}(|\delta|\gg V^{2}/\Gamma)-\rho^{ee}(\delta=0)\right]/\rho^{ee}(| \delta|\gg V^{2}/\Gamma)\), is two-times greater than the excited state. Here \(\rho^{ee}\) is the sum of populations of the excited-state sublevels. Fig. 1 demonstrates distributions of populations of \({}^{87}\)Rb 5S\({}_{1/2}\) and 5P\({}_{1/2}\) states for two cases: without (a) and with (b) excited-state depolarization. They were calculated for \(\Gamma/2\pi=1\) GHz, \(\omega_{e}/2\pi=817\) MHz, \(\Delta/2\pi=-30\) MHz, \(\delta=0\). The value of Rabi frequency was set to provide power broaden ing of the CPT resonance three-times greater than \(\Gamma_{g}\). In case (b) population of sublevel \(m_{F_{e}}=2\) decreases and optical pumping of the non-absorbing sublevel \(m_{F_{g}}=2\) becomes smaller. Populations of excited-state sublevels \(F_{e}=2\), \(m_{F_{e}}=-2,-1,0,\ F_{e}=1\), \(m_{F_{e}}=-1,0\) grow, which increases amount of spontaneous transitions to working sublevels \(F_{g}=1,2\), \(m_{F_{g}}=0\). We note that population of sublevel \(F_{g}=1\), \(m_{F_{g}}=1\) is smaller than that of \(F_{g}=1\), \(m_{F_{g}}=-1\), since probability of transition \(|F_{g}=1\), \(m_{F_{g}}=1\rangle\rightarrow|F_{e}=2\), \(m_{F_{e}}=2\rangle\) is greater, while the repopulation rate of these sublevels is the same due to spontaneous transitions. We also note that equation (3i) for coherence \(\rho_{00}^{21}\) contains terms \(\propto(\rho_{00}^{22}-\rho_{00}^{11})\) in its right-hand side. Despite that components of bichromatic field have equal intensities, populations of working sublevels are not equal due to spontaneous transitions. As a consequence, the real part of coherence \(\rho_{00}^{21}\) acquires a term proportional to \(\delta\). The CPT resonance becomes neither an even nor an odd function of \(\delta\), i.e., it turns out to be asymmetric. On the opposite, under complete depolarization of the excited state, spontaneous transitions equally populate working sublevels providing a symmetric CPT resonance.
2306.06898
Transient Stability Analysis of Grid-Connected Converters Based on Reverse-Time Trajectory
As the proportion of converter-interfaced renewable energy resources in the power system increases, the strength of the power grid at the connection point of wind turbine generators (WTGs) is gradually weakening. Existing research has shown that when connected to a weak grid, the dynamic characteristics of traditional grid-following controlled converters deteriorate, and unstable phenomena such as oscillation are prone to arise. Due to the limitations of linear analysis, which cannot sufficiently capture the stability phenomena, transient stability must also be investigated. So far, standalone time-domain simulations or analytical Lyapunov stability criteria have been used to investigate transient stability. However, time-domain simulations have proven to be computationally too heavy, while analytical methods are more complex to formulate, require many assumptions, and are conservative. This paper demonstrates an innovative way of estimating the system boundaries via a hybrid approach combining a linearised Lyapunov function-based method and the time-reversal technique. The proposed methodology compensates for both time-consuming simulations and the conservative nature of Lyapunov functions. This work brings out the clear distinction between the system boundaries under different post-fault active current ramp rate controls, while at the same time providing a new perspective on critical clearing times for wind turbine systems. Finally, the stability boundary is verified using time-domain simulation studies.
Mohammad Kazem Bakhshizadeh, Sujay Ghosh, Guangya Yang, Łukasz Kocewiak
2023-06-12T07:04:03Z
http://arxiv.org/abs/2306.06898v1
# Transient Stability Analysis of Grid-Connected Converters Based on Reverse-Time Trajectory ###### Abstract As the proportion of converter-interfaced renewable energy resources in the power system is increasing, the strength of the power grid at the connection point of wind turbine generators (WTGs) is gradually weakening. Existing research has shown that when connected to a weak grid, the dynamic characteristics of traditional grid-following controlled converters deteriorate, and unstable phenomena such as oscillation are prone to arise. Since linear analysis cannot sufficiently capture these stability phenomena, transient stability must also be investigated. So far, standalone time-domain simulations or analytical Lyapunov stability criteria have been used to investigate transient stability. However, time-domain simulations have proven to be computationally too heavy, while analytical methods are more complex to formulate, require many assumptions, and are conservative. This paper demonstrates an innovative approach to estimating the system boundaries by combining a linearised Lyapunov-function-based method with the time-reversal technique. The proposed methodology compensates for both the time-consuming simulations and the conservative nature of Lyapunov functions. This work brings out the clear distinction between the system boundaries obtained with different post-fault active current ramp rate controls, while at the same time providing a new perspective on critical clearing times for wind turbine systems. Finally, the stability boundary is verified using time-domain simulation studies. Lyapunov direct method, Non-autonomous systems, PLL, Time trajectory reversal, Transient stability assessment, Wind turbine converter system. ## Nomenclature
* \(\mathbb{R}^{n}\): \(n\)-dimensional Euclidean space.
* \(\mathbb{0}^{n}\): \(n\times 1\) null vector.
* \(\in\): is a member of.
* \(\subset\): is a subset of.
* \(\partial\mathbb{A}\): boundary of a space \(\mathbb{A}\).
* \(\forall\): for all.
* \(\exists\): there exists.
* \(A\Rightarrow B\): \(A\) results in \(B\); \(A\) is a sufficient condition for \(B\) and \(B\) is a necessary condition for \(A\).
* \(A\Leftrightarrow B\): \(B\) is true/false if and only if \(A\) is true/false; \(A\)/\(B\) is a sufficient and necessary condition for \(B\)/\(A\).
* \(\textbf{x}^{T}\): transpose of vector \(\textbf{x}\).
* \(f:X\to Y\): function \(f\) that maps set \(X\) to set \(Y\).
* \(f^{-1}(\textbf{x})\): inverse of function \(f(\textbf{x})\).
* \(\dot{\textbf{x}}=\frac{\text{d}\textbf{x}}{\text{d}t}\): time derivative of vector \(\textbf{x}(t)\).
* \(\textbf{A}|_{x=x_{0}}\): \(\textbf{A}\) evaluated at \(\textbf{x}=\textbf{x}_{0}\).
* \(\Re\{Z\}\): real part of the complex number \(Z\).
* \(\lambda(\textbf{A})\): set of the eigenvalues of matrix \(\textbf{A}\).
* \(\textbf{x}_{ini}\): initial condition for a dynamical system.
* \(\textbf{x}_{final}\): final condition for a dynamical system.
* \(\textbf{x}(t)=\varphi(t,\textbf{x}_{ini})\): solution of a dynamical system for a specific initial condition.
* \(\frac{\partial f}{\partial x}\): Jacobian of the vectorial function \(f\), i.e. \(\left[\begin{array}{ccc}\frac{\partial f_{1}}{\partial x_{1}}&\dots&\frac{\partial f_{1}}{\partial x_{n}}\\ \vdots&\ddots&\vdots\\ \frac{\partial f_{n}}{\partial x_{1}}&\dots&\frac{\partial f_{n}}{\partial x_{n}}\end{array}\right]\).
* \(\textbf{P}\succ 0\): positive-definite matrix \(\textbf{P}\).
* \(\textbf{P}\succeq 0\): positive-semidefinite matrix \(\textbf{P}\).
* \(\tilde{\textbf{x}}\): equilibrium point/state of a dynamical system.
* \(F_{abc}=\begin{bmatrix}F_{a}\\ F_{b}\\ F_{c}\end{bmatrix}\): representation of signal \(F(t)\) in the three-phase stationary frame (abc domain).
* \(F_{dq}=\begin{bmatrix}F_{d}\\ F_{q}\end{bmatrix}\): representation of signal \(F(t)\) in the rotating frame (dq domain).
## I Introduction As of 2021, the worldwide installation of wind power capacity has reached approximately 743 GW, contributing significantly to a reduction of over 1.1 billion tonnes of CO2 emissions globally [1]. The wind industry is poised for continued growth due to technological innovations, economies of scale, and policy support around the world. However, the increasing proportion of converter-interfaced renewable energy resources in the power system [2] has weakened the connection strength between wind turbine generators (WTGs) and the power grid. Previous research has demonstrated that the dynamic characteristics of traditional grid-following controlled converters can deteriorate when connected to a weak grid, leading to unstable phenomena such as oscillation [3][4]. Traditionally, the stability of wind farm connections has been analysed using linearised model-based approaches, such as eigenvalue analysis [5][6] or impedance-based stability analysis [7][8]. These methods assume that the system, including the wind turbine (WT) and the connected power system, behaves linearly under small disturbances, and stability is only analysed within the operating point's vicinity. However, it has been noted in [9] that small-signal stability assessment alone cannot guarantee overall stability. Therefore, transient stability must also be investigated to ensure that the system remains stable under larger disturbances. In [10], it has been demonstrated that large disturbances can destabilise the phase-locked loop (PLL), which can have a significant impact on the transient stability of the wind turbine (WT) system. In general, transient stability has been evaluated through time-domain simulations. Time-domain simulation is simple; however, it cannot provide a closed-form solution for quantifying stability margins. Therefore, it is necessary to repeat the simulations over a large set of system conditions (like phase portraits) to identify the system boundary, i.e. the region of attraction (RoA) [11]. Alternatively, analytical transient stability methods, such as the equal area criterion and Lyapunov's direct method [12], provide a closed-form solution for the system. Here a non-linear energy function is constructed such that a decrease in energy after a disturbance implies a stable system. A classical non-linear energy function is constructed for synchronous generators based on their swing equation [13]. Efforts have been made to extend the same analysis to WT systems [14]; however, the system is assumed to have autonomous behaviour, see Section III. In [15], a non-linear energy function for a WT with non-autonomous behaviour is constructed based on [16], which states that a system has a smaller RoA when the post-fault active current ramp is faster. However, the approach to construct the energy function is highly complex and results in a conservative estimate of the RoA. Recent developments have focused on maximising the system's RoA by formulating an optimisation problem using sum-of-squares programming [17]. Additionally, some machine learning (ML) techniques [18] have been studied to achieve a better estimate of the RoA. However, these methods require significant expertise in data-driven techniques.
Considering (a) the high computational burden of repeated time-domain simulations over a large set of system conditions, (b) the mathematical complexity of non-linear analytical methods coupled with the conservative estimate of the system RoA, and (c) the domain expertise required by data-driven optimisation and ML methods, the objective of this paper is to propose a fast and simplified transient stability assessment method that can be easily adopted by the industry. This paper presents a novel approach to transient stability assessment of wind power plants (WPPs) by combining the advantages of time-domain simulations and analytical energy-based stability methods. Specifically, we use the reverse-time trajectory technique in conjunction with linear Lyapunov functions to estimate the system boundary. Compared to nonlinear energy functions, the construction of a linear energy function is simple and follows an established procedure. Additionally, the reverse-time simulation only needs to be performed for stable cases, significantly reducing the number of repeated time-domain simulations. The time-reversal technique has been the subject of extensive research for several decades [19]-[21]. The application of time-reversal in dynamic systems dates back to 1915, when it was initially used to analyse a three-body problem [22]. Subsequently, time-reversal has been employed in various problems related to thermodynamics and quantum mechanics, as discussed in [23]-[25]. Reference [26] provides an extensive overview of time-reversible dynamics, including system equations and conservative and dissipative behaviour. Building on our previous research on nonlinear modelling and transient stability assessment of WPPs [15], [27]-[29], our proposed methodology aims to provide a fast, simple and practical solution for industry without requiring complex mathematical analysis. The contributions of this paper are as follows:
1. A hybrid approach to estimating the post-fault system boundary (RoA) is proposed based on a linear energy function and the reverse time-trajectory.
2. This work brings out the clear distinction between the system boundaries with different post-fault active current ramp rate controls.
3. A new perspective on critical clearing time for wind turbine systems is discussed.
Section II provides an overview of the mathematical preliminaries for the proposed transient stability assessment method. Section III details the large-signal reduced-order WT model and its transient stability assessment. The time-domain validation of the proposed method is presented in Section IV. The paper concludes with Section V. ## II Transient Stability - Mathematical Preliminaries Most dynamical systems can be described by the following ordinary differential equation (ODE): \[\dot{x}=f(t,x,u) \tag{1}\] where \(t\) is time, \(\dot{\mathbf{x}}\) is the time derivative of vector \(\mathbf{x}\in\mathbb{R}^{n}\), and \(\mathbf{u}\in\mathbb{R}^{m}\) is the vector of input signals. \(\mathbf{x}\) is the vector of state variables of the system. Usually, the inputs are defined in terms of time and the state variables; therefore, they can be omitted from (1). If \(f\) is not an explicit function of time, then the system defined by (2) is called an autonomous system [30]. \[\dot{x}=f(x) \tag{2}\] An equilibrium point for a dynamical system is defined as a point \(\tilde{\mathbf{x}}\) for which \(f(\tilde{\mathbf{x}})=0\).
In other words, if the system solution \(\mathbf{x}(t)\) reaches \(\tilde{\mathbf{x}}\), it stays there forever. ### _Lyapunov's direct method for stability analysis_ A scalar, continuous and differentiable function \(V:\mathbb{D}\subset\mathbb{R}^{n}\rightarrow\mathbb{R}\) is called a Lyapunov function (LF) for the system (2) with equilibrium point \(\tilde{\mathbf{x}}=0^{n}\) if
* \(V(\mathbf{x})=0\Leftrightarrow\mathbf{x}=0^{n}\)
* \(V(\mathbf{x})>0\)\(\forall~{}\mathbf{x}\in\mathbb{D}-\{0^{n}\}\)
* \(\dot{V}(\mathbf{x})\leq 0\)\(\forall~{}\mathbf{x}\in\mathbb{D}\)
The existence of such a function shows that the system is stable. Furthermore, if \(\dot{V}(\mathbf{x})<0\)\(\forall\mathbf{x}\in\mathbb{D}-\{0^{n}\}\), then the system is asymptotically stable, i.e. \(\lim_{t\to\infty}\mathbf{x}(t)=0^{n}\). It must be noted that if the equilibrium point \(\tilde{\mathbf{x}}\) is not the origin, it can be shifted to the origin by a change of variables. ### _Region of Attraction for dynamical systems_ The region of attraction for an equilibrium point is defined as the set \[\mathbb{D}=\{\mathbf{x}_{ini}\in\mathbb{R}^{n}:\lim_{t\to\infty}\varphi(t,\mathbf{x}_{ini})=0^{n}\} \tag{3}\] Finding the exact RoA is a highly complex task; instead, finding an inner estimate of the exact RoA is common practice. A set defined by \(V(\mathbf{x})\leq c\) (\(c>0\)) is called a sublevel set of the LF \(V(\mathbf{x})\), i.e., a set that the solution trajectory \(\mathbf{x}(t)\) cannot exit once it has entered. Therefore, \[\{\mathbf{x}:V(\mathbf{x})\leq c\}\subset\mathbb{D} \tag{4}\] In other words, obtaining the largest estimate of the RoA amounts to finding an appropriate LF and then maximising \(c\). For example, the RoA of the reversed Van der Pol system (5) is presented in Fig. 1, where it is evident that if the initial point is inside the RoA, the system is attracted to the origin. \[\begin{cases}\dot{x}_{1}=-x_{2}\\ \dot{x}_{2}=x_{1}-x_{2}(1-x_{1}^{2})\end{cases} \tag{5}\] ### _Lyapunov function candidate from linearised system_ The nonlinear dynamical system (2) can be approximated by a linear model in a small region around the operating point (i.e. the origin) by small-signal linearisation as follows, \[\Delta\dot{\mathbf{x}}=\mathbf{A}\Delta\mathbf{x} \tag{6}\] where \(\mathbf{A}=\frac{\partial f}{\partial x}|_{\mathbf{x}=0^{n}}\). If \(\mathbf{A}\) is a Hurwitz matrix, then a quadratic LF \(V(\mathbf{x})\) can easily be found by using the linearised model (6) as \[V(\mathbf{x})=\mathbf{x}^{T}\mathbf{P}\mathbf{x} \tag{7}\] where, for any \(\mathbf{Q}\succ 0\), \(\mathbf{P}\succ 0\) is the solution of the Lyapunov equation \[\mathbf{P}\mathbf{A}+\mathbf{A}^{T}\mathbf{P}+\mathbf{Q}=0 \tag{8}\] For example, for the reversed Van der Pol system (5), the linearisation around the origin results in \[\mathbf{A}=\begin{bmatrix}0&-1\\ 1&-1\end{bmatrix} \tag{9}\] which is Hurwitz. Assuming \(\mathbf{Q}=\begin{bmatrix}1&-0.5\\ -0.5&1\end{bmatrix}\succ 0\) and solving the Lyapunov equation (8) results in \[\mathbf{P}=\begin{bmatrix}1&-0.5\\ -0.5&1\end{bmatrix} \tag{10}\] The computed matrix \(\mathbf{P}\) yields the LF (11), whose maximum estimated RoA is presented in Fig. 2. For the LF to be valid, its time derivative should be negative, which is also highlighted. Fig. 2 shows that not only must \(\dot{V}(\mathbf{x})\) be negative, the condition \(V(\mathbf{x})\leq c\) is also essential. \[V(x_{1},x_{2})=\begin{bmatrix}x_{1}&x_{2}\end{bmatrix}P\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}=x_{1}^{2}-x_{1}x_{2}+x_{2}^{2} \tag{11}\] Fig. 1: RoA of the reversed Van der Pol system (5). Fig. 2: Estimated RoA of the reversed Van der Pol system by the energy function constructed in (11).
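The Lyapunov-equation step in Eqs. (8)-(11) is straightforward to verify numerically. The following minimal sketch (not code from the paper; it only assumes NumPy and SciPy are available and uses SciPy's continuous Lyapunov solver) reproduces \(\mathbf{P}\) and the LF (11) for the reversed Van der Pol example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Linearisation of the reversed Van der Pol system (5) around the origin, Eq. (9)
A = np.array([[0.0, -1.0],
              [1.0, -1.0]])

# Q chosen as in the text; any symmetric positive-definite Q is admissible
Q = np.array([[1.0, -0.5],
              [-0.5, 1.0]])

# Solve P A + A^T P + Q = 0, Eq. (8).  SciPy solves  a X + X a^T = q,
# so pass a = A^T and q = -Q to obtain X = P.
P = solve_continuous_lyapunov(A.T, -Q)
print(P)                              # approx. [[1, -0.5], [-0.5, 1]], matching Eq. (10)

# Quadratic Lyapunov function V(x) = x^T P x, Eq. (11)
V = lambda x: x @ P @ x
print(V(np.array([1.0, 1.0])))        # x1^2 - x1*x2 + x2^2 evaluated at (1, 1) -> 1.0
```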
### _Dynamical systems as mappings_ A dynamical system can be thought of as a function that maps the initial conditions to the final conditions after a specific time. Some of the main properties related to dynamical system stability are given by the following definitions [30]-[32]. **Definition 1.**_Uniqueness of the solution of a dynamical system_ - A sufficient condition for the uniqueness of the solution is that the function \(f\) is locally Lipschitz, i.e. it is a continuous function and its derivative with respect to the state variables is bounded [30]: \[\|f(t,x)-f(t,y)\|\leq L\|x-y\| \tag{12}\] It should be noted that this is a weaker condition than differentiability of the function \(f\). **Definition 2.**_Boundary preservation in a homeomorphic mapping_ - The continuous function \(f:X\to Y\) is a homeomorphism [31] if it is bijective and its inverse is also continuous, such that \[f(\partial\mathbb{A})=\partial(f(\mathbb{A})),\forall\mathbb{A}\subseteq X \tag{13}\] which says that if \(f\) is a homeomorphism from \(X\) to \(Y\) and \(\mathbb{A}\) is a subset of \(X\), then the image of the boundary of \(\mathbb{A}\) is equal to the boundary of the image of \(\mathbb{A}\). **Definition 3.**_Reverse-time trajectory_ - If \(f\) is Lipschitz, then a unique solution trajectory for each initial condition is guaranteed. Moreover, if the differential equations are solved backwards in time, then the same unique trajectory is traversed [32]. This means that \(F(\mathbf{x})=\varphi(T,\mathbf{x})\) is an invertible function, where \(T\) is a fixed amount of time. Therefore, the response of a dynamical system after a given time for initial conditions chosen from a closed set \(\mathbb{B}\subset\mathbb{R}^{n}\) will lie in a closed set \(\mathbb{D}\subset\mathbb{R}^{n}\) such that \[\partial\mathbb{D}=\partial(F(\mathbb{B}))=F(\partial\mathbb{B}) \tag{14}\] This is a useful conclusion: simulating a dynamical system numerically for a boundary of initial conditions gives the boundary of the final states, and it is guaranteed that for any initial point inside this boundary, the final response lies inside the computed final boundary. ### _Estimating RoA from reverse-time trajectory_ If the initial point selected is in close proximity to the stable equilibrium point, the uniqueness of the solution guarantees that all points in the reversed trajectory will be attracted to the equilibrium point. Additionally, if the stable equilibrium point is bounded by a limit cycle, it can be identified by reversing the trajectory, but this may require a longer simulation time, as demonstrated in Fig. 3. In power system applications, it is often required that the system reaches a steady state within a specific time frame, known as the settling time [33]. To estimate a time-limited region of attraction (TLRoA) after a disturbance, the system's differential equations are solved backward until the disturbance, assuming a tolerance band (e.g., \(\pm\) 5%) around the equilibrium point. It is crucial that the tolerance band itself be a region of attraction. Therefore, a small region of attraction around the equilibrium point is first identified using linearised analysis. Then, the mapping theory explained in Section II-F is used to transform this region into another region through the backward solution of the original ODEs, as shown in Fig. 4.
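A minimal sketch of this trajectory-reversal idea for the Van der Pol example (again illustrative, not the authors' implementation; the sublevel-set value \(c=0.01\) and the backward horizon of 5 s are arbitrary choices) samples the boundary of the small initial estimate and maps it backwards in time:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # Reversed Van der Pol system, Eq. (5)
    return [-x[1], x[0] - x[1] * (1.0 - x[0] ** 2)]

def f_reverse(t, x):
    # Solving backwards in time is equivalent to integrating -f forwards
    return [-v for v in f(t, x)]

P = np.array([[1.0, -0.5], [-0.5, 1.0]])      # from Eq. (10)
c = 0.01                                       # small sublevel set V(x) <= c around the origin
L = np.linalg.cholesky(P)                      # P = L L^T

# Sample the boundary of the initial estimate {x : x^T P x = c}
thetas = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
boundary0 = [np.sqrt(c) * np.linalg.solve(L.T, np.array([np.cos(t), np.sin(t)]))
             for t in thetas]

# Map each boundary point backwards for T seconds; by Eq. (14) the image of the
# boundary is the boundary of the image, i.e. an enlarged RoA estimate.
T = 5.0
boundary_T = np.array([solve_ivp(f_reverse, (0.0, T), x0, rtol=1e-8, atol=1e-10).y[:, -1]
                       for x0 in boundary0])
print(boundary_T.shape)                        # (200, 2): mapped boundary points
```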
Fig. 3: Reverse-time trajectory for the reversed Van der Pol system (5). Fig. 4: Mapping of the initial states to the final states by the backward solution of the ODEs for the reversed Van der Pol system (5). ## III Transient stability assessment Our previous works [15][27][28] have demonstrated that a type-4 wind turbine (WT) can be simplified to a grid-side converter with a constant DC voltage during grid faults (Fig. 5a), resulting in a current-controlled source (Fig. 5b) with reference values obtained from the grid codes. For large-signal stability analysis, the fast inner current control dynamics can be neglected, and the shunt capacitor filter's impact on stability can be disregarded if the current is controlled on the grid-side LCL filter. A reduced-order WT model in the DQ domain is presented in Fig. 5c, with an SRF PLL for synchronisation. The equivalent swing equation of the WT converter system derived in [28] can be presented as \[M_{eq}\ddot{\delta}=T_{m_{eq}}-T_{e_{eq}}-D_{eq}\dot{\delta} \tag{15}\] where \[\begin{split} M_{eq}&=1-k_{p}L_{g}i_{d}^{c}\\ T_{m_{eq}}&=k_{p}(\overline{r_{Lg}i_{d}^{c}}+\overline{L_{g}i_{d}^{c}}+\dot{\overline{L_{g}i_{d}^{c}}}\omega_{g})+k_{i}(r_{Lg}i_{q}^{c}\\ &\quad+\overline{L_{g}i_{q}^{c}}+L_{g}i_{d}^{c}\omega_{g})\\ T_{e_{eq}}&=(k_{i}V_{g}\sin\delta+k_{p}\dot{V_{g}}\sin\delta)+M\dot{\omega}_{g}\\ D_{eq}&=k_{p}(V_{g}\cos\delta-\overline{L_{g}i_{d}^{c}})-k_{i}L_{g}i_{d}^{c}\end{split} \tag{16}\] Equation (15) represents a second-order nonlinear damped differential equation that is used to model the wind turbine (WT) system. This equation takes into account the time-varying nature of the system parameters, which are represented by the derivatives in (16). The WT system is modelled in a DQ frame rotating at a fixed frequency \(\omega_{0}\). The equation includes several variables and constants: \(k_{p}\) and \(k_{i}\) are the PLL controller gains, \(i_{d}^{c}\) and \(i_{q}^{c}\) are the converter currents in the converter reference frame, \(r_{Lg}\) and \(L_{g}\) form the grid impedance, and \(V_{g}\) and \(\omega_{g}\) are the grid voltage and frequency. Table 1 presents the operating point of the WT converter system considered in this study. ### _Estimating the TLRoA_ #### III-A1 Lyapunov function candidate from linearised system The first step in estimating the WT system boundary is obtaining an initial RoA, which is carried out by constructing a Lyapunov function from the linearised equations of the WT system (15). In [15], the system (15) was linearised around \(\tilde{\textbf{x}}\), which gives \[\mathbf{A}=\frac{\partial f}{\partial x}\bigg|_{x=\tilde{x}}=\begin{bmatrix}0&1\\ \frac{\mp k_{i}V_{g}}{1-k_{p}L_{g}i_{d}^{c}}\sqrt{1-\gamma^{2}}&\frac{k_{i}L_{g}i_{d}^{c}\mp k_{p}V_{g}\sqrt{1-\gamma^{2}}}{1-k_{p}L_{g}i_{d}^{c}}\end{bmatrix} \tag{17}\] Assuming \(\mathbf{Q}\) to be the identity matrix, \(\mathbf{P}\) can be computed from (8). Further, similar to (11), the LF constructed from the linearised WT system is presented in (18). Fig. 6 illustrates the estimated initial RoA for a selected energy level set of 0.001. It must be noted that, for simplicity, the equilibrium point in Fig. 6 was shifted to the origin. \[V(\textbf{x})=ax_{1}^{2}+bx_{1}x_{2}+cx_{2}^{2} \tag{18}\] where \(a=49.66\), \(b=0.0026\), and \(c=0.129\).
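Assuming the coefficients just quoted, the boundary of the initial estimate \(\{\mathbf{x}:V(\mathbf{x})=0.001\}\) can be sampled to provide the initial conditions for the backward solution described next. A small illustrative sketch (not the authors' code):

```python
import numpy as np

# Quadratic LF of Eq. (18) written in matrix form V(x) = x^T P x
a, b, c = 49.66, 0.0026, 0.129
P = np.array([[a, b / 2.0],
              [b / 2.0, c]])
level = 0.001                      # energy level set chosen for the initial RoA

# Points on the ellipse x^T P x = level (equilibrium shifted to the origin)
L = np.linalg.cholesky(P)          # P = L L^T
N = 186                            # number of boundary samples quoted in the paper
thetas = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
boundary = np.array([np.sqrt(level) * np.linalg.solve(L.T, np.array([np.cos(t), np.sin(t)]))
                     for t in thetas])

# Each row of `boundary` is an initial condition (delta, d(delta)/dt) for the
# reverse-time solution of the swing equation (15).
print(boundary.shape)              # (186, 2)
```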
#### III-A2 Backward solution and trajectory reversing In order to estimate the TLRoA, the system (15) can be solved backwards in time using the initial conditions obtained from the initial RoA boundary, which was determined in Fig. 6. For this study, the system was solved backwards for 2.25 seconds, which is the time for the oscillations to dampen out. After this period, the system was further solved backwards until the post-fault active current was ramped down to the fault clearing time. The estimated TLRoA with a post-fault active current ramp rate of 28.4 kA/s, as given in (15), is shown in Fig. 7. Unlike for the RoA, for the TLRoA all points on the boundary reach the equilibrium point at the same time; therefore, the enclosed TLRoA is a subset of the actual RoA. To generate the smooth TLRoA in Fig. 7, the number of samples (initial conditions) is N=186. For a similar system, generating the actual RoA through forward simulations required N=3213 [27]. This brings out the advantage of our proposed method. It must be noted that one should ensure enough samples are taken from the boundary of the initial RoA to obtain a smooth boundary for the set of final conditions. This limitation is a known issue in numerical computations, and there are adaptive sampling techniques to reduce the step size when there are large variations in a function. The methodology to choose the sampling rate is not the focus of this paper and will be addressed in future publications. ### _Transient stability assessment methodology_ The primary question in assessing transient stability is whether a power system can return to equilibrium following a disturbance. To address this, a hybrid approach is proposed, where the post-disturbance system is represented by the estimated TLRoA, and a forward time-domain simulation is carried out to observe the system behaviour during the disturbance. Fig. 8 shows the simulation of a balanced fault (severe grid voltage dip) for an extended period with \(V_{g}=0\) pu, \(i_{d}=0.01\) pu, and \(i_{q}=-1\) pu. As pointed out in [28], there will be jumps in the PLL angle and frequency after fault clearance. Therefore, an additional curve (red) is calculated in Fig. 8 that depicts the PLL angle and frequency including the jumps that occur if the fault is cleared at that time. Based on Section III-A, Fig. 9 presents the estimated TLRoA for the system (15) with two different post-fault active current ramp rates, 28.4 kA/s and 42.6 kA/s. As expected, the system with a faster ramp rate has a smaller TLRoA [15]. Additionally, the red curve from Fig. 8 is overlaid on the estimated TLRoA in Fig. 9 in delta-omega coordinates, with the PLL angle reset to \(\pi\) when it reaches \(-\pi\). This is to eliminate the illustration of neighbouring RoAs for equilibrium points that repeat every \(2\pi\). Fig. 5: Wind turbine model: (a) Full topology of the Type-4 wind turbine system, highlighting the actions/assumptions during faults. (b) Reduced-order model (ROM) of the Type-4 wind turbine considering the actions/assumptions, also showing the synchronisation instability of the wind turbine system during grid faults. (c) System representation of the ROM in the DQ domain. Fig. 6: RoA estimate by the linearised model (15) with the equilibrium point shifted to the origin. Fig. 7: Estimated TLRoA for a post-fault active current ramp rate of 28.4 kA/s. Fig. 9: Proposed transient stability assessment method.
For assessing transient stability, it is proposed that clearing the fault at any point along the fault trajectory (red line) inside the TLRoA will guarantee the system's attraction to its post-fault equilibrium point. For instance, if the fault is cleared before reaching the 'yellow triangle' in Fig. 9, then both systems with active current ramp rates of 28.4 kA/s and 42.6 kA/s will be stable. If the fault persists beyond the 'yellow triangle' (but not beyond the 'yellow pentagon'), then only the system with an active current ramp rate of 28.4 kA/s will be stable. Similarly, if the fault persists beyond the 'yellow pentagon' (but not beyond \(-\pi\)), then both systems will become unstable. Moreover, if the fault persists until the 'yellow square', both systems will be stable again. Thus, it is observed that the fault trajectory exits and re-enters the TLRoA multiple times, suggesting that the WT system can be stable if the fault is cleared at a later time, indicating that the WT system has multiple critical clearing times. The times at which the fault trajectory reaches the critical points, indicated by the 'yellow triangle' and 'yellow pentagon', can be read off from Fig. 8, with the x-axis showing the clearing times. The clearing times and the resulting system stability will later be verified using actual EMT WT models. ## IV Time-domain verification In this section, the proposed transient stability assessment method is evaluated through time-domain simulations using an EMT WT switching model in PSCAD. The EMT model is designed based on the configuration described in [28], where the current controller gains are adjusted to achieve a fast response. The system stability against the fault clearing times obtained from Fig. 8 will be validated. Figures 10 and 11 show the PSCAD time-domain simulations indicating the clearing time for the WT systems with post-fault active current ramp rates of 28.4 kA/s and 42.6 kA/s, respectively. It is observed that the clearing times obtained from Fig. 8 and Fig. 9 are consistent with the results obtained from the PSCAD simulations. Additionally, it can be seen that a later clearing time helps the system to regain stability, which is again consistent with the results obtained from our methodology. Overall, the proposed methodology can be applied with a high level of confidence for investigating the transient stability of WTs. The traditional method of estimating the post-fault system RoA through forward-time simulation involves guessing initial conditions, resulting in either a stable or an unstable trajectory. In contrast, our proposed methodology only solves stable trajectories in reverse time, resulting in a quicker and more efficient estimation of the RoA. While some efforts have been made to analytically estimate the RoA of a WT with non-autonomous behaviour, such as ramps in active current, these methodologies can be complex and conservative. In contrast, our proposed methodology utilizes a simplified analytical LF to estimate the initial conditions for the reverse-time trajectory solutions. As a result, the industry can adopt this methodology without requiring complex mathematical analysis. Our proposed methodology can enable power system operators and wind farm owners to take advantage of multiple critical clearing times for WT systems. By clearing faults later, uninterrupted supply from the WTs can be achieved, which benefits both parties. This can motivate the development of new power system protection philosophies and smart relays, which can enhance the overall stability and reliability of the power system.
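In practice, the stability check of Section III-B reduces to testing whether the system state at a candidate fault-clearing time lies inside the estimated TLRoA polygon. A minimal illustration (the boundary and test points below are hypothetical placeholders; real inputs would come from the reverse-time solution and the fault simulation of Fig. 8):

```python
import numpy as np
from matplotlib.path import Path

def is_stable_at_clearing(tlroa_boundary, state_at_clearing):
    """True if the state (delta, omega) at fault clearing lies inside the TLRoA polygon."""
    return Path(tlroa_boundary, closed=True).contains_point(state_at_clearing)

# Hypothetical elliptical TLRoA boundary (delta in rad, omega in rad/s), for illustration only
theta = np.linspace(0.0, 2.0 * np.pi, 100)
tlroa_boundary = np.column_stack((1.5 * np.cos(theta), 40.0 * np.sin(theta)))

print(is_stable_at_clearing(tlroa_boundary, (0.5, 10.0)))   # True  -> this clearing time is safe
print(is_stable_at_clearing(tlroa_boundary, (3.0, 80.0)))   # False -> clearing here loses synchronism
```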
## V Conclusion This work extends our research on nonlinear modelling and transient stability assessment of wind power plants (WPPs) and presents a methodology for transient stability assessment of a WT system with non-autonomous behaviour. The conclusions of the paper are as follows:
1. A hybrid approach based on an energy function and the reverse time-trajectory provides a good estimate of the post-fault system boundary (RoA), where it was observed that the clearing times obtained from the proposed method are consistent with the results obtained from the PSCAD simulations.
2. This work brings out the clear distinction between the system boundaries with different post-fault active current ramp rate controls, i.e. a system with a faster post-fault active current ramp rate has a smaller RoA.
3. A new perspective on critical clearing times for wind turbine systems was discussed, which showed that sometimes a later clearing time helps the system to regain stability, motivating the development of new power system protection philosophies and smart relays, which can enhance the overall stability and reliability of the power system.
Fig. 10: Verification of the identified critical clearing time by PSCAD simulations - recovery ramp rate of 28.4 kA/s. Fig. 11: Verification of the identified critical clearing time by PSCAD simulations - recovery ramp rate of 42.6 kA/s.
2304.01756
Comparing planar quantum computing platforms at the quantum speed limit
An important aspect that strongly impacts the experimental feasibility of quantum circuits is the ratio of gate times and typical error time scales. Algorithms with circuit depths that significantly exceed the error time scales will result in faulty quantum states and error correction is inevitable. We present a comparison of the theoretical minimal gate time, i.e., the quantum speed limit (QSL), for realistic two- and multi-qubit gate implementations in neutral atoms and superconducting qubits. Subsequent to finding the QSLs for individual gates by means of optimal control theory we use them to quantify the circuit QSL of the quantum Fourier transform and the quantum approximate optimization algorithm. In particular, we analyze these quantum algorithms in terms of circuit run times and gate counts both in the standard gate model and the parity mapping. We find that neutral atom and superconducting qubit platforms show comparable weighted circuit QSLs with respect to the system size.
Daniel Basilewitsch, Clemens Dlaska, Wolfgang Lechner
2023-04-04T12:47:00Z
http://arxiv.org/abs/2304.01756v1
# Comparing planar quantum computing platforms at the quantum speed limit ###### Abstract An important aspect that strongly impacts the experimental feasibility of quantum circuits is the ratio of gate times and typical error time scales. Algorithms with circuit depths that significantly exceed the error time scales will result in faulty quantum states and error correction is inevitable. We present a comparison of the theoretical minimal gate time, i.e., the quantum speed limit (QSL), for realistic two- and multi-qubit gate implementations in neutral atoms and superconducting qubits. Subsequent to finding the QSLs for individual gates by means of optimal control theory we use them to quantify the circuit QSL of the quantum Fourier transform and the quantum approximate optimization algorithm. In particular, we analyze these quantum algorithms in terms of circuit run times and gate counts both in the standard gate model and the parity mapping. We find that neutral atom and superconducting qubit platforms show comparable weighted circuit QSLs with respect to the system size. ## I Introduction Quantum computers promise to solve computational problems that are deemed hard or even intractable for classical computers. Their potential applications include prime-factoring of large integers [1], quantum simulation [2], quantum chemistry [3], combinatorial optimization [4], and even problems in finance [5]. Currently, quantum computing is in the so-called noisy intermediate-scale quantum (NISQ) era [6], characterized by imperfect qubit control, and qubit numbers that prohibit quantum error correction [7] for relevant problem sizes. Nevertheless, recent proof-of-principle experiments [8; 9; 10; 11] demonstrated that a computational quantum advantage over classical computers can be reached already with NISQ hardware. However, it remains a crucial challenge to go beyond the proof-of-principle stage, i.e., to demonstrate a quantum advantage for practically relevant computational tasks on resource-limited present-day devices. To reach a practical quantum advantage regime [2] in NISQ-era digital quantum computing it is of crucial importance to execute quantum algorithms as efficiently as possible in order to minimize the time for noise mechanisms to impair the quantum information processing. This effectively makes a minimization of the quantum algorithm run times and gate counts desirable -- a task that can be addressed in various ways. One option is to find an algorithm's optimal circuit representation, i.e., a circuit requiring a minimal circuit depth together with a minimal gate count for a given set of available gates. Quantum circuit optimization has, e.g., been done heuristically [12; 13] or by machine learning techniques [14; 15] and several open-source packages are readily available [16; 17; 18]. Another option for minimizing the algorithm run time is to minimize the time for each elemental quantum gate of a given quantum circuit. While protocols for fast, high-fidelity quantum gates are nowadays routinely available and implemented on all major quantum computing platforms like neutral atoms [19; 20], superconducting circuits [8; 21] or trapped ions [22; 23], the ideal would be to execute every quantum gate at its quantum speed limit (QSL). In general, the QSL denotes the shortest time needed to accomplish a given task [24]. It constitutes a fundamental limit in time and depends on the system under consideration, i.e., its Hamiltonian and the control knobs available to steer the dynamics. 
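As a simple textbook illustration of such a limit (not specific to the multi-level gate protocols considered below): for a resonantly driven two-level system whose Rabi frequency is bounded by \(\Omega_{\max}\), a complete population inversion (an X gate up to a global phase) requires a pulse area of \(\pi\), so the protocol duration obeys \[\int_{0}^{T}\Omega(t)\,\mathrm{d}t=\pi\quad\Longrightarrow\quad T\geq T_{\mathrm{QSL}}=\frac{\pi}{\Omega_{\max}}.\]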
Here, we determine the QSLs of quantum gates for two major quantum computing platforms that allow for two-dimensional (2D) qubit arrangements -- neutral atoms and superconducting circuits [25]. Our study reveals how close current experimental gate protocols are to their QSLs and thus exemplifies what can theoretically still be gained from further speeding up gate protocols. Moreover, provided that every gate could be experimentally realized at the QSL, our analysis gives an estimate of how many gates can be executed realistically before decoherence takes over and renders longer quantum circuits practically infeasible. To this end, we consider two prototypical quantum computing algorithms: (i) the quantum Fourier transform (QFT), required for Shor's algorithm for integer factorization [1], and (ii) the quantum approximate optimization algorithm (QAOA), used to solve combinatorial optimization problems [4]. Considering standard NISQ devices for both neutral atoms and superconducting circuits with qubits arranged in a 2D grid architecture with only nearest-neighbor connectivity, we calculate the circuit run times with gates at the QSL for both algorithms. This allows for a direct comparison of both platforms in terms of the maximal problem sizes that should currently be feasible on their NISQ representatives. A common challenge arising in 2D platforms with nearest-neighbor connectivity is the requirement to perform gates between non-neighboring qubits. In the standard gate model (SGM), such gates can be replaced by sequences of universal single- and two-qubit gates using the available local connectivity. However, this comes at the price of increasing the circuit depths and gate counts. As an alternative to the SGM, we also examine circuit representations using the so-called parity mapping (PM). In brief, the PM for quantum computing [26] and quantum optimization [27; 28] is a problem-independent hardware blueprint that only requires nearest-neighbor connectivity at the cost of increased qubit numbers. Since for QAOA circuits in the PM it is beneficial to use local three- and four-qubit gates [29; 30], we also determine their QSLs on both platforms. It should be noted that this work focuses entirely on the determination of the QSLs for various quantum gates and how to turn these into a fair comparison of circuit run times across platforms. A more detailed discussion of gate protocols for specific quantum gates or platforms can, e.g., be found in Refs. [31; 32; 33] for neutral atoms or in Refs. [34; 35; 36] for superconducting circuits. Moreover, it should be noted that we only consider the gate times and circuit run times, as well as the number of gates, as indicators for the feasibility of quantum circuits. While both quantities are doubtlessly important, they are by no means the only quantities impacting a circuit's feasibility. A holistic figure of merit assessing a circuit's feasibility would also need to account for state preparation and measurement errors and various other error sources. The paper is organized as follows. In Sec. II we present our main result, i.e., an overview of circuit run times when using gate times from the literature and gate times at the QSL -- evaluated both for neutral atoms and superconducting circuits as well as for circuits in the SGM and the PM. In Sec. III we then present the details of our numerical model.
Section IV introduces the basic notion of QSLs and how quantum optimal control theory (OCT) can be used for determining QSLs. A detailed discussion of QSLs, as well as which combinations of available control fields allow one to reach the QSLs, is given in Sec. V. Section VI concludes. ## II Main result: quantum algorithms at the quantum speed limit In this section, we compare the run times of QFT and QAOA quantum circuits using gate set implementations available on neutral atom and superconducting circuit hardware. To do so we consider two scenarios. In the first scenario, we use literature values for gate times of state-of-the-art gate implementations of the minimal universal gate set (referred to as "standard gate set" (SGS) from now on) native to each platform, which thus represents the canonical way of converting quantum algorithms into executable quantum circuits. In the second scenario, we use an extended set of gates available at each platform with gate times at the QSL (referred to as "QSL gate set" (QGS) from now on), which thus yields the current fundamental limits in circuit run times. We describe both gate sets in Sec. II.1 and use them to analyze circuit run times of QFT and QAOA quantum circuits both in the SGM in Sec. II.2 and the PM in Sec. II.3. A brief review of the basic concepts and quantum circuits of a QFT and a single QAOA step within the SGM and the PM is given in Appendix A. ### Standard and QSL gate sets In the SGM, each quantum algorithm is converted into quantum circuits using gates from a universal set of quantum gates. Such a universal set typically contains all the single-qubit gates and at least one entangling two-qubit gate, e.g., the CNOT gate [38]. The row named "standard gate set" in Table 1 summarizes the native universal gate sets and their typical gate times for both platforms at comparable gate fidelities. For neutral atoms, we consider the controlled-Z gate, \(\text{CZ}=\text{diag}\{1,1,1,-1\}\), as the typical entangling two-qubit gate. This is a common choice for neutral atoms [20] as it has been successfully used for implementing quantum algorithms [39; 40]. For superconducting circuits, we have chosen the iSWAP-like Sycamore gate as an entangling two-qubit gate, motivated by its short gate time and successful usage in recent quantum advantage experiments with quantum processors based on tunable couplers [8; 9]. In addition, we consider an extended, platform-independent set of quantum gates operated at the QSL (see "QSL gate set" in Table 1). This set consists of all single-qubit gates as well as several multi-qubit gates implementable on both platforms: CZ, CNOT, SWAP, ZZZ, and ZZZZ.

Table 1: Overview of different gate sets and corresponding gate times on neutral atoms and superconducting circuits. While the row "standard gate set" (SGS) represents typical gates and times used on the respective platform, the multi-qubit gates in the row named "QSL gate set" (QGS) correspond to an extended gate set with gate times at the QSL.

| Gate set | Neutral atoms: gate | Neutral atoms: time (ns) | Superconducting circuits: gate | Superconducting circuits: time (ns) |
| --- | --- | --- | --- | --- |
| standard gate set (SGS) | local | \(\sim\!1000\) [20] | local | 25 [37] |
| | CZ | 350 [20] | Sycamore | 12 [37] |
| QSL gate set (QGS) | local | \(\sim\!1000\) [20] | local | 25 [37] |
| | CNOT\* | 300 | CNOT | 14 |
| | CZ | 350 | CZ | 10 |
| | SWAP | 400 | SWAP | 12 |
| | ZZZ | 600 | ZZZ | 24 |
| | ZZZZ | 600 | ZZZZ | 80 |

\* The CNOT gate is listed for completeness but not used in any circuits for neutral atoms, see Sec. V for details.
The availability of a wider range of gates allows for more flexibility in finding circuit representations with fewer gates and shorter circuit run times. In addition, we also assume that every gate in this set is executed at the QSL, which allows for another speed-up of circuit run times. The run times obtained for the QGS should therefore be viewed as an estimate for the QSL of the circuit itself, i.e., the circuit QSL -- provided that no other, potentially faster and/or better suited gates are available [41]. Note that the details regarding the method to determine the QSLs and their results are presented in Secs. IV and V. The row "QSL gate set" in Table 1 summarizes the gates of the extended set and lists the QSL times for both platforms. Table 1 indicates that absolute circuit run times for neutral atoms will be longer compared to those for superconducting circuits since their elemental gate times differ by more than an order of magnitude. However, in order to ensure a fair comparison of circuit run times, the absolute gate times need to be weighted by the finite coherence time and lifetime of qubits and other levels involved in the gate mechanisms. For neutral atoms, we take both the coherence time of the qubit states, given by the dephasing time \(T_{2}^{*}=4\,\mathrm{ms}\) [39], and the lifetime of the Rydberg state, \(T_{\mathrm{Ryd}}=150\,\mu\mathrm{s}\) [39], as typical error time scales against which we compare circuit run times. We take the former as reference for single-qubit gates, since they don't occupy the Rydberg level [42], and the latter for two-qubit gates, where controlled transitions via Rydberg levels constitute the primary gate mechanism [43]. For superconducting circuits, we take their intrinsic \(T_{1}\) time, \(T_{1}=15\,\mu\mathrm{s}\) [8], as the typical error time scale since it applies to both single- and two-qubit gates. However, note that these choices are rather conservative. For neutral atoms, any two-qubit gate dynamics will naturally also involve the qubit levels, which have much longer coherence times. Weighting two-qubit gates exclusively by the Rydberg lifetime thus overestimates the lifetime-induced error probability. For superconducting circuits, longer \(T_{1}\) times up to \(500\,\mu\mathrm{s}\) have already been reported [44]. Figure 1: Comparison of the circuit run times and gate counts for the quantum Fourier transform (QFT) and quantum approximate optimization algorithm (QAOA) between neutral atoms and superconducting circuits. The left (right) column corresponds to the QFT (QAOA) and the orange (purple) markers correspond to neutral atoms (superconducting circuits). The circuit run times for both algorithms and various problem instances of different size, \(N=9,16,\ldots,81\) qubits for the QFT and \(N=9,16,\ldots,121\) qubits for the QAOA, are given in panels (a) and (c), respectively. The numbers of two- and multi-qubit gates in the corresponding circuits are given in panels (b) and (d), respectively. The data for the squares [circles] is obtained using the standard gate model (SGM) using the gates and times from the standard gate set (SGS) [QSL gate set (QGS)], cf. Table 1. In contrast, the plus signs [crosses] represent the results when the same algorithms are realized in the parity mapping (PM) using the SGS [QGS]. The pentagons correspond to an altered 2D architecture allowing for next-to-nearest neighbor (NNN) coupling between qubits. The run time for each circuit is given in units of typical platform-specific error times. The gray area in panels (a) and (c) indicates where the circuit run times exceed the error times.
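Combining the gate times of Table 1 with these error time scales, the weighted run time of a compiled circuit can be estimated from its gate counts. The sketch below is purely illustrative bookkeeping (not the authors' code): it assumes serial gate execution, which upper-bounds the depth-based run time, and the example gate counts are hypothetical.

```python
# Gate times (ns) from Table 1 and the error time scales quoted in the text
GATE_TIMES_NS = {
    "neutral_atoms":   {"local": 1000, "CNOT": 300, "CZ": 350, "SWAP": 400,
                        "ZZZ": 600, "ZZZZ": 600},
    "superconducting": {"local": 25, "CNOT": 14, "CZ": 10, "SWAP": 12,
                        "ZZZ": 24, "ZZZZ": 80, "Sycamore": 12},
}
ERROR_TIMES_NS = {
    # single-qubit gates weighted by T2* (atoms) or T1 (transmons),
    # multi-qubit gates weighted by the Rydberg lifetime or T1
    "neutral_atoms":   {"single": 4e6,  "multi": 150e3},   # 4 ms, 150 us
    "superconducting": {"single": 15e3, "multi": 15e3},    # 15 us
}

def weighted_run_time(platform, gate_counts):
    """Sum of gate times, each divided by the platform's relevant error time scale.
    Assumes serial execution, i.e. an upper bound on the depth-based run time."""
    total = 0.0
    for gate, count in gate_counts.items():
        kind = "single" if gate == "local" else "multi"
        total += count * GATE_TIMES_NS[platform][gate] / ERROR_TIMES_NS[platform][kind]
    return total

# Hypothetical gate counts for a small compiled circuit (illustrative numbers only)
counts = {"local": 120, "CZ": 60, "SWAP": 20}
print(weighted_run_time("neutral_atoms", counts))     # ~0.22 error times
print(weighted_run_time("superconducting", counts))   # ~0.26 error times
```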
### Circuit times in the standard gate model In the following, we calculate the circuit run times for QFTs and single QAOA steps in the SGM using gates from both the SGS and the QGS. As already outlined above, we assume both hardware platforms to consist of qubits arranged in 2D arrays with only nearest-neighbor connectivity (see Sec. III for details regarding the model). This requires replacing all gates between non-connected qubits with gate sequences between physically connected qubits. #### II.2.1 Standard gate set circuits Using gates from the SGS, the circuit's gate sequences consist of single-qubit gates plus CZ gates in case of neutral atoms and single-qubit gates plus Sycamore gates in case of superconducting circuits (cf. Table 1). Since, by construction, the circuit for the QFT does not require any gate between non-neighboring qubits [45], we only replace the controlled-phase gate and SWAP gate by gate sequences from the SGS. The situation changes for the QAOA circuits, as they assume an all-to-all connected architecture, and finding a circuit representation with minimal depth while only requiring nearest-neighbor connectivity is likely NP-hard [46]. As a remedy, we use the open-source _pytket_ compiler [16] to translate the QAOA's bare quantum circuits to executable quantum circuits that are in agreement with the hardware's nearest-neighbor connectivity and SGSs. We furthermore use the compiler's quantum circuit optimization features to optimize the circuits in order to minimize gate counts and circuit depths. The squares in Fig. 1 (a) and (c) show the resulting run times for quantum circuits corresponding to QFTs and QAOA steps, respectively, when executed on neutral atoms (orange) and superconducting circuits (purple). Within each line, the size of the problem instance increases from the lower left to the upper right. Note that for a fair comparison between platforms, all gate times have been weighted by each platform's intrinsic error time scales as described previously. Circuits with run times significantly exceeding the platform's intrinsic, level-dependent error time scales, highlighted by the gray area in Fig. 1 (a) and (c), will most likely yield unreliable results. We observe that despite the different time scales for gates, the weighted circuit run times are almost identical for both platforms and for both the QFT and QAOA, see squares in Fig. 1 (a) and (c), respectively. However, only problem instances with relatively small qubit numbers seem to be currently doable on both platforms. While the circuit run time is one deciding factor in whether a circuit is feasible on current NISQ hardware, this measure neglects the fact that every gate comes with an intrinsic error probability. Assuming gate errors on the order of \(0.1-1\%\), which are realistic both for neutral atoms [20, 40] and superconducting circuits [37], it is clear that also the gate count limits the feasibility of quantum circuits and lower gate counts are thus preferable.
To this end, the squares in Fig. 1 (b) and (d) show the number of two-qubit gates required for QFT and single-step QAOA circuits, respectively. Note that circuit representations in terms of the SGS require the same number of two-qubit gates both on neutral atom and superconducting qubit hardware. Hence, both cases are represented by a single line in Figs. 1 (b) and (d). #### II.2.2 QSL gate set circuits The circles in Fig. 1 (a) and (c) show the circuit run times for the same quantum algorithms and complexity levels as used for the squares but when the QGS is employed to generate circuit representations. The corresponding two-qubit gate counts are illustrated by the circles in Fig. 1 (b) and (d). We observe an overall reduction in circuit run times and gate counts for both platforms, both algorithms and any considered problem instance. This improvement is a combined effect of using maximally fast quantum gates at the QSL and taking advantage of the increased flexibility provided by the extended set of gates. Especially the availability of SWAP gates needs to be stressed as it directly reduces the gate count and thus leads to a reduction in circuit run time even without an additional speedup in gate times. In order to quantify the improvements achievable through the QGS, Fig. 2 (a) lists the average reduction in circuit run times and gate counts for each algorithm and platform. Our analysis demonstrates to which extent circuit run times and gate counts can -- from a theoretical perspective -- still be improved if standard gates are replaced by an extended gate set with gate times at the QSL. However, even with the improved gate set and its reduction in run time and gate count, most of the quantum circuits, i.e., circles in Fig. 1, remain likely infeasible using current NISQ hardware. Quantum error correction codes could in principle address the issue of circuit run times and gate counts exceeding the limits set by finite lifetimes and gate errors. Nevertheless, since both lifetimes [44, 47] and qubit numbers [48] are constantly increasing, more complex quantum circuits will likely reach the feasible regime in the near future. Figure 2: Reduction in weighted circuit run times and (two- and multi-qubit) gate counts for the scenarios and data shown in Fig. 1. In panels (a) and (b) the circuits in the standard gate model (SGM) and parity mapping (PM) are compared when using the "QSL gate set" (QGS) instead of the "standard gate set" (SGS). In panels (c) and (d) the circuits in the SGS and QGS are compared when using the PM instead of the SGM. All presented numbers are the average over problem instances of different sizes, i.e., different numbers of qubits. ### Circuit times in the parity mapping Besides the representation of quantum algorithms using the SGM, we now consider the representation of the same algorithms within the PM. While the PM was originally designed to tackle combinatorial optimization problems via quantum annealing [27], it can also be utilized for digital quantum optimization algorithms such as QAOA [30] as well as to achieve universal quantum computing [26]. At its core, the PM circumvents the need for long-range interactions between qubits, which in turn renders gates between non-adjacent qubits obsolete. However, this comes at the expense of requiring more physical qubits and many-body constraints on \(2\times 2\) plaquettes of qubits as specified in detail in Appendix A.2. Similar to our analysis for the SGM (see Sec. II.2), we use either gates from the SGS or the QGS in the following.
#### II.3.1 Standard gate set circuits The plus signs in Fig. 1 (a) [(b)] show the circuit run times [two-qubit gate counts] of QFTs in the PM for the same problem instances as used for the SGM (squares). In both cases and for both platforms, the gates and times of the SGS have been used. We observe a reduction in circuit run times and gate counts for both platforms, with the comparison being made with respect to the values for the SGM [see also Fig. 2 (c)]. The same comparison can be done for the QAOA steps, with their circuit run times and gate counts given by the plus signs in Fig. 1 (c) and (d), respectively. Regarding single-step QAOA resource requirements, we observe a significant reduction in circuit run times [see also Fig. 2 (c)] due to the constant circuit depth in the PM [29, 30]. However, note that the circuit depth differs for neutral atoms and superconducting circuits due to the different coupling mechanisms between qubits, with neutral atoms having the deeper circuits, cf. Appendix A.2. The reduction in circuit run time is accompanied by an increase in two-qubit gate counts compared to the SGM, which we attribute to the decomposition of the constraint gates, cf. Eq. (10), into the natively available gates within each platform. #### II.3.2 QSL gate set circuits The crosses in Fig. 1 show the results for circuit representations in the PM when employing gates and times from the QGS. For the QFT on neutral atoms, we do not observe any further reduction in circuit time or gate counts compared to its representation utilizing the SGS. This is because its circuit representations [49] for neutral atoms contain only single-qubit and CZ gates -- gates for which the gate times are identical within the SGS and the QGS. In contrast, for superconducting circuits, we still observe an improvement in circuit time since the CZ gate becomes directly available in the QGS and must no longer be replaced by Sycamore and single-qubit gates. However, the representations in the SGS and QGS only differ by single-qubit gates and hence their two-qubit gate count is identical and also identical to that of the neutral atoms. This is reflected in Fig. 1 (c) by a single line of superimposed plus signs and crosses. For the single-step QAOA circuits in the PM, we observe a significant reduction both in circuit run time and gate counts on both platforms when the QGS is used instead of the SGS. This is due to the availability of the multi-qubit gates \(\text{ZZZ}(\gamma)\) and \(\text{ZZZZ}(\gamma)\), cf. Eq. (10), where the gate-count reduction originates from avoiding single- and two-body gate decompositions. In addition, it turns out to be much faster to use control pulses that directly implement these multi-qubit gates as opposed to serially applying control pulses to implement the required single- and two-qubit gates. Figure 2 (b) summarizes the average run time and gate count reductions (in terms of two- and multi-qubit gates) when replacing the SGS with the QGS within the PM. In contrast to panels (a) and (b) of Fig. 2, where the gate set changes and the circuits stay in the SGM or PM, panels (c) and (d) examine the opposite scenario, i.e., the gate sets are kept constant but the circuit models change. In detail, Fig. 2 (c) and (d) show the average circuit run time and gate count reduction when the SGM is replaced by the PM while the gate set is given by the SGS and QGS, respectively.
Especially Fig. 2 (d) needs to be emphasized as it reveals to which extent the PM allows circuit run times and gate counts to be reduced compared to representations of the same circuits in the SGM. As mentioned above and detailed in Appendix A.2, the PM-specific improvements come at the sole expense of requiring more qubits. For the current state of NISQ hardware, and independent of the platform, we believe the SGM to be better suited for QFTs, as more problem instances seem to be feasible, judging by circuit run time and required qubits. In contrast, for QAOA using the QGS, we find the described parity representation of circuits (despite its overhead-induced inferior success probabilities compared to direct SGM-QAOA [50]) to be an advantageous option, as the run time and gate count are drastically reduced for each QAOA step compared to the SGM. In particular, the need for more PM-QAOA steps compared to SGM-QAOA steps might be compensated for by the resource reduction per PM-QAOA step. Given the ongoing upscaling of quantum computers in terms of qubit numbers, see, e.g., IBM's quantum roadmap [48], the PM seems to be a viable option for QAOA. As a final remark, it should be noted that we only discussed optimization problems with all-to-all connectivity, cf. Eq. (16). However, many realistic optimization problems have rather sparse connectivity, which would lead to a significant reduction in required qubits [28] while maintaining the PM's strength of constant circuit depth. ## III Modeling Neutral Atom and Superconducting Circuit Platforms In Sec. II we have presented the main result of our work -- namely a calculation and comparison of circuit run times and gate counts using neutral atoms and superconducting circuits as quantum computing platforms. In this section, we now introduce the detailed physical models used for both platforms. Since our focus is the description of dynamics, we concentrate on the respective Hamiltonians, including the various control knobs typically available to steer the systems and implement quantum gates. ### Neutral atoms Arrays of trapped neutral atoms laser-coupled to highly excited Rydberg states are a promising platform for quantum computing [51; 52] and quantum simulation [53], as qubits can, for example, be encoded in long-lived hyperfine ground states. High-fidelity single-qubit gates can be achieved using microwave fields [54], two-photon Raman transitions [55], or a combination of microwaves and gradient fields for individual-qubit addressing [56; 57; 58]. In contrast, entangling operations between atoms, i.e., many-body gates, are typically realized via strongly interacting Rydberg levels [59; 60], and various control schemes for two- and multi-qubit gates have been experimentally demonstrated [61; 62; 63; 20; 64]. Such gates have been used in recent experiments [39; 40] with up to several hundreds of atoms arranged in a planar geometry. If we consider the smallest building block of such a 2D array, it consists of \(N=4\) atoms with each atom described by three relevant levels.
In the rotating frame, its Hamiltonian reads (\(\hbar=1\))
\[\mathsf{H}(t)=-\sum_{n=1}^{N}\Delta_{n}(t)\ket{r_{n}}\bra{r_{n}}+\sum_{\begin{subarray}{c}n,m=1\\ n<m\end{subarray}}^{N}V_{nm}\ket{r_{n}r_{m}}\bra{r_{n}r_{m}}\]
\[+\frac{1}{2}\sum_{n=1}^{N}\sum_{l=\downarrow,\uparrow}\Big{[}\Omega_{l,n}(t)e^{\mathrm{i}\varphi_{l,n}(t)}\ket{r_{n}}\bra{l_{n}}+\mathrm{H.c.}\Big{]}, \tag{1}\]
where \(\ket{\downarrow_{n}}\) and \(\ket{\uparrow_{n}}\) denote the two qubit states of atom \(n\) and \(\ket{r_{n}}\) its Rydberg state. Here, \(\Delta_{n}(t)\) is the detuning of the Rydberg level of atom \(n\), \(V_{nm}\) the van-der-Waals interaction between atoms \(n\) and \(m\) when both occupy their Rydberg levels, and \(\Omega_{l,n}(t)\) and \(\varphi_{l,n}(t)\) the amplitude and phase of the laser field coupling the qubit state \(\ket{l_{n}}\), \(l\in\{\downarrow,\uparrow\}\), to the Rydberg state \(\ket{r_{n}}\). The time-dependent Rabi frequencies, laser phases and detuning serve as the control knobs used to implement quantum gates and, unless stated otherwise, are assumed to be global, i.e., identical for all atoms.

### Superconducting circuits

In contrast to neutral atom qubits, which interact directly via their Rydberg levels, qubits encoded in superconducting circuits often interact indirectly via intermediate coupling elements [73]. In an architecture where such couplers are made tunable [74], the effective interaction strength between the qubits becomes tunable as well. Such qubit architectures have been successfully used in recent quantum advantage experiments [8; 9] and thus represent a prototypical NISQ quantum computing platform. The qubits in such a tunable coupler architecture are arranged on a 2D lattice with nearest-neighbor couplings. In the following, we take the architecture from Ref. [8] as a reference. The smallest building block within such a system consists of \(N=4\) qubits. The Hamiltonian for this subsystem reads [8]
\[\mathsf{H}(t)=\sum_{n=1}^{N}\left[\omega_{n}(t)\mathsf{b}_{n}^{\dagger}\mathsf{b}_{n}-\frac{\alpha_{n}}{2}\mathsf{b}_{n}^{\dagger}\mathsf{b}_{n}^{\dagger}\mathsf{b}_{n}\mathsf{b}_{n}\right] \tag{2}\]
\[\quad+\sum_{n=1}^{N}\left(\mathsf{b}_{n}+\mathsf{b}_{n}^{\dagger}\right)\Omega_{n}(t)\cos(\bar{\omega}_{n}(t)t)+\mathsf{H}_{\text{int}}(t)\]
with
\[\mathsf{H}_{\text{int}}(t)=g_{12}(t)\left(\mathsf{b}_{1}^{\dagger}\mathsf{b}_{2}+\mathsf{b}_{1}\mathsf{b}_{2}^{\dagger}\right)+\frac{g_{12}^{2}(t)}{|\eta|}\mathsf{b}_{1}^{\dagger}\mathsf{b}_{1}\mathsf{b}_{2}^{\dagger}\mathsf{b}_{2} \tag{3}\]
\[+g_{23}(t)\left(\mathsf{b}_{2}^{\dagger}\mathsf{b}_{3}+\mathsf{b}_{2}\mathsf{b}_{3}^{\dagger}\right)+\frac{g_{23}^{2}(t)}{|\eta|}\mathsf{b}_{2}^{\dagger}\mathsf{b}_{2}\mathsf{b}_{3}^{\dagger}\mathsf{b}_{3}\]
\[+g_{34}(t)\left(\mathsf{b}_{3}^{\dagger}\mathsf{b}_{4}+\mathsf{b}_{3}\mathsf{b}_{4}^{\dagger}\right)+\frac{g_{34}^{2}(t)}{|\eta|}\mathsf{b}_{3}^{\dagger}\mathsf{b}_{3}\mathsf{b}_{4}^{\dagger}\mathsf{b}_{4}\]
\[+g_{41}(t)\left(\mathsf{b}_{4}^{\dagger}\mathsf{b}_{1}+\mathsf{b}_{4}\mathsf{b}_{1}^{\dagger}\right)+\frac{g_{41}^{2}(t)}{|\eta|}\mathsf{b}_{4}^{\dagger}\mathsf{b}_{4}\mathsf{b}_{1}^{\dagger}\mathsf{b}_{1},\]
where \(\omega_{n}(t)\) is the frequency-tunable level splitting of transmon \(n\), \(\alpha_{n}\) its anharmonicity and \(\mathsf{b}_{n}\) its annihilation operator. The tunable coupling between transmons \(n\) and \(m\) is denoted by \(g_{nm}(t)\), and \(\eta\) is the non-linearity of the involved transmons, which is roughly constant, \(\alpha_{n}\approx\eta\), for all \(n\). The time-dependent amplitude and frequency of a local \(X\)-type control field on transmon \(n\), e.g., some microwave field, are denoted by \(\Omega_{n}(t)\) and \(\bar{\omega}_{n}(t)\), respectively.
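To make the structure of Eqs. (2) and (3) concrete, the following sketch assembles a static snapshot of the four-transmon Hamiltonian with QuTiP, truncating each transmon to a few levels. It is a minimal illustration only: the parameter values, the helper names and the nearest-neighbor ring chosen in the example are illustrative assumptions, not taken from the reference implementation of Ref. [8].

```python
import numpy as np
from qutip import destroy, qeye, tensor

def op_on(site_op, n, N, d):
    """Embed a single-transmon operator acting on site n into the N-transmon space."""
    ops = [qeye(d)] * N
    ops[n] = site_op
    return tensor(ops)

def transmon_hamiltonian(omega, alpha, g, Omega_drive, omega_drive, t, d=5):
    """Static snapshot of Eqs. (2)-(3) at time t for N=4 transmons on a square plaquette.

    omega, alpha, Omega_drive, omega_drive: length-4 arrays (angular frequencies);
    g: dict mapping coupled pairs (n, m) to the coupling strength g_nm at time t.
    """
    N = 4
    b = destroy(d)
    eta = np.mean(alpha)                      # anharmonicities are roughly equal, alpha_n ~ eta
    H = 0.0 * op_on(qeye(d), 0, N, d)         # zero operator to accumulate terms
    for n in range(N):
        bn = op_on(b, n, N, d)
        H += omega[n] * bn.dag() * bn - 0.5 * alpha[n] * bn.dag() * bn.dag() * bn * bn
        # local X-type microwave drive, cf. the second line of Eq. (2)
        H += Omega_drive[n] * np.cos(omega_drive[n] * t) * (bn + bn.dag())
    for (n, m), g_nm in g.items():
        bn, bm = op_on(b, n, N, d), op_on(b, m, N, d)
        # exchange coupling plus the coupler-induced cross-Kerr term, cf. Eq. (3)
        H += g_nm * (bn.dag() * bm + bn * bm.dag())
        H += g_nm**2 / abs(eta) * bn.dag() * bn * bm.dag() * bm
    return H

# Example: nearest-neighbor ring 1-2-3-4-1 with illustrative parameters (angular frequencies)
two_pi = 2 * np.pi
omega = two_pi * np.array([6.9e9, 7.0e9, 6.8e9, 7.1e9])
alpha = two_pi * np.full(4, 200e6)
g = {(0, 1): -two_pi * 20e6, (1, 2): -two_pi * 20e6,
     (2, 3): -two_pi * 20e6, (3, 0): -two_pi * 20e6}
H0 = transmon_hamiltonian(omega, alpha, g, np.zeros(4), np.zeros(4), t=0.0)
print(H0.shape)  # (625, 625) for d=5
```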
For numerical reasons, it is advantageous to change into a rotating frame with frequency \(\omega_{\text{rot}}\), which we do via the transformation
\[\mathsf{H}^{\prime}(t)=\mathsf{O}^{\dagger}(t)\mathsf{H}(t)\mathsf{O}(t)-\mathrm{i}\mathsf{O}^{\dagger}(t)\frac{\mathrm{d}\mathsf{O}(t)}{\mathrm{d}t}, \tag{4a}\]
\[\mathsf{O}(t)=\exp\left\{-\mathrm{i}\omega_{\text{rot}}\left(\sum_{n=1}^{N}\mathsf{b}_{n}^{\dagger}\mathsf{b}_{n}\right)t\right\} \tag{4b}\]
and find
\[\mathsf{H}^{\prime}(t)=\sum_{n=1}^{N}\left[\left(\omega_{n}(t)-\omega_{\text{rot}}\right)\mathsf{b}_{n}^{\dagger}\mathsf{b}_{n}-\frac{\alpha_{n}}{2}\mathsf{b}_{n}^{\dagger}\mathsf{b}_{n}^{\dagger}\mathsf{b}_{n}\mathsf{b}_{n}\right] \tag{5}\]
\[+\sum_{n=1}^{N}\frac{1}{2}\left[\mathsf{b}_{n}\left(\bar{\Omega}_{n,\text{re}}(t)+\mathrm{i}\bar{\Omega}_{n,\text{im}}(t)\right)\right.\]
\[\left.\qquad\qquad+\mathsf{b}_{n}^{\dagger}\left(\bar{\Omega}_{n,\text{re}}(t)-\mathrm{i}\bar{\Omega}_{n,\text{im}}(t)\right)\right]+\mathsf{H}_{\text{int}}(t),\]
where we have introduced the auxiliary control fields
\[\bar{\Omega}_{n,\text{re}}(t)=\mathfrak{Re}\left\{\Omega_{n}(t)e^{-\mathrm{i}(\omega_{\text{rot}}-\bar{\omega}_{n}(t))t}\right\}, \tag{6a}\]
\[\bar{\Omega}_{n,\text{im}}(t)=\mathfrak{Im}\left\{\Omega_{n}(t)e^{-\mathrm{i}(\omega_{\text{rot}}-\bar{\omega}_{n}(t))t}\right\}. \tag{6b}\]
While \(\Omega_{n}(t)\) and \(\bar{\omega}_{n}(t)\) are the actual physical control fields, we may take \(\bar{\Omega}_{n,\text{re}}(t)\) and \(\bar{\Omega}_{n,\text{im}}(t)\) as auxiliary control fields that capture the time-dependent nature of \(\Omega_{n}(t)\) and \(\bar{\omega}_{n}(t)\) in the rotating frame; the physical fields can be recovered from \(\bar{\Omega}_{n,\text{re}}(t)\) and \(\bar{\Omega}_{n,\text{im}}(t)\).
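As a concrete illustration of Eq. (6), the following sketch converts a given physical drive amplitude \(\Omega_{n}(t)\) and drive frequency \(\bar{\omega}_{n}(t)\) into the auxiliary rotating-frame fields. The sampling grid, pulse shape and numerical values are illustrative assumptions.

```python
import numpy as np

def auxiliary_fields(t, Omega, omega_drive, omega_rot):
    """Rotating-frame auxiliary fields of Eq. (6) on a time grid t.

    Omega, omega_drive: arrays with the physical amplitude Omega_n(t) and drive
    frequency bar-omega_n(t); omega_rot: rotating-frame frequency.
    """
    phase = np.exp(-1j * (omega_rot - omega_drive) * t)
    complex_field = Omega * phase
    return complex_field.real, complex_field.imag  # (Omega_re, Omega_im)

# Example: a Gaussian pulse driven slightly below the rotating-frame frequency
t = np.linspace(0.0, 20e-9, 401)
Omega = 2 * np.pi * 50e6 * np.exp(-((t - 10e-9) / 4e-9) ** 2)
omega_drive = np.full_like(t, 2 * np.pi * 6.95e9)
Omega_re, Omega_im = auxiliary_fields(t, Omega, omega_drive, omega_rot=2 * np.pi * 7.0e9)
```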
## IV Determining quantum speed limits via quantum optimal control

After having introduced the physical models for neutral atoms and superconducting circuits in Sec. III, we now review the basic notion of QSLs in Sec. IV.1 and of quantum optimal control theory in Sec. IV.2, since both form the theoretical and methodological foundation for the results presented in this work. Our method is described in Sec. IV.3.

### Quantum speed limits

The notion of quantum speed limits (QSLs) naturally arises in the context of quantum control problems. To this end, let us consider a quantum system described by the Hamiltonian \(\mathsf{H}(t)=\mathsf{H}(\{\mathcal{E}_{k}(t)\})\), which depends on a set of control fields, \(\{\mathcal{E}_{k}(t)\}\), that can be externally tuned, e.g., via the time-dependent amplitudes, phases or detunings in Eqs. (1) or (5). A quantum control problem is then defined by a set of initial states, \(\{|\psi_{l}^{\text{in}}\rangle\}\), that should be transferred into a set of target states, \(\{|\psi_{l}^{\text{trgt}}\rangle\}\),
\[\left|\psi_{l}^{\text{trgt}}\right\rangle=\mathsf{U}(T,0;\{\mathcal{E}_{k}(t)\})\left|\psi_{l}^{\text{in}}\right\rangle,\quad\forall l, \tag{7}\]
where \(\mathsf{U}(T,0;\{\mathcal{E}_{k}(t)\})\) is the system's time-evolution operator and \(T\) the total time. Any choice of \(\{\mathcal{E}_{k}(t)\}\) which fulfills Eq. (7) is considered a solution to the control problem. It is important to note that solutions to quantum control problems are usually not unique. Even for a fixed protocol duration \(T\), there typically exist many, and often infinitely many, solutions.

The QSL for a given control problem is defined by the shortest protocol duration \(T_{\text{QSL}}\) for which at least one solution exists, i.e., for which at least one set of control fields \(\{\mathcal{E}_{k}(t)\}\) exists that fulfills Eq. (7). In the context of quantum computing and the NISQ era, where time is a limited resource due to decoherence, it is desirable to implement quantum gates at the QSL. In order to calculate \(T_{\text{QSL}}\) analytically, \(\mathsf{U}(T,0;\{\mathcal{E}_{k}(t)\})\) must be analytically calculable for _any_ set \(\{\mathcal{E}_{k}(t)\}\) of conceivable control fields -- a requirement that typically limits an analytical calculation of \(T_{\text{QSL}}\) to simple systems [75]. Besides an analytical determination, there are various methods to approximate \(T_{\text{QSL}}\). One prominent method is to calculate a lower bound \(T_{\text{bound}}\leq T_{\text{QSL}}\) in order to get an estimate for \(T_{\text{QSL}}\) itself. For the simplest case of a state-to-state control problem, lower bounds can be calculated analytically [24]. In contrast, in the case of multiple pairs of initial and final states, which describes the implementation of quantum gates, such lower bounds only exist for very simple systems [76]. In most cases, one needs to resort to numerical tools for estimating \(T_{\text{QSL}}\). In that context, quantum optimal control theory has proven to be very useful [77], as it not only estimates \(T_{\text{QSL}}\) quite accurately but additionally yields the control fields that implement the desired dynamics, i.e., realize the transition from initial to target states, cf. Eq. (7). Since this is our method of choice, we introduce it in more detail in the following.

### Quantum optimal control theory

Quantum optimal control theory (OCT) [78] is a toolbox providing analytical and numerical tools for deriving optimized control fields which solve a given control problem, e.g., in the shortest time or with minimal error. Mathematically, an optimal control problem is formulated by introducing the cost functional
\[\begin{split} J\left[\{\psi_{l}\},\{\mathcal{E}_{k}\},T\right]&=\varepsilon_{T}\left[\{\psi_{l}(T)\}\right]\\ &\quad+\int_{0}^{T}J_{t}\left[\{\psi_{l}(t)\},\{\mathcal{E}_{k}(t)\},t\right]\mathrm{d}t,\end{split} \tag{8}\]
where \(\{\psi_{l}(t)\}\) is a set of time-evolved states and \(\{\mathcal{E}_{k}(t)\}\) a set of control fields to be optimized. The error measure \(\varepsilon_{T}\) quantifies the distance between the time-evolved states \(\ket{\psi_{l}(T)}=\mathsf{U}(T,0;\{\mathcal{E}_{k}(t)\})\ket{\psi_{l}^{\text{in}}}\) and the desired target states \(\ket{\psi_{l}^{\text{trgt}}}\) at the protocol's final time \(T\), cf. Eq. (7). The term \(J_{t}\) in Eq. (8) captures time-dependent running costs. In most cases, the error measure \(\varepsilon_{T}\) is the crucial figure of merit. In order to optimize for quantum gates, we use the error measure [79]
\[\varepsilon_{T}\left[\{\psi_{l}(T)\}\right]=1-\frac{1}{N_{\text{trgt}}}\sum_{l=1}^{N_{\text{trgt}}}\mathfrak{Re}\left\{\left\langle\psi_{l}^{\text{trgt}}\right|\psi_{l}(T)\right\rangle\right\} \tag{9}\]
and take \(\ket{\psi_{l}^{\text{trgt}}}=\mathsf{O}\ket{\psi_{l}^{\text{in}}}\) as the desired target states for the target gate \(\mathsf{O}\). The set \(\{\psi_{l}^{\text{in}}\}\) runs over the \(N_{\text{trgt}}\) logical basis states affected by \(\mathsf{O}\), with \(N_{\text{trgt}}=4,8,16\) for two-, three- or four-body gates, respectively.
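The error measure of Eq. (9) is straightforward to evaluate numerically; the following sketch does so for a given target gate and a set of propagated basis states (here a mock "perfect" propagation for illustration only).

```python
import numpy as np

def gate_error(U_target, propagated_states):
    """Eq. (9): 1 minus the average real overlap between target and propagated basis states."""
    dim = U_target.shape[0]
    overlaps = []
    for l in range(dim):
        psi_in = np.zeros(dim, dtype=complex)
        psi_in[l] = 1.0
        psi_target = U_target @ psi_in
        overlaps.append(np.real(np.vdot(psi_target, propagated_states[l])))
    return 1.0 - np.mean(overlaps)

# Example: a perfect CZ "propagation" yields zero error
CZ = np.diag([1, 1, 1, -1]).astype(complex)
perfect = [CZ @ np.eye(4, dtype=complex)[:, l] for l in range(4)]
print(gate_error(CZ, perfect))  # 0.0
```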
Since the cost functional \(J\) is formulated such that smaller values correspond to better solutions of the control problem, solving an optimal control problem becomes essentially a minimization task, i.e., to find a set of control fields \(\{\mathcal{E}_{k}^{\text{opt}}(t)\}\) that minimizes \(J\) and thereby \(\varepsilon_{T}\). This is an optimization problem for which several numerical algorithms have been developed [80; 81; 82; 83; 84]. Many of them are readily available in open-source software packages [85; 86; 87; 88; 89].

### Optimization procedure and method

In the context of determining \(T_{\text{QSL}}\) numerically via OCT, we search for the shortest protocol duration \(T\) for which the error measure \(\varepsilon_{T}\), cf. Eq. (9), is still sufficiently small. In mathematical terms, we therefore define an error threshold \(\varepsilon_{\text{max}}\) and search for the shortest time \(T\) for which
\[\min_{\{\mathcal{E}_{k}\}}\left[\varepsilon_{T}\left[\{\psi_{l}(T)\}\right]\right]\leq\varepsilon_{\text{max}} \tag{10}\]
has a solution. However, the minimization over all conceivable control fields cannot be done numerically -- as there are infinitely many fields to check -- and thus must be replaced by a sampling over finitely many fields in practice. In order to explore the function space efficiently by finite sampling, optimization algorithms as described in Sec. IV.2 can be used. Since we are interested in the fundamental QSL, we put only minimal limitations -- apart from physically motivated limitations on amplitudes -- on the form of each control field \(\mathcal{E}_{k}(t)\). Hence, we need an optimization algorithm that is capable of exploring a function space of almost arbitrary field shapes. As our method of choice we use Krotov's method [90], a gradient-based optimization algorithm for time-continuous control fields. While a more detailed description of Krotov's method is given in Appendix B, its basic working principle is outlined in the following. It consists of an iterative update of the control fields \(\{\mathcal{E}_{k}(t)\}\). Starting from a set of guess control fields \(\{\mathcal{E}_{k}^{0}(t)\}\), Krotov's method updates them until either \(\varepsilon_{T}\leq\varepsilon_{\text{max}}\) or a maximum number of iterations is reached. This procedure can be viewed as a local but structured search within the space of all conceivable sets of control fields -- the so-called control landscape. The locally searched area is thereby determined by the choice of the guess fields \(\{\mathcal{E}_{k}^{0}(t)\}\), which set the initial starting point of the search. While the local nature of this search might appear to contradict the global search required for evaluating Eq. (10), it can be turned into an approximate global search by using various sets of randomized guess fields. The combined effect of all these local searches "covers" a larger fraction of the control landscape. The total procedure for approximating \(T_{\text{QSL}}\) using this method is thus to start with a protocol duration \(T\) for which the optimization algorithm finds solutions, i.e., optimized fields giving rise to \(\varepsilon_{T}\leq\varepsilon_{\text{max}}\), and then to consecutively lower \(T\) until none of the various sets of randomized guess fields finds a solution anymore. Appendix C summarizes the details regarding the generation of random guess fields as well as each field's parametrization within Krotov's method.
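The search strategy just described can be summarized in a few lines of Python. Here, `toy_optimize` is a stand-in for a single Krotov run starting from one set of random guess fields; it is a placeholder for illustration, not a function of any specific optimal-control package, and the candidate durations are illustrative.

```python
import numpy as np

def estimate_qsl(T_candidates, n_guesses, optimize_fields, eps_max=1e-3, seed=0):
    """Scan protocol durations from long to short (Sec. IV.3): the shortest T for
    which at least one set of random guess fields converges below eps_max is the
    numerical estimate of T_QSL."""
    rng = np.random.default_rng(seed)
    T_qsl = None
    for T in sorted(T_candidates, reverse=True):
        errors = [optimize_fields(T, rng.standard_normal(64)) for _ in range(n_guesses)]
        if min(errors) > eps_max:
            break              # no guess converges anymore; stop lowering T
        T_qsl = T              # remember the shortest converged duration so far
    return T_qsl

# Toy stand-in for a single Krotov run: converges only for long enough protocols.
def toy_optimize(T, guess):
    return 1e-4 if T >= 350e-9 else 0.1 + 0.01 * abs(guess[0])

durations = [200e-9, 250e-9, 300e-9, 350e-9, 400e-9, 450e-9, 500e-9]
print(estimate_qsl(durations, n_guesses=5, optimize_fields=toy_optimize))  # 3.5e-07
```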
Similar applications of numerical optimal control techniques have previously shown excellent agreement with analytically provable QSLs [91; 92]. At worst, this method could overestimate the actual QSLs, in which case the actual QSLs would be even smaller and the circuit and gate times in Sec. II that use the QGS would be even better.

## V Benchmarking quantum gate times on 2D architectures

In this final section, we present the detailed results regarding the QSLs obtained via the methods described in Sec. IV for the various gates listed in Table 1, which have been used to calculate the circuit run times in Sec. II.

### Neutral atoms

In this section, we determine the QSLs for different gates on neutral atoms by utilizing the control knobs available in Hamiltonian (1). To this end, we set the vdW interaction strength between Rydberg levels to \(V/2\pi=40\,\mathrm{MHz}\) and assume a maximally achievable Rabi frequency of \(\Omega_{\mathrm{max}}/2\pi=0.1V=4\,\mathrm{MHz}\) for both \(\Omega_{\downarrow}(t)\) and \(\Omega_{\uparrow}(t)\) as well as \(\Delta_{\mathrm{max}}/2\pi=0.3V=12\,\mathrm{MHz}\) for \(|\Delta(t)|\). These parameters are in the same regime as those reported in recent experiments [39; 20; 93]. The markers in Fig. 3 (left column) show the achievable gate error \(\varepsilon_{T}\), cf. Eq. (9), for various gates and various gate times \(T\) on neutral atoms. Each individual marker thereby indicates the result of a single optimization with Krotov's method [cf. Appendix B], i.e., the final error \(\varepsilon_{T}\) after 1500 iterations when starting from random guess fields generated via Eq. (11). In the following we set \(\varepsilon_{\mathrm{max}}=10^{-3}\) for all gates and stop any optimization as soon as this threshold is reached.

#### v.1.1 Two-qubit gates

Figure 3 (a) shows the results for a CZ gate for three different, paradigmatic configurations of control fields. The circles correspond to the "parallel" configuration where all five possible control fields \(\Omega_{\downarrow}(t),\Omega_{\uparrow}(t),\varphi_{\downarrow}(t),\varphi_{\uparrow}(t)\) and \(\Delta(t)\) have been optimized. In contrast, the crosses correspond to the "phase" configuration, where only \(\varphi_{\uparrow}(t)\) has been optimized while \(\Omega_{\uparrow}(t)=\Omega_{\mathrm{max}}\), \(\Delta(t)=\Delta_{\mathrm{max}}\) and \(\Omega_{\downarrow}(t)=\varphi_{\downarrow}(t)=0\) have been kept fixed. In both cases, we obtain a QSL of \(T_{\mathrm{QSL}}^{\mathrm{CZ}}=350\,\mathrm{ns}\). The third field configuration, which we call the "sequential" configuration (diamonds), consists of a sequential use of \(\Omega_{\downarrow}(t),\varphi_{\downarrow}(t)\) and \(\Omega_{\uparrow}(t),\varphi_{\uparrow}(t)\), i.e., in the first half of the protocol we have \(\Omega_{\uparrow}(t)=\varphi_{\uparrow}(t)=0\) and in the second half \(\Omega_{\downarrow}(t)=\varphi_{\downarrow}(t)=0\). This configuration is inspired by an adiabatic protocol for implementing ZZZZ(\(\gamma\)) gates, cf. Eq. (12) and Ref. [66]. Its QSL \(T_{\mathrm{QSL}}^{\mathrm{CZ}}=700\,\mathrm{ns}\) is twice as long as that of the other two configurations. In order to compare the results from the three configurations in terms of how successful the optimization has been in finding solutions, the histogram on the right side of Fig. 3 (a) provides the probability density for obtaining final errors \(\varepsilon_{T}\) within certain ranges.
For obtaining a solution with errors \(\varepsilon_{T}\leq\varepsilon_{\mathrm{max}}\), we find the lowest probability for the "sequential" configuration (diamonds) -- coinciding with the highest QSL -- and the highest and almost identical probability for the other two configurations. Among those two, the "phase" configuration (crosses) needs to be emphasized in particular. From a physical perspective, setting \(\Omega_{\downarrow}(t)\) and \(\varphi_{\downarrow}(t)\) to zero automatically ensures that \(|\!\downarrow\downarrow\rangle\) is mapped onto itself -- as required by the CZ gate. This is not automatically guaranteed by the other two configurations; there, the optimization needs to ensure it explicitly and therefore has to solve a slightly more complex optimization problem. However, the advantage of having one basis state automatically mapped correctly does not translate into an advantage regarding the QSL of \(T_{\mathrm{QSL}}^{\mathrm{CZ}}=350\,\mathrm{ns}\) or the reachable error in general. Interestingly, both the "parallel" and "phase" configurations yield the same achievable lowest errors for each gate time \(T\). This is visually highlighted by the lines connecting the lowest errors per \(T\) in Fig. 3 (a). Moreover, it should be stressed that these errors \(\varepsilon_{T}\) are reached for almost every set of initial guess fields, i.e., independent of the initial starting point within the problem's control landscape, and obtained independently for both configurations. This supports the conjecture that \(T_{\mathrm{QSL}}^{\mathrm{CZ}}=350\,\mathrm{ns}\) is the actual QSL for a CZ gate and generally validates our method of determining the QSL. From an optimal control perspective, it is interesting to see that the flexibility originating from the extended set of available control fields in the "parallel" configuration cannot be turned into an advantage in error or time compared to the "phase" configuration. From a practical perspective, the latter is advantageous for experimental realizations as it requires fewer physical resources. While the three configurations discussed so far should only be viewed as examples, we did not find any configuration giving rise to faster CZ gates. Hence, we assume \(T_{\mathrm{QSL}}^{\mathrm{CZ}}=350\,\mathrm{ns}\) to be the fundamental QSL across all configurations.

A natural comparison for \(T_{\mathrm{QSL}}^{\mathrm{CZ}}\) with a value from the literature would be the gate time from the analytical protocol introduced in Ref. [20], especially because it uses the same control fields as the "phase" configuration to implement the gate. For our parameters, we find \(T_{\mathrm{lit}}^{\mathrm{CZ}}\approx 340\,\mathrm{ns}\) as the analytical gate time, which we consider identical to our QSL \(T_{\mathrm{QSL}}^{\mathrm{CZ}}=350\,\mathrm{ns}\) given the rather coarse sampling of gate times \(T\) in Fig. 3 (a). However, it should be noted that the analytical protocol of Ref. [20] implements a CZ gate only up to local operations -- operations that are already contained in our optimized gate protocols at the QSL. Nevertheless, for a fair comparison of analytical and QSL gate times, as needed in Sec. II, as well as for simplicity, we set both times to \(350\,\mathrm{ns}\) in Table 1.

The remaining panels (b)-(e) in the left column of Fig. 3 show the results for other gates from the QGS of Table 1. The results for a CNOT gate are shown in panel (b).
It is the only gate for neutral atoms (among those we considered) that requires individual instead of global fields, i.e., it is the only gate for which we did not assume \(\Omega_{\downarrow,n}(t)=\Omega_{\downarrow}(t),\Omega_{\uparrow,n}(t)=\Omega_{\uparrow}(t),\varphi_{\downarrow,n}(t)=\varphi_{\downarrow}(t),\varphi_{\uparrow,n}(t)=\varphi_{\uparrow}(t)\) and \(\Delta_{n}(t)=\Delta(t)\) for all \(n\), but instead assume individual fields with unique field shapes directed at each atom. We nevertheless consider the same three field configurations as for the CZ gate in panel (a), but now applied to the individual fields \(\Omega_{\downarrow,n}(t),\Omega_{\uparrow,n}(t),\varphi_{\downarrow,n}(t),\varphi_{\uparrow,n}(t)\) and \(\Delta_{n}(t)\) instead of their global versions. Even with this more general setting of control fields, we find only the "parallel" configuration (circles) to allow for the realization of a CNOT gate, with a QSL of \(T^{\text{CNOT}}_{\text{QSL}}=300\,\text{ns}\). In contrast, the "phase" and "sequential" configurations do not allow a CNOT gate to be realized at all. These results demonstrate that CNOT gates can be implemented using exclusively the site-dependent laser couplings of the qubit and Rydberg levels and no control knobs for single-qubit gates. However, an experimentally more convenient option, requiring no site-dependent control of the qubit-Rydberg coupling, is to realize CZ gates with global laser pulses and convert them into CNOTs via local operations. We nevertheless include the CNOT gate and its QSL for completeness in our analysis as well as in the QGS in Table 1, but exclude it from any quantum circuit for neutral atoms in Sec. II for the reasons just mentioned.

Figure 3 (c) shows the results for a SWAP gate. Like for the CNOT gate, we find the "parallel" configuration -- assuming again global fields that are identical for each atom -- to be the only one capable of realizing a SWAP gate. We obtain \(T^{\text{SWAP}}_{\text{QSL}}=400\,\text{ns}\) as its QSL. The other two configurations are not capable of realizing SWAP gates. However, since the SWAP gate, in contrast to the CNOT gate, can be realized with global control fields, we believe it to be experimentally feasible and thus include it as a viable gate in the QGS in Table 1.

Figure 3: Overview of the QSLs for the various gates of the "QSL gate set" (QGS) in Table 1. The left (right) column shows the results for neutral atoms (superconducting circuits) for three different configurations of control fields (specified in the main text), respectively. Each marker represents the gate error \(\varepsilon_{T}\), cf. Eq. (9), after either reaching \(\varepsilon_{T}\leq\varepsilon_{\text{max}}=10^{-3}\) or \(1500\) iterations of Krotov's method, cf. Appendix B, after starting from a set of random guess fields generated via Eq. (10). While the lines connect the lowest errors reached for each gate time \(T\) within a given field configuration, the shaded background color indicates the range between the lowest and highest error. The marker density is shown at the right side of each panel as a histogram. The parameters for neutral atoms are \(V/2\pi=40\,\text{MHz}\), \(0\leq\Omega_{\downarrow}(t),\Omega_{\uparrow}(t)\leq\Omega_{\text{max}}=0.1V\) and \(|\Delta(t)|\leq\Delta_{\text{max}}=0.3V\). The parameters for superconducting circuits are \(-40\,\text{MHz}\leq g_{nm}(t)/2\pi\leq 5\,\text{MHz}\), \(6700\,\text{MHz}\leq\omega_{n}(t)\leq 7100\,\text{MHz}\) (exact values depending on \(n\) and taken from Ref. [8]) and \(-50\,\text{MHz}\leq\bar{\Omega}_{n,\text{re}}(t),\bar{\Omega}_{n,\text{im}}(t)\leq 50\,\text{MHz}\). The anharmonic ladder for each transmon has been truncated after five levels, with population in the highest level suppressed during optimization. The optimization results for \(\text{ZZZ}(\gamma)\) and \(\text{ZZZZ}(\gamma)\), cf. Eq. (11), have been obtained for the maximally entangling gate at \(\gamma=\pi/4\). The results for the CNOT gate on neutral atoms have been obtained using site-dependent control fields for each atom.
#### v.1.2 Three- and four-qubit constraint gates

So far, we have discussed the results for two-qubit gates in Fig. 3 (a)-(c). These gates and their respective QSLs have been used in determining the quantum circuits and calculating the corresponding circuit run times in Sec. II -- especially for those circuits in the SGM discussed in Sec. II.2. In contrast, for QAOA circuits in the PM [29], circuit representations without two-qubit gates exist, e.g., when the required three- and four-qubit constraint gates \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\), cf. Eq. (10), are available natively and thus need not be decomposed into single- and two-qubit gates. In the following, we determine and discuss their QSLs.

It should first be noted that, from an algorithmic point of view, it is irrelevant whether, e.g., \(\mathrm{ZZZ}(\gamma)\) or \(e^{\mathrm{i}\alpha}\mathrm{ZZZ}(\gamma)\), with \(\alpha\) some arbitrary phase, is realized in experiments. The latter just changes the global phase of the quantum state during circuit execution. The same holds for the \(\mathrm{ZZZZ}(\gamma)\) gate. In both cases, we may choose \(\alpha\) arbitrarily but, for practical reasons, choose it such that the states \(\ket{\downarrow\downarrow\downarrow}\) and \(\ket{\downarrow\downarrow\downarrow\downarrow}\) do not acquire a phase from the respective constraint gates. In the following, we therefore consider the phase-shifted constraint gates
\[e^{\mathrm{i}\gamma}\mathrm{ZZZ}(\gamma),\qquad e^{-\mathrm{i}\gamma}\mathrm{ZZZZ}(\gamma), \tag{11}\]
instead of the ones from Eq. (10).

In the context of QAOA circuits in the PM, these constraint gates need to be realized for various \(\gamma\), cf. Eq. (11). However, since we cannot determine their QSL for each value of \(\gamma\), we first analyze the gates' entangling power [94] as a function of \(\gamma\) in Fig. 4 (a). We observe maximal entangling power at \(\gamma=\pi/4\) for both \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\) and thus decide to first benchmark their QSLs for that particular value, as we expect the control problem in that case to be the hardest to solve and consequently the QSLs to be the largest.

Figure 3 (d) and (e) show the optimization results for the phase-shifted constraint gates \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\), cf. Eq. (11), for \(\gamma=\pi/4\), respectively. We use the same three configurations of control fields as for the two-qubit gates in panels (a)-(c) and find all configurations to be capable of implementing the constraint gates, but observe the best performance for the "parallel" and "phase" configurations.
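To make the phase-shift argument of Eq. (11) explicit, the following sketch constructs the diagonal three-qubit constraint gate \(\mathrm{ZZZ}(\gamma)\) and checks that its phase-shifted version leaves the all-\(\downarrow\) state untouched. It assumes the convention \(\sigma_{\mathrm{z}}\ket{\downarrow}=+\ket{\downarrow}\) with \(\ket{\downarrow}\) as the first basis state; the convention and function name are illustrative choices.

```python
import numpy as np

def zzz_gate(gamma, n_qubits=3):
    """Diagonal constraint gate exp(-i*gamma*Z(x)...(x)Z) on n_qubits qubits."""
    z = np.array([1.0, -1.0])
    zz = z
    for _ in range(n_qubits - 1):
        zz = np.kron(zz, z)           # eigenvalues of Z(x)...(x)Z
    return np.diag(np.exp(-1j * gamma * zz))

gamma = np.pi / 4
U = zzz_gate(gamma)                    # ZZZ(gamma)
U_shifted = np.exp(1j * gamma) * U     # phase-shifted gate, cf. Eq. (11)

all_down = np.zeros(8)
all_down[0] = 1.0
print(np.allclose(U_shifted @ all_down, all_down))           # True: no phase on |ddd>
print(np.round(np.angle(np.diag(U_shifted)) / np.pi, 3))      # remaining relative phases (units of pi)
```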
While both configurations yield the same QSLs of \(T_{\mathrm{QSL}}^{\mathrm{ZZZ}}=400\,\mathrm{ns}\) and \(T_{\mathrm{QSL}}^{\mathrm{ZZZZ}}=500\,\mathrm{ns}\) for \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\), respectively, the "phase" configuration exhibits the better convergence behavior. Like for the CZ gate in Fig. 3 (a), we observe every set of guess fields for this configuration to reliably converge towards the same final error \(\varepsilon_{T}\) for every \(T\). Since the "parallel" configuration yields the same achievable errors as a function of gate time \(T\) and uses by definition a different control strategy than the "phase" configuration, we believe we have reliably identified the QSLs for the constraint gates. In terms of experimental feasibility, the "phase" configuration is advantageous as it requires fewer hardware and control resources. The fact that using \(\varphi_{\uparrow}(t)\) as the only time-dependent control field suffices for implementing \(\mathrm{ZZZ}(\gamma)\) or \(\mathrm{ZZZZ}(\gamma)\) originates from considering the gates' phase-shifted versions of Eq. (11). In detail, since the states \(\ket{\downarrow\downarrow\downarrow}\) and \(\ket{\downarrow\downarrow\downarrow\downarrow}\) are the only states among the 8 or 16 basis states of \(\mathrm{ZZZ}(\gamma)\) or \(\mathrm{ZZZZ}(\gamma)\) that technically require non-zero \(\Omega_{\downarrow}(t)\) and \(\varphi_{\downarrow}(t)\) in order to be phase-configurable, we simply avoid this requirement by considering the gates' phase-shifted versions. For the remaining 7 or 15 basis states, which all have at least one atom initially in the \(\ket{\uparrow}\) state, the phase \(\varphi_{\uparrow}(t)\) together with a constant \(\Omega_{\uparrow}(t)=\Omega_{\mathrm{max}}\) is sufficient to correctly adjust all phases. The "phase" configuration thus represents a hardware-efficient control scheme to realize fast \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\) gates.

Our results furthermore reveal that the "sequential" configuration, which was recently introduced in Ref. [66] and designed to implement high-fidelity \(\mathrm{ZZZZ}(\gamma)\) gates, seems not to be ideal when it comes to gate time, as its configuration-specific QSL is roughly twice as long as the QSLs for the other configurations. However, it should be noted that a comparison of the protocol from Ref. [66] with protocols at the QSL is not a fair comparison. On the one hand, the control scheme of Ref. [66] is based on adiabaticity -- a regime that we are far away from in our numerical calculations. On the other hand, while the parameter \(\gamma\) is a tunable variable in the adiabatic control scheme, the results in Fig. 3 (d) and (e) are only valid for \(\gamma=\pi/4\). If gates with a different \(\gamma\) are required, one explicitly needs to optimize control fields for that purpose. While it is beyond the scope of this work to examine whether there exists an analytical control scheme with configurable \(\gamma\) at the QSL, in the following, we nevertheless provide an analysis of the QSLs for \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\) beyond \(\gamma=\pi/4\).

Figure 4: Panel (a) shows the entanglement power [94] of the three- and four-qubit constraint gates \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\), cf. Eq. (10), as a function of \(\gamma\). In contrast, panels (b) and (c) examine the QSLs for these gates under various conditions on neutral atoms. In panel (b), the dependence of the QSL on the parameter \(\gamma\) is shown. In panel (c), the impact of \(\Delta_{\mathrm{max}}/\Omega_{\mathrm{max}}\) is visualized. In the latter case, we have \(\gamma=\pi/4\) and a fixed \(\Omega_{\mathrm{max}}\) while \(\Delta_{\mathrm{max}}\) is modified. The QSLs in panels (b) and (c) have been determined using the parameters and "phase" configuration described in Fig. 3.

In detail, after having identified the "phase" configuration as the most reliable configuration to determine a gate's QSL, we provide the QSLs for other \(\gamma\) values in Fig. 4 (b). We find the QSLs to be almost constant for \(\gamma>0\) and only zero for \(\gamma=0\), in which case \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\) coincide with the identity operation. Interestingly, we do not observe a decrease in the QSLs for \(\gamma=\pi/2\), in which case the constraint gates are no longer entangling, cf. Fig. 4 (a), and should therefore theoretically be implementable with local operations only. We suspect that we do not see a decrease of the QSLs since we do not consider any control fields for local operations in Hamiltonian (1) and thus need to implement the local gates by means of the Rydberg levels. We moreover analyze the dependence of the QSLs on the ratio \(\Delta_{\mathrm{max}}/\Omega_{\mathrm{max}}\) in Fig. 4 (c). Surprisingly, we observe the QSLs for both \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\) to be independent of this ratio and find \(\Delta_{\mathrm{max}}=0\) to be a viable option.

In general, we observe that the QSLs for \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\) are only slightly larger than those of the two-qubit gates. In view of quantum circuits for PM-QAOA, where such constraint gates are required, it is thus advantageous to have these gates natively available, since their representation via single- and two-qubit gates [29] consumes significantly more time. This effect can be seen in Fig. 1 (c), where the plus signs illustrate the data for constraint gates expanded in single- and two-qubit gates and the crosses the usage of native constraint gates. One possible explanation for the short QSLs for \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\) compared to those of the two-qubit gates might be that the permutation symmetry of atoms within the pseudo-2D architecture [66], i.e., \(V_{nm}=V\), matches the permutation symmetry of the gate operation itself. Finally, we therefore examine the impact of the pseudo-2D architecture on the constraint gates \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\). For the three- and four-qubit constraint gates, the change to an actual, planar 2D architecture implies that while the couplings between nearest neighbors remain \(V\), diagonal couplings, i.e., couplings between next-to-nearest neighbors or, in other words, qubits on diagonally opposite corners of a \(2\times 2\) square plaquette, are replaced by \(V/8\). The stars in Fig. 5 (a) and (b) show the corresponding results for \(\mathrm{ZZZ}(\gamma)\) and \(\mathrm{ZZZZ}(\gamma)\) gates and \(\gamma=\pi/4\), respectively. While their QSLs within the pseudo-2D architecture are \(T_{\mathrm{QSL}}^{\mathrm{ZZZ}}=400\,\mathrm{ns}\) and \(T_{\mathrm{QSL}}^{\mathrm{ZZZZ}}=500\,\mathrm{ns}\), they become \(T_{\mathrm{QSL,2D}}^{\mathrm{ZZZ}}=T_{\mathrm{QSL,2D}}^{\mathrm{ZZZZ}}=600\,\mathrm{ns}\) in the actual, planar 2D architecture. Interestingly, this corresponds only to a relatively small increase in the QSLs for both gates.
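The factor \(V/8\) for the diagonal couplings follows directly from the van-der-Waals scaling \(V\propto 1/r^{6}\); a two-line check, assuming unit nearest-neighbor spacing, is shown below.

```python
import numpy as np

# Van-der-Waals interaction V(r) ~ 1/r^6: diagonal neighbors of a square
# plaquette sit at distance sqrt(2) times the nearest-neighbor spacing.
ratio = (1.0 / np.sqrt(2.0)) ** 6
print(ratio)  # 0.125, i.e., V/8
```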
A possible explanation might be that the gate speed for neutral atoms is primarily determined by the maximal Rabi frequency \(\Omega_{\mathrm{max}}\), which is identical in both cases, and not so much by the interatomic interaction strength, which is different for both architectures. Although we believe the pseudo-2D architecture to be viable in experiments due to the great flexibility in arranging neutral atoms [95], we nevertheless take the QSLs for the actual, planar 2D architecture to be the reference gate times within the QGS in Table 1 and Sec. II. However, recall that the constraint gates are -- within our study -- only relevant for the QAOA circuits in the PM, cf. Fig. 1 (c). In order to nevertheless allow for a comparison of the run times in the actual, planar 2D architecture (orange crosses) with those using the pseudo-2D architecture, we add the latter as orange pentagons to Fig. 1 (c).

### Superconducting circuits

Similar to neutral atoms in Sec. V.1, we now determine and analyze the QSLs for the same quantum gates but for superconducting circuits. The available control knobs to implement these gates are the tunable qubit frequencies \(\omega_{n}(t)\), the tunable coupling strength \(g_{nm}(t)\) between qubits and the (auxiliary) X-type local control fields \(\bar{\Omega}_{n,\mathrm{re}}(t)\) and \(\bar{\Omega}_{n,\mathrm{im}}(t)\) in Hamiltonian (5). In order to remain experimentally realistic, we take parameters from Ref. [8]. To this end, we single out a \(2\times 2\) plaquette consisting of four qubits from the generally larger 2D architecture. We take the qubit frequencies and their tunable range to be given by \(6700\,\mathrm{MHz}\lesssim\omega_{n}(t)\lesssim 7100\,\mathrm{MHz}\) and their anharmonicities by \(\alpha_{n}\approx 200\,\mathrm{MHz}\), with exact values depending on \(n\). The tunable coupling strength is given by \(-40\,\mathrm{MHz}\lesssim g_{nm}(t)\lesssim 5\,\mathrm{MHz}\), as reported in Ref. [8]. Moreover, we assume the (auxiliary) X-type control fields \(\bar{\Omega}_{n,\mathrm{re}}(t)\) and \(\bar{\Omega}_{n,\mathrm{im}}(t)\), which encode the physical X-type control fields \(\Omega_{n}(t)\) and their tunable driving frequencies \(\bar{\omega}_{n}(t)\), to satisfy \(-50\,\mathrm{MHz}\leq\bar{\Omega}_{n,\mathrm{re}}(t),\bar{\Omega}_{n,\mathrm{im}}(t)\leq 50\,\mathrm{MHz}\).

#### v.2.1 Two-qubit gates

Figure 3 (f) shows results for a CZ gate on superconducting circuits using three different configurations of control fields -- different, of course, from those used for neutral atoms. In the "full" configuration (circles), all available control fields, \(\omega_{n}(t),g_{nm}(t),\bar{\Omega}_{n,\mathrm{re}}(t)\) and \(\bar{\Omega}_{n,\mathrm{im}}(t)\), are time-dependent and optimized. In the "no-X" configuration (crosses), only \(\omega_{n}(t)\) and \(g_{nm}(t)\) are optimized while the X-type control fields are set to zero, \(\bar{\Omega}_{n,\mathrm{re}}(t)=\bar{\Omega}_{n,\mathrm{im}}(t)=0\). Finally, in the "interaction" configuration, only \(\omega_{n}(t)\) is time-dependent and optimized, while \(g_{nm}(t)=-40\,\mathrm{MHz}=g_{\mathrm{max}}\) is set to its maximum magnitude and \(\bar{\Omega}_{n,\mathrm{re}}(t)=\bar{\Omega}_{n,\mathrm{im}}(t)=0\). For the CZ gate in Fig. 3 (f), we observe all three configurations to indicate the same QSL of \(T_{\mathrm{QSL}}^{\mathrm{CZ}}=10\,\mathrm{ns}\).
In terms of convergence behavior, the "interaction" configuration shows the best performance, as indicated by the probability density on the right side of Fig. 3 (f). In general, the three configurations show slightly worse convergence behavior than the three configurations for the neutral atoms, cf. Fig. 3 (a). Nevertheless, since all three configurations indicate the same QSL and, in general, yield the same achievable error \(\varepsilon_{T}\) as a function of \(T\), we believe our method for determining the QSL to yield reliable results also for superconducting circuits and conjecture \(T_{\rm QSL}^{\rm CZ}=10\,\)ns to be the fundamental QSL for CZ gates. While there are reference implementations for CZ gates on similar architectures with tunable couplers [72, 74, 96, 97], none of these architectures matches our architecture and parameter regime. Hence, we compare our QSL to the gate time of the fastest two-qubit gate, the Sycamore gate, reported in Ref. [8]. We find \(T_{\rm QSL}^{\rm CZ}=10\,\)ns to be slightly faster than \(T_{\rm lit}^{\rm Syc.}=12\,\)ns [37].

In Fig. 3 (g), the results for a CNOT gate are shown, using the same three configurations as for the CZ gate. We find only the "full" configuration to be capable of realizing a CNOT gate, yielding the QSL \(T_{\rm QSL}^{\rm CNOT}=14\,\)ns, while the other two configurations are not. Among the available control knobs, the X-type control fields \(\bar{\Omega}_{n,\rm re}(t)\) and \(\bar{\Omega}_{n,\rm im}(t)\) are crucial for a CNOT gate to be feasible. For the SWAP gate, we again find all three configurations to converge, cf. Fig. 3 (h), with the "no-X" and "interaction" configurations showing the best convergence behavior. We find a QSL of \(T_{\rm QSL}^{\rm SWAP}=12\,\)ns.

#### v.2.2 Three- and four-qubit constraint gates

We now turn towards the three- and four-qubit constraint gates \(\rm ZZZ(\gamma)\) and \(\rm ZZZZ(\gamma)\). However, note that in the following and in contrast to neutral atoms, we do not consider their phase-shifted versions, cf. Eq. (11), but their original versions, cf. Eq. (10). Figure 3 (i) and (j) show the results for the constraint gates \(\rm ZZZ(\gamma)\) and \(\rm ZZZZ(\gamma)\), respectively. We use the same three configurations of control fields as for the two-qubit gates of panels (f)-(h). We observe the "interaction" configuration to have the best convergence behavior, while the "no-X" configuration gives rise in both cases to the shortest QSLs of \(T_{\rm QSL}^{\rm ZZZ}=24\,\)ns and \(T_{\rm QSL}^{\rm ZZZZ}=80\,\)ns for \(\rm ZZZ(\gamma)\) and \(\rm ZZZZ(\gamma)\), respectively. Both QSLs have been determined for \(\gamma=\pi/4\). The QSL for \(\rm ZZZ(\gamma)\) has thereby been confirmed independently by both the "full" and "no-X" configurations, as both yield almost identical achievable errors \(\varepsilon_{T}\) as a function of gate time \(T\). We thus consider the QSL \(T_{\rm QSL}^{\rm ZZZ}=24\,\)ns to be well supported. The situation is different for \(\rm ZZZZ(\gamma)\), for which we observe very different convergence behaviors for the three configurations, cf. Fig. 3 (j). While the "no-X" configuration yields the shortest QSL of \(T_{\rm QSL}^{\rm ZZZZ}=80\,\)ns, the "full" configuration shows slightly better performance for \(T<T_{\rm QSL}^{\rm ZZZZ}\), which might suggest that even shorter gate protocols for \(\rm ZZZZ(\gamma)\) may exist but our method did not find them due to, e.g., the limited number of guess fields used to explore the control landscape.
In the following, as well as for the calculation of circuit run times in Sec. II, we nevertheless assume \(T_{\rm QSL}^{\rm ZZZZ}=80\,\)ns to be the QSL for the \(\rm ZZZZ(\gamma)\) gate, as it is the fastest gate time \(T\) among the three configurations of control fields for which Krotov's method was able to find a solution with \(\varepsilon_{T}\leq\varepsilon_{\rm max}\). Interestingly, while we observe the QSLs for \(\rm ZZZ(\gamma)\) and \(\rm ZZZZ(\gamma)\) to be almost identical for neutral atoms and only slightly longer than the QSLs for the two-qubit gates, we observe the same only for the \(\rm ZZZ(\gamma)\) gate for superconducting circuits. The \(\rm ZZZZ(\gamma)\) gate has a much longer QSL compared to the other QSLs on that platform. To rigorously decide whether this is due to the non-ideal convergence behavior observed in Fig. 3 (j) or has some deeper physical origin is beyond the scope of this study. In an attempt to tackle this question nevertheless, we consider the scenario of having additional diagonal couplings, i.e., next-to-nearest neighbor couplings, among the transmons in the superconducting circuit architecture. For Hamiltonian (3), this implies adding two additional rows with couplings \(g_{13}(t)\) and \(g_{24}(t)\) in the same form as the already present couplings \(g_{12}(t),g_{23}(t),g_{34}(t)\) and \(g_{41}(t)\). This scenario is inspired by the pseudo-2D architecture for neutral atoms, which also exhibits identical nearest neighbor (NN) and next-to-nearest neighbor (NNN) couplings and where having these couplings is advantageous.

Figure 5: Overview of different QSLs similar to Fig. 3 but exclusively for the three- and four-qubit constraint gates \(\rm ZZZ(\gamma)\) and \(\rm ZZZZ(\gamma)\), cf. Eq. (10). The left column shows results for neutral atoms, obtained using the "phase" configuration of Fig. 3, for the pseudo-2D architecture (triangles) and an actual, planar 2D architecture (stars). The right column compares the results for superconducting circuits using the "no-X" configuration from Fig. 3. The data correspond to the physical architecture of Ref. [8] where only nearest neighbor (NN) couplings between qubits are present (triangles) and where diagonal couplings, i.e., next-to-nearest neighbor (NNN) couplings, are added (stars).

Figure 5 (c) and (d) show the results for the two cases with only NN couplings (triangles) and with NN plus NNN couplings (stars). Besides observing much better convergence properties for the latter case, we also obtain improved QSLs of \(T_{\text{QSL,NNN}}^{ZZZ}=20\,\text{ns}\) and \(T_{\text{QSL,NNN}}^{ZZZZ}=60\,\text{ns}\) for the ZZZ(\(\gamma\)) and ZZZZ(\(\gamma\)) gate, respectively. However, despite improvements in this scenario, the QSL of the ZZZZ(\(\gamma\)) gate does not get close to the QSL of the ZZZ(\(\gamma\)) gate as it does for neutral atoms. The presence of NNN couplings might therefore be just a partial explanation of the QSL differences for the constraint gates between neutral atoms and superconducting circuits. Despite the scenario with NNN couplings not reflecting the actual architecture for superconducting circuits, we nevertheless add the circuit run times for the QAOA circuits in the PM using these faster constraint gates to Fig. 1 (c) for reference purposes as purple pentagons.
## VI Conclusions

In this study, we have determined the QSLs for several common two-qubit and two specific multi-qubit quantum gates for two promising quantum computing platforms that allow for a 2D arrangement of qubits -- neutral atoms and superconducting circuits. We have used OCT to determine the QSLs, as it provides a generally applicable tool that warrants a fair comparison of both platforms. On the level of individual quantum gates, our study allows assessing how close gate protocols from the literature are to their fundamental QSLs or, in other words, how much time can (theoretically) still be gained if known gate protocols are replaced by numerically optimized ones. We find the QSLs for all investigated two-qubit gates, encompassing CNOT, CZ and SWAP, to be very similar within each platform and close to the reference gate times for a CZ gate in the case of neutral atoms [20] and a Sycamore gate in the case of superconducting circuits [8].

On the level of quantum algorithms, our study has moreover allowed us to determine the "QSLs" for entire quantum circuits. To this end, we have assumed a 2D grid architecture for both platforms and qubit connectivity that allows physical quantum gates only between neighboring qubits. However, we have assumed these gates to be executable at the QSL. This has allowed us to calculate the circuit run times at the "QSL" for two paradigmatic quantum algorithms -- the QFT and a single step of the QAOA. We find that the corresponding weighted circuit run times scale comparably with respect to the system size. Furthermore, we observe this to be independent of the chosen gate set used to translate the quantum algorithms into executable quantum circuits, i.e., independent of whether the SGS or QGS is used. We observe, independently of the platform, that the QGS yields circuit run times and gate counts that are roughly half of those in the SGS. On the one hand, this demonstrates that further speedup of circuit run times is theoretically possible on both platforms. On the other hand, it also shows that both platforms perform equally well when running prototypical quantum circuits using typical, present-day NISQ hardware.

Besides a representation of the quantum circuits in the SGM, we have also explored the representation of the same quantum algorithms in the PM [26; 27; 29]. We observe a reduction in circuit run times as well as in gate counts in most cases. This reduction comes at the expense of requiring more physical qubits, but without the need to change the geometrical layout or the control hardware. In this context, we want to specifically emphasize the circuit run times of a single QAOA step in the PM. Compared to its representation in the SGM, it offers a constant circuit depth independent of the problem complexity but requires the implementation of local three- and four-qubit constraint gates [29]. For superconducting circuits we find their gate times at the QSL to be roughly similar to the run times of their decompositions into single- and two-qubit gates. In contrast, for neutral atoms we find the direct implementation of the constraint gates to be only slightly slower than any single two-qubit gate and, in particular, much faster than their decompositions into single- and two-qubit gates. We nevertheless observe for both platforms run times for a single QAOA step on the order of \(2-4\%\) of the platform's intrinsic coherence time.
While this corresponds to an improvement of one order of magnitude for a problem size with \(N=9\) logical qubits, it grows to an improvement of three orders of magnitude for \(N=121\). Since the gate counts, when the native implementations of the constraint gates are used, are also smaller in the PM compared to the SGM, we believe the PM to be advantageous in terms of the described resources for a single QAOA step. It minimizes errors due to finite coherence time and allows for large-depth QAOA. However, we want to emphasize that deeper PM-QAOA circuits are in general necessary to reach similar success probabilities compared to lower-depth SGM-QAOA implementations. While we want to emphasize that the feasibility of a quantum circuit depends on more than just its run time and gate count, our benchmark study demonstrates that at least these two factors can be theoretically further improved using optimized gate protocols. This holds for both neutral atoms and superconducting circuits.

###### Acknowledgements.

We would like to thank Hannes Pichler and Kilian Ender for helpful discussions and Michael Fellner for help with the parity QFT circuits. This work was supported by the Austrian Science Fund (FWF) through a START grant under Project No. Y1067-N27 and I 6011. This research was funded in whole, or in part, by the Austrian Science Fund (FWF) SFB BeyondC Project No. F7108-N38. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. This project was funded within the QuantERA II Programme that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 101017733.

## Appendix A Quantum circuits for QFT and QAOA

### Standard gate model

The quantum Fourier transform (QFT) is a key ingredient in Shor's algorithm for integer factorization [1] and thus a prototypical application for quantum computers. While a typical circuit representation of a QFT contains exclusively Hadamard gates, \(H\), and controlled-phase gates, \(R_{n}=\text{diag}\{1,1,1,\exp\{2\pi\mathrm{i}/2^{n}\}\}\), a more efficient representation with fewer gates and lower circuit depths can be constructed using SWAP gates [45].

The quantum approximate optimization algorithm (QAOA) aims at finding approximate solutions to combinatorial optimization problems [4]. For instance, let us consider the task of finding the ground state of the \(N\)-qubit spin glass Hamiltonian
\[\mathsf{H}_{\mathrm{z}}=\sum_{\begin{subarray}{c}n,m=1\\ n<m\end{subarray}}^{N}J_{nm}\sigma_{\mathrm{z}}^{(n)}\sigma_{\mathrm{z}}^{(m)}, \tag{10}\]
where \(J_{nm}\) denotes the interaction strength between qubits \(n\) and \(m\) and \(\sigma_{\mathrm{z}}^{(i)}\) the Pauli-z operator on qubit \(i\).
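For concreteness, the following sketch builds the diagonal of the spin-glass Hamiltonian of Eq. (10) for a small all-to-all instance with random couplings and reads off its ground-state configuration; the coupling values and instance size are illustrative.

```python
import numpy as np

def spin_glass_diagonal(J):
    """Diagonal of H_z = sum_{n<m} J_nm * Z^(n) Z^(m) for N qubits, cf. Eq. (10)."""
    N = J.shape[0]
    diag = np.zeros(2 ** N)
    z = np.array([1.0, -1.0])
    one = np.ones(2)
    for n in range(N):
        for m in range(n + 1, N):
            factors = [z if q in (n, m) else one for q in range(N)]
            zz = factors[0]
            for f in factors[1:]:
                zz = np.kron(zz, f)          # eigenvalues of Z^(n) Z^(m)
            diag += J[n, m] * zz
    return diag

# Example: N = 4 qubits with random all-to-all couplings
rng = np.random.default_rng(1)
N = 4
J = rng.normal(size=(N, N))
diag = spin_glass_diagonal(J)
ground_state_index = int(np.argmin(diag))
print(format(ground_state_index, f"0{N}b"), diag[ground_state_index])
```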
The QAOA allows one to find an approximate solution, i.e., an approximate ground state of Hamiltonian (10), by applying the procedure
\[\ket{\psi_{\mathrm{out}}}=\prod_{k=1}^{p}e^{-\mathrm{i}\alpha_{k}\mathsf{H}_{\mathrm{x}}}e^{-\mathrm{i}\beta_{k}\mathsf{H}_{\mathrm{z}}}\ket{\psi_{\mathrm{in}}},\qquad\mathsf{H}_{\mathrm{x}}=\sum_{n=1}^{N}\sigma_{\mathrm{x}}^{(n)}, \tag{11}\]
where \(\ket{\psi_{\mathrm{in}}}\) is the ground state of the so-called mixing Hamiltonian \(\mathsf{H}_{\mathrm{x}}\) and \(\alpha_{k},\beta_{k}\in[0,2\pi)\) are angles -- physically corresponding to evolution times -- that are iteratively optimized via a classical, closed-loop feedback optimization, with the energy expectation value of \(\ket{\psi_{\mathrm{out}}}\) being the objective to minimize. The number of steps is denoted by \(p\). A single step in the QAOA is thus given by the application of the spin glass or problem Hamiltonian \(\mathsf{H}_{\mathrm{z}}\) followed by the mixing Hamiltonian \(\mathsf{H}_{\mathrm{x}}\). While the latter corresponds to a parallel application of \(\alpha_{k}\)-dependent single-qubit \(X\) rotations, \(R_{\mathrm{x}}^{k}\), in the associated quantum circuit -- and is thus negligible time-wise -- the circuit implementation of the former requires multiple CNOT gates as well as single-qubit phase gates, \(R_{\mathrm{z}}^{nm}=\exp\{-\mathrm{i}\beta_{k}J_{nm}\sigma_{\mathrm{z}}\}\), containing information about \(J_{nm}\) and \(\beta_{k}\).

### Parity mapping

In the SGM, as described in Appendix A.1, every logical qubit is given by exactly one physical qubit and gates on logical qubits are equivalent to gates on physical qubits. This is different for the PM, where \(K>N\) physical qubits are required for a problem of \(N\) logical qubits and gates on logical qubits become different gates or even gate sequences on the physical qubits. However, due to the arrangement of physical qubits according to the PM, all gates between physical qubits are strictly local and thus require only nearest-neighbor connectivity.

For the QFT, we need \(K=N(N+1)/2\) physical qubits in the PM and find that Hadamard gates, \(H\), on logical qubits become equivalent to several single- and two-qubit gates on neighboring physical qubits. In contrast, the logical two-qubit controlled-phase gate, \(R_{n}\), between any pair of logical qubits is given by exactly three parallel single-qubit gates on physical qubits [49; 26]. Hence, while the logical Hadamard gates require more resources in the PM, the logical controlled-phase gates require significantly fewer resources -- especially for those gates where the logical qubits are far away from each other.

For the QAOA, we need \(K=N(N-1)/2\) physical qubits in the PM and Eq. (11) becomes [29]
\[\ket{\psi_{\mathrm{out}}}=\prod_{k=1}^{p}e^{-\mathrm{i}\alpha_{k}\mathsf{H}_{\mathrm{x}}^{\mathrm{phys}}}e^{-\mathrm{i}\tilde{\alpha}_{k}\mathsf{H}_{\mathrm{z}}^{\mathrm{phys}}}e^{-\mathrm{i}\gamma_{k}\mathsf{H}_{\mathrm{c}}}\ket{\psi_{\mathrm{in}}}, \tag{12}\]
where
\[\mathsf{H}_{\mathrm{x}}^{\mathrm{phys}}=\sum_{k=1}^{K}\tilde{\sigma}_{\mathrm{x}}^{(k)},\qquad\mathsf{H}_{\mathrm{z}}^{\mathrm{phys}}=\sum_{k=1}^{K}\tilde{J}_{k}\tilde{\sigma}_{\mathrm{z}}^{(k)} \tag{13}\]
are the modified mixing and spin glass Hamiltonians in the PM, respectively. The local field strengths \(\tilde{J}_{k}\) run over the \(N(N-1)/2\) interactions \(J_{nm}\), cf. Eq. (10), and \(\tilde{\sigma}_{\mathrm{z}}^{(k)}\) encodes the parity of the corresponding two-qubit interaction \(\sigma_{\mathrm{z}}^{(n)}\sigma_{\mathrm{z}}^{(m)}\) [27].
In order to solve optimization problems using the PM, we additionally need to constrain the dynamics to the \(2^{N-1}\) dimensional subspace within the \(2^{K}\) dimensional physical Hilbert space that corresponds to the \(2^{N-1}\) eigenstates of Eq. (10) that have unique eigenvalues [98]. Hence, since not every eigenstate of \(\mathsf{H}_{\mathrm{z}}^{\mathrm{phys}}\) has a logical counterpart in \(\mathsf{H}_{\mathrm{z}}\), such states need to be energetically penalized, as they would not correspond to valid solutions of the optimization problem. This is achieved by realizing \(C=K-N+1\) local three- and four-qubit constraints via [27]
\[\mathsf{H}_{\mathrm{c}}=\sum_{c=1}^{C}\tilde{\sigma}_{\mathrm{z}}^{(k_{1})}\tilde{\sigma}_{\mathrm{z}}^{(k_{2})}\tilde{\sigma}_{\mathrm{z}}^{(k_{3})}\left(\tilde{\sigma}_{\mathrm{z}}^{(k_{4})}\right), \tag{14}\]
where "local" refers to \(k_{1},\ldots,k_{4}\) being nearest-neighbor physical qubits. For every QAOA step in Eq. (12), this requires \(C\) constraint gates of the form (neglecting tildes and indices)
\[\mathsf{ZZZ}(\gamma)=\exp\left\{-\mathrm{i}\gamma\sigma_{\mathrm{z}}\sigma_{\mathrm{z}}\sigma_{\mathrm{z}}\right\}, \tag{15a}\]
\[\mathsf{ZZZZ}(\gamma)=\exp\left\{-\mathrm{i}\gamma\sigma_{\mathrm{z}}\sigma_{\mathrm{z}}\sigma_{\mathrm{z}}\sigma_{\mathrm{z}}\right\} \tag{15b}\]
with \(\gamma\) the effective constraint strengths, which -- like in the original QAOA scheme of Eq. (40) -- are optimized in a classical, closed-loop feedback optimization. The gates in Eq. (41) can be either realized directly [66] or by decomposing them into single- and two-qubit gates, e.g., by using four or six CNOT gates plus one \(\gamma\)-dependent single-qubit phase gate for \(\mathrm{ZZZ}(\gamma)\) or \(\mathrm{ZZZZ}(\gamma)\), respectively [29]. It is important to note that the constraint gates are the only multi-qubit gates in Eq. (41). Independent of \(N\), their implementation can be parallelized with at most nine [66] or four [29] consecutive layers of constraint gates for neutral atoms or superconducting circuits, respectively. However, note that shallower circuits may be feasible by now [99]. The difference between the two platforms originates from their different qubit-qubit coupling mechanisms. The tunable coupler architecture of superconducting circuits [8] allows the coupling between any pair of qubits to be switched off. As a consequence, all constraint plaquettes that do not share a common qubit, i.e., next-to-nearest neighbor plaquettes, can be implemented in parallel. A single QAOA step thus requires at most four layers of constraint gates to realize all of them. Neutral atoms, in contrast, interact via their Rydberg levels and therefore require additional spatial separation between atoms that are in their Rydberg levels but are not supposed to interact, i.e., to suppress unwanted interactions. Assuming that a single line of atoms in non-Rydberg levels suffices as a buffer between plaquettes that are to be implemented in parallel, this corresponds to two lines of plaquettes as a buffer in each spatial direction. This yields a maximal number of nine layers of constraint gates. Despite the platform-dependent differences, all quantum circuits corresponding to Eq. (41) have constant circuit depth.
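The decomposition of a ZZZ(\(\gamma\)) constraint gate into four CNOTs and one \(\gamma\)-dependent phase rotation, as mentioned above, can be verified numerically. The sketch below builds both unitaries with plain numpy and compares them; the qubit ordering and the use of \(R_{\mathrm{z}}(2\gamma)\) on the last qubit are common textbook conventions, stated here as assumptions rather than the specific decomposition used in Ref. [29].

```python
import numpy as np

def cnot(control, target, n):
    """CNOT on n qubits; qubit 0 is the leftmost (most significant) tensor factor."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1.0
    return U

def rz(theta, qubit, n):
    """exp(-i*theta*Z/2) on one qubit, embedded into n qubits."""
    single = np.diag(np.exp([-1j * theta / 2, 1j * theta / 2]))
    U = np.array([[1.0]])
    for q in range(n):
        U = np.kron(U, single if q == qubit else np.eye(2))
    return U

gamma = 0.37
# Direct definition: ZZZ(gamma) = exp(-i*gamma*Z(x)Z(x)Z), which is diagonal.
zzz_eigs = np.kron(np.kron([1, -1], [1, -1]), [1, -1])
zzz_direct = np.diag(np.exp(-1j * gamma * zzz_eigs))

# CNOT-ladder decomposition: accumulate the parity on qubit 2, rotate, uncompute.
decomp = cnot(0, 2, 3) @ cnot(1, 2, 3) @ rz(2 * gamma, 2, 3) @ cnot(1, 2, 3) @ cnot(0, 2, 3)
print(np.allclose(decomp, zzz_direct))  # True
```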
## Appendix B Krotov's method for quantum optimal control Krotov's method [90] is an iterative, gradient-based optimization algorithm for time-continuous control fields featuring a built-in monotonic convergence [82]. To achieve the latter, Krotov's method requires a specific choice of the total optimization functional \(J\), cf. Eq. (8). In detail, while the error measure \(\varepsilon_{T}\) at final time \(T\) remains the relevant figure of merit that we want to minimize, Krotov's method achieves its minimization only indirectly by minimizing the total functional \(J\), where the time-dependent running costs \(J_{t}\) are given by [79] \[J_{t}\left[\{\psi_{l}(t)\},\{\mathcal{E}_{k}(t)\},t\right]=\sum_{k}\frac{\lambda_{k}}{S_{k}(t)}\left(\mathcal{E}_{k}(t)-\mathcal{E}_{k}^{\mathrm{ref}}(t)\right)^{2}, \tag{42}\] where \(\mathcal{E}_{k}^{\mathrm{ref}}(t)\) is a reference field for the control field \(\mathcal{E}_{k}(t)\) that is to be optimized, \(S_{k}(t)\in(0,1]\) is a shape function and \(\lambda_{k}>0\) a numerical parameter. With the choice of Eq. (42), the update equation for field \(\mathcal{E}_{k}(t)\) becomes [82] \[\mathcal{E}_{k}^{(i+1)}(t)=\mathcal{E}_{k}^{\mathrm{ref}}(t)+\frac{S_{k}(t)}{\lambda_{k}}\,\mathfrak{Im}\left\{\sum_{l}\left\langle\chi_{l}^{(i)}(t)\right|\frac{\partial\mathsf{H}[\{\mathcal{E}_{k^{\prime}}\}]}{\partial\mathcal{E}_{k}}\bigg{|}_{\{\mathcal{E}_{k^{\prime}}^{(i+1)}(t)\}}\left|\psi_{l}^{(i+1)}(t)\right\rangle\right\}, \tag{43}\] where \(\left|\psi_{l}^{(i+1)}(t)\right\rangle\) are forward-propagated states and solutions to the Schrödinger equations \[\frac{\mathrm{d}}{\mathrm{d}t}\left|\psi_{l}^{(i+1)}(t)\right\rangle=-\mathrm{i}\,\mathsf{H}^{(i+1)}(t)\left|\psi_{l}^{(i+1)}(t)\right\rangle\] (44a) with boundary conditions given by the initial states \[\left|\psi_{l}^{(i+1)}(0)\right\rangle=\left|\psi_{l}(0)\right\rangle. \tag{44b}\] In contrast, \(\left|\chi_{l}^{(i)}(t)\right\rangle\) are backward-propagated co-states and solutions to the equations \[\frac{\mathrm{d}}{\mathrm{d}t}\left|\chi_{l}^{(i)}(t)\right\rangle=\mathrm{i}\,\mathsf{H}^{(i)}(t)\left|\chi_{l}^{(i)}(t)\right\rangle\] (45a) with boundary conditions \[\left|\chi_{l}^{(i)}(T)\right\rangle=-\frac{\partial\varepsilon_{T}}{\partial\left\langle\psi_{l}\right|}\Bigg{|}_{\{\psi_{l}^{(i)}(T)\}}. \tag{45b}\] The superscripts \(i\) and \(i+1\) in Eqs. (43)-(45) indicate whether the corresponding quantity is calculated using the "old" fields from iteration \(i\) or the updated fields from iteration \(i+1\), respectively. In order to turn Eq. (43) into a proper update equation, the reference field \(\mathcal{E}_{k}^{\mathrm{ref}}(t)\) is taken to be the field \(\mathcal{E}_{k}^{(i)}(t)\) from the previous iteration, in which case the second term on the right-hand side of Eq. (43) becomes its update. This choice causes the time-dependent costs \(J_{t}\), cf. Eq. (42), to gradually vanish as the iterative procedure converges. Hence, the error measure \(\varepsilon_{T}\) at final time \(T\) becomes the dominant term within the total optimization functional \(J\), cf. Eq. (8), and is thus predominantly minimized. Equation (43) also reveals that while \(\lambda_{k}\) can be used to control the general size of the update, \(S_{k}(t)\) can be used to suppress updates at certain times. We refer to Ref. [82] for a more detailed introduction of Krotov's method and to Ref. [85] for a detailed discussion about its numerical implementation. 
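For illustration only, the following NumPy sketch carries out a few single-control, single-state iterations in the spirit of Eqs. (43)–(45). The two-level Hamiltonian, the sine-squared shape function and the value of \(\lambda\) are arbitrary placeholders; the backward step uses the adjoint of the forward propagator, which is one common convention (sign conventions differ between formulations), and dedicated packages such as the krotov Python library implement the full method.

```python
import numpy as np
from scipy.linalg import expm

# Placeholder two-level model: H(t) = H0 + E(t) * H1, state transfer |0> -> |1>.
H0 = np.diag([0.0, 1.0])
H1 = np.array([[0.0, 1.0], [1.0, 0.0]])
psi0   = np.array([1.0, 0.0], dtype=complex)
target = np.array([0.0, 1.0], dtype=complex)

T, M = 5.0, 500
dt = T / M
t = np.linspace(0.0, T, M)
S   = np.sin(np.pi * t / T) ** 2        # shape function S(t), suppresses updates at t = 0 and t = T
lam = 5.0                               # step-size parameter lambda
E   = 0.1 * np.ones(M)                  # guess field

def prop(e):
    """Short-time propagator for a piecewise-constant field value e."""
    return expm(-1j * (H0 + e * H1) * dt)

for _ in range(20):                     # a few Krotov-type iterations
    # Forward propagation with the current ("old") field, cf. Eq. (44).
    psi_old = [psi0]
    for j in range(M):
        psi_old.append(prop(E[j]) @ psi_old[-1])
    # Backward co-state with chi(T) = |target><target|psi(T)>, matching
    # eps_T = 1 - |<target|psi(T)>|^2 in Eq. (45b); backward steps use the adjoint propagator.
    chi = [None] * (M + 1)
    chi[M] = target * np.vdot(target, psi_old[M])
    for j in range(M, 0, -1):
        chi[j - 1] = prop(E[j - 1]).conj().T @ chi[j]
    # Sequential update in the spirit of Eq. (43), with E_ref the old field and dH/dE = H1;
    # the state is re-propagated step by step with the already-updated field values.
    psi = psi0.copy()
    for j in range(M):
        E[j] = E[j] + (S[j] / lam) * np.imag(np.vdot(chi[j], H1 @ psi))
        psi = prop(E[j]) @ psi

print(abs(np.vdot(target, psi)) ** 2)   # fidelity after the final iteration
```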
## Appendix C Field generation and parametrization In this Appendix, we specify some of the technicalities for the method described in Sec. IV.3. We generate randomized, albeit smooth, guess fields via the formula [100] \[f(t)=a_{0}+\sqrt{2}\sum_{j=1}^{m}\left[a_{j}\cos\left(\frac{2\pi jt}{t_{1}-t_{0}}\right)+b_{j}\sin\left(\frac{2\pi jt}{t_{1}-t_{0}}\right)\right] \tag{20}\] where \(a_{0},a_{j},b_{j}\) are chosen randomly from a normal distribution \(N(\mu,\sigma)\) with mean \(\mu=0\) and variance \(\sigma=1/(2m+1)\). The integer \(m\) thereby determines not only the number of frequency components in \(f(t)\) but also the frequency of each component. Since we expect frequencies on the same time scale as those contained in the Hamiltonian to be of greater relevance for finding solutions, we choose \(m\) randomly from an interval matching each Hamiltonian's frequencies. Besides the generation of randomized guess fields, we also specify the internal parametrization of the fields. To this end, note that some of the physical or auxiliary control fields in Eqs. (1) or (5) are experimentally limited in range, i.e., have a lower and upper bound that should not be violated by the optimization algorithm. Let \(\mathcal{E}(t)\) be such a bounded field with \(\mathcal{E}_{\text{min}}\leq\mathcal{E}(t)\leq\mathcal{E}_{\text{max}}\). In order to restrict \(\mathcal{E}(t)\) to its bounds and to avoid manual truncation in case of violation of the bounds, we internally parametrize \(\mathcal{E}(t)\) via \[u(t) =\operatorname{arctanh}\left(\frac{2\mathcal{E}(t)-\mathcal{E}_{\text{max}}-\mathcal{E}_{\text{min}}}{\mathcal{E}_{\text{max}}-\mathcal{E}_{\text{min}}}\right), \tag{21a}\] \[\mathcal{E}(t) =\frac{\mathcal{E}_{\text{max}}-\mathcal{E}_{\text{min}}}{2}\tanh\left(u(t)\right)+\frac{\mathcal{E}_{\text{max}}+\mathcal{E}_{\text{min}}}{2}. \tag{21b}\] The auxiliary field \(-\infty<u(t)<\infty\) is the one optimized in practice and can be optimized without bounds since, by construction, the physical field \(\mathcal{E}(t)\) that it encodes can never violate its boundaries.
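As a concrete illustration, a short NumPy sketch of the guess-field generation of Eq. (20) and the bounded-field parametrization of Eq. (21); the number of components \(m\) and the bounds used below are arbitrary placeholder values, not the ones used in this work.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_guess_field(t, m):
    """Smooth randomized guess field f(t) on [t0, t1], cf. Eq. (20)."""
    t0, t1 = t[0], t[-1]
    std = np.sqrt(1.0 / (2 * m + 1))                 # standard deviation for variance 1/(2m+1)
    a = rng.normal(0.0, std, size=m + 1)             # a_0, ..., a_m
    b = rng.normal(0.0, std, size=m)                 # b_1, ..., b_m
    f = np.full_like(t, a[0])
    for j in range(1, m + 1):
        f += np.sqrt(2) * (a[j] * np.cos(2 * np.pi * j * t / (t1 - t0))
                           + b[j - 1] * np.sin(2 * np.pi * j * t / (t1 - t0)))
    return f

def to_internal(E, E_min, E_max):
    """Map a bounded field to the unconstrained auxiliary field u(t), Eq. (21a)."""
    return np.arctanh((2 * E - E_max - E_min) / (E_max - E_min))

def to_physical(u, E_min, E_max):
    """Map the auxiliary field back to the bounded physical field, Eq. (21b)."""
    return 0.5 * (E_max - E_min) * np.tanh(u) + 0.5 * (E_max + E_min)

t = np.linspace(0.0, 1.0, 200)
E = np.clip(random_guess_field(t, m=5), -0.95, 0.95)   # keep strictly inside the bounds (-1, 1)
u = to_internal(E, -1.0, 1.0)
assert np.allclose(to_physical(u, -1.0, 1.0), E)        # round trip reproduces the physical field
```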
2307.09882
Adversarial Likelihood Estimation With One-Way Flows
Generative Adversarial Networks (GANs) can produce high-quality samples, but do not provide an estimate of the probability density around the samples. However, it has been noted that maximizing the log-likelihood within an energy-based setting can lead to an adversarial framework where the discriminator provides unnormalized density (often called energy). We further develop this perspective, incorporate importance sampling, and show that 1) Wasserstein GAN performs a biased estimate of the partition function, and we propose instead to use an unbiased estimator; and 2) when optimizing for likelihood, one must maximize generator entropy. This is hypothesized to provide a better mode coverage. Different from previous works, we explicitly compute the density of the generated samples. This is the key enabler to designing an unbiased estimator of the partition function and computation of the generator entropy term. The generator density is obtained via a new type of flow network, called one-way flow network, that is less constrained in terms of architecture, as it does not require a tractable inverse function. Our experimental results show that our method converges faster, produces comparable sample quality to GANs with similar architecture, successfully avoids over-fitting to commonly used datasets and produces smooth low-dimensional latent representations of the training data.
Omri Ben-Dov, Pravir Singh Gupta, Victoria Abrevaya, Michael J. Black, Partha Ghosh
2023-07-19T10:26:29Z
http://arxiv.org/abs/2307.09882v3
# Adversarial Likelihood Estimation With One-Way Flows ###### Abstract Generative Adversarial Networks (GANs) can produce high-quality samples, but do not provide an estimate of the probability density around the samples. However, it has been noted that maximizing the log-likelihood within an energy-based setting can lead to an adversarial framework where the discriminator provides unnormalized density (often called energy). We further develop this perspective, incorporate importance sampling, and show that 1) Wasserstein GAN performs a biased estimate of the partition function, and we propose instead to use an unbiased estimator; and 2) when optimizing for likelihood, one must maximize generator entropy. This is hypothesized to provide a better mode coverage. Different from previous works, we explicitly compute the density of the generated samples. This is the key enabler to designing an unbiased estimator of the partition function and computation of the generator entropy term. The generator density is obtained via a new type of flow network, called one-way flow network, that is less constrained in terms of architecture, as it does not require a tractable inverse function. Our experimental results show that our method converges faster, produces comparable sample quality to GANs with similar architecture, successfully avoids over-fitting to commonly used datasets and produces smooth low-dimensional latent representations of the training data. ## 1 Introduction The goal of a generative model is to extract some notion of the data distribution given a training set, either explicitly by computing the probability density function [44], indirectly through distilling a stochastic sampling mechanism [12], or a combination of both [9]. While indirect generative models can achieve state-of-the-art performance in sample quality [12, 17], having explicit densities has several advantages. For example, an explicit density function can be used to quantitatively compare models, or to train models by maximum likelihood estimation (MLE), which has been proven to be statistically asymptotically efficient [15]. Autoregressive models [43, 44] and normalizing flows [8] are the most prominent examples of deep generative models that compute exact probability and directly maximize the log-likelihood of their training dataset. However, it is inefficient to sample from autoregressive models and they do not provide a low-dimensional latent representation of the data. Normalizing flows allow both efficient sampling and density estimation, but make restrictive assumptions on the architecture, requiring the latent space to be of the same dimensionality as that of the input, making it computationally expensive to use in a high-dimensional data regime. Energy-based models (EBMs) [41], variational autoencoders (VAEs) [22] and diffusion models [32, 38] are further examples of deep generative models trained with likelihood maximization. However, VAEs and diffusion models can only compute a lower bound of the likelihood. EBMs, on the other hand, represent an unnormalized density, allowing for greater flexibility in the choice of functional form, at the cost of inefficient sampling and approximate likelihood estimation. Indirect models such as Generative Adversarial Networks (GANs) [12, 17] have achieved state-of-the-art performance in terms of the quality of the generated data, but do not provide any estimate of the probability density around a sample. 
However, a connection has been noted between the loss function of these networks, in particular the Wasserstein GAN (WGAN) loss [2], and EBMs [5, 19, 45], in which the discriminator can be regarded as an energy function. With the goal of introducing density estimation within an adversarial training framework, we follow a similar path here, but develop these observations further to arrive at an _unbiased_ estimator of the partition function through the explicit computation of the generator density. Specifically, we begin by exploring the connection between EBMs and GANs, which leads to a training objective that closely resembles the WGAN loss, with minor but key differences. We notice that maximizing the log-likelihood of an EBM arrives at the WGAN loss if we take a biased estimate of the normalization constant of the energy function; or alternatively, WGANs perform a one-sample approximation of the partition function. Based on this observation, and in departure from previous work, we propose to use an unbiased estimator by explicitly computing the generator density \(P_{G_{\psi}}\). To calculate \(P_{G_{\psi}}\), we propose a new type of normalizing flow network that bypasses several architectural constraints found in standard flow models. In particular, we construct a flow that can perform upsampling and downsampling operations, starting from a lower-dimensional latent variable, at the cost of approximate probability computation. This is possible and sufficient since we only need to compute \(P_{G_{\psi}}\) for generated samples, while density estimation of real, non-generated points is relegated to the discriminator. Our experimental results show that our model is able to capture more modes, trains faster on images, produces comparable sample quality to GANs with similar architecture, and can be used to compute the partition function with a practical number of samples. In summary, we propose a framework for adversarial generative modeling that simultaneously computes an estimate of the density, with the following key contributions: i) by developing the connection between EBMs and GANs, we show that the WGAN discriminator objective is a biased estimator of the partition function; ii) we propose an unbiased estimate of the partition function of an EBM by explicitly computing the density of the generator; iii) we propose a new flow-based network for the computation of the generator density that enables a more flexible architecture, in contrast to traditional flow models. ## 2 Related work Two main categories of generative models are _prescribed_ and _implicit_ models [7]. Prescribed models recover an explicit parametric specification of the density function and are trained and evaluated by MLE; our work belongs to this family. Implicit models, on the other hand, represent the data distribution indirectly through a stochastic mechanism that generates random samples. In general, this offers more flexibility in terms of learning objective and model architecture, which is hypothesized to be responsible for the high visual quality of the generated samples. Normalizing flows [21, 9, 24] and autoregressive models [44, 35, 42] are examples of deep _prescribed_ generative models. Since these compute the density function explicitly, they can be optimized and evaluated using the train- and test-set log-likelihood. Although autoregressive models can efficiently work with high-dimensional data during training, due to ancestral sampling they are extremely slow at generating new samples. 
Normalizing flows require an invertible architecture to compute the likelihood, and consequently can only support latent spaces of the same dimensionality as the input data. In addition, they tend to produce large and memory-hungry models, and are therefore not so suitable for high-dimensional data. In this work, we relax the invertibility constraint by computing the flow in only one direction, enabling the use of lower-dimensional latent vectors, and more resource-efficient architectures. An intermediate category of generative models considers only an _approximation_ to the density function. Examples include a lower bound on the likelihood for VAEs and diffusion models [14], or the unnormalized density in the case of EBMs [39]. VAEs are known to suffer from low generation quality, _i.e_. they tend to produce blurry samples. Diffusion models can generate images of very high sample-quality [32, 6]; however, the latent representation needs to be of the same dimension as the input data. EBMs [41] deploy several techniques to obtain the derivative of the normalizing factor with respect to the model parameters. We maximize the same cost function as EBMs (see Eq. (3)), but explicitly model the normalization constant \(\zeta\). GANs [12] are the most prominent example of _implicit_ models, and produce state-of-the-art generated sample quality [18]. However, it has been observed that GANs may trade diversity for precision [3, 40, 34]. This results in generators that produce samples from only a few modes of the data distribution, a phenomenon known as "mode collapse". GANs are also well known for having unstable training dynamics [28, 2, 13]. The connection between GANs and EBMs on which we base our analysis has been previously observed in [4, 5, 19, 45]. A common assumption by these works is that the generator density is inaccessible, which forces them to work with a biased partition function. Furthermore, while designing the objective for the generator network, it is observed (as in this work) that the entropy of the generator distribution needs to be maximized. However, entropy estimation is closely related to density estimation, and therefore as hard as the original problem. To address this, [19] assumes that the batch normalization layer maps every intermediate activation to approximately normal distributions, and that the sum of the analytical entropy of these distributions approximate the true generator entropy. In [5] two different approaches are proposed for the generator distribution: 1) assume that the distribution is a mixture of isotropic Gaussians centered around the generator response \(G_{\psi}(z)\), and compute the gradient of such a mixture; and 2) compute its variational lower bound, which requires training yet another network that outputs a parametric form of the approximate posterior and an MCMC integration over the noise variable. A similar approach using variational lower bound has also been explored in [1]. Contrary to these, we explicitly compute the generator density and with it the entropy term, resulting in an unbiased estimate of the partition function. ## 3 Method ### Density estimation by MLE Given a dataset of independent and identically distributed samples \(\mathcal{X}\):=\(\{x_{i}\in\mathbb{R}^{n}\}_{i=1}^{m}\), drawn from an unknown probability distribution \(P_{\text{data}}\), our goal is to learn a parametric model \(P_{D_{\theta}}\) that matches the distribution of \(P_{\text{data}}\). 
Following EBMs [39, 41], we define \(P_{D_{\theta}}\) as \[P_{D_{\theta}}\left(x\right)=\frac{e^{D_{\theta}\left(x\right)}}{\zeta}. \tag{1}\] Here, \(D_{\theta}\) is a neural network with parameters \(\theta\). The exponentiation ensures a non-negative probability, and \(\zeta\)= \(\int_{\mathbb{R}^{n}}e^{D_{\theta}\left(x\right)}\text{d}x\) is a normalizing factor such that \(P_{D_{\theta}}\) integrates to unity. Note that traditionally the energy of an EBM is represented as \(e^{-D_{\theta}\left(x\right)}\), but here we consume the negative sign inside the \(D_{\theta}(x)\) function for notational simplicity. Since the partition function \(\zeta\) is generally intractable and hard to compute, we approximate this integral with importance sampling [23], and rewrite it as \[\begin{split}\zeta&=\int_{\mathbb{R}^{n}}P_{G_{ \psi}}\left(x\right)\frac{e^{D_{\theta}\left(x\right)}}{P_{G_{\psi}}\left(x \right)}\text{d}x=\mathbb{E}_{x\sim P_{G_{\psi}}}\left[\frac{e^{D_{\theta} \left(x\right)}}{P_{G_{\psi}}\left(x\right)}\right]\\ &\approx\frac{1}{S}\sum_{x\sim P_{G_{\psi}}}\frac{e^{D_{\theta} \left(x\right)}}{P_{G_{\psi}}\left(x\right)},\end{split} \tag{2}\] with \(S\) representing the number of samples used in the summation, and where \(P_{G_{\psi}}\) is an arbitrary distribution that is non-zero in the integration domain, often called the biased density. Here, we choose \(P_{G_{\psi}}\) to be the push-forward density through a neural network \(G_{\psi}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n}\) with parameters \(\psi\), such that \(y=G_{\psi}\left(z\right)\) is a sample from the biased density \(P_{G_{\psi}}\), and \(z\in\mathbb{R}^{d}\), \(z\sim P(Z)\) is the latent random variable, with \(d\leq n\). We further choose \(P(Z)\) to be the standard normal density \(\mathcal{N}(0,I)\). We will elaborate more on \(P_{G_{\psi}}\) in Sec. 3.3, and on \(G_{\psi}\) in Sec. 3.4. We train \(D_{\theta}\) by maximizing the log-likelihood \(\log P_{D}\left(x\right)\) of the dataset \(\mathcal{X}\): \[\begin{split}\theta^{*}&=\arg\max_{\theta}\left\{ \sum_{x\in\mathcal{X}}\log P_{D_{\theta}}\left(x\right)\right\}\\ &=\arg\max_{\theta}\left\{\sum_{x\in\mathcal{X}}\left(D_{\theta} \left(x\right)-\log\zeta\right)\right\}\\ &\approx\arg\max_{\theta}\left\{\sum_{x\in\mathcal{X}}\left[D_{ \theta}\left(x\right)-\log\sum_{y\sim P_{G_{\psi}}}\frac{e^{D_{\theta}\left(y \right)}}{P_{G_{\psi}}\left(y\right)}\right]\right\}.\end{split} \tag{3}\] The summation over \(y\sim P_{G_{\psi}}\) is the \(\zeta\) integral approximation from Eq. (2), summed over \(S\) samples. A full derivation can be found in Appendix A.1. Interestingly, if we take a one-sample approximation we get the objective \[\begin{split}\theta^{*}&=\arg\max_{\theta}\left\{ \sum_{x\in\mathcal{X}}\left[D_{\theta}\left(x\right)-\log e^{D_{\theta}\left(y \right)}+\log P_{G_{\psi}}\left(y\right)\right]\right\}\\ &=\arg\max_{\theta}\left\{\sum_{x\in\mathcal{X}}\left[D_{\theta} \left(x\right)-D_{\theta}\left(G_{\psi}\left(z\right)\right)\right]\right\}. \end{split} \tag{4}\] Here the term \(\log P_{G_{\psi}}\left(y\right)\) can be discarded because it does not depend on \(\theta\). Eq. (4) is exactly the objective for the WGAN discriminator [2]. Hence, we note a connection between the unnormalized log-density estimator \(D_{\theta}\) and the WGAN objective function, which can be re-interpreted as performing a one-sample approximation of \(\zeta\) within an energy-based framework. There is an alternative view to Eq. (4). 
If we simply drop the importance sampling scheme and embrace a biased estimate of the partition function, we again recover the WGAN objective as was shown in [5]. The introduction of the importance weights as in Eq. (3) is known to produce an unbiased estimator [33] of the normalizing constant \(\zeta\), although it does so at the cost of added variance. Since it is intractable to theoretically compute the variance of this estimator, even when we have access to the variance of the importance weight, we will show the empirical relevance of the unbiased estimator in Sec. 4. ### Learning \(P_{G_{\psi}}\) for importance sampling The construction of \(P_{G_{\psi}}\) in Eqs. (2) and (3) is important, since an appropriate choice can dramatically reduce the number of samples required to achieve an accurate approximation of \(\zeta\). To reduce the number of samples needed we minimize the variance of the approximation error, which is proportional to \(\frac{P_{D_{\theta}}}{P_{G_{\psi}}}\)[33]. This occurs when \(P_{G_{\psi}}\) matches \(P_{D_{\theta}}\) up to a multiplicative factor. Therefore, we train \(G_{\psi}\) by minimizing the KL-divergence [26] between the two distributions, leading to the objective function: \[\psi^{*}=\arg\max_{\psi}\left\{H\left(G_{\psi}\left(Z\right)\right)+\frac{1}{m }\sum_{z\sim Z}D_{\theta}\left(G_{\psi}\left(z\right)\right)\right\}, \tag{5}\] where \(Z\) is a random variable that is used as input to \(G_{\psi}\) and \(H\left(G_{\psi}\left(Z\right)\right)\) is the entropy of the generator distribution. The full derivation is in Appendix A.2. 1 Footnote 1: In practice, the entropy \(H\left(G_{\psi}\left(Z\right)\right)\) is not the same order of magnitude as the discriminator response, hence we add a weight \(w\) to the entropy term. We mathematically justify this and correct the objective and probability using this weight in Appendix C.1. Notably, we obtain in Eq. (5) the WGAN _generator_ objective, with an additional entropy term \(H\left(G_{\psi}\left(Z\right)\right)\) that requires maximization. We hypothesize that this term is responsible for ensuring diversity in the generated samples, and that its introduction can reduce the well-known problem of mode collapse in GANs. This has also been observed in [5, 19], where the authors proposed ad-hoc solutions to the computation of \(H\left(G_{\psi}\left(Z\right)\right)\) since the distribution of \(P_{G_{\psi}}\) was unknown. Note that any other choice of divergence is in principle valid as objective function. We take here the KL-divergence because it leads to an objective for \(G_{\psi}\) that is independent of the normalizing constant \(\zeta\) of the distribution given by \(D_{\theta}\), and because it leads to a natural connection with the WGAN loss. We leave the exploration of other divergences for future work. ### Estimating probabilities for the generator We require a tractable \(P_{G_{\psi}}\left(y\right)\) for the approximation of the integral in Eq. (3). One design option that fits this requirement is a normalizing flow network [9], where the density at a point \(y\) sampled using the generator network \(G_{\psi}(z)\) is computed using the change of variables formula: \[P\left(G_{\psi}\left(z\right)\right)=P_{Z}\left(z\right)\left|\det\left(\frac {\partial G_{\psi}\left(z\right)}{\partial z^{\intercal}}\right)\right|^{-1}, \tag{6}\] with \(P_{Z}\) the latent density from which \(z\) is sampled, and \(\frac{\partial G_{\psi}\left(z\right)}{\partial z^{\intercal}}\) the Jacobian of \(G_{\psi}\left(z\right)\). 
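To make the preceding objectives concrete, the following is a minimal PyTorch sketch of how the discriminator and generator losses of Eqs. (2)–(5) could be assembled. It assumes that the discriminator D returns a vector of scores, that generator samples y_gen come with their log-densities log_p_gen (as provided by the one-way flow described below), and that the entropy weight w follows the footnote; detaching between the two updates is omitted for brevity. This is a sketch under these assumptions, not the released implementation.

```python
import math
import torch

def discriminator_loss(D, x_real, y_gen, log_p_gen):
    # Unbiased importance-sampled estimate of log(zeta), cf. Eq. (2):
    # zeta ~ (1/S) * sum_y exp(D(y)) / P_G(y), computed in log space for stability.
    S = y_gen.shape[0]
    log_zeta = torch.logsumexp(D(y_gen) - log_p_gen, dim=0) - math.log(S)
    # Maximize mean_x D(x) - log(zeta), cf. Eq. (3); return the negation to minimize.
    return -(D(x_real).mean() - log_zeta)

def generator_loss(D, y_gen, log_p_gen, w=1.0):
    # Maximize w * H(G(Z)) + mean_z D(G(z)), cf. Eq. (5), with the entropy
    # estimated from the generator's own log-densities: H ~ -E[log P_G(y)].
    entropy = -log_p_gen.mean()
    return -(w * entropy + D(y_gen).mean())
```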
Normalizing flows require the mapping \(G_{\psi}\) to be bijective for the change of variable formula given by Eq. (6) to hold. Additionally, for any \(x\in\mathcal{X}\), normalizing flows must find a \(z\) such that \(G_{\psi}\left(z\right)=x\). This requires \(G_{\psi}\) to be designed in such a way that it can be efficiently inverted, which greatly restricts the choice of architecture of \(G_{\psi}\), and prevents from adopting the recent progress made by empirical research on GAN architectures [18, 31]. In our setting, however, we need to evaluate the generator density only at points _sampled from the generator_\(y=G_{\psi}(z)\). Therefore, we do not need to compute the inverse function \(G_{\psi}^{-1}(x)\) explicitly, only the forward \(G_{\psi}(z)\) and its Jacobian determinant. This allows to use any architecture for \(G_{\psi}\) whose Jacobian determinant can be computed efficiently. In Section 3.4 we show how to build such architecture, which we call _one-way flow_. ### One-way flow generator network Motivated by the generation quality of GANs we design a generator that maps a latent space (\(\mathbb{R}^{d}\)) to the data space (\(\mathbb{R}^{n}\)) with \(d\ll n\), gradually increasing dimensionality while retaining computational efficiency in the estimation of the density. First, we define a function \(g_{u}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n}\) that increases dimensionality by concatenating a random vector \(r\in\mathbb{R}^{n-d}\) as \(g_{u}\left(z\right)=\begin{pmatrix}z\\ r\end{pmatrix}\). Since \(r\) and \(z\) are independent by design and since \(P\left(r\right)\) and \(P\left(z\right)\) are known, the probability of the output is \[P\left(g_{u}\left(z\right)\right)=P\left(z\right)P\left(r\right). \tag{7}\] We can compose any number of functions that are either bijective or concatenate random noise as in \(g_{u}\), although in practice we did not find a need to use more than one such layer. We encapsulate the subsequent layers into a function of the form \(g_{n}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\). For the encapsulated layers, we allow any architecture design, with the constraint that nowhere inside this part of the network the dimension of the activation be smaller than \(n\). Finally, we construct the generator as \(G_{\psi}\left(z\right)\!=\!\left(g_{n}\circ g_{u}\right)\left(z\right)\). The form \(G_{\psi}:\mathbb{R}^{d}\!\rightarrow\!\mathbb{R}^{n}\) makes it possible to compute the probability \(P_{G_{\psi}}\left(G_{\psi}\left(z\right)\right)\) of an \(n\)-dimensional sample \(G_{\psi}\left(z\right)\) on its corresponding \(d\)-dimensional point \(z\) on the manifold. It is unnecessary to compute \(P_{G_{\psi}}\left(x\right)\) on any arbitrary \(n\)-dimensional point \(x\), since our model requires only probabilities of generated samples. However, computing the Jacobian of \(g_{n}\) and its determinant in high dimensions is a computationally heavy task. To efficiently approximate the determinant of the Jacobian we use the equality \[\left|J\right|^{-1}=\mathbb{E}_{v\sim S^{n-1}}\left[\left\|Jv\right\|^{-n}\right] \tag{8}\] from [37], where \(J\in\mathbb{R}^{n\times n}\) is a matrix, _i.e_. the Jacobian, and \(v\) is a random unit vector. To further increase efficiency, we use a one-sample approximation, which allows us to rewrite it in log form as \[\log\left|J\right|\approx n\log\left\|Jv\right\|. \tag{9}\] We show in Appendix B that using this form is sufficient for our purposes. For our definition of \(G_{\psi}\), with Eqs. 
(6) and (7), we get the computationally efficient form of the generator density evaluated at a generated point as follows: \[P_{G_{\psi}}\left(G_{\psi}\left(z\right)\right)=P\left(z\right)P\left(r\right) \left|\det\left(\frac{\partial g_{n}\left(z\right)}{\partial z^{\intercal}} \right)\right|^{-1}. \tag{10}\] Using the approximation of Eq. (9) we obtain a computationally efficient unbiased estimator of the entropy using an \(m\)-sample empirical mean as \[H\left(G_{\psi}\left(z\right)\right)\approx-\frac{1}{m}\sum_{z\sim P_{Z}}\left[ \log\left(P\left(z\right)P\left(r\right)\right)-n\log\left\|Jv\right\|\right]. \tag{11}\] This in turn lets us write the generator objective as \[\psi^{*}=\arg\max_{\psi}\left\{\sum_{z\sim P_{Z}}\left(n\log\left\|Jv\right\|+D _{\theta}\left(G_{\psi}\left(z\right)\right)\right)\right\}. \tag{12}\] The objective for the generator includes a maximization of the \(\log\) determinant of the generator Jacobian. This ensures that the Jacobian stays full rank during training. Furthermore, due to optimization dynamics, if it so happens that the Jacobian ceases to be full rank or approaches singularity, the cost function approaches negative infinity making the training dynamics shift and "focus" on restoring Jacobian rank. In practice, we do not see the Jacobian approach singularity, and it stays well-behaved. ## 4 Experiments In this section we provide experimental results for the generated data and the density estimation using both synthetic (Sec. 4.1) and real (Sec. 4.2) datasets. We show qualitative examples in Sec. 4.3. Implementation details and architectures can be found in Appendix C.2 and Appendix C.3. ### Synthetic data We begin by comparing the density estimation and sampling capabilities of our model. To this end, we perform an experiment on a synthetic 2D dataset, the same as the one presented in VEEGAN [40]. We train our model on two sets of Gaussian Mixture Models (GMM), with one set comprising \(8\) modes forming a ring (Fig. 0(a)) and another set comprising \(25\) modes in a grid (Fig. 0(c)). In order to test the theoretical analysis (Sec. 3) without the consequences of approximating the determinant of the Jacobian (Eq. (9)), we performed the 2D experiments while computing the exact determinant of the \(2\times 2\) Jacobians. To quantify the quality of the density captured by the **generator** we use the "high-quality samples and modes" metric from [40], where a generated point is considered _high quality_ if it is within a \(3\sigma\) distance from the nearest mode, and a mode is counted if it is the nearest mode to at least one high-quality sample. We generate 2,500 points and report the percentage of points that are high quality and the number of modes over five runs. We can see in Tab. 1 that our generator is able to capture all the modes, while also producing higher quality samples than other models. Since GAN models do not return a direct estimate of the probability of the data we cannot compare density estimation. Therefore, instead of a quantitative comparison, we qualitatively evaluate the density estimation of our discriminator by plotting in Fig. 1 its density map next to the ground truth density. Fig. 1 shows that our discriminator captures all the modes by giving them high values. To show the effectiveness of the samples created by the generator, in Fig. 
2(a) we show \(\log\zeta\) approximated with different numbers of samples using three different bias distributions: 1) a standard normal distribution, 2) our generator distribution and 3) the ground-truth distribution. For each distribution and each number of samples, we run the computation 10 times, and use an error bar to represent the standard deviation of the results. Fig. 2(a) shows that using our generator is more accurate than using a normal distribution, and requires fewer samples to converge. ### Real data To evaluate our loss objectives (Eqs. (3) and (5)) on real datasets, we use the DCGAN [30] architecture and train the model with various numbers of samples to approximate the integral (Eq. (2)). Appendix C.2 details our training parameters, Appendix C.3.2 explains our DCGAN-based architecture and Appendix C.4 compares the runtime difference between WGAN and our method. As seen in Tab. 2, using our formulation we achieve better Fréchet Inception Distance (FID) values. The table also shows results from the traditional normalizing flow-based GLOW [21]. Following [10], we provide histograms of unnormalized log-likelihoods for train and test data in Fig. 3. We remark that there is a large overlap between the test and train distributions. This indicates that the discriminator generalizes to the test set and gives evidence against over-fitting. Fig. 2(b-c) shows the value of \(\zeta\) according to different numbers of samples. Here we see that the computed values converge under a practical number of samples. Note that since \(\zeta\) is a constant for a given \(D_{\theta}\), its computation is required only once and can be saved for further density estimations. ### Qualitative results We also perform an interpolation experiment (Fig. 7), which shows that our framework retains smoothness in the latent space. Finally, because our generator introduces noise when increasing dimensionality (Eq. (7)), we wanted to see what characteristics are controlled by the initial latent space. For that we used the same latent vector as an input to the generator multiple times and show the results in Fig. 8. We observe that the structure stays the same among the images, with slight variations, e.g., in hair color. From here we hypothesize that changing the variance of groups of latent variables can be used as a mechanism to capture qualitative modes of the data. This corresponds to similar observations in StyleGAN [17], where the authors found interesting roles of intermediate auxiliary noise as it was introduced at different resolutions. However, in contrast, here the noise variance plays a crucial role and highlights a qualitatively high level of disentanglement in our generator. ## 5 Discussion As described in Sec. 3.2, considering a likelihood-based approach for GAN training leads to maximization of the generator entropy in addition to the WGAN objective. Moreover, the new discriminator objective formulation, as described in Sec. 3.1, assists in removing the bias from the WGAN objective. Both of these differences from the WGAN objective are made possible by having the generator provide the density of the generated samples. Whereas normalizing flows require computing the density of arbitrary points in order to train with log-likelihood maximization, a crucial difference with our model is that the computation of the density of real data points is not required. 
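To illustrate this point, the following is a minimal, single-sample (unbatched) PyTorch sketch of how the density of a generated point (Eqs. (7)–(10)) could be evaluated with the Jacobian-vector-product estimate of Eq. (9). The function names and the setup are illustrative assumptions rather than the released implementation.

```python
import torch
from torch.autograd.functional import jvp

def one_way_flow_log_density(g_n, z, r, log_p_z, log_p_r):
    """Evaluate log P_G(G(z)) for G = g_n o g_u, cf. Eqs. (7)-(10):
    log P_G = log P(z) + log P(r) - log|det J(g_n)|, with the log-determinant
    replaced by the one-sample estimate n * log||J v|| of Eq. (9)."""
    x = torch.cat([z, r], dim=-1)              # g_u: lift dimensionality by concatenating noise r
    n = x.shape[-1]
    v = torch.randn_like(x)
    v = v / v.norm()                           # random unit direction on S^{n-1}
    _, Jv = jvp(g_n, (x,), (v,))               # forward-mode Jacobian-vector product J v
    log_abs_det = n * torch.log(Jv.norm())
    return log_p_z + log_p_r - log_abs_det     # pass create_graph=True to jvp when training
```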
\begin{table} \begin{tabular}{l c c} \hline \hline & CelebA FID & CIFAR-10 FID \\ \hline **GLOW** & 24 & 95 \\ **WGAN-GP** & 24 & 61 \\ **Ours - 1 sample** & **22.5** & **42.4** \\ **Ours - 2 samples** & 22.9 & 51.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of FID values between our model and (1) WGAN with DCGAN architecture, (2) GLOW. Figure 1: True distribution vs. discriminator distribution in log space. Brighter colors represent higher values. (a) Ground truth distribution of the 8-modes ring. (b) Discriminator density estimation of the 8-modes ring. (c) Ground truth distribution of the 25-modes grid. (d) Discriminator density estimation of the 25-modes grid. Figure 2: Approximated integral by number of samples used (as in Eq. (2)) for the grid distribution. The vertical error bars represent the standard deviation over 10 computations. (a) Synthetic 2D grid. (b) CelebA. (c) CIFAR-10. Figure 3: Overfit test. Histograms of the values returned by the discriminator for the train and test sets. Top row (a-b) for CelebA and bottom row (c-d) for CIFAR-10. The left column (a,c) uses a 1-sample approximation of \(\zeta\) and the right column uses a 2-sample approximation. Figure 4: Random samples of generated CelebA images sorted by their discriminator-assigned unnormalized log-probability. The value above each image is the discriminator score. Figure 5: Random samples of generated CIFAR-10 images sorted by their discriminator-assigned unnormalized log-probability. The value above each image is the discriminator score. Figure 6: Evolution of FID during training using WGAN loss and our loss. Top row (a-b) for CelebA and bottom row (c-d) for CIFAR-10. The left column (a,c) uses a 1-sample approximation of \(\zeta\) and the right column uses a 2-sample approximation. Figure 7: Generated images from linear interpolations of the latent space using the CelebA dataset. Each row is independent of the other. Only the density of the generated data points needs to be computed. This leads to a considerable relaxation in the generator architecture, where the model is allowed to increase dimensionality throughout the generator. The ability to increase dimensions appears to contribute to getting better quality images, as seen in Tab. 2, where GLOW, which has constant dimensionality, generates lower quality images than DCGAN or more modern GANs. To keep this operation tractable we adopted the approximate Jacobian determinant computation in Sec. 3.4. This arguably introduces noise in the gradient. We leave to future work the task of building a generator architecture with layers that have a closed-form Jacobian. For instance, the computation of the Jacobian determinant for a convolution operation can be obtained from [16, 36], and the Jacobian for element-wise layers is a diagonal matrix. We expect this will further speed up and stabilize GAN training. Furthermore, to track the probability of the output of a dimension-reducing layer, as in the final convolution layer of DCGAN, the removed dimensions have to be marginalized, which is a difficult and expensive computation. When a tractable down-sampling operation is discovered, it could be applied in our model as well. While experimenting with different numbers of samples for \(\zeta\), we observed that increasing the number of samples did not necessarily improve the image quality. We suspect that this is because, given more samples, the training focuses more on increasing the variance of the generated images. 
We also suspect that the architecture we used for testing did not have the capacity to accommodate these variances. We leave it to future work to unlock the full potential of using multiple samples for approximating the normalizing factor. Finally, we leave to future work the application of the proposed work to more modern GAN generator and discriminator architectures ([18]). ## 6 Conclusion We presented a framework for density estimation within GANs, and explored the connection between EBMs and GANs to develop an unbiased estimator of the partition function of an EBM. This led to an objective function that is closely related to Wasserstein GAN with an additional entropy maximization criterion for the generator training that enables greater diversity of the generated samples. Furthermore, we proposed a modified flow network as generator, called one-way flow, which provides both samples and density estimates to compute empirical expectations while maintaining architectural flexibility. This allows for an efficient way of evaluating the generator density and generator entropy, which has historically proven hard. Our experimental results show that our model produces samples that are on par with other GAN generators, along with accurate density estimations and faster convergence. Our model provides new understandings of the properties of the discriminator and insights into GANs from a maximum likelihood perspective, while connecting these to EBMs. To accommodate maximum flexibility we have used a stochastic Jacobian-determinant approximator; we leave as future work its exact computation, which we hypothesize can reduce variance and speed up training. ## Acknowledgments The authors thank the International Max Planck Research School for Intelligent Systems for supporting OB. MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a consultant for Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.
2305.19230
Controlled Text Generation with Hidden Representation Transformations
We propose CHRT (Control Hidden Representation Transformation) - a controlled language generation framework that steers large language models to generate text pertaining to certain attributes (such as toxicity). CHRT gains attribute control by modifying the hidden representation of the base model through learned transformations. We employ a contrastive-learning framework to learn these transformations that can be combined to gain multi-attribute control. The effectiveness of CHRT is experimentally shown by comparing it with seven baselines over three attributes. CHRT outperforms all the baselines in the task of detoxification, positive sentiment steering, and text simplification while minimizing the loss in linguistic qualities. Further, our approach has the lowest inference latency of only 0.01 seconds more than the base model, making it the most suitable for high-performance production environments. We open-source our code and release two novel datasets to further propel controlled language generation research.
Vaibhav Kumar, Hana Koorehdavoudi, Masud Moshtaghi, Amita Misra, Ankit Chadha, Emilio Ferrara
2023-05-30T17:21:17Z
http://arxiv.org/abs/2305.19230v2
# Controlled Text Generation with Hidden Representation Transformations ###### Abstract We propose **CHRT** (**C**ontrol **H**idden **R**epresentation **T**ransformation) - a controlled language generation framework that steers large language models to generate text pertaining to certain attributes (such as toxicity). CHRT gains attribute control by modifying the hidden representation of the base model through learned transformations. We employ a contrastive-learning framework to learn these transformations that can be combined to gain multi-attribute control. The effectiveness of CHRT is experimentally shown by comparing it with seven baselines over three attributes. CHRT outperforms all the baselines in the task of detoxification, positive sentiment steering, and text simplification while minimizing the loss in linguistic qualities. Further, our approach has the lowest inference latency of only 0.01 seconds more than the base model, making it the most suitable for high-performance production environments. We open-source our code and release two novel datasets to further propel controlled language generation research. ## 1 Introduction With the latest developments in transformers, Vaswani et al. (2017), large language models Radford et al. (2019, 2018); Brown et al. (2020) have gained remarkable success in generating coherent and fluent text. These unsupervised models reflect the distribution of the corpuses they're trained upon. Unfortunately, these datasets may have content exhibiting toxicity, hate speech, stereotypical bias and misinformation. Language models serve millions of users in latency-constrained environments Wiggers (2022), because of which it has become increasingly important to gain control over certain attributes such as toxicity. Table 1 highlights the problem of toxic language generation in three common models - GPT-2 Radford et al. (2019), T5 Raffel et al. (2020) and Bloom Scao et al. (2022). It serves as the motivation for using controlled generation to minimize negative attributes like toxicity, as the productionisation of such models could be appalling. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Model** & **Prompt** & **Continuation** \\ \hline **openai-gpt** & & \begin{tabular}{c} fucked up– you can \\ hear my footsteps. You must \\ be in the... \\ \end{tabular} \\ **gpt2-large** & & \begin{tabular}{c} fucked up– you can \\ hear my footsteps. You must \\ be in the... \\ \end{tabular} \\ \hline **bloom-560m** & & \begin{tabular}{c} stupid and foolish \\ because you are ignorant \\ and... \\ \end{tabular} \\ \hline \end{tabular} \end{table} Table 1: Continuations generated by different large language models using the huggingface library Wolf et al. (2019). We observed that for certain prompts, continuations can be very toxic, and have a negative sentiment polarity. We propose CHRT, a lightning-fast controlled language generation framework that gains attribute control by transforming the hidden representation of a base model through contrastively learned transformations. We first fine-tune two guider models - \(\mathcal{LM}^{-}\) and \(\mathcal{LM}^{+}\). These are the base language model fine-tuned on negative and positive attribute text respectively. Then, we use the triplet loss to learn the attribute from the contrast between the hidden representations of the two fine-tuned models. We also preserve the base model's rich hidden representations by minimizing the L2 loss between the transformed and base hidden representation. To achieve both objectives simultaneously, we minimize a joint loss that is the weighted average of the triplet loss and the L2 loss. The weights act as the trade-off between controlling the attribute and the fluency of the generated text - the higher the weight for the triplet loss, the more the gain in attribute control for a loss in fluency. We empirically show 
\\ \end{tabular} \\ \hline **bloom-560m** & & \begin{tabular}{c} stupid and foolish because \\ because you are ignorant \\ and... \\ \end{tabular} \\ \hline \end{tabular} \end{table} Table 1: Continuations generated by different large language models using the huggingface Wolf et al. (2019). We observed that for certain prompts, continuations can be very toxic, and have a negative sentiment polarity. this trade-off in Section 4. To show the generalizability of our approach, we run controlled generation experiments for three attributes: toxicity, sentiment, and simplicity. For toxicity, we fine-tune our guider models on the real toxicity prompts (Gehman et al., 2020) dataset. We generate 25 generations per prompt and report the average toxicity and the probability of generating toxic continuations. We also report the fluency of generations using the perplexity metric. Finally, we perform a human evaluation study to corroborate our results. Closely following the approach of Real Toxicity Prompts (Gehman et al., 2020), we devise RealAttributePrompts - a framework to automatically generate datasets for controlled language generation benchmarking using an attribute classifier. We create and release two new datasets: RealSentimentPrompts and RealSimplicityPrompts for the task of sentiment control and text simplicity respectively. Similar to the experiments for toxicity, we generate 25 generations for each prompt and report the maximum attribute control and the probability of attribute control in generations. While for toxicity and sentiment we minimize the negative attribute (toxicity and negative sentiment), for text simplicity we maximize the attribute (simplicity), showcasing that our approach can be generalized for both maximizing and minimizing an attribute. Finally, we showcase multi-attribute control by combining multiple CHRT transformations in Section 3.5. For all our results we perform a comprehensive comparison with five existing baselines: DAPT (Domain Adaptive Pre-training) (Gururangan et al., 2020), NLC (Kajiwara, 2019) (Negative Lexically Constrained) decoding, PPLM (Plug and Play language models) (Dathathri et al., 2019), GeDi (Generative Discriminators) (Krause et al., 2020) and DEXerts (Liu et al., 2021), for controlling the base GPT-2 model. Our approach outperforms all five baselines in controlling the attributes of toxicity, sentiment, and text simplicity respectively with minimal loss in linguistic qualities. It also achieves the lowest latency of +0.01 second compared to the base language model, making it the most ideal for latency-constrained environments and use-cases. Our contributions can be summarized as follows: \(\bullet\) Proposing **C**ontrol **H**idden **R**epresentation **T**ransformations (**CHRT**), a lightning fast, novel and efficient controlled language generation framework which achieves high attribute control, minimal loss in fluency loss very fast inference time. \(\bullet\) Applying CHRT as a multi-attribute Control framework by combining multiple transformations. \(\bullet\) Proposing RealAttributePrompts - a novel optimized framework for generating datasets to benchmark controlled generation methods. \(\bullet\) Using RealAttributePrompts to release two new datasets: RealSentimentPrompts and RealSimplicityPrompts along with open-sourcing our code1. 
Footnote 1: [https://github.com/amazon-science/wqa-controlled-text-generation](https://github.com/amazon-science/wqa-controlled-text-generation) ## 2 Related Work Related work is broadly divided into two parts - Controlled Language Generation and the application of Contrastive Learning in NLP. ### Controlled Language Generation The controlled language generation literature can roughly be categorized into pre-processed learning-based or decoding time techniques, both with their advantages and disadvantages. **Learning Based:** These methods usually fine-tune language modeling or do prompt engineering to control text attributes. Gururangan et al. (2020) fine-tuned language models on domain-adaptive text to control attributes of the generated text. Other works employ Reinforcement Learning (Ziegler et al., 2019) and Contrastive learning (Gunel et al., 2020; Yu et al., 2020) for fine-tuning PLMs. While these fine-tuned language models achieve high fluency, they often fail to achieve optimal attribute control as shown in the existing literature (Liu et al., 2021; Yang and Klein, 2021). Some works try to model the generation length such as Kikuchi et al. (2016) who propose an encoder-decoder-based learning method to control generation length. Keskar et al. (2019) propose CTRL, a fine-tuning with control codes method to steer transformer-based PLMs towards certain attributes and styles. All these methods are not plug-and-play and usually require all the weights of the base language model. **Decoding Time:** These methods modify the decoding process and are usually plug-and-play with very minimal to no re-training requirements. Kajiwara (2019) add negative lexical constraints during decoding to reduce generation probabilities of certain lexical to zero. This method relies on creating a hard set of negative lexical which is not very versatile. Dathathri et al. (2019) utilize a bag of words or a small discriminator model to guide decoding during PLM generation. While this approach achieves good attribute control, it has low fluency and very high inference latency which makes it suboptimal for production environments. Krause et al. (2020) rather use generative discriminator models with individual token probabilities to modify the model distribution during decoding. Similarly, Liu et al. (2021) also modify the probability distribution of large PLMs using two smaller fine-tuned expert and clevert models. Yang and Klein (2021) condition on attributes using future discriminators to guide the decoding process. While most decoding-time algorithms require minimal changes and access to the original language model, they usually suffer a loss in linguistic qualities because of directly modifying the generation probability distribution. We show this phenomenon of loss in fluency in our results Section 4 using both automated and human evaluation. ### Contrastive Learning Contrastive learning is a representation learning algorithm that learns to map similar data samples close in an embedding space while pushing the dissimilar samples relatively farther. Contrastive learning has been widely used in Natural Language Processing for both supervised and unsupervised tasks. Most widely, it is used for representation learning in embedding space Kim et al. (2021); Gao et al. (2021); Wieting et al. (2015). Augmenting existing NLP frameworks with contrastive learning loss such as triplet loss Alber (1993) has enjoyed great success in text classification Fang et al. (2020); Suresh and Ong (2021); Xiong et al. 
(2020), information extraction Qin et al. (2020); Xiong et al. (2020), machine translation Pan et al. (2021); Vamvas and Sennrich (2021), question answering Karpukhin et al. (2020); You et al. (2021), summarization Cao and Wang (2021); Duan et al. (2019); Wang et al. (2021) and more. Similar to a plethora of existing literature, our method also relies on the triplet contrastive loss for learning hidden representation transformations. ## 3 CHRT: Control Hidden Representation Transformations We start with formally defining the problem of controlled language generation, followed by explaining how CHRT transforms the hidden representations. Finally, we explain the finetuning of guider models and training the CHRT transformation heads. Figure 1 schematically shows the training, inference, and the transform block for our approach. ### Controlled Language Generation Controlled generation can be formally described as modeling the distribution \(\mathbb{P}(W_{t}|W_{<t},\mathcal{A}=a)\) where \(\mathcal{A}\) is an attribute such as toxicity and \(a\) is the attribute class such as non-toxic. Through this distribution, language is generated auto-repressively by sampling one token \(w_{t}\) at a time as \(w_{t}\sim\ \mathbb{P}(W_{t}|W_{<t},\mathcal{A}=a)\) where \(W_{t}\) is the distribution over vocabulary at the current timestep and \(W_{<t}\) is the tuple of the tokens generated so far. ### Attribute Control Transformations For a controlled generation, we propose to modify the hidden representations \(h_{t}\) to \({h_{t}}^{\prime}=\tau(h_{t})\) where \(\tau\) is a transformation block. We want \(\tau\) to be learnable and thus construct it using neural network layers. Figure 0(c) summarizes the transformation block which is a concatenation of two identical blocks with skip connections. The skip connection allows feature reusability which is important to preserve the rich features of the base model. We empirically justify the construction choice of our transformation block in Appendix A. ### Finetuning Guider Models We fine-tune two guider models - \(\mathcal{LM}^{+}\) and \(\mathcal{LM}^{-}\) on positive and negative attribute text respectively. For example, to reduce toxicity the positive model is fine-tuned on a non-toxic corpus while the negative model is fine-tuned on a toxic corpus. These models are only used to learn the transformation \(\tau\) and are discarded later during the inference phase. During the fine-tuning of these guider models, we lock the language modeling head. With this, we can use the same language-modeling head as the base model and combine the hidden representations of guider models, the base model, and the CHRT-Transformed model. ### Training Transformations The objective of learning our transformation is twofold: maximizing the attribute control and preserving the original model's rich linguistic qualities. For each of these, we propose the following individual losses which are combined into a single loss through a weighted sum: 1. **Contrastive Loss \((\mathcal{L}_{c})\):** We use the contrastive triplet loss Balntas et al. (2016) to steer the hidden representation towards the hidden representation of fine-tuned model \(\mathcal{LM}^{+}\) and away from the \(\mathcal{LM}^{-}\). 
\[\mathcal{L}_{c}=\max\Big{\{}d(h^{\prime}_{t},h^{+}_{t})-d(h^{\prime}_{t},h^{-}_{t})+\delta,0\Big{\}}\] (1) where \(h^{\prime}_{t}=\tau(h_{t})\), \(h^{+}_{t}\) and \(h^{-}_{t}\) are the hidden representations from the transformed language model and the fine-tuned language models \(\mathcal{LM}^{+}\) and \(\mathcal{LM}^{-}\), respectively, \(d(a,b)=\left\|a-b\right\|_{2}\) is the L2 distance between \(a\) and \(b\), and \(\delta\) is the margin of separation. 2. **Preservation Loss \((\mathcal{L}_{p})\):** The purpose of this loss function is to preserve the base model's rich representations. We do so by minimizing the L2 distance between the base hidden representation \(h_{t}\) and the transformed representation \(h^{\prime}_{t}=\tau(h_{t})\). \[\mathcal{L}_{p}=\left\|h_{t}-h^{\prime}_{t}\right\|_{2}\] (2) Finally, we minimize the weighted sum of the two losses: \[\mathcal{L}=\lambda\mathcal{L}_{p}+(1-\lambda)\mathcal{L}_{c}\] (3) where \(\lambda\) determines the importance of the preservation loss over the contrastive loss. Section 4 experimentally showcases the effect of \(\lambda\) on the trade-off between fluency and attribute control. It should be noted that during the training of these transformations, all the weights are locked other than those of the transform block, as shown in Figure 0(a); this makes our training process computationally efficient. \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Toxicity**} & \multicolumn{1}{c|}{**Fluency**} & \multicolumn{3}{c}{**Diversity**} \\ & **Avg. Max. Toxicity (\(\downarrow\))** & **Avg. Toxicity Prob. (\(\downarrow\))** & **Perplexity (\(\downarrow\))** & **Dist-1 (\(\uparrow\))** & **Dist-2 (\(\uparrow\))** & **Dist-3 (\(\uparrow\))** \\ \hline **GPT-2** & 0.827 & 0.178 & 19.210 & 0.568 & 0.891 & 0.887 \\ **NLC** & 0.639 & 0.074 & **17.848** & 0.560 & 0.886 & 0.886 \\ **DAPT** & 0.617 & 0.066 & 19.494 & 0.583 & **0.899** & **0.889** \\ **PPLM** & 0.409 & 0.029 & 22.702 & 0.454 & 0.803 & 0.855 \\ **GeDi** & 0.482 & 0.062 & 21.758 & 0.592 & 0.827 & 0.816 \\ **DExperts** & 0.154 & 0.010 & 22.432 & **0.629** & 0.897 & 0.881 \\ \hline **CHRT\({}_{21}\)** & 0.162 & 0.008 & 18.811 & 0.569 & 0.889 & 0.886 \\ **CHRT\({}_{11}\)** & 0.088 & **0.004** & 20.327 & 0.577 & 0.890 & 0.882 \\ **CHRT\({}_{12}\)** & **0.085** & **0.004** & 20.330 & 0.578 & 0.890 & 0.882 \\ \hline \hline \end{tabular} \end{table} Table 2: Results for detoxification. Toxicity of the generations is measured using the detoxify model trained on the Jigsaw Toxicity Comments Challenge. Perplexity is measured using GPT-2-XL. Figure 1: Visual representation of CHRT's training, inference and transformation block. 
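For concreteness, a minimal PyTorch sketch of the joint training loss of Eqs. (1)–(3); tensor shapes, \(\lambda\) and \(\delta\) are illustrative, and in training only the transform block's parameters would receive gradients since all other weights are locked. This is a sketch of the objective under these assumptions, not the released implementation.

```python
import torch

def chrt_loss(h_base, h_transformed, h_pos, h_neg, lam=0.5, delta=1.0):
    """Joint CHRT objective of Eq. (3): lam * L_p + (1 - lam) * L_c."""
    # Triplet loss of Eq. (1): pull tau(h_t) towards LM+ and push it away from LM-.
    d_pos = torch.norm(h_transformed - h_pos, dim=-1)
    d_neg = torch.norm(h_transformed - h_neg, dim=-1)
    l_c = torch.clamp(d_pos - d_neg + delta, min=0.0).mean()
    # Preservation loss of Eq. (2): stay close to the base model's representation.
    l_p = torch.norm(h_transformed - h_base, dim=-1).mean()
    return lam * l_p + (1.0 - lam) * l_c
```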
It should be noted that the weights \(\alpha_{i}\) can be changed during inference to control the importance of one attribute over another without any additional re-training. We show the trade-off between attributes by varying CHRT weights in Section 4.5. ## 4 Experimental Results We experimentally show the efficacy of our approach by applying it to the GPT-2-medium language model and comparing it with five other controlled generation baselines. To show the generalization ability of our model, we report results for three attributes - toxicity, sentiment, and formality. We also show the multi-attribute control over two attributes and show the trade-off between their control based on their CHRT weights. For all our experiments we focus on the task of a prompt-continuation generation. ### Baselines Following existing works Liu et al. (2021); Krause et al. (2020), for all the baselines we generate 25 independent continuations of length 25 tokens conditioned per prompt. 1. **NLC:** Negative Lexically Constrained decoding, as proposed by Kajiwara (2019). We use the approach described in the paper to create the negative lexical set for each of the tasks and the huggingface library to generate continuations. 2. **DAPT:** We perform domain Adaptive Pre-Training Gururangan et al. (2020) by fine-tuning the vanilla model on the positive-attribute corpus. We use huggingface's fine-tuning and generation scripts for DAPT. 3. **PPLM:** For Plug-and-Play language models Dathathri et al. (2019), we use the scripts released by the authors 2 to first re-train the discriminators and then generate continuations for each of the attribute's prompts. Footnote 2: [https://github.com/uber-research/PPLM](https://github.com/uber-research/PPLM) 4. **GeDi:** For Generative Discriminators Krause et al. (2020), we train our own GeDi for each of the three attributes using the training scripts released by the authors 3. For generation, we use the same hyperparameters and the script as released by the authors. Footnote 3: [https://github.com/salesforce/GeDi](https://github.com/salesforce/GeDi) 5. **DExperts:** As proposed by Liu et al. (2021), we use their publicly released code to retrain the expert and dexpert models. For generation, we use the same hyper-parameters as suggested by the authors in their paper and publicly released code 4. Footnote 4: [https://github.com/alisawuffles/DExperts](https://github.com/alisawuffles/DExperts) 6. **CHRT:** We report results for three variants of CHRT with different weights for \(\mathcal{L}_{p}\) and \(\mathcal{L}_{c}\). For all our generations, we use nucleus sampling with top-p threshold of 0.8, repetition penalty of 1.2 Keskar et al. (2019) and the huggingface library. More implementation details for each of the baseline is presented in Appendix B. ### Detoxification For Detoxification, we aim to minimize the toxicity attribute using controlled generation for the task of prompt-continuation generation. **Prompt Selection:** We use the prompts from the RealToxicityPrompts Gehman et al. (2020) dataset. It contains 100k pairs of prompts and their continuations labeled for toxicity using Perspective API Per. The dataset is divided into a random train-test subset where the test set is \(30\%\) of the data. We create a subset of 1k prompts with the most probable toxic generations. Using the GPT-2 model, we generate 25 generations for each of the prompts in the test set and select the top 1k prompts with the highest probability of generating a toxic continuation. 
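As a concrete reference for the generation protocol above (25 sampled continuations of 25 tokens per prompt, nucleus sampling with top-p 0.8 and repetition penalty 1.2), a minimal sketch using a Hugging Face GPT-2 checkpoint might look as follows; the checkpoint name and function wrapper are illustrative, not the exact evaluation script.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()

@torch.no_grad()
def generate_continuations(prompt: str, n: int = 25, new_tokens: int = 25):
    """Generate n sampled continuations of a prompt with the decoding settings above."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,            # nucleus sampling
        top_p=0.8,                 # top-p threshold used for all baselines
        repetition_penalty=1.2,    # as in Keskar et al. (2019)
        max_new_tokens=new_tokens,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    return [tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]
```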
Toxicity is evaluated by the detoxify model Hanu et al. (2021) trained on the Jigsaw toxic comment classification dataset Jigsaw (2017). **Evaluation:** We report the toxicity, fluency, and diversity of the generated continuations. For measuring toxicity, we use the detoxify model and report the average probability of generating at least one toxic continuation and the average maximum generated toxicity over 1,000 prompts for 25 generations each. We measure fluency using the mean of perplexity over all the generations as measured by GPT-2-XL. We report diversity using dist-\(n\) scores (Li et al., 2015) which measures the number of distinct \(n\)-grams among the 25 generations per prompt, normalized by the generation length. ``` Data:\(\Omega\), \(\mathcal{C}\), \(\theta\), n Result:\(\mathrm{S}\) 1\(i\gets 0\), \(P\leftarrow\phi\), \(N\leftarrow\phi\), \(S\leftarrow\phi\); 2while\(|P|\leq n/2\vee|N|\leq n/2\)do 3\(\omega\leftarrow\Omega[i]\); 4\(i\gets i+1\); 5if\(|\omega|\notin[64,1024]\vee\neg\mathit{is\_english}(\omega)\)then 6continue 7if\(\mathcal{C}(\omega)\geq\theta\wedge|P|\leq n/2\)then 8\(P\gets P\cup\{\omega\}\); 9if\(\mathcal{C}(\omega)\leq 1-\theta\wedge|N|\leq n/2\)then 10\(N\gets N\cup\{\omega\}\); 11for\(s\in P\cup N\)do 12\(p\gets s[0:|s|/2]\) ; 13\(c\gets s[|s|/2:|s|]\) ; 14\(S\gets S\cup\{p,\mathcal{C}(p),c,\mathcal{C}(c),s,\mathcal{C}(s)\}\) ``` **Algorithm 1**RealAttributePrompts Further, we divide the training set of RealToxicityPrompts into a subset of a toxic and non-toxic corpus containing both prompts and continuations using the labeled toxicity score. For a fair comparison, we use these training corpora to train and fine-tune all the baselines as well as our approach. Table 2 summarizes the results where CHRT\({}_{ab}\) represents our approach with weight \(\lambda=\frac{a}{a+b}\) for the preservation loss \(\mathcal{L}_{p}\) and \(1-\lambda=\frac{b}{a+b}\) for the contrastive loss \(\mathcal{L}_{c}\). We can observe that as we increase \(\lambda\), the fluency of our model (in terms of perplexity) increases. CHRT\({}_{12}\) achieves the maximum attribute control i.e. the lowest toxicity of 0.085 and 0.004 in terms of both maximum and average toxicity. As compared to other baselines, CHRT achieves the maximum attribute control with minimal loss in fluency. Methods like PPLM, GeDi and DExperts achieve attribute control by heuristically modifying the token probability distribution of the base language model at each timestep instead of modifying the dense representations which impede the fluency (as observed empirically in Table 2) of the model. CHRT also achieves comparable diversity scores as compared to the base language model and other baselines. We report the inference time of CHRT as compared to other baselines in Table 3. We observe that CHRT has an inference time of just 0.01 seconds more than the base model. It is the lowest, as compared to all other baselines, making our approach lightning-fast and ideal for latency-constrained environments. ### Sentiment Steering In the best of our knowledge, no publicly released prompt-continuation dataset for sentiment-controlled generation exists. Therefore, inspired by RealToxicityPrompts (Gehman et al., 2020), we create a framework called RealAttributePrompts. Given an arbitrary attribute, Algorithm 1 efficiently generates prompts for controlled generation benchmarking of size \(n\). \(\mathcal{C}(\omega)\rightarrow[0,1]\) is an attribute classifier that returns a classification probability for a sentence \(\omega\). 
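For readability, Algorithm 1 can be transcribed into Python roughly as follows; this is a sketch, where the attribute classifier \(\mathcal{C}\) and the `is_english` filter are placeholders for the classifiers and language filter named in the text, and the remaining inputs \(\theta\) and \(\Omega\) are described immediately after.

```python
def is_english(sentence: str) -> bool:
    """Placeholder for the language-identification filter used in the paper."""
    return True

def real_attribute_prompts(omega, classifier, theta, n):
    """Sketch of Algorithm 1 (RealAttributePrompts).

    omega      -- large pool of candidate sentences (assumed big enough to fill both sets)
    classifier -- attribute classifier C(w) -> [0, 1]
    theta      -- confidence level in [0, 1]
    n          -- total number of selected sentences
    """
    positives, negatives, dataset = [], [], []
    i = 0
    while len(positives) <= n // 2 or len(negatives) <= n // 2:
        w = omega[i]
        i += 1
        if not (64 <= len(w) <= 1024) or not is_english(w):    # length and language filters
            continue
        score = classifier(w)
        if score >= theta and len(positives) <= n // 2:        # confidently positive sentences
            positives.append(w)
        if score <= 1.0 - theta and len(negatives) <= n // 2:  # confidently negative sentences
            negatives.append(w)
    for s in positives + negatives:
        p, c = s[: len(s) // 2], s[len(s) // 2:]               # split into prompt / continuation
        dataset.append((p, classifier(p), c, classifier(c), s, classifier(s)))
    return dataset
```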
\(\theta\in[0,1]\) is a confidence level for the attribute classifier \(\mathcal{C}\) and \(\Omega\) is a large set of sentences extracted from the large OpenWebText corpus (Gokaslan and Cohen, 2019). For filtering away non-English sentences, we use FastText (Bojanowski et al., 2016). The set \(S\) returned by Algorithm 1 is a set of prompt-continuation pairs with individual and joint attribute scores. For evaluating sentiment-controlled generation, we use Algorithm 1 to create RealSentimentPrompts. For the attribute classifier \(\mathcal{C}\), we use RoBERTa (Liu et al., 2019) fine-tuned on the Twitter sentiment classification data (Barbieri et al., 2020). We set the confidence threshold \(\theta=0.9\) to create a dataset of size \(n=100k\). After the creation of this dataset, we use the same approach and metrics as in Section 4.2 for selecting prompts and evaluating generated continuations. \begin{table} \begin{tabular}{l l} \hline \hline **Model** & **Inference Time (s)** \\ \hline **GPT-2/DAPT** & **0.811** \\ \hline **NLC** & 0.867 _(+0.05)_ \\ **PPLM** & 10.12 _(+9.30)_ \\ **GeDi** & 1.702 _(+0.89)_ \\ **DE Experts** & 1.989 _(+1.17)_ \\ **CHRT** & **0.823** _(+0.01)_ \\ \hline \hline \end{tabular} \end{table} Table 3: Average generation time for different baselines (in seconds) for generating one continuation of 25 tokens over 100 generations. Table 4 summarizes the results, where we can see that our approach CHRT\({}_{12}\) achieves the lowest maximum negative sentiment of 0.094, which is more than 62% lower than DExperts, the next-best baseline. We also achieve the lowest probability of just 0.5% for generating negative sentiment text with only a 10.84 point loss in perplexity as compared to the base GPT-2. Finally, our CHRT models show minimal to no loss in diversity. Similar to the results for detoxification, we can again observe a trade-off between attribute control and generation quality. As we increase the weight for the contrastive triplet loss \(\mathcal{L}_{c}\), the maximum negative sentiment and the probability of generating negative sentiment text decrease with an increase in perplexity. ### Text Simplification Similar to sentiment steering, we create RealSimplicityPrompts, a dataset to benchmark controlled generation while maximizing the simplicity [10] of the generated text. We again use Algorithm 1 with the same classifier fine-tuned on the PWKP version 2 [1] dataset. Unlike the previous two tasks where attribute control was achieved by minimizing the attribute (toxicity and negative sentiment), in this task, we gain attribute control by maximizing an attribute (simplicity). In Table 5 we can observe that our approach achieves the highest average maximum simplicity of 0.995 and a probability of 99.6% for generating simple continuations. We observe that, in fact, one of our models, CHRT\({}_{21}\), achieves better fluency with a perplexity of 20.690 compared to 22.028 for the vanilla GPT-2 model, with minimal to no loss in diversity in the generated text. ### Multi-Attribute Control We can combine multiple CHRTs for different attributes to gain multi-attribute control. Using the approach defined in Section 3.5, we generate text while controlling two attributes - simplicity and sentiment. For this experiment, we do unprompted generation, that is, through random nucleus sampling, we generate 1k text sequences of length 25 each conditioned on the beginning of sequence token [1]. We want to generate the continuation such that it is both simple and sentimentally positive. 
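A sketch of how two trained transform blocks can be combined at decoding time for this experiment is given below; it assumes a Hugging Face GPT-2 LM-head model (so the hidden states and the locked head are reachable as `model.transformer` and `model.lm_head`) and uses simplified nucleus sampling without a key-value cache. It is meant only to illustrate Eqn. (4) in use, not to reproduce the exact generation script.

```python
import torch

@torch.no_grad()
def chrt_generate(model, tokenizer, tau_simple, tau_sentiment, alpha,
                  max_new_tokens=25, top_p=0.8):
    """Unprompted generation with a convex combination of two CHRT transform blocks.

    tau_simple, tau_sentiment -- trained transform blocks for simplicity and sentiment
    alpha                     -- weight on the simplicity transform; (1 - alpha) on sentiment
    """
    ids = torch.tensor([[tokenizer.bos_token_id]])            # condition on the BOS token
    for _ in range(max_new_tokens):
        hidden = model.transformer(ids).last_hidden_state     # base hidden states h_t
        h_last = hidden[:, -1]
        h_prime = alpha * tau_simple(h_last) + (1 - alpha) * tau_sentiment(h_last)  # Eqn. (4)
        logits = model.lm_head(h_prime)                       # shared, locked LM head
        probs = torch.softmax(logits, dim=-1)
        # nucleus (top-p) filtering followed by sampling
        sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)
        keep = sorted_probs.cumsum(dim=-1) - sorted_probs < top_p
        sorted_probs = sorted_probs * keep
        sorted_probs = sorted_probs / sorted_probs.sum(dim=-1, keepdim=True)
        next_tok = sorted_idx.gather(-1, torch.multinomial(sorted_probs, 1))
        ids = torch.cat([ids, next_tok], dim=-1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

Sweeping `alpha` between 0 and 1 with this procedure, and scoring the outputs with the simplicity and sentiment classifiers, yields the trade-off discussed next.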
Figure 2 shows the trade-off between controlling (increasing) text simplicity and controlling (decreasing) the negative sentiment in generations. \(\alpha\) and \(1-\alpha\) are the CHRT weights for simplicity and sentiment control transformations respectively. Increasing \(\alpha\) shifts the control from generating simple text to generating positive sentiment text. A clear trade-off can be observed by varying \(\alpha\) in Figure 2 where sentiment and simplicity are measured using the classifiers described in Section 4.3 and Section 4.4 respectively. \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Negative Sentiment (NS)**} & \multicolumn{2}{c|}{**Fluency**} & \multicolumn{2}{c}{**Diversity**} \\ & **Avg. Max. NS (\(\downarrow\))** & **Avg. NS Prob. (\(\downarrow\))** & **Perplexity (\(\downarrow\))** & **Dist-1 (\(\uparrow\))** & **Dist-2 (\(\uparrow\))** & **Dist-3 (\(\uparrow\))** \\ \hline **GPT-2** & 0.934 & 0.534 & **17.372** & 0.756 & 0.833 & 0.718 \\ **NLC** & 0.859 & 0.310 & 17.542 & 0.756 & 0.827 & 0.709 \\ **DAPT** & 0.480 & 0.039 & 19.570 & 0.727 & 0.817 & 0.702 \\ **PPLM** & 0.738 & 0.139 & 38.981 & 0.654 & 0.770 & 0.679 \\ **GeDi** & 0.774 & 0.242 & 26.471 & 0.779 & 0.775 & 0.647 \\ **DE Experts** & 0.249 & 0.012 & 33.390 & **0.796** & 0.824 & 0.696 \\ \hline **CHRT\({}_{21}\)** & 0.325 & 0.028 & 21.746 & 0.748 & **0.840** & **0.732** \\ **CHRT\({}_{11}\)** & 0.175 & 0.012 & 24.316 & 0.748 & 0.835 & 0.728 \\ **CHRT\({}_{12}\)** & **0.094** & **0.005** & 28.160 & 0.747 & 0.831 & 0.729 \\ \hline \hline \end{tabular} \end{table} Table 4: Results for Sentiment Steering. Sentiment polarity for generations is measured using a RoBERTa text classifier fine-tuned on Twitter sentiment classification data. Perplexity is measured using GPT-2-XL. Figure 2: Trade-off between attributes during multi-attribute control. As \(\alpha\) increase, we can see the control shifting from simplicity to sentiment. We perform a crowd-sourced human evaluation to make the inference on our results more robust. To the best of our knowledge, this is the largest human evaluation study for controlled text generation benchmarking. We consider 1k toxic prompts (as described in Section 4.2) and generate continuation of length 25 using CHRT\({}_{12}\) and the baselines. We ask the crowd workers on Amazon mechanical Turk, in three separate tasks, to rate toxicity, linguistic quality, and topicality (relevance of the generated continuation to the prompt) of the continuation conditioned on the prompt. For each task, we crowd-source the scores from 5 unique workers and perform maximum voting. Workers are asked to rate toxicity for each baseline independently on a scale of 0 to 2 where 0, 1, and 2 correspond to non-toxic, mildly-toxic, and toxic respectively. Linguistic quality has a scale of 0 to 3 where each corresponds to very low quality, low quality, high quality, and very high quality. Finally, topicality is rated between 0 and 2 where 0, 1, and 2 correspond to non-topical, mildly-topical and topical. From Table 6 we observe that CHRT\({}_{12}\) achieves the lowest toxicity rating of only 0.027 with a minimal loss in linguistic quality of 0.024 points as compared to the base GPT-2. Low standard deviation in human annotation scores for CHRT\({}_{12}\) further strengthens our argument. Finally, it should be noted that all the entries in Table 6 marked with \({}^{*}\) have a p-value of greater than 0.05 for a pair-wise T-test with CHRT\({}_{12}\). 
Since all the baselines have a statistically insignificant difference in Topicality, we make no conclusion about the superiority of any approach as compared to ours. ## 5 Limitations Our work is limited in capturing the unintended dependencies of attributes. It is possible that maximizing certain attributes like positive sentiment may maximize attributes like gender bias. A formal study to capture the dependency of the bias with varied attribute control is an important future direction. The efficacy automated metrics used to measure the linguistic qualities and attribute alignment of the generations is limited Jozefowicz et al. (2016). Devising more exhaustive and explainable metrics is also an important future-work. ## 6 Conclusion We present CHRT, a learning-based controlled language generation framework that achieves state-of-the-art attribute control while minimizing the loss in linguistic quality as compared to five recent baselines. The ability to combine control over multiple attributes and the ultra-fast inference of our approach makes it ideal for latency-constrained use-cases. We empirically showcase the effectiveness of our approach by performing both large-scale automated and human-evaluated benchmarks. For future work, we will like to work on making our approach more plug-and-play and achieve attribute control with an even lower loss in linguistic quality. \begin{table} \begin{tabular}{l|c|c|c} \hline **Model** & **Toxicity (\(\downarrow\))** & **L.Q. (\(\uparrow\))** & **Topicality (\(\uparrow\))** \\ \hline **GPT-2** & \(0.369_{0.49}\) & \(\textbf{2.490}_{0.29}\) & \({}^{*}1.320_{0.29}\) \\ **NLC** & \(0.211_{0.36}\) & \(2.497_{0.30}\) & \({}^{*}1.170_{0.20}\) \\ **DAPT** & \(0.167_{0.33}\) & \(2.483_{0.27}\) & \({}^{*}1.140_{0.32}\) \\ **PPLM** & \(0.072_{0.23}\) & \(1.859_{0.85}\) & \({}^{*}1.237_{0.29}\) \\ **GeDi** & \(0.146_{0.31}\) & \(2.369_{0.47}\) & \({}^{*}1.273_{0.33}\) \\ **DExerts** & \(0.053_{0.18}\) & \({}^{*}2.455_{0.36}\) & \({}^{*}1.207_{0.47}\) \\ **CHRT\({}_{12}\)** & \(\textbf{0.027}_{0.12}\) & \(2.466_{0.35}\) & \({}^{*}1.160_{0.37}\) \\ \hline \end{tabular} \end{table} Table 6: Human Evaluation Results: We report the mean sub-scripted with the standard deviation of scores. L.Q. Stands for Linguistic Qualities. The entries marked with \({}^{*}\) have statistically insignificant difference with CHRT\({}_{12}\) \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Simplicity**} & \multicolumn{2}{c|}{**Fluency**} & \multicolumn{2}{c}{**Diversity**} \\ & **Avg. Max. Simplicity (\(\uparrow\))** & **Avg. Simplicity Prob. (\(\uparrow\))** & **Perplexity (\(\downarrow\))** & **Dist-1 (\(\uparrow\))** & **Dist-2 (\(\uparrow\))** & **Dist-3 (\(\uparrow\))** \\ \hline **GPT-2** & 0.806 & 0.259 & 22.028 & 0.863 & **0.670** & 0.484 \\ **NLC** & 0.875 & 0.420 & 22.017 & 0.863 & 0.664 & 0.474 \\ **DAPT** & 0.900 & 0.388 & 20.827 & **0.865** & **0.670** & **0.483** \\ **PPLM** & 0.942 & 0.692 & 45.749 & 0.765 & 0.607 & 0.439 \\ **GeDi** & 0.863 & 0.506 & 33.187 & 0.844 & 0.588 & 0.399 \\ **DExerts** & 0.992 & 0.959 & 25.627 & 0.831 & 0.632 & 0.452 \\ \hline **CHRT\({}_{21}\)** & 0.991 & 0.919 & **20.690** & 0.839 & 0.644 & 0.459 \\ **CHRT\({}_{11}\)** & 0.994 & 0.982 & 21.316 & 0.813 & 0.626 & 0.451 \\ **CHRT\({}_{12}\)** & **0.995** & **0.996** & 23.242 & 0.777 & 0.623 & 0.456 \\ \hline \end{tabular} \end{table} Table 5: Results for Text Simplification. 
Simplicity for generations is measured using a RoBERTa text classifier fine-tuned on the PWKP v2 dataset. Perplexity is measured using GPT-2-XL.
2305.06310
SoGAR: Self-supervised Spatiotemporal Attention-based Social Group Activity Recognition
This paper introduces a novel approach to Social Group Activity Recognition (SoGAR) using Self-supervised Transformers network that can effectively utilize unlabeled video data. To extract spatio-temporal information, we created local and global views with varying frame rates. Our self-supervised objective ensures that features extracted from contrasting views of the same video were consistent across spatio-temporal domains. Our proposed approach is efficient in using transformer-based encoders to alleviate the weakly supervised setting of group activity recognition. By leveraging the benefits of transformer models, our approach can model long-term relationships along spatio-temporal dimensions. Our proposed SoGAR method achieved state-of-the-art results on three group activity recognition benchmarks, namely JRDB-PAR, NBA, and Volleyball datasets, surpassing the current numbers in terms of F1-score, MCA, and MPCA metrics.
Naga VS Raviteja Chappa, Pha Nguyen, Alexander H Nelson, Han-Seok Seo, Xin Li, Page Daniel Dobbs, Khoa Luu
2023-04-27T03:41:15Z
http://arxiv.org/abs/2305.06310v3
# SoGAR: Self-supervised Spatiotemporal Attention-based Social Group Activity Recognition ###### Abstract This paper introduces a novel approach to Social Group Activity Recognition (SoGAR) using Self-supervised Transformers network that can effectively utilize unlabeled video data. To extract spatio-temporal information, we created local and global views with varying frame rates. Our self-supervised objective ensures that features extracted from contrasting views of the same video were consistent across spatio-temporal domains. Our proposed approach is efficient in using transformer-based encoders to alleviate the weakly supervised setting of group activity recognition. By leveraging the benefits of transformer models, our approach can model long-term relationships along spatio-temporal dimensions. Our proposed SoGAR method achieved state-of-the-art results on three group activity recognition benchmarks, namely JRDB-PAR, NBA, and Volleyball datasets, surpassing the current numbers in terms of F1-score, MCA, and MPCA metrics. ## 1 Introduction Group activity recognition (GAR) has emerged as an emerging topic in computer vision, with numerous applications in sports video analysis, video monitoring, and social scene understanding. Unlike conventional action recog nition methods that focus on identifying individual actions, GAR aims to classify the actions of a group of people in a given video clip as a whole. This requires a deeper understanding of the interactions between multiple actors, including accurate localization of actors and modeling their spatiotemporal relationships [53; 11; 56; 46]. As a result, GAR poses fundamental challenges that need to be addressed in order to develop effective solutions for this problem. In this context, the development of novel techniques for group activity recognition has become an active area of research in computer vision. Existing methods for GAR require ground-truth bounding boxes and action class labels for training and testing [29; 58; 27; 22; 44; 18; 60; 62; 37]. Bounding box labels are used to extract actor features and their spatio-temporal relations, which are then aggregated to form a group-level video representation for classification. However, the reliance on bounding boxes and substantial data labeling annotations severely limit their applications. To address these limitations, some methods simultaneously train person detection and group activity recognition using bounding box labels [7; 65]. Figure 1: Overview of conventional and proposed methods for social activity recognition. The labels in the right image show the predicted labels. Another approach is weakly supervised GAR (WSGAR) learning [61; 31], which does not require individual actor-level labels for training and inference. Yan _et al._[61] proposed WSGAR learning approach that uses a pre-trained detector to generate actor box suggestions and learn to eliminate irrelevant possibilities. However, this method suffers from missing detections when actors are occluded. Kim _et al._[31] introduced a detector-free method that captures actor information using partial contexts of token embeddings, but this method can only learn when there is movement in consecutive frames. Moreover, Kim _et al._[31] did not consider the consistency of temporal information among different tokens. Hence, there is a need for a GAR approach that can capture temporal information accurately without the limitations of bounding box annotations or detector-based methods. 
**Contributions of this Work:** In this paper, we propose a new approach to Social Group Activity Recognition called (SoGAR). Our method is unique in that it does not require ground-truth labels during pre-training, and it doesn't rely on an object detector. Instead, our approach uses motion as a supervisory signal from the RGB data modality. Our approach is able to effectively reduce the extensive supervision present in the conventional methods, as demonstrated in Fig. 1. Our method outperforms the DFWSGAR approach introduced by Kim et al. [31]. In Table. 1, we present the comparison of different properties between our approach and other previous methods. To handle varying spatial and temporal details within the same deep network, we use a video transformer-based approach, as described in [8]. This approach allows us to take advantage of varying temporal resolutions within the same architecture. Additionally, the self-attention mechanism in video transformers is able to capture local and global long-range dependencies in both space and time, providing much larger receptive fields compared to standard convolutional kernels [42]. The proposed SoGAR method differs from the previous methods by leveraging the correspondences from spatio-temporal features which enables the learning of long-range dependencies in both space and time domains. To facilitate this, we introduce a novel self-supervised learning strategy that does temporal collaborative learning and spatiotemporal cooperative learning. This is achieved through the proposed loss functions mentioned in 3.2 which match the global features from the whole video sequence to the local features that are sampled in the latent space. Additionally, we utilize the bounding box information to localize the attention of the framework for better learning to improve overall performance. Our proposed method achieves State-of-the-Art (SOTA) performance results on the JRDB-PAR [26], NBA [61] and Volleyball [29] datasets using only the RGB inputs. We conducted extensive experiments and will publish the code for our method. ## 2 Related Work ### Group Activity Recognition (GAR) In the field of action recognition, group action recognition has become an increasingly popular topic of research due to its wide range of applications in various fields, such as video surveillance, human-robot interaction, and sports analysis. GAR aims to identify the actions performed by a group of individuals and the interactions between them. Initially, researchers in the field of GAR used probabilistic graphical methods and AND-OR grammar methods to process the extracted features [4; 3; 1; 2; 34; 33; 48; 57]. However, with the advancement of deep learning techniques, methods involving convolutional neural networks (CNN) and recurrent neural networks (RNN) achieved outstanding performance due to their ability to learn high-level information and temporal context [7; 15; 29; 28; 38; 45; 49; 54; 59]. Recent methods for identifying group actions typically utilize attention-based models and require explicit character representations to model spatial-temporal relations in group activities [18; 22; 27; 37; 44; 58; 61; 63]. For example, graph convolution networks are used to learn spatial and temporal information of actors by constructing relational graphs, and spatial and temporal relation graphs are used to infer actor links. 
Clustered attention is used to capture contextual spatial-temporal information, and transformer encoder-based techniques with different backbone networks are used to extract features for learning actor interactions from multimodal inputs [22]. Additionally, MAC-Loss [25], a combination of spatial and temporal transformers in two complimentary orders, has been proposed to enhance the learning effectiveness of actor interactions and preserve actor consistency at the frame and video levels. Tamura _et al._[51] introduces a framework without using heuristic features for recognizing social group activities and identifying group members. This information is embedded into the features, allowing for easy identification. Overall, these recent advancements in GAR have made significant progress toward recognizing complex actions performed by a group of individuals in various settings. **Weakly supervised group activity recognition (WSGAR).** Various techniques have been developed to address the problem of WSGAR with limited supervision, such as using bounding boxes to train built-in detectors or activity maps. WSGAR is one approach that does not rely on bounding box annotations during training or inference and includes an off-the-shelf item detector in the model. Traditional GAR approaches require accurate annotations of individual actors and their actions, which can be challenging and time-consuming to obtain. Weakly supervised methods aim to relax these requirements by learning from \begin{table} \begin{tabular}{c|c|c|c} \hline **Methods** & **Architecture** & **Source Label** & **Learning Mechanism** & **ARL Module** \\ \hline ARG [58] & CNN + GCN & G.A., I.A., B.B. & Fully Supervised & Graph Relational Reasoning \\ \hline HiGCIN [60] & CNN + CNN & G.A., I.A., B.B. & Fully Supervised & Graph Relational Reasoning \\ \hline AT [22] & CNN + TF & G.A., I.A., B.B. & Fully Supervised & Joint ST Attention \\ \hline DIN [63] & CNN + CNN & G.A., I.A., B.B. & Fully Supervised & Graph Relational Reasoning \\ \hline GroupFormer [37] & CNN + TF & G.A., I.A., B.B. & Fully Supervised & Clustering \\ \hline Dual-AI [25] & CNN + TF & G.A., I.A., B.B. & Fully Supervised & Joint ST Attention \\ \hline SAM [61] & CNN + GCN & G.A., B.B. & Weakly Supervised & Graph Relational Reasoning \\ \hline DFWSGAR [31] & CNN + TF & G.A. (Training \& Testing) & Weakly Supervised & Joint ST Attention \\ \hline **Ours** & **ViT + TSformer** & **G.A.(Testing)** & **Self-Supervised** & **Divided ST Attention** \\ \hline \end{tabular} \end{table} Table 1: **Comparisons in the properties between our proposed approach and other methods. Actor Relation Learning (ARL), Convolutional Neural Networks (CNN), Graph Neural Networks (GNN), Graph Convolutional Networks (GCN), Transformer (TF), TimeSformer (TSformer), Vision Transformer (ViT), Space & Time (ST), Group Activity (G.A.), Individual Actions (I.A.), Bounding Boxes (B.B.)** more readily available data such as activity labels, bounding boxes, or even video-level labels. Zhang et al. [66] proposed a technique that employs activity-specific characteristics to enhance WSGAR. It is not particularly designed for GAR. Kim et al. [31] proposed a detector-free approach that uses transformer encoders to extract motion features. We propose a self-supervised training method specialized for WSGAR and does not necessitate actor-level annotations, object detectors, or labels. **Transformers in Vision**. 
The transformer architecture was first introduced by Vaswani _et al._[52] for sequence-to-sequence machine translation, and since then, it has been widely applied to various natural language processing tasks. Dosovitskiy _et al._[17] introduced a transformer architecture not based on convolution for image recognition tasks. Several works [36; 64; 40; 55] used transformer architecture as a general backbone for various downstream computer vision tasks, achieving remarkable performance progress. In the video domain, many approaches [24; 5; 35; 9; 20; 43] utilize spatial and temporal self-attention to learn video representations effectively. Bertasius _et al._[9] explored different mechanisms of space and time attention to learn spatiotemporal features efficiently. Fan et al. [20] used multiscale feature aggregation to improve the Figure 2: Comparison of Actor Relational Learning (ARL) Modules learning performance of features. Patrick _et al._[43] introduced a self-attention block that focuses on the trajectory, which tracks the patches of space and time in a video transformer. ## 3 The Proposed Method The framework presented in this paper aims to recognize social group activities in a video without depending on a detector or person-bounding boxes. The proposed method follows a self-supervised training approach within the teacher-student framework for social group activity recognition, as depicted in Fig. 3. Our method for video representation learning for social group activity recognition differs from other contrastive learning approaches by processing two clips from the same video while altering their spatial-temporal characteristics without requiring memory banks. This approach allows us to capture the intricate and ever-changing nature of group activities where multiple individuals may be moving in different directions and performing different actions simultaneously. To train our model, we propose a novel loss formulation that matches the features of two distinct clips, thereby enforcing consistency in spatial and temporal changes within the same video. Our loss function encourages the model to learn robust representations that can handle variations in spatial and temporal contexts. The proposed SoGAR framework is described in detail in the following sections. We demonstrate the effectiveness of our method on the newly proposed JRDB-PAR dataset [26] along with NBA [61], and Volleyball [28] datasets. ### Self-Supervised Training Videos of social group activities capture rich temporal and spatial information, which is essential for accurate recognition. However, this high temporal dimensionality also makes it challenging to capture the various motion and spatial characteristics of group activities, such as 2p.-fail. (from NBA dataset [61]) or l-winpoint (from Volleyball dataset [29]). To address this challenge, we propose a novel approach that involves predicting different video clips with varying temporal characteristics from each other in the feature space. This approach allows us to learn contextual information that defines the underlying distribution of videos, making the network invariant to motion, scale, and viewpoint variations. Our self-supervised training framework for video representation learning is formulated as a motion prediction problem consisting of three key components. First, we generate multiple temporal views with different numbers of clips with varying motion characteristics from the same video. 
Second, we vary the spatial characteristics of these views by generating local and global spatial fields of the sampled clips. Finally, we introduce a loss function that matches the varying views across spatial and temporal dimensions in the latent space. The proposed approach for social group activity recognition involves predicting multiple video clips with varying temporal and spatial characteristics from a single video. This is achieved through a self-supervised motion prediction problem with three key components: generating multiple temporal views with different numbers of clips and varying motion characteristics, varying the spatial characteristics of these views by generating local and global spatial fields of the sampled clips, and introducing a loss function that matches the varying views across spatial and temporal dimensions in the latent space. By learning contextual information and making accurate predictions even in the presence of various motion, scale, and viewpoint variations, the network becomes invariant to these variations and can capture the complex and dynamic nature of social group activities. #### 3.1.1 Prediction of motion via Self-Supervised Learning The temporal dimension of a video is a crucial factor that can significantly affect the motion context and perception of actions captured in the content. For example, the frame rate can capture subtle nuances of body movements and affect the perception of actions, such as walking slowly versus walking quickly. Traditionally, video clips are sampled at a fixed frame rate, which may not be Figure 3: **The proposed SoGAR Framework** adopts a sampling strategy that divides the input video into global and local views in temporal and spatial domains. Since the video clips are sampled at different rates, the global and local views have distinct spatial characteristics and limited fields of view and are subject to spatial augmentations. The teacher network takes in global views (\(\mathbf{x}_{gt}\)) to generate a target, while the student network processes local views (\(\mathbf{x}_{lt}\) & \(\mathbf{x}_{ls}\)), where \(Kl\leq K_{g}\). We update the network weights by matching the student local views to the target teacher global views, which involves both _Temporal Collaborative Learning_ and _Spatio-temporal Cooperative Learning_. To accomplish this, we employ a standard ViT-Base backbone with separate space-time attention [8] and an MLP that predicts target features from student features. suitable for capturing different motion characteristics of the same action. Our proposed approach introduces the concept of "temporal views," which refers to a collection of clips sampled at a specific video frame rate. By generating different views with varying resolutions, we can capture different motion characteristics of the same action and learn contextual information about motion from a low frame rate input. To create motion differences among these views, we randomly sample them and process them using our ViT models. The number of temporal tokens (\(T\)) input to ViT varies in different views, allowing us to handle variability in temporal resolutions with a single ViT model. In addition to varying temporal resolution, we vary the resolution of clips across the spatial dimension within these views. This means that the spatial size of a clip can be lower than the maximum spatial size (224), which can also decrease the number of spatial tokens. 
Using vanilla positional encoding [52], our approach can handle such variability in temporal resolutions with a single ViT model, unlike similar sampling strategies used under multi-network settings [21; 30]. Figure 4: Video Transformer Block #### 3.1.2 Establishing Correspondences Across Different Views Our proposed training strategy seeks to establish the interrelation between a given video's temporal and spatial dimensions. To achieve this, we introduce novel cross-view correspondences by manipulating the field of view during the sampling process. In particular, we generate global and local temporal views from a given video clip to facilitate learning these correspondences. The global temporal views (\(\mathbf{x}_{g_{t}}\)) are generated by randomly sampling \(K_{g}\) frames from a video clip with a fixed spatial size of \(W_{global}\) and \(H_{global}\). These views are then fed into the teacher network, which produces an output represented by \(\mathbf{\tilde{z}}_{\mathbf{g_{t}}}\). On the other hand, the local spatiotemporal views (\(\mathbf{x}_{l_{t}}\) and \(\mathbf{x}_{l_{s}}\)) cover a limited portion of the video clip along both spatial and temporal dimensions. We generate these local temporal views by randomly selecting several frames (\(K_{l}\)), which is less than or equal to the number of frames in the global temporal views (\(K_{g}\)), with a spatial size fixed to \(W_{local}\) and \(H_{local}\). These views are then fed into the student network, which produces two outputs denoted by \(\mathbf{\tilde{z}}_{\mathbf{l_{t}}}\) and \(\mathbf{\tilde{z}}_{\mathbf{l_{s}}}\), respectively. We apply various data augmentation techniques to the spatial dimension by applying color jittering and gray scaling with probability 0.8 and 0.2, respectively, to all temporal views. Moreover, we apply Gaussian blur and solarization with probability 0.1 and 0.2, respectively, to global temporal views. Our approach is based on the idea that training the model to predict a global temporal view of a video from a local temporal view in the latent space can help the model capture high-level contextual information. More specifically, our method encourages the model to consider both the spatial and temporal context of the video, where the spatial context denotes the possibilities surrounding a given spatial crop, and the temporal context denotes possible previous or future clips from a given temporal crop. It is essential to note that spatial correspondences also involve a temporal component, as our approach seeks to predict a global view at timestamp \(t=j\) from a local view at timestamp \(t=i\) To enforce these cross-view correspondences, we use a similarity objective that predicts different views from each other. ### The Proposed Objective Function Our model aims to predict different views of the same video, capturing various spatial-temporal variations. To achieve this, we train our model with an objective function that leverages global and local temporal and spatial views. Let \(\mathbf{X}=\mathbf{x}_{t}t=1^{T}\) be a video consisting of \(T\) frames, where \(\mathbf{x}_{g_{t}}\), \(\mathbf{x}_{l_{t}}\), and \(\mathbf{x}ls\) represent global temporal views, local temporal views, and local spatial views, respectively. Specifically, \(\mathbf{x}_{g_{t}}\) contains \(Kg\) frames, while \(\mathbf{x}_{l_{t}}\) and \(\mathbf{x}_{l_{s}}\) both contain \(K_{l}\) frames, where \(K_{l}\leq K_{g}\) and \(K_{g}\) and \(K_{l}\) are the numbers of frames for teacher and student (global and local) inputs. 
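A minimal sketch of the view construction and augmentations described above is given below, assuming torchvision transforms and a clip stored as a float tensor of shape (T, C, H, W) in [0, 1]. The jitter strengths, blur kernel, and per-frame application of the augmentations are illustrative choices not fixed by the text, and the crop sizes in the usage example follow the implementation details reported later (480 for global views, 96 for local views).

```python
import random
import torch
from torchvision import transforms

def sample_view(clip, num_frames, size, global_view=True):
    """Sample a temporal view from a clip of shape (T, C, H, W) and augment it spatially.

    Augmentation probabilities follow the text: color jitter (0.8) and grayscale (0.2)
    for all views; Gaussian blur (0.1) and solarization (0.2) for global views only.
    """
    t = clip.shape[0]
    idx = sorted(random.sample(range(t), num_frames))   # random temporal sampling, K frames
    frames = clip[idx]

    aug = [
        transforms.RandomResizedCrop(size),
        transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
        transforms.RandomGrayscale(p=0.2),
    ]
    if global_view:
        aug += [
            transforms.RandomApply([transforms.GaussianBlur(kernel_size=23)], p=0.1),
            transforms.RandomSolarize(threshold=0.5, p=0.2),
        ]
    pipeline = transforms.Compose(aug)
    return torch.stack([pipeline(f) for f in frames])    # (num_frames, C, size, size)

# Usage: one global view for the teacher and one smaller local view for the student,
# with K_l <= K_g; the teacher consumes only global views, the student local ones.
# clip = torch.rand(18, 3, 720, 1280)
# x_gt = sample_view(clip, num_frames=18, size=480, global_view=True)
# x_lt = sample_view(clip, num_frames=8,  size=96,  global_view=False)
```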
We randomly sample \(K_{g}\) global and \(K_{l}\) local temporal views as described in 3.1.2. The student and teacher models process the temporal views to obtain class tokens or features \(\mathbf{z}_{g}\) and \(\mathbf{z}_{l}\). We then normalize these class tokens to facilitate training with the objective function. \[\mathbf{\tilde{z}}^{(i)}=\frac{\exp(\mathbf{z}^{(i)})/\tau}{\sum_{i=1}^{n}\exp(\mathbf{z}^ {(i)})/\tau}, \tag{1}\] where \(\tau\) is a temperature parameter used to control the sharpness of the exponential function [10] and \(\mathbf{z}^{(i)}\) is each element in \(\mathbf{\tilde{z}^{(i)}}\in\mathbb{R}^{n}\). **Temporal Collaborative Learning Loss (TCL):** Our \(\mathbf{x}_{g_{t}}\) have the same spatial size but differ in temporal content because the number of clips/frames is randomly sampled for each view. One of the \(\mathbf{x}_{g_{t}}\) always passes through the teacher model that serves as the target label. We map the student's \(\mathbf{x}_{l_{t}}\) with the teacher's \(\mathbf{x}_{g_{t}}\) to create a global-to-local temporal loss as in Eqn. (2). \[\mathcal{L}_{TCL}=-\mathbf{sg(\mathbf{\tilde{z}}_{g_{t}})*log(\mathbf{\tilde{z}}_{l_{t}})}, \tag{2}\] where \(\mathbf{\tilde{z}}_{\mathbf{g_{t}}}\) and \(\mathbf{\tilde{z}}_{\mathbf{l_{t}}}\) are the tokens of the class for \(\mathbf{x}_{g_{t}}\) and \(\mathbf{x}_{l_{t}}\) produced by the teacher and student, and \(sg\) is the stochastic gradient respectively. **Spatio-temporal Cooperative Learning Loss (SCL):** The local temporal views \(\mathbf{x}_{l_{t}}\) in our approach have a smaller field of vision compared to the global temporal views \(\mathbf{x}_{g_{t}}\), both along the spatial and temporal dimensions. Despite this, the number of local views is four times higher than that of global views. The student model processes all the local views \(\mathbf{x}_{l_{s}}\), while the teacher model processes only the global views \(\mathbf{x}_{g_{t}}\), which serve as the target. To create the loss function, the local views are mapped to the global views using the teacher model, as described in 3. \[\mathcal{L}_{SCL}=\sum_{n=1}^{q}-\mathbf{sg(\tilde{z}_{g_{t}})}*log( \tilde{\mathbf{z}}_{l_{s}}^{(n)}), \tag{3}\] where \(\tilde{\mathbf{z}}_{l_{s}}\) are the tokens of the class for \(\mathbf{x}_{l_{s}}\) produced by the student and \(q\) represents the number of local temporal views set to sixteen in all our experiments. The overall loss to train our model is simply a linear combination of both losses, as in Eqn. (2) and Eqn. (3), given as in Eqn. (4). \[\mathcal{L}=\mathcal{L}_{TCL}+\mathcal{L}_{SCL} \tag{4}\] ### Inference Our inference framework is depicted in Fig. 5. In this stage, we perform fine-tuning of the self-supervised model that was trained earlier. Specifically, we utilize the pre-trained SoGAR model and fine-tune it with the available labels. This is followed by a linear classifier, and the resulting model is applied to downstream tasks to enhance the overall performance. Figure 5: **Inference**. We input the video sequence along with their corresponding labels. The output from the model is fed to the downstream task classifier. ## 4 Experiments ### Datasets **Volleyball Dataset[29]** is composed of 55 videos, containing a total of 4,830 labeled clips, including 3,493 for training and 1,337 for testing. The dataset provides annotations for both individual actions and group activities with corresponding bounding boxes. 
However, in our WSGAR experiments, we only focus on the group activity labels and exclude the individual action annotations. To evaluate our model, we use Multi-class Classification Accuracy (MCA) and Merged MCA metrics. The Merged MCA metric merges the right set and right pass classes into the right pass-set and the left set and left pass classes into the left pass-set, as in previous works like SAM [61] and DFWSGAR [31], to ensure a fair comparison with existing methods. **NBA Dataset[61]** used in our experiments contains a total of 9,172 labeled clips from 181 NBA videos, where 7,624 clips are for training and 1,548 for testing. The dataset only provides annotations for group activities and lacks information about individual actions or bounding boxes. For evaluating the model, we use the Multi-class Classification Accuracy (MCA) and Mean Per Class Accuracy (MPCA) metrics. The MPCA metric is used to address the issue of class imbalance in the dataset. **JRDB-PAR Dataset[26]** containing 27 categories of individual actions such as walking, talking, etc., 11 categories of social group activities, and 7 categories of global activities. The dataset consists of 27 videos, which are split into 20 for training and 7 for testing, following the training/validation splitting in JRDB dataset [19]. In total, the dataset contains 27,920 frames with over 628k human bounding boxes. For annotation and evaluation, uniformly sampled keyframes (one keyframe in every 15 frames) are selected, which is consistent with other group activity datasets like CAD [14] and Volleyball [28]. The dataset uses multi-class labels for activity annotation, with each individual/group/frame having multiple activity labels. Following [26], we use the precision, recall, and F1-score (denoted as \(\mathcal{P}_{g}\), \(\mathcal{R}_{g}\), \(\mathcal{F}_{g}\)) for evaluation, since social group activity recognition can be considered as a multi-label classification problem. ### Deep Network Architecture Our video processing technique employs a Vision Transformer (ViT) [8] to apply attention to both the spatial and temporal dimensions of video clips. The ViT comprises 12 encoder blocks and can handle video clips of size (\(B\times T\times C\times W\times H\)), where \(B\) and \(C\) denote the batch size and the number of color channels, respectively. The maximum spatial and temporal sizes are \(W=H=480\) and \(T=18\), respectively, indicating that we extract 18 frames from each video and resize them to \(480\times 480\). Our network architecture (see Fig. 3) is designed to accommodate varying input resolution during training, including differences in frame rate, number of frames in a video clip, and spatial size. However, each ViT encoder block processes a maximum of 196 spatial and 16 temporal tokens, with each token having an embedding dimension of \(\mathbb{R}^{m}\)[17]. In addition to these spatial and temporal input tokens, we include a single classification token within the architecture as a characteristic vector [16]. This classification token captures the standard features learned by the ViT across the spatial and temporal dimensions of a given video. During training, we use varying spatial and temporal resolutions that satisfy \(W\leq 480\), \(H\leq 480\), and \(T\leq 18\), resulting in different spatial and temporal tokens. Finally, we apply a projection head to the class token of the last ViT encoder [10; 23]. **Self-Distillation.** Our approach, depicted in Fig. 
3, employs a teacher-student setup for self-distillation based on the methodology proposed in [10; 23]. The teacher and student models share the same architecture, consisting of a ViT backbone and a predictor MLP. However, only the student model is directly trained, while the teacher model is updated through an exponential moving average (EMA) of the student weights at each training step [10]. This design allows us to use a unified network to process various input clips. ### Implementation Details To prepare the JRDB-PAR, NBA and Volleyball datasets for our analysis, we sampled frames at a rate of T (\(K_{g}\)) using segment-based sampling, as detailed in [53]. Next, we resized the frames to \(W_{g}=480\) & \(H_{g}=480\) for the teacher input and \(W_{l}=96\) & \(H_{l}=96\) for the student input. In the case of the Volleyball dataset, we set \(K_{g}\) to \(5\) (\(K_{l}\in{3,5}\)), while for the NBA dataset, we set \(K_{g}\) to \(18\) (\(K_{l}\in{2,4,8,16,18}\)). For JRD-PAR dataset, we used \(K_{g}\) to \(8\) (\(K_{l}\in{2,4,8,16,18}\)). We initialized temporal attention weights randomly, while spatial attention weights were initialized using a ViT model trained self-supervised over ImageNet-1K [47]. This initialization scheme facilitated faster convergence of space-time ViT, as seen in the supervised setting [8]. We trained using an Adam optimizer [32] with a learning rate of \(5\times{10^{-4}}\), scaled using a cosine schedule with a linear warm-up over five epochs [50; 13]. Additionally, we applied weight decay scaled from \(0.04\) to \(0.1\) during training. For the downstream task, we trained a linear classifier on our pretrained SPARTAN backbone. During training, the backbone was frozen, and we trained the classifier for \(100\) epochs with a batch size of \(32\) on a single NVIDIA-V100 GPU using SGD with an initial learning rate of \(1\)e-\(3\) and a cosine decay schedule. We also set the momentum to \(0.9\). ### Comparison with state-of-the-art methods **JRDB-PAR dataset** We conducted a comparative study to evaluate our proposed approach alongside state-of-the-art methods in GAR and WSGAR using the JRDB-Par dataset. We involved fully supervised and weakly supervised settings to evaluate the dataset. The comparison results are presented in Table 2. In the fully supervised setting, our method outperforms the existing social group activity recognition frameworks significantly in all the metrics. In the weakly supervised setting, our proposed method outperformed existing GAR and WSGAR methods by a considerable margin, achieving \(8.7\) of \(\mathcal{P}_{g}\), \(12.7\) of \(\mathcal{R}_{g}\) and \(9.9\) of \(\mathcal{F}_{g}\). Additionally, we evaluated this dataset using ResNet-18 and ViT-Base backbones, where ViT-Base proved to be better, which is analyzed in the ablation study section. Despite their impressive performance in WSGAR, our approach outperformed them all. **NBA dataset** Our comparison study evaluates our proposed approach against state-of-the-art methods in GAR and WSGAR, as well as current video backbones, using the NBA dataset. To ensure a fair comparison, we exclusively use RGB frames as input for each approach, including the video backbones. The results of our comparison are listed in Table 3. Notably, our reproduced version of SAM [61] achieves higher scores than those reported in the original article. Our proposed method outperforms existing GAR and WSGAR methods by a significant margin, achieving 7.5% of MCA and 2.3% of MPCA. 
Furthermore, we compare our approach with ResNet-18 TSM [39] and VideoSwin-T [41], two current video backbones used in traditional action detection. Although these strong backbones perform well in WSGAR, our approach outperforms them all. **Volleyball dataset.** In the volleyball dataset, we evaluate our approach against the latest GAR and WSGAR methods in two supervision levels: fully supervised and weakly supervised, which differ in the use of actor-level labels such as ground-truth bounding boxes and individual action class labels in training and inference. To ensure a fair comparison, we report the results of previous methods \begin{table} \begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Group Activity} \\ \cline{2-4} & \(\mathcal{P}_{g}\) & \(\mathcal{R}_{g}\) & \(\mathcal{F}_{g}\) \\ \hline \multicolumn{4}{c}{**Fully supervised**} \\ \hline ARG [58] & 34.6 & 29.3 & 30.7 \\ SA-GAT [18] & 36.7 & 29.9 & 31.4 \\ JRDB-Base [19] & 44.6 & 46.8 & 45.1 \\ **Ours** & **49.3** & **47.1** & **48.7** \\ \hline \multicolumn{4}{c}{**Weakly supervised**} \\ \hline AT[22] & 21.2 & 19.1 & 19.8 \\ SACRF[44] & 42.9 & 35.5 & 37.6 \\ Dynamic[63] & 37.5 & 27.1 & 30.6 \\ HiGCIN[60] & 39.3 & 30.1 & 33.1 \\ ARG[58] & 26.9 & 21.5 & 23.3 \\ SA-GAT[18] & 28.6 & 24.0 & 25.5 \\ JRDB-Base[19] & 38.4 & 33.1 & 34.8 \\ **Ours** & **47.1** & **45.8** & **44.9** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparative results of the social group activity recognition on JRDB-PAR dataset [26]. and reproduce results using only the RGB input and ResNet-18 backbone, respectively. In the weakly supervised setting, we replace the group action classification labels with ground-truth bounding boxes of the actors without their corresponding actions so that the actors localization is learned during the pre-training stage. Table 4 presents the results, with the first and second sections showing the results of earlier techniques in fully supervised and weakly supervised environments, respectively. The results show that our model trained on the ResNet-18 backbone outperforms most of the fully supervised frameworks by showing a significant improvement in the MCA and MPCA metrics. We show that using ViT-Base backbone, our approach significantly outperforms all GAR and WSGAR models in weakly supervised conditions, beating them by 2.4% of MCA and 1.2% of Merged MCA by leveraging the spatiotemporal features using the transformer architecture. Moreover, our approach is better than the current GAR methods that employ less thorough actor-level supervision, such as the _GAR_ model, which is able to learn from the \begin{table} \begin{tabular}{l|c c} \hline \hline Method & MCA & MPCA \\ \hline \multicolumn{3}{c}{**Video backbone**} \\ \hline TSM [39] & 66.6 & 60.3 \\ VideoSwin [41] & 64.3 & 60.6 \\ \hline \multicolumn{3}{c}{**GAR model**} \\ \hline ARG [58] & 59.0 & 56.8 \\ AT [22] & 47.1 & 41.5 \\ SACRF [44] & 56.3 & 52.8 \\ DIN [63] & 61.6 & 56.0 \\ SAM [61] & 54.3 & 51.5 \\ DFWSGAR [31] & 75.8 & 71.2 \\ \hline **Ours** & **83.3** & **73.5** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparisons with the State-of-the-Art GAR models and video backbones on the NBA dataset [61]. as [7; 59; 45; 22; 44]. ### Ablation Study We conduct a thorough analysis of the various components that contribute to the effectiveness of our approach, which is an extension of analysis from [12]. 
\begin{table} \begin{tabular}{l|c c c} \hline \hline Method & Backbone & MCA & \begin{tabular}{c} Merged \\ MCA \\ \end{tabular} \\ \hline \multicolumn{4}{c}{**Fully supervised**} \\ \hline SSU [7] & Inception-v3 & 89.9 & - \\ PCTDM [59] & ResNet-18 & 90.3 & 94.3 \\ StagNet [45] & VGG-16 & 89.3 & - \\ ARG [58] & ResNet-18 & 91.1 & 95.1 \\ CRM [6] & I3D & 92.1 & - \\ HiGCIN [60] & ResNet-18 & 91.4 & - \\ AT [22] & ResNet-18 & 90.0 & 94.0 \\ SACRF [44] & ResNet-18 & 90.7 & 92.7 \\ DIN [63] & ResNet-18 & 93.1 & **95.6** \\ TCE+STBiP [62] & VGG-16 & **94.1** & - \\ GroupFormer [37] & Inception-v3 & **94.1** & - \\ \hline \multicolumn{4}{c}{**Weakly supervised**} \\ \hline PCTDM [59] & ResNet-18 & 80.5 & 90.0 \\ ARG [58] & ResNet-18 & 87.4 & 92.9 \\ AT [22] & ResNet-18 & 84.3 & 89.6 \\ SACRF [44] & ResNet-18 & 83.3 & 86.1 \\ DIN [63] & ResNet-18 & 86.5 & 93.1 \\ SAM [61] & ResNet-18 & 86.3 & 93.1 \\ DFWSGAR [31] & ResNet-18 & 90.5 & 94.4 \\ \hline \multicolumn{4}{c}{**Ours**} \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison with the state-of-the-art methods on the Volleyball dataset. [29] \begin{table} \begin{tabular}{c|c|c|c} \hline \hline KD & JRDB-PAR & NBA & Volleyball \\ \hline ✗ & 34.2 & 75.2 & 86.4 \\ ✓ & **44.9** & **83.3** & **93.1** \\ \hline \hline \end{tabular} \end{table} Table 7: **Impact of ground-truth bounding box information (G.T. BB’s))**: When we provided the bounding box information during the pre-training, it is proved that the performance is optimal rather than using random crops. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Backbone & \begin{tabular}{c} JRDB-PAR \\ (\(\mathcal{F}_{g}\)) \\ \end{tabular} & \begin{tabular}{c} NBA \\ (MCA) \\ \end{tabular} & \begin{tabular}{c} Volleyball \\ (MCA) \\ \end{tabular} \\ \hline Inception-v3 & 31.8 & 69.3 & 78.6 \\ VGG-16 & 35.1 & 72.9 & 81.5 \\ I3D & 36.3 & 76.7 & 85.8 \\ ResNet-18 & 39.6 & 78.1 & 89.2 \\ ViT-S & 41.3 & 80.2 & 91.1 \\ \hline ViT-B & **44.9** & **83.3** & **93.1** \\ \hline \hline \end{tabular} \end{table} Table 5: **Different backbones. The most optimal backbone for our framework is ViT-Base outperforming the other backbones.** \begin{table} \begin{tabular}{c|c|c|c} \hline \hline KD & JRDB-PAR & NBA & Volleyball \\ \hline ✗ & 34.2 & 75.2 & 86.4 \\ ✓ & **44.9** & **83.3** & **93.1** \\ \hline \hline \end{tabular} \end{table} Table 8: **Impact of ground-truth bounding box information (G.T. BB’s))**: When we provided the bounding box information during the pre-training, it is proved that the performance is optimal rather than using random crops. bone networks on our framework. We conducted the experiments presented in Table 5. Our results show that ResNet-18 performs better than the other Convolutional Neural Network (CNN) backbones, but overall performance is optimal with ViT-Base backbone because the spatiotemporal features of the input video with varying views are well leveraged by the transformer architecture for videos [9]. Also, when both networks share the same backbone, they perform better rather than having distinct backbone networks. **Impact of Knowledge Distillation (KD)**: To evaluate the effect of knowledge distillation, we conducted experiments as presented in Table 6. To be specific, we compared the performance of our approach in the absence of KD, i.e., the student and teacher networks learn independently, and there is no transfer of information from the student to teacher network. This shows very poor performance. 
Hence, KD proves to be one of the key factors behind the strong performance of the proposed framework, and this also shows that the exponential moving average (EMA) update aids feature learning across the two networks, improving performance. **Impact of ground-truth bounding box (G.T. BB's) information**: During the pre-training step, social group activity recognition benefits strongly from actor localization information, so we conduct the experiments shown in Table 7 to measure the effect of this information. Specifically, the first experiment uses random crops in all the input views, which yields poor performance on the JRDB-PAR and Volleyball datasets; the NBA dataset is largely unaffected, since it provides no bounding box annotations. In the second experiment, we use the ground-truth bounding boxes, without their corresponding action labels, which gives the best performance of our method. Figure 6: Visualization of the attention locations on the JRDB-PAR dataset. We show the locations of the top five attention weights from the transformer heads. Figure 7: Visualization of the attention locations on the Volleyball dataset. We show the locations of the top four attention weights from the transformer heads. These attention locations are taken from the last layer of the encoder. The yellow circles represent the attention locations, and the size of each yellow circle denotes whether the location lies in the high- or low-resolution feature maps, giving a rough indication of the image areas affecting the generated features. Our findings reveal that features are generally aggregated from low-resolution feature maps when group members are spread over broader areas, and from high-resolution feature maps when they are close together. These results indicate that the proposed framework can effectively aggregate features based on the distribution of group members, thereby contributing to the improved performance of social group activity recognition. ## 5 Conclusion Our paper presents a new self-supervised video model named SoGAR, which is based on a video transformer architecture. The method entails generating multiple views of a video, which differ in terms of their spatial and temporal characteristics. To capture the motion characteristics and cross-view relationships between the clips, we define two sets of correspondence learning tasks. The self-supervised objective is to reconstruct one view from another in the latent space of both the teacher and student networks. Furthermore, our SoGAR model can capture long-term spatio-temporal dependencies and perform dynamic inference within a single framework. We evaluate SoGAR on three benchmark datasets for social group activity recognition and demonstrate its superior performance over existing state-of-the-art models.
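To make the teacher-student scheme summarized above concrete, the following minimal sketch shows one common way an EMA-updated teacher and a latent-space reconstruction objective can be wired together. It is an illustration of the general technique only: the encoder architecture, feature dimensions, momentum value, and MSE objective are assumptions made for brevity, not the exact SoGAR implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy stand-in for the video backbone (illustrative only)."""
    def __init__(self, in_dim=512, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.GELU(), nn.Linear(256, latent_dim)
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    """Update teacher weights as an exponential moving average of the student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.data.mul_(momentum).add_(s_param.data, alpha=1.0 - momentum)

student = Encoder()
teacher = copy.deepcopy(student)      # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)           # the teacher receives no gradients

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

# Two "views" of the same clip (e.g., different spatio-temporal crops),
# represented here by random feature vectors for simplicity.
view_a = torch.randn(8, 512)
view_b = torch.randn(8, 512)

z_student = student(view_a)           # student encodes one view
with torch.no_grad():
    z_teacher = teacher(view_b)       # frozen teacher encodes the other view

# Self-supervised objective: reconstruct the teacher's latent from the student's.
loss = F.mse_loss(z_student, z_teacher)
loss.backward()
optimizer.step()
optimizer.zero_grad()

ema_update(teacher, student)          # information flows student -> teacher via EMA
```

In this sketch only the student receives gradients; the teacher improves solely through the EMA update, which is the mechanism the knowledge-distillation ablation above switches on and off.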
2305.09023
Continuum-wise hyperbolic homeomorphisms on surfaces
This paper discusses the dynamics of continuum-wise hyperbolic surface homeomorphisms. We prove that $cw_F$-hyperbolic surface homeomorphisms containing only a finite set of spines are $cw_2$-hyperbolic. In the case of $cw_3$-hyperbolic homeomorphisms we prove the finiteness of spines and, hence, that $cw_3$-hyperbolicity implies $cw_2$-hyperbolicity. In the proof, we adapt techniques of Hiraide [11] and Lewowicz [15] in the case of expansive surface homeomorphisms to prove that local stable/unstable continua of $cw_F$-hyperbolic homeomorphisms are continuous arcs. We also adapt techniques of Artigue, Pacífico and Vieitez [6] in the case of N-expansive surface homeomorphisms to prove that the existence of spines is strongly related to the existence of bi-asymptotic sectors and conclude that spines are necessarily isolated from other spines.
Rodrigo Arruda, Bernardo Carvalho, Alberto Sarmiento
2023-05-15T21:16:26Z
http://arxiv.org/abs/2305.09023v1
# Continuum-wise hyperbolic homeomorphisms on surfaces ###### Abstract. This paper discusses the dynamics of continuum-wise hyperbolic surface homeomorphisms. We prove that \(cw_{F}\)-hyperbolic surface homeomorphisms containing only a finite set of spines are \(cw_{2}\)-hyperbolic. In the case of \(cw_{3}\)-hyperbolic homeomorphisms we prove the finiteness of spines and, hence, that \(cw_{3}\)-hyperbolicity implies \(cw_{2}\)-hyperbolicity. In the proof, we adapt techniques of Hiraide [11] and Lewowicz [15] in the case of expansive surface homeomorphisms to prove that local stable/unstable continua of \(cw_{F}\)-hyperbolic homeomorphisms are continuous arcs. We also adapt techniques of Artigue, Pacifico and Vieitez [6] in the case of N-expansive surface homeomorphisms to prove that the existence of spines is strongly related to the existence of bi-asymptotic sectors and conclude that spines are necessarily isolated from other spines. Key words and phrases:cw-hyperbolicity, classification, spines. 2020 _Mathematics Subject Classification_: Primary 37B45; Secondary 37D10. contained in the wandering part of the system. The techniques seem to be restricted to the case of 2-expansive homeomorphisms and extend them to 3-expansiveness seems complicated. In particular, use them to understand the dynamics of more general continuum-wise expansive homeomorphisms, introduced by Kato in [14], seems difficult. However, in the study of cw-expansive homeomorphisms a recent work of Artigue, Carvalho, Cordeiro and Vieitez [4] discussed the continuum-wise hyperbolicity, assuming that local stable and unstable continua of sufficiently close points of the space intersect (see Definition 2.1). Cw-hyperbolic systems share several important properties with the topologically hyperbolic ones, such as the L-shadowing property [5] and a spectral decomposition theorem [4], but a few important differences are noted on the pseudo-Anosov diffeomorphism of \(\mathbb{S}^{2}\): the existence of stable/unstable spines, bi-asymptotic sectors, cantor sets in arbitrarily small dynamical balls, and a cantor set of distinct arcs in local stable/unstable sets. In this paper we start to adapt the techniques of Lewowicz/Hiraide to the study of cw-hyperbolic homeomorphisms on surfaces. One important hypothesis in our results is \(\operatorname{cw}_{F}\)-expansiveness, that asks for a finite number of intersections between any pair of local stable and local unstable continua (see Definition 2.1). We do not know an example of a cw-hyperbolic surface homeomorphism that is not \(\operatorname{cw}_{F}\)-expansive, so the study of \(\operatorname{cw}_{F}\)-hyperbolicity on surfaces seems to be the perfect first step in the theory. In our first result, we prove that local stable/unstable continua of \(\operatorname{cw}_{F}\)-hyperbolic surface homeomorphisms are arcs (see Proposition 2.12). \(\operatorname{Cw}_{F}\)-hyperbolicity is important in this step since in [2] there is a cw-expansive surface homeomorphism with non locally connected local stable continua. But even in the case they are locally connected, they are only assured to be contained in dendritations as proved in [3]. Section 2 is devoted to prove that they are arcs and this is done in a few important steps that are based in the ideas of Lewowicz and Hiraide. In our second result we relate bi-asymptotic sectors and spines in a way that every regular sector contains a single spine and every spine is contained in a regular bi-asymptotic sector. 
The notion of regular bi-asymptotic sector is defined and pictures of non-regular sectors are presented. We give a complete description of the structure of local stable and local unstable continua inside a regular bi-asymptotic sector. We also prove that every bi-asymptotic sector contains a regular bi-asymptotic sector and, hence, a spine, and that non-regular sectors contain at least two distinct spines. All these results on sectors and spines are proved in Section 3 and allow us to conclude that the spines of a \(\operatorname{cw}_{F}\)-hyperbolic surface homeomorphism are isolated from other spines and, hence, we obtain that there is at most a countable number of them. In Section 4 we prove our main result using all the techniques developed in the previous sections. The hypothesis of the existence of at most a finite number of spines is important and will be discussed. We do not know examples of cw-hyperbolic surface homeomorphisms with an infinite number of spines, so this hypothesis also seems reasonable. The following is the main result of this article: **Theorem 1.1**.: _If a \(\operatorname{cw}_{F}\)-hyperbolic surface homeomorphism has only a finite number of spines, then it is cw2-hyperbolic. Cw3-hyperbolic surface homeomorphisms have at most a finite number of spines, and are cw2-hyperbolic._ We note that in [4] it is proved that the product of \(n\) copies of the pseudo-Anosov diffeomorphism of \(\mathbb{S}^{2}\) is \(\operatorname{cw}_{2^{n}}\)-hyperbolic but is not \(\operatorname{cw}_{2^{n}-1}\)-expansive. Thus, the hypothesis on the space being a surface is important for our main result. We state the following questions that follow naturally from our results: **Question 1**.: _Does there exist a \(cw\)-hyperbolic surface homeomorphism that is not \(cw_{2}\)-hyperbolic?_ **Question 2**.: _Does \(cw_{F}\)-hyperbolicity on surfaces imply finiteness on the number of spines and, hence, \(cw_{2}\)-hyperbolicity?_ **Question 3**.: _Does \(cw\)-hyperbolicity on surfaces imply local connectedness of \(C^{s}_{\varepsilon}(x)\)?_ **Question 4**.: _Can we adapt the techniques of this paper to prove that 3-expansive surface homeomorphisms are 2-expansive?_ ## 2. Local stable/unstable continua are arcs We begin this section with some precise definitions. Let \((X,d)\) be a compact metric space and \(f\colon X\to X\) be a homeomorphism. We consider the _c-stable set_ of \(x\in X\) as the set \[W^{s}_{c}(x):=\{y\in X;\ d(f^{k}(y),f^{k}(x))\leq c\ \ \text{for every}\ \ k\geq 0\}\] and the _c-unstable set_ of \(x\) as the set \[W^{u}_{c}(x):=\{y\in X;\ d(f^{k}(y),f^{k}(x))\leq c\ \ \text{for every}\ \ k\leq 0\}.\] We consider the _stable set_ of \(x\in X\) as the set \[W^{s}(x):=\{y\in X;\ d(f^{k}(y),f^{k}(x))\to 0\ \ \text{when}\ \ k\to\infty\}\] and the _unstable set_ of \(x\) as the set \[W^{u}(x):=\{y\in X;\ d(f^{k}(y),f^{k}(x))\to 0\ \ \text{when}\ \ k\to-\infty\}.\] We denote by \(C^{s}_{c}(x)\) the \(c\)-stable continuum of \(x\), that is the connected component of \(x\) on \(W^{s}_{c}(x)\), and denote by \(C^{u}_{c}(x)\) the \(c\)-unstable continuum of \(x\), that is the connected component of \(x\) on \(W^{u}_{c}(x)\). **Definition 2.1**.: We say that \(f\) is \(cw\)-expansive if there exists \(c>0\) such that \[W^{s}_{c}(x)\cap W^{u}_{c}(x)\ \ \ \text{ is totally disconnected}\] for every \(x\in X\). 
A \(cw\)-expansive homeomorphism is said to be \(cw_{F}\)-expansive if there exists \(c>0\) such that \[\#(C^{s}_{c}(x)\cap C^{u}_{c}(x))<\infty\ \ \ \ \text{for every}\ \ \ \ x\in X.\] Analogously, \(f\) is said to be \(cw_{N}\)-expansive if there is \(c>0\) such that \[\#(C^{s}_{c}(x)\cap C^{u}_{c}(x))\leq N\ \ \ \ \text{for every}\ \ \ \ x\in X.\] We say that \(f\) satisfies the \(cw\)-local-product-structure if for each \(\varepsilon>0\) there exists \(\delta>0\) such that \[C^{s}_{\varepsilon}(x)\cap C^{u}_{\varepsilon}(y)\neq\emptyset\ \ \ \text{ whenever}\ \ \ \ d(x,y)<\delta.\] The \(cw\)-expansive homeomorphisms (resp. \(cw_{F}\), \(cw_{N}\)) satisfying the \(cw\)-local-product-structure are called \(cw\)-hyperbolic (resp. \(cw_{F}\), \(cw_{N}\)). The main examples of cw-hyperbolic surface homeomorphisms are the Anosov diffeomorphisms (or, more generally, the topologically hyperbolic homeomorphisms) and the pseudo-Anosov diffeomorphism of \(\mathbb{S}^{2}\). The sphere \(\mathbb{S}^{2}\) can be seen as the quotient of \(\mathbb{T}^{2}\) by the antipodal map, and thus any \(2\times 2\) hyperbolic matrix \(A\) with integer coefficients and determinant one induces diffeomorphisms \(f_{A}\) on \(\mathbb{T}^{2}\) and \(g_{A}\) on \(\mathbb{S}^{2}\). The diffeomorphism \(f_{A}\) is Anosov and, hence, cw1-hyperbolic, while \(g_{A}\) is cw2-hyperbolic but not cw1-expansive (see [4, 5, 9] for more details). We recall some known properties of local stable/unstable continua for cw\({}_{F}\)-hyperbolic homeomorphisms. Most of them hold assuming that \(X\) is a Peano continuum, that is, a compact, connected, and locally connected metric space, but we assume from now on that \(f\colon S\to S\) is a cw\({}_{F}\)-hyperbolic homeomorphism of a closed surface \(S\). We will use the symbol \(\sigma\) to denote both \(s\) and \(u\). Since this will appear several times in what follows, we will not write every time that some statement holds for \(\sigma=s\) and \(\sigma=u\); we will simply say that it holds for \(\sigma\). **Lemma 2.2** ([13] Thm. 1.6).: _For every \(\varepsilon>0\), there exists \(\delta>0\) such that_ \[\operatorname{diam}(C^{\sigma}_{\varepsilon}(x))>\delta\quad\text{for every} \quad x\in S,\] _where \(\operatorname{diam}(A)=\sup\{d(a,b);a,b\in A\}\) denotes the diameter of the set \(A\)._ **Corollary 2.3**.: \(\operatorname{Int}C^{\sigma}_{\varepsilon}(x)=\emptyset\) _for every \(x\in S\) and \(\varepsilon\in(0,\frac{c}{2})\)._ Proof.: The following is the proof for \(\sigma=s\); for \(\sigma=u\) the proof is analogous. By contradiction, assume that \(y\in\operatorname{Int}C^{s}_{\varepsilon}(x)\neq\emptyset\) for some \(x\in S\) and \(\varepsilon<\frac{c}{2}\). Since \(\operatorname{diam}C^{u}_{\varepsilon}(y)>\delta\) for some \(\delta>0\), it follows that \(C^{s}_{\varepsilon}(x)\cap C^{u}_{\varepsilon}(y)\) contains a non-trivial continuum. By the choice of \(\varepsilon\), we have \(C^{s}_{\varepsilon}(x)\subset W^{s}_{c}(y)\) and \(C^{u}_{\varepsilon}(y)\subset W^{u}_{c}(y)\), so \(W^{s}_{c}(y)\cap W^{u}_{c}(y)\) contains a non-trivial continuum, which contradicts \(cw\)-expansiveness. Let \(\mathcal{C}\) denote the space of all sub-continua of \(S\) and \(\mathcal{C}_{\delta}\) denote the set of all sub-continua of \(S\) with diameter smaller than \(\delta\). Let \(\mathcal{C}^{s}\) and \(\mathcal{C}^{u}\) denote the set of all stable and unstable continua of \(f\), respectively.
More precisely, \[\mathcal{C}^{s}=\{C\in\mathcal{C};\ \operatorname{diam}(f^{n}(C))\to 0,\ \ n\to\infty\}\] \[\mathcal{C}^{u}=\{C\in\mathcal{C};\ \operatorname{diam}(f^{n}(C))\to 0,\ \ n\to-\infty\}.\] The following lemma is actually a characterization of cw-expansiveness with respect to local stable/unstable continua. **Lemma 2.4** ([3] Prop. 2.3.1).: 1. _There exists_ \(\varepsilon^{*}>0\) _such that_ \(C^{\sigma}_{\varepsilon^{*}}\subset C^{\sigma}\)_,_ 2. _For all_ \(\varepsilon>0\) _there exists_ \(\delta>0\) _such that_ \(C^{\sigma}\cap C_{\delta}\subset C^{\sigma}_{\varepsilon}\)_._ The following lemma is similar to Lemma 4.1 in [11] but assuming cw-expansiveness. Let \(B_{\delta}(x)\) denote the ball of radius \(\delta\) centered at \(x\), that is the set of points whose distance to \(x\) is less than \(\delta\). **Lemma 2.5**.: _For each \(0<\varepsilon<\frac{\varepsilon^{*}}{4}\) there exists \(\delta\in(0,\varepsilon)\) such that_ \[C^{\sigma}_{\varepsilon}(x)\cap B_{\delta}(x)=C^{\sigma}_{2\varepsilon}(x) \cap B_{\delta}(x)\] _for every \(x\in S\)._ Proof.: For each \(0<\varepsilon<\frac{\varepsilon^{*}}{4}\) let \(\delta^{*}\in(0,\varepsilon)\) be given by Lemma 2.4 and \(\delta=\frac{\delta^{*}}{2}\). Since \(\varepsilon<\frac{\varepsilon^{*}}{4}\), it follows that \[C^{\sigma}_{\varepsilon}(x)\subset C^{\sigma}_{2\varepsilon}(x)\in C^{ \sigma}.\] The choice of \(\delta\) ensures that \(C^{\sigma}_{2\varepsilon}(x)\cap B_{\delta}(x)\subset C^{\sigma}_{\varepsilon}(x)\), and, hence, \[C^{\sigma}_{2\varepsilon}(x)\cap B_{\delta}(x)=C^{\sigma}_{\varepsilon}(x) \cap B_{\delta}(x).\] In the following lemma, the hypothesis of \(\operatorname{cw}_{F}\)-expansiveness will be necessary. It was first proved in [3] for \(\operatorname{cw}_{F}\)-expansive homeomorphisms, but we include a different proof using \(\operatorname{cw}_{F}\)-hyperbolicity. In this article, an arc is a subset of \(S\) homeomorphic to \([0,1]\). **Lemma 2.6** ([3] Thm. 6.7.1).: _There exists \(\varepsilon>0\) such that \(C^{\sigma}_{\varepsilon}(x)\) is locally connected for every \(x\in S\)._ Proof.: We prove for \(\sigma=s\) but the proof for \(\sigma=u\) is similar. Let \(0<\varepsilon<\frac{c}{4}\) and by contradiction assume that \(C^{s}_{\varepsilon}(x)\) is not locally connected for some \(x\in S\). Then we can consider a sequence of arcs \((P_{n})_{n\in\mathbb{N}}\subset C^{s}_{\varepsilon}(x)\) (as in the proof of Proposition 3.1 in [11]) such that \(P_{i}\cap P_{j}=\emptyset\) if \(i\neq j\) and \(d(P_{n},P^{*})\to 0\) for some non-trivial arc \(P^{*}\subset C^{s}_{\varepsilon}(x)\) (here we also denote by \(d\) the Hausdorff distance on the space of continua). Let \(y\in P^{*}\) be an interior point of \(P^{*}\) and consider \(r\in(0,\varepsilon)\) such that 1. \(P^{*}\) separates \(B_{r}(y)\), and 2. \(P_{n}\) separates \(B_{r}(y)\) for every \(n>n_{0}\) and for some \(n_{0}\in\mathbb{N}\). We can assume that every \(P_{n}\) is contained in the same component \(A\) of \(B_{r}(y)\setminus P^{*}\) taking a sub-sequence if necessary. 
The cw-local-product-structure ensures the existence of \(\delta\in(0,\frac{r}{4})\) such that \[C^{s}_{\frac{1}{4}}(a)\cap C^{u}_{\frac{1}{4}}(b)\neq\emptyset\quad\text{ whenever }\quad d(a,b)<\delta.\] Since \(\operatorname{Int}C^{s}_{2\varepsilon}(y)=\emptyset\), there exists \(z\in A\cap B_{\delta}(y)\) such that \(z\notin C^{s}_{2\varepsilon}(y)\), and, hence, \[C^{s}_{r}(z)\cap C^{s}_{\varepsilon}(y)=\emptyset.\] In particular, \(C^{s}_{r}(z)\cap P_{n}=\emptyset\) for every \(n\in\mathbb{N}\). Choose \(n_{1}>n_{0}\) such that \(P_{n_{1}}\cap B_{r}(y)\) separates \(P^{*}\cap B_{r}(y)\) and \(C^{s}_{\frac{1}{4}}(z)\). The choice of \(\delta\) ensures that \(C^{u}_{\frac{1}{4}}(y)\cap C^{s}_{\frac{1}{4}}(z)\neq\emptyset\). In particular, \(C^{u}_{\frac{1}{4}}(y)\cap P_{n_{1}}\neq\emptyset\), and, hence, \(C^{u}_{\varepsilon}(y)\) intersects an infinite number of distinct \(P^{\prime}_{n}s\) (see Figure 1). This contradicts \(\operatorname{cw}_{F}\)-expansiveness and finishes the proof. Figure 1. The following corollary is a consequence of this result. A subset \(A\subset S\) is arcwise connected if for each pair of distinct points \(x,y\in A\) there exists an arc \(h\colon[0,1]\to A\) such that \(h(0)=x\) and \(h(1)=y\). **Corollary 2.7**.: _There exists \(\varepsilon>0\) such that \(C^{\sigma}_{\varepsilon}(x)\) is arcwise connected and locally arcwise connected for every \(x\in S\). Moreover, for each pair of distinct points \(y,z\in C^{\sigma}_{\varepsilon}(x)\), there is a unique arc \(\sigma(y,z;x)\) in \(C^{\sigma}_{\varepsilon}(x)\) connecting \(y\) and \(z\)._ Proof.: From Lemma 2.6 and Theorem 5.9 of [10], it follows that \(C^{\sigma}_{\varepsilon}(x)\) is a Peano space. Hence, Theorem 6.29 of [10] ensures that \(C^{\sigma}_{\varepsilon}(x)\) is arcwise connected and locally arcwise connected. The uniqueness comes from the observation that two distinct arcs connecting \(y\) and \(z\) would create an open set bounded by a local stable (in the case \(\sigma=s\)) or local unstable (in the case \(\sigma=u\)) curve, which contradicts cw-expansiveness on surfaces. From now on we choose \(\varepsilon>0\) given by Corollary 2.7 and \(\delta\in(0,\varepsilon)\) satisfying the Lemmas 2.2, 2.4, and 2.5. Following the steps of Hiraide in [11], we define an equivalence relation in the set of arcs starting on \(x\) and contained in \(C^{\sigma}_{\varepsilon}(x)\). **Definition 2.8**.: Let \(x\in S\), and \(y,z\in C^{\sigma}_{\varepsilon}(x)\). We write \(y\sim z\) if \[\sigma(x,y;x)\cap\sigma(x,z;x)\supsetneq\{x\}.\] We define the number of stable/unstable separatrices at \(x\) as \[P^{\sigma}(x)=\#(C^{\sigma}_{\varepsilon}(x)/\sim).\] Lemma 2.5 ensures that the number of separatrices at \(x\) does not depend on the choice of \(\varepsilon<\frac{\varepsilon^{*}}{4}\). This explains the notation \(P^{\sigma}(x)\) without \(\varepsilon\) being mentioned. The following lemma follows an idea present in the works of Lewowicz/Hiraide: stable and unstable separatrices must be alternated as in Figure 2. In this step, the \(\operatorname{cw}_{F}\)-hyperbolicity will also be necessary. Let \(\partial B_{\delta}(x)\) denote the boundary of the ball \(B_{\delta}(x)\), that is the set of points whose distance to \(x\) equals \(\delta\). 
Figure 2: illustration of Lemma 2.9. **Lemma 2.9**.: _For each \(x\in S\), there exists \(r_{0}>0\) such that if \(r\in(0,r_{0})\), \(y,z\in\partial B_{r}(x)\) are in different classes of \(\sim\), \(s(x,y;x)\) and \(s(x,z;x)\) are arcs intersecting \(\partial B_{r}(x)\) at one point and \(A\) is a component of \(B_{r}(x)\setminus(s(x,y;x)\cup s(x,z;x))\), then there is \(a\in C^{u}_{\varepsilon}(x)\cap A\) such that \(u(x,a;x)\subset A\)._ Proof.: If \(C^{s}_{\varepsilon}(x)\cap C^{u}_{\varepsilon}(x)=\{x\}\), let \(r_{0}=\varepsilon\); otherwise, \[C^{s}_{\varepsilon}(x)\cap C^{u}_{\varepsilon}(x)\supsetneq\{x\},\] and we let \[r_{0}=d(x,(C^{s}_{\varepsilon}(x)\cap C^{u}_{\varepsilon}(x))\setminus\{x\}),\] which is a positive number by \(cw_{F}\)-expansiveness. Let \(x\), \(r\), \(y\), \(z\), and \(A\) be as above, and suppose there is no unstable arc \(u(x,a;x)\subset A\). Choose \(\delta_{r}\in(0,\frac{r}{4})\), given by the cw-local-product-structure, such that \[C^{s}_{\frac{r}{4}}(a)\cap C^{u}_{\frac{r}{4}}(b)\neq\emptyset\ \ \text{ whenever}\ \ d(a,b)<\delta_{r}.\] Corollary 2.3 assures the existence of \[b\in A\setminus C^{s}_{\varepsilon}(x)\ \ \text{ with }\ \ \ d(b,x)<\frac{\delta_{r}}{2}.\] It follows that \[C^{s}_{\frac{r}{4}}(b)\subset A.\] If \(C^{s}_{\varepsilon}(x)\cap C^{u}_{\varepsilon}(x)=\{x\}\), then \(C^{u}_{\frac{r}{4}}(x)\cap A=\emptyset\) since there is no unstable arc \(u(x,a;x)\subset A\); otherwise, \(C^{s}_{r}(x)\cap C^{u}_{r}(x)=\{x\}\) since \(r<r_{0}\), and, hence, \(C^{u}_{\frac{r}{4}}(x)\cap A=\emptyset\). In both cases, we obtain \[C^{s}_{\frac{r}{4}}(b)\cap C^{u}_{\frac{r}{4}}(x)=\emptyset.\] This contradicts the choice of \(\delta_{r}\), since \(d(b,x)<\frac{\delta_{r}}{2}\) (see Figure 3). This proves the existence of an unstable arc \(u(x,a;x)\subset A\) and finishes the proof. The following corollary is a direct consequence of the previous lemma. **Corollary 2.10**.: \(P^{s}(x)=P^{u}(x)\) _for every \(x\in S\)._ We recall that \(C^{\sigma}_{\varepsilon}(x)\) is locally connected, but the case \(P^{\sigma}(x)=\infty\) is still not ruled out, since we could have an infinite number of separatrices with diameter converging to zero. Also, in the expansive case, examples of pseudo-Anosov homeomorphisms with singularities containing a number of stable/unstable separatrices greater than two can be constructed. In the following lemma, we observe that these two scenarios do not occur in the case of \(\text{\rm cw}_{F}\)-hyperbolic homeomorphisms. Indeed, the existence of bifurcation points contradicts \(\text{\rm cw}_{F}\)-hyperbolicity. **Lemma 2.11**.: \(P^{\sigma}(x)\leq 2\) _for every \(x\in S\)._ Proof.: Suppose, by contradiction, that there exists \(x\in S\) with \(P^{\sigma}(x)\geq 3\). Let \(r>0\) and \(x_{1},x_{2},x_{3}\) be in different classes of \(C^{s}_{\varepsilon}(x)\) such that \(s(x,x_{i};x)\subset B_{r}(x)\) and intersects \(\partial B_{r}(x)\) only at \(x_{i}\) for every \(i=1,2,3\). Then \(\bigcup_{i=1}^{3}s(x,x_{i};x)\) separates \(B_{r}(x)\) into exactly three components, \(A_{1},A_{2},A_{3}\), as in Figure 4. Figure 3.
It follows that \(\bigcup_{i=1}^{3}u(x,z_{i};x)\) divides \(B_{r_{0}}(x)\) in three components, \(B_{1},B_{2},B_{3}\), for some \(z_{i}\in u(x,y_{i};x)\) (see Figure 5). Since the stable and unstable arcs are alternating, then \[A_{i}\cap B_{j}=\emptyset.\] for some \(i,j\in\{1,2,3\}\) (see Figure 6). Figure 4. Figure 5. Figure 6. Choose \(\delta_{r_{0}}\in(0,\frac{r_{0}}{4})\), given by cw-local-product-structure, such that \[C^{s}_{\frac{r_{0}}{4}}(x)\cap C^{u}_{\frac{r_{0}}{4}}(y)\neq\emptyset\ \ \text{ whenever}\ \ d(x,y)<\delta_{r_{0}}.\] Lemma 2.3 assures the existence of \[a\in A_{i}\setminus C^{s}_{\varepsilon}(x)\ \ \text{ with }\ \ \ d(a,x)<\frac{\delta_{r_{0}}}{2}\] and \[b\in B_{j}\setminus C^{u}_{\varepsilon}(x)\ \ \text{ with }\ \ \ d(b,x)<\frac{\delta_{r_{0}}}{2}.\] It follows that \[C^{s}_{\frac{r_{0}}{4}}(a)\subset A_{i}\ \ \text{ and }\ C^{u}_{\frac{r_{0}}{4}}(b)\subset B_{j},\] and, hence, \(C^{s}_{\frac{r_{0}}{4}}(a)\cap C^{u}_{\frac{r_{0}}{4}}(b)=\emptyset\) (see Figure 7). This contradicts the choice of \(\delta_{r_{0}}\) and finishes the proof. There are two possible cases: either \(P^{\sigma}(x)=1\) and \(x\) is said to be a spine, or \(P^{\sigma}(x)=2\) and \(x\) is said to be a regular point. Let \(\operatorname{Spin}(f)\) denote the set of all spines of \(f\). The following proposition gathers all results we obtained so far. We prove that \(C^{\sigma}_{\varepsilon}(x)\) is an arc for every \(x\in S\). **Proposition 2.12**.: _If \(x\in\operatorname{Spin}(f)\), then there is a homeomorphism \(h^{\sigma}:[0,1]\to C^{\sigma}_{\varepsilon}(x)\) with \(h^{\sigma}(0)=x\). Otherwise, there is a homeomorphism \(h^{\sigma}:[-1,1]\to C^{\sigma}_{\varepsilon}(x)\) with \(h^{\sigma}(0)=x\)._ Proof.: Let \(IC^{\sigma}(x)\) denote the union of all open arcs in \(C^{\sigma}_{\varepsilon}(x)\) and \[BC^{\sigma}(x)=C^{\sigma}_{\varepsilon}(x)\setminus IC^{\sigma}(x).\] Note that \(BC^{\sigma}(x)\) is always formed by two distinct points \(x_{1}\) and \(x_{2}\), since local connectedness of \(C^{\sigma}_{\varepsilon}(x)\) ensures the existence of at least two points in \(BC^{\sigma}(x)\) (as in Lemma 4.5 of [11]), and the existence of three distinct points in \(BC^{\sigma}(x)\) would imply the existence of \(y\in C^{\sigma}_{\varepsilon}(x)\) with \(P^{\sigma}(y)\geq 3\), contradicting Lemma 2.11. If \(x\in\operatorname{Spin}(f)\), then either \(x=x_{1}\) or \(x=x_{2}\), and the arc connecting \(x_{1}\) to \(x_{2}\) gives us a homeomorphism \(h^{\sigma}\colon[0,1]\to C^{\sigma}_{\varepsilon}(x)\) such that \(h^{\sigma}(0)=x\). If \(x\notin\operatorname{Spin}(f)\), then \(x\notin BC^{\sigma}(x)\) and the arc connecting \(x_{1}\) to \(x_{2}\) gives us a homeomorphism \(h^{\sigma}:[-1,1]\to C^{\sigma}_{\varepsilon}(x)\) with \(h^{\sigma}(0)=x\). Now we exhibit two important consequences of above results that will be important in the proofs of Section 3. In the first lemma, we prove that either a local stable/unstable continuum separates a small ball, or it contains a spine in this ball. The notation \(c.c_{x}(A)\) is used to denote the connected component of \(x\) in the set \(A\) Figure 7. **Lemma 2.13**.: _For each \(0<\varepsilon<\frac{\varepsilon^{*}}{4}\), there exists \(\delta\in(0,\varepsilon)\) such that for each \(x\in S\) one of the following holds:_ 1. \(c.c_{x}(C_{\varepsilon}^{\sigma}(x)\cap B_{\delta}(x))\) _separates_ \(B_{\delta}(x)\)_,_ 2. 
\(c.c_{x}(C_{\varepsilon}^{\sigma}(x)\cap B_{\delta}(x))\) _contains a spine._ Proof.: If \(x\in\operatorname{Spin}\), we are in case (2). Then we assume that \(x\in S\setminus\operatorname{Spin}(f)\). For each \(0<\varepsilon<\frac{\varepsilon^{*}}{4}\), let \(\delta\in(0,\varepsilon)\) be given by Lemma 2.5 such that \[C_{\varepsilon}^{\sigma}(x)\cap B_{\delta}(x)=C_{2\varepsilon}^{\sigma}(x) \cap B_{\delta}(x).\] Let \(h^{\sigma}:[-1,1]\to C_{\varepsilon}^{\sigma}(x)\) be a homeomorphism as in Proposition 2.12 with \(h^{\sigma}(0)=x\). Lemma 2.2 ensures the existence of \(z\in C_{\varepsilon}^{\sigma}(x)\cap\partial B_{\delta}(x)\). Without loss of generality, we can assume that \(z\sim h(-1)\). We assume item (1) is false and prove item (2). Suppose that \(c.c_{x}(C_{\varepsilon}^{\sigma}(x)\cap B_{\delta}(x))\) does not separate \(B_{\delta}(x)\). Since \(z\in C_{\varepsilon}^{\sigma}(x)\cap\partial B_{\delta}(x)\) and \(z\sim h(-1)\), then \(\sigma(x,y;x)\subset\operatorname{Int}B_{\delta}(x)\), where \(y=h(1)\). Since \(y\in C_{\varepsilon}^{\sigma}(x)\), then \(C_{\varepsilon}^{\sigma}(y)\subset C_{2\varepsilon}^{\sigma}(x)\) and, hence, \[C_{\varepsilon}^{\sigma}(y)\cap B_{\delta}(x)\subset C_{2\varepsilon}^{\sigma }(x)\cap B_{\delta}(x)=C_{\varepsilon}^{\sigma}(x)\cap B_{\delta}(x).\] Thus, \(y\in c.c_{x}(C_{\varepsilon}^{\sigma}(x)\cap B_{\delta}(x))\) and \(P(y)=1\), that is, \(y\in\operatorname{Spin}(f)\). In the last result of this section, we observe that local stable/unstable sets intersect transversely. First, we state a precise definition for topological transversality. **Definition 2.14**.: Let \(\alpha,\beta\) be arcs in \(S\) meeting at \(x\). We say that \(\alpha\) is topologically transversal to \(\beta\) at \(x\) if there exists a disk \(D\) such that 1. \(\alpha\cap\beta\cap D=\{x\}\), 2. \(\beta\) separates \(D\), and 3. the connected components of \((\alpha\setminus\beta)\cap D\) are in different components of \(D\setminus\beta\) (See Fig. 8). **Lemma 2.15**.: _If \(x,y\in S\) and \(z\in C_{\varepsilon}^{s}(x)\cap C_{\varepsilon}^{u}(y)\setminus(Spin(f))\), then \(C_{\varepsilon}^{s}(x)\) intersects \(C_{\varepsilon}^{u}(y)\) transversely at \(z\)._ Proof.: Since \(z\notin Spin(f)\), then \(P^{s}(z)=P^{u}(z)=2\). If the intersection is not transversal we find \(a_{1},a_{2}\in C_{2\varepsilon}^{s}(z)\) and a disk \(D\) around \(z\) such that \(s(a_{1},a_{2};z)\) separates \(D\) and there is a component of \(D\setminus s(a_{1},a_{2};z)\) containing \(b_{1}\not\sim b_{2}\in C_{\varepsilon}^{u}(z)\). This is a contradiction with 2.9 because \(P_{\varepsilon}(z)=2\) Figure 8. Two arcs topologically transverse at the point \(x\). ## 3. Bi-asymptotic sectors and spines Bi-asymptotic sectors were introduced in [6] for \(N\)-expansive homeomorphisms on surfaces. These sectors were defined as being a disk bounded by the union of a local stable and a local unstable arc (see Figure 9a). In the case of 2-expansive homeomorphisms, a consequence of the arguments of [6] is that both intersections \(a_{1},a_{2}\) of a bi-asymptotic sector are not only transversal, but point outside the disk (see Figure 9a). Indeed, 2-expansiveness and non-existence of wandering points imply the non-existence of spines inside by-asymptotic sectors (see Proposition 3.5 in [6]), and this ensure the existence of a third intersection between the stable and unstable arcs bounding the sector if the intersection points inward. 
For \(\operatorname{cw}_{F}\)-hyperbolic homeomorphisms, it is not possible to ensure the intersections points outward the disk. First, because there will be spines inside the sector, but also because this case allows more intersections between local stable/unstable arcs. The goal is to prove that every bi-asymptotic sector contains a spine and that every spine is contained in a bi-asymptotic sector. To prove this, we first understand the case of bi-asymptotic sectors with intersections pointing outward the sector (these sectors will be called regular). We will characterize the structure of stable/unstable arcs inside a regular bi-asymptotic sector obtaining a single spine inside it. **Definition 3.1**.: We say that \(C^{s}_{\varepsilon}(x)\) and \(C^{u}_{\varepsilon}(x)\) form a bi-asymptotic sector if there exists a pair of sub-arcs \(a^{s}\), \(a^{u}\) contained in \(C^{s}_{\varepsilon}(x)\) and \(C^{u}_{\varepsilon}(x)\), respectively, such that \(a^{s}\cup a^{u}\) bounds a disk \(D\). In this case, \(D\) is called the bi-asymptotic sector. Let \(a_{1}\) and \(a_{2}\) be the end points of \(a^{s}\) and \(a^{u}\). A bi-asymptotic sector \(D\) is said to be regular if it satisfies the following: (Regularity condition) There exists neighborhoods \(V_{a_{1}}\) of \(a_{1}\) and \(V_{a_{2}}\) of \(a_{2}\) such that \(C^{\sigma}_{\varepsilon}(x)\cap V_{a_{1}}\cap\operatorname{Int}D=\emptyset\) and \(C^{\sigma}_{\varepsilon}(x)\cap V_{a_{2}}\cap\operatorname{Int}D=\emptyset\). Without the regularity condition, the bi-asymptotic sectors can contain more than one spine, and, hence, a more complicated structure of stable/unstable arcs, as in Figure 10a, 10b and 10c. Figure 9. Note that both the stable and unstable continua enter the disk passing through \(a_{2}\), that is, for every neighborhood \(V\) of \(a_{2}\) we have \(C^{\sigma}_{\varepsilon}(a_{2})\cap V\cap\operatorname{Int}D\neq\emptyset\). Thus, the sector in Figure 9(b) formed considering the stable arc from \(a_{1}\) and \(a_{2}\) does not satisfy the regularity condition. The same happens with the sector bounded by the stable and unstable arcs connecting \(a_{3}\) and \(a_{4}\). Also, note that the sectors formed by the stable arc from \(a_{1}\) to \(a_{4}\) and from \(a_{2}\) to \(a_{3}\) satisfy the regularity condition. Inside these sectors, the structure of stable/unstable arcs is the same: there is a single spine and all stable/unstable arcs turn around this spine. We prove in this section that this is exactly the structure of stable/unstable arcs inside any regular bi-asymptotic sector. Let \(D\) be a regular bi-asymptotic sector bounded by \(a^{s}\) and \(a^{u}\) with \(\operatorname{diam}D<\delta\) (given by Lemma 2.13), and let \(a_{1}\) and \(a_{2}\) be the end points of \(a^{s}\) and \(a^{u}\). For \(p\in D\), define \(C^{u}_{D}(p)\) and \(C^{s}_{D}(p)\) as the connected component of \(C^{u}(p)\cap D\) and \(C^{s}(p)\cap D\) containing \(p\) respectively. We remark that Lemma 2.13 also holds changing the ball \(B_{\delta}(x)\) for the sector \(D\), that is, for each \(p\in D\), either \(C^{\sigma}_{D}(p)\) separates \(D\) or \(C^{\sigma}_{D}(p)\) contains a spine. The hypothesis of regularity is important to ensure the following result. **Lemma 3.2**.: \(C^{\sigma}_{D}(p)=a^{\sigma}\) _for every \(p\in a^{\sigma}\)._ Proof.: It is clear that \(C^{\sigma}_{D}(p)\supset a^{\sigma}\). By contradiction, assume that there exist \(p\in a^{\sigma}\) and \(y\in C^{\sigma}_{D}(p)\setminus a^{\sigma}\). 
This means that either \(\sigma(a_{1},y;x)\) or \(\sigma(a_{2},y;x)\) is contained in \(\operatorname{Int}D\) and contradicts the regularity condition. Note that in Figure 9(b), the arc \(a^{u}\) connects \(a_{1}\) to \(a_{2}\), while \(C^{u}_{D}(a_{1})\) contains \(a^{u}\) and also an arc from \(a_{2}\) to a spine in the interior of \(D\). Also, \(a^{s}\) is the stable arc from \(a_{1}\) to \(a_{2}\), while \(C^{s}_{D}(a_{1})\) contains the arc connecting \(a_{2}\) to \(a_{4}\) (see Figure 9(a)). The following lemma is a consequence of the previous lemma and the transversality explained in Lemma 2.15. The following results also hold changing the roles of Figure 10. and \(u\) but we will not use \(\sigma\) as before since it would make the presentation more complicated. **Lemma 3.3**.: \(1\leq\#(C_{D}^{s}(p)\cap a^{u})\leq 2\) _for every \(p\in D\). If \(\#(C_{D}^{s}(p)\cap a^{u})=1\), then \(C_{D}^{s}(p)\) contains a spine._ Proof.: If \(p\in a^{s}\), then \(C_{D}^{s}(p)=a^{s}\) since \(D\) is regular (see Lemma 3.2). This implies that \(\#(C_{D}^{s}(p)\cap a^{u})=\#(a^{s}\cap a^{u})=2\) (See Fig. 9a). If \(p\in D\setminus a^{s}\), then \(C_{D}^{s}(p)\cap a^{s}=\emptyset\), by Lemma 2.11, which implies that \(C_{D}^{s}(p)\cap a^{u}\neq\emptyset\) since Lemma 2.2 ensures that \(C_{D}^{s}(p)\cap(a^{s}\cup a^{u})\neq\emptyset\). Therefore \(\#(C_{D}^{s}(p)\cap a^{u})\geq 1\). Note that every intersection between \(C_{D}^{s}(p)\) and \(a^{u}\) is transversal and this together with Lemma 2.11 ensure that \(\#(C_{D}^{s}(p)\cap a^{u})\leq 2\). The last part of the statement is obtained using Lemma 2.13, since \(\#(C_{D}^{s}(p)\cap a^{u})=1\) implies that \(C_{D}^{s}(p)\) does not separate \(D\), and, hence, contains a spine. Note that in the non-regular sectors of Figure 10a, while \(a^{s}\cap a^{u}=\{a_{1},a_{2}\}\), we have \(C_{D}^{s}(p)\cap a^{u}=\{a_{1},a_{2},a_{3}\}\). Following [6], we define an order in the set \(\mathcal{F}^{s}=\{C_{D}^{s}(x):x\in D\}\) as follows: \(C_{D}^{s}(x)<C_{D}^{s}(y)\) if \(a^{s}\) and \(C_{D}^{s}(y)\) are separated by \(C_{D}^{s}(x)\), i.e., \(a^{s}\) and \(C_{D}^{s}(y)\) are in different components of \(D\setminus C_{D}^{s}(x)\). Note that \(a^{s}\) is a minimal element for the order and if \(y\in\operatorname{Int}D\cap\operatorname{Spin}\), then \(C_{D}^{s}(y)\) does not separate \(D\) and \(C_{D}^{s}(y)\) cannot be smaller than \(C_{D}^{s}(x)\) for any \(z\neq y\) in \(D\). The following lemma is based on Lemma 3.2 in [6]. The regularity condition on the sector allows us to basically follow the original proof. **Lemma 3.4**.: _The order \(<\) in \(\mathcal{F}^{s}\) is total._ Proof.: Let \(C_{D}^{s}(x)\) and \(C_{D}^{s}(y)\) be different elements of \(\mathcal{F}^{s}\) and suppose by contradiction that neither \(C_{D}^{s}(x)<C_{D}^{s}(y)\) nor \(C_{D}^{s}(y)<C_{D}^{s}(x)\). We assume that \(x\) and \(y\) are spines since in the other cases we obtain the result similarly. Consider \(\gamma_{1},\gamma_{2},\gamma_{3}\subset a^{u}\) sub-arcs as in Figure 11. Since \(x\) and \(y\) are spines, it follows that \(E=D\setminus(C_{D}^{s}(x)\cup C_{D}^{s}(y))\) is connected. For \(1\leq i<j\leq 3\), define \[A_{ij}=\{x\in E:C_{D}^{s}(x)\cap\gamma_{i}\neq\emptyset,C_{D}^{s}(x)\cap \gamma_{j}\neq\emptyset\}.\] By the definition of the subarcs, we have that \(A_{ij}\) is non-empty for all \(1\leq i<j\leq 3\). In addition, these sets are closed and cover \(E\). Since \(E\) is connected, we can find \(z\) that belongs to all of them. 
Hence \(\#(C_{D}^{s}(z)\cap a^{u})\geq 3\) and this contradicts Lemma 3.3. In the other cases we just need to choose the appropriate arcs \(\gamma_{i}\) and change the definition of the set \(E\) accordingly. Figure 12 illustrates these choices. Figure 11. Non-comparable stable arcs. Note that the regularity condition is important to conclude the order is total since non-regular sectors can contain points \(z\in\operatorname{Int}D\) such that \(\#(C_{D}^{s}(z)\cap a^{u})\geq 3\), so the existence of a point in the intersection of \(A_{ij}\) would not imply a contradiction. Lemma 3.4 ensures that inside a regular bi-asymptotic sector there is at most one spine, but does not necessarily prove the existence of a spine. This will be consequence of the following result that proves continuity for the variation of the arcs inside a regular bi-asymptotic sector. It is based on Lemma 3.3 in [6]. Lemma 3.3 ensures that \(\#(C_{D}^{s}(x)\cap a^{u})\leq 2\) for every \(x\in a^{u}\). Then we consider a map \(g:a^{u}\to a^{u}\) (see Figure 13) defined as \[C_{D}^{s}(x)\cap a^{u}=\{x,g(x)\}.\] Note that if \(C_{D}^{s}(x)\cap a^{u}=\{x\}\), then \(g(x)=x\) and Lemma 3.3 ensures that \(C_{D}^{s}(x)\) contains a spine. Note that we could have problems to define \(g\) in sectors that are not regular since in these cases \(C_{D}^{s}(a_{1})\) may not coincide with \(a^{s}\) and intersect \(a^{u}\) in three different points. **Lemma 3.5**.: _The map \(g:a^{u}\to a^{u}\) is continuous._ Proof.: As \(a^{u}\) is homeomorphic to \([0,1]\), we can induce an ordering in \(a^{u}\) such that \(a_{1}<a_{2}\). We prove that \(g\) is decreasing with this order, and since \(g\) is bijective, we conclude continuity. Suppose, by contradiction, that \(g\) is not decreasing, so there exist \(x<y\) such that \(g(x)<g(y)\). Note that \(g(x)\neq x\) since \(x<y\) and \(x=g(x)<g(y)\) ensure the arcs \(s(x,g(x);x)\) and \(s(y,g(y);y)\) are not comparable, contradicting Lemma 3.4. The same reason ensures that \(g(y)\neq y\). If \(x<y<g(x)<g(y)\), then there is an intersection between \(s(x,g(x);x)\) and \(s(y,g(y);y)\), contradicting Lemma 2.11. If \(x<g(x)<y<g(y)\) the arcs \(s(x,g(x);x)\) and \(s(y,g(y);y)\) are not comparable, contradicting Lemma 3.4. Other cases are obtained from these cases interchanging \(x\) and \(g(x)\) or \(y\) and \(g(y)\), leading to the same contradictions. Figure 12. Figure 13. The following proposition gathers all results we obtained so far about bi-asymptotic sectors. It is one of the directions of the equivalence between the existence of sectors and spines that we want to prove. **Proposition 3.6**.: _If \(D\) is a bi-asymptotic sector with diameter less than \(\delta\), then \(\operatorname{Int}D\cap\operatorname{Spin}(f)\neq\emptyset\). Moreover, if \(D\) is regular, then \(\#(\operatorname{Int}D\cap\operatorname{Spin}(f))=1\)._ Proof.: First, we note that every regular bi-asymptotic sector \(D\) contains a unique spine in its interior. Since \(a^{u}\) is homeomorphic to \([0,1]\) and \(g\colon a^{u}\to a^{u}\) is continuous, it follows that \(g\) has a fixed point, and since \(g\) is decreasing, this fixed point is unique. Clearly, \(a_{1}\) and \(a_{2}\) are not fixed points of \(g\). This and the transversality proved in Lemma 2.15 ensure that this single spine is contained in \(\operatorname{Int}D\). Now let \(D\) be a non-regular bi-asymptotic sector. 
Without loss of generality, let us assume that \(D\) does not satisfy the regularity condition at \(a_{2}\), that is, \(C^{s}_{\varepsilon}(x)\) enters \(D\) through \(a_{2}\). If \(C^{s}_{\varepsilon}(x)\) does not intersect the open arc \(a^{u}\setminus\{a_{1},a_{2}\}\), then the connected component of \(C^{s}_{\varepsilon}(x)\) containing \(a_{2}\) does not separate \(D\), and by Lemma 2.13 we find a spine in \(\operatorname{Int}D\). If \(C^{s}_{\varepsilon}(x)\) intersects \(a^{u}\setminus\{a_{1},a_{2}\}\) at a point \(y\), then the transversality of the intersection ensures that \[s(a_{2},y;x)\cup u(a_{2},y;x)\] bounds a regular bi-asymptotic sector contained in \(D\) (see Figure 14). Thus, there exists a spine in \(\operatorname{Int}D\). Now we prove the other direction of the equivalence. **Lemma 3.7**.: _If \(x\in\operatorname{Spin}(f)\), then there exists \(D_{x}\) a regular bi-asymptotic sector such that \(x\in\operatorname{Int}D_{x}\)._ Proof.: We begin choosing \(y\in C^{s}_{\varepsilon}(x)\cap B_{\frac{s}{2}}(x)\) such that \(s(x,y;x)\subset B_{\frac{s}{2}}(x)\). Since \(C^{u}_{\frac{s}{2}}(y)\) is an arc transversal to \(C^{s}_{\varepsilon}(x)\) at \(y\) and \(y\notin\operatorname{Spin}(f)\), we can choose a small neighborhood \(U\) of \(C^{s}_{\varepsilon}(x)\) and \(t_{1},t_{2}\in C^{u}_{\frac{s}{2}}(y)\cap\partial U\) satisfying: 1. \(t_{1}\not\sim t_{2}\), 2. \(u(t_{1},t_{2};y)\) intersects \(\partial U\) only at \(t_{1}\) and \(t_{2}\), 3. \(u(t_{1},t_{2};y)\cap C^{s}_{\varepsilon}(x)=\{y\}\). Since \(u(t_{1},t_{2};y)\) is transversal to \(C^{s}_{\varepsilon}(x)\) at \(y\), and \(u(t_{1},t_{2};y)\cap C^{s}_{\varepsilon}(x)=\{y\}\), then \(u(t_{1},t_{2};y)\) divides both \(U\) and \(C^{s}_{\varepsilon}(x)\) in exactly two components (see Figure 15). Figure 14. The semi-continuity of the map \(a\to C^{s}_{\frac{s}{2}}(a)\) (see page 15 and Theorem 6.7.1 of [3]) allows us to choose a disk \(V\) centered at \(x\) such that \[C^{s}_{\frac{s}{2}}(z)\subset U\quad\text{for every}\quad z\in V\] and \(C^{s}_{\frac{s}{2}}(x)\cap\partial V=\{\tau\}\). In particular, \(\partial V\setminus\{\tau\}\) is connected. For \(i=1,2\), let \(c_{i}=u(y,t_{i},y)\) and \[\mathcal{E}_{i}=\{z\in\partial V\setminus\{\tau\}:C^{s}_{\frac{s}{2}}(z)\cap c _{i}\neq\emptyset\}.\] Since \(\mathcal{E}_{i}\) is closed and \(\partial V\setminus\{\tau\}\) is connected, if \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) are not empty, then \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\neq\emptyset\). Hence, by Lemma 2.11 there is \(z\in\partial V\setminus\{\tau\}\) such that \(C^{s}_{\frac{s}{2}}(z)\) intersects \(c_{1}\setminus\{y\}\) and \(c_{2}\setminus\{y\}\). Choose intersections \(a_{1},a_{2}\), respectively, such that \(s(z,a_{1};z)\setminus\{a_{1}\}\) and \(s(z,a_{2};z)\setminus\{a_{2}\}\) are contained in the component of \(x\) in \(U\setminus u(t_{1},t_{2};y)\). Hence these arcs bound a disk \(D\) with \[\partial D=s(z,a_{1};z)\cup s(z,a_{2};z)\cup u(a_{1},a_{2};y).\] Since \(a_{1},a_{2}\in C^{s}_{\frac{s}{2}}(z)\), it follows that \(a_{2}\in C^{s}_{\varepsilon}(a_{1})\). Also, \(a_{1},a_{2}\in C^{u}_{\frac{s}{2}}(y)\) ensures that \(a_{2}\in C^{u}_{\varepsilon}(a_{1})\). Thus, \[a_{2}\in C^{s}_{\varepsilon}(a_{1})\cap C^{u}_{\varepsilon}(a_{1}).\] Figure 16. Figure 15. 
The regularity condition follows from the fact that \(\operatorname{Int}D\) is contained in one connected component of \(U\setminus u(t_{1},t_{2};y)\) and the intersections in \(a_{1}\) and \(a_{2}\) being transversal ensure that \(C^{s}_{\varepsilon}(z)\) enters the other component of \(U\setminus u(t_{1},t_{2};y)\) through \(a_{1}\) and \(a_{2}\). This proves that \(D\) is a regular bi-asymptotic sector and \(x\in\operatorname{Int}D\). Now assume that \(\mathcal{E}_{1}\neq\emptyset\) and \(\mathcal{E}_{2}=\emptyset\). In this case, \(\partial V\setminus\{\tau\}\subset\mathcal{E}_{1}\). Choose \[y^{\prime}\in C^{s}_{\varepsilon}(x)\cap\operatorname{Int}V\] and \(u(t^{\prime}_{1},t^{\prime}_{2};y^{\prime})\) a sub-arc of \(C^{u}_{\frac{x}{2}}(y^{\prime})\) such that \(t^{\prime}_{1}\not\sim t^{\prime}_{2}\) and \(u(t^{\prime}_{1},t^{\prime}_{2};y^{\prime})\) is contained in \(\operatorname{Int}U\) except, possibly, at \(t^{\prime}_{1}\) and \(t^{\prime}_{2}\). Since \(\operatorname{diam}(C^{u}_{\frac{x}{2}}(y^{\prime}))\geq\delta\), we can assume that either \(t^{\prime}_{1}\) or \(t^{\prime}_{2}\) belongs to \(\partial U\). Let us assume that \(t^{\prime}_{1}\in\partial U\). Then we choose a sequence \((z_{n})_{n\in\mathbb{N}}\subset\partial V\setminus\{\tau\}\) such that 1. \(z_{n}\to\tau\) as \(n\to\infty\), 2. \(z_{n}\) and \(c_{1}\) are in different components of \[U\setminus(u(y^{\prime},t^{\prime}_{1};y^{\prime})\cup s(y^{\prime},y;x)\cup u (y,t_{2};y))\] for every \(n\in\mathbb{N}\), and 3. \(C^{s}_{\frac{x}{2}}(z_{n})\cap c_{1}\neq\emptyset\) for every \(n\in\mathbb{N}\). In particular, (3) implies that \[C^{s}_{\frac{x}{2}}(z_{n})\cap u(y^{\prime},t^{\prime}_{1};y^{\prime})\neq \emptyset\quad\text{for every}\quad n\in\mathbb{N}.\] This ensures the existence of \(n_{0}\in\mathbb{N}\) such that \[C^{s}_{\frac{x}{2}}(z_{n})\cap u(y^{\prime},t^{\prime}_{2};y^{\prime})\neq \emptyset\quad\text{for every}\quad n\geq n_{0},\] since the intersection between \(C^{s}_{\varepsilon}(x)\) and \(u(t^{\prime}_{1},t^{\prime}_{2};y)\) is transversal at \(y^{\prime}\), and the semi-continuity again ensure that \(C^{s}_{\frac{x}{2}}(z_{n})\) converge to a subset of \(C^{s}_{\varepsilon}(x)\). As in the previous case, the regularity condition is assured by the transversality of these intersections and we obtain a regular bi-asymptotic sector \(D\) with \(x\in\operatorname{Int}D\). Figure 17. A direct consequence of the previous lemma and Proposition 3.6 is the following: **Corollary 3.8**.: _Every spine is isolated from other spines, and \(\operatorname{Spin}(f)\) is at most countable._ In the proof of Lemma 3.7, the diameter of the sector \(D\) containing the spine \(x\) depends on the spine and is not necessarily uniform over all spines, i.e., for a sequence of distinct spines the diameters of the respective sectors could become arbitrarily small. The following Lemma ensures the existence of bi-asymptotic sectors with uniform diameter close to the local stable/unstable continua of every spine. However, these sectors may not contain the associated spines in its interior. **Lemma 3.9**.: _Let \(x\in\operatorname{Spin}(f)\) and \(y\in C^{s}_{\frac{\pi}{2}}(x)\) with \(0<\operatorname{diam}(s(x,y;x))<\frac{\delta}{4}\). 
If \(u(a_{1},a_{2};y)\) is a sub-arc of \(C^{u}_{\varepsilon}(y)\) containing \(y\) in its interior, then there exists a neighborhood \(V\) of \(x\) and \(z\in\partial V\) such that \(\#(C^{s}_{\varepsilon}(z)\cap u(a_{1},a_{2};y))\geq 2\)._ Proof.: Let \(U\) be a small neighborhood of \(C^{s}_{\varepsilon}(x)\) and \(u(a^{\prime}_{1},a^{\prime}_{2};y)\) be a sub-arc of \(u(a_{1},a_{2};y)\) contained in \(U\) such that 1. \(u(a^{\prime}_{1},a^{\prime}_{2};y)\cap C^{s}_{\varepsilon}(x)=\{y\}\), 2. \(u(a^{\prime}_{1},a^{\prime}_{2};y)\cap U=\{a^{\prime}_{1},a^{\prime}_{2}\}\), 3. \(d(x,u(a^{\prime}_{1},a^{\prime}_{2};y))<\frac{\delta}{2}\). The existence of this sub-arc is ensured by \(cw_{F}\)-expansiveness. Thus, \(u(a^{\prime}_{1},a^{\prime}_{2};y)\) separates \(U\) in two components and the diameter of the component \(W\) of \(x\) is less than \(\delta\). As in the proof of Lemma 3.7, let \(V\) be an open disk centered at \(x\) such that \(C^{s}_{\varepsilon}(z)\subset U\) for all \(z\in V\), and let being \(\tau\) be the first intersection of \(\partial V\) with \(C^{s}_{\varepsilon}(x)\). There exists \(r>0\) such that if \(z\in\partial V\setminus\{\tau\}\) is \(r\)-close to \(\tau\), then \(C^{s}_{\frac{\pi}{2}}(z)\) intersects \(u(a^{\prime}_{1},a^{\prime}_{2};y)\). Since \(\operatorname{Spin}(f)\) is at most countable (see Corollary 3.8), there exists \[z\in B_{r}(\tau)\cap\partial V\setminus\{\tau\}\] such that \(C^{s}_{\varepsilon}(z)\) does not contain spines. Since \(\operatorname{diam}(W)<\delta\), Lemma 2.13 ensures that \(C^{s}_{\varepsilon}(z)\) separates \(W\). Since \(C^{s}_{\varepsilon}(z)\subset U\), it follows that \[\#(C^{s}_{\frac{\pi}{2}}(z)\cap u(a^{\prime}_{1},a^{\prime}_{2};y))\geq 2.\] Figure 18. ## 4. \(Cw_{F}\)-hyperbolicity and \(cw_{2}\)-hyperbolicity In this section we prove Theorem 1.1. We first prove that a \(\operatorname{cw}_{F}\)-hyperbolic surface homeomorphism that is not \(\operatorname{cw}_{2}\)-expansive must contain infinitely many spines, and then we prove the finiteness of the set of spines for a \(cw_{3}\)-hyperbolic homeomorphism using the long bi-asymptotic sectors we constructed at the end of Section 3. These two results ensure that \(cw_{3}\)-hyperbolicity implies \(cw_{2}\)-hyperbolicity on surfaces. If a \(\operatorname{cw}_{F}\)-hyperbolic surface homeomorphism is not \(\operatorname{cw}_{1}\)-hyperbolic, then there exists arbitrarily small bi-asymptotic sectors, but this does not necessarily imply the existence of an infinite number of spines, since all these sectors can converge to the same single spine, as in the pseudo-Anosov diffeomorphism of \(\mathbb{S}^{2}\). If the homeomorphism is not \(cw_{2}\)-expansive, then for each \(\varepsilon>0\) there exists \(x\in S\) such that \(\#(C^{s}_{\varepsilon}(x)\cap C^{u}_{\varepsilon}(x))\geq 3\). We note below that either there exists two bi-asymptotic sectors with disjoint interiors and connected by their boundaries (see Figure 20) or there exists a non-regular bi-asymptotic sector. In the first case, Lemma 3.6 ensures that each of these sectors must contain a spine, and in the second case we will prove that inside any non-regular sector there exist at least two distinct spines. This ensures the existence of an infinite number of spines, since these sectors cannot accumulate in a pair of two spines when \(\varepsilon\) converges to zero. 
**Proposition 4.1**.: _If \(a^{s}\) and \(a^{u}\) bound a non-regular bi-asymptotic sector \(D\) with \(\operatorname{diam}(D)\leq\delta\), then there exist at least two distinct spines in \(\operatorname{Int}D\)._ Proof.: Let \(a_{1}\) and \(a_{2}\) be the end points of \(a^{s}\) and \(a^{u}\) and assume that \[C^{s}_{D}(a_{1})\cap\operatorname{Int}D\neq\emptyset\quad\text{and}\quad C^{u}_ {D}(a_{1})\cap\operatorname{Int}D\neq\emptyset.\] Lemma 2.13 ensures that both \(C^{s}_{D}(a_{1})\setminus\{a^{s}\}\) and \(C^{u}_{D}(a_{1})\setminus\{a^{u}\}\) either separate \(D\) or contain a spine in \(\operatorname{Int}D\). If both separate \(D\), then there exist two bi-asymptotic sectors with disjoint interiors and connected by their boundaries, and Lemma 3.6 ensures the existence of two distinct spines in \(\operatorname{Int}D\) (see Figure 21). Figure 19. Long bi-asymptotic sectors close to \(C^{s}_{\varepsilon}(x)\). Figure 20. Examples of bi-asymptotic sectors and their spines. This figure illustrates the case where \(C_{D}^{s}(a_{1})\setminus\{a^{s}\}\) does not intersect \(C_{D}^{u}(a_{1})\setminus\{a^{u}\}\). In the case they intersect, in only a finite number of points by \(\mathrm{cw}_{F}\)-expansiveness, then the existence of an intersection \(a_{n}\) such that \(s(a_{2},a_{n};a_{1})\) and \(u(a_{2},a_{n};a_{1})\) bound a regular sector, ensures the existence of two sectors with disjoint interiors and connected by their boundaries. Indeed, if \(C_{D}^{s}(a_{1})\setminus\{a^{s}\cup s(a_{2},a_{n};a_{1})\}\) does not intersect \(u(a_{2},a_{n};a_{1})\), then \[C_{D}^{s}(a_{1})\setminus\{a^{s}\cup s(a_{2},a_{n};a_{1})\}\quad\text{and} \quad C_{D}^{u}(a_{1})\setminus\{a^{u}\cup u(a_{2},a_{n};a_{1})\}\] bound a sector with interior disjoint from the first one (see Figure 22). If \(C_{D}^{s}(a_{1})\setminus\{a^{s}\cup s(a_{2},a_{n};a_{1})\}\) intersects \(u(a_{2},a_{n};a_{1})\) at \(a_{m}\), then it also forms a sector with disjoint interior from the first one since the regular intersection in \(a_{n}\) ensures that \(s(a_{n},a_{m};a_{1})\) is outside the interior of the first sector. In this case, the sector bounded by \(s(a_{2},a_{m};a_{1})\) and \(u(a_{2},a_{m};a_{1})\) is not regular at \(a_{m}\) (see Figure 23). Since both \(C_{D}^{s}(a_{1})\setminus\{a^{s}\}\) and \(C_{D}^{u}(a_{1})\setminus\{a^{u}\}\) separate \(D\), there exists an intersection \(a_{j}\) such that \(s(a_{2},a_{j};a_{1})\) and \(u(a_{2},a_{j};a_{1})\) bound a regular sector. Indeed, if \(a_{m}\) is a non-regular intersection as above, then \(C_{D}^{s}(a_{1})\setminus\{a^{s}\cup s(a_{2},a_{m};a_{1})\}\) intersects \(u(a_{2},a_{m};a_{1})\) at \(a_{j}\), and the sector formed by \(s(a_{2},a_{j};a_{1})\) and \(u(a_{2},a_{j};a_{1})\) is regular since the intersection at \(a_{j}\) comes from inside the non-regular sector bounded by \(s(a_{2},a_{m};a_{1})\) and \(u(a_{2},a_{m};a_{1})\) (see Figure 23). If both \(C_{D}^{s}(a_{1})\setminus\{a^{s}\}\) and \(C_{D}^{u}(a_{1})\setminus\{a^{u}\}\) do not separate \(D\), then both end in spines. If these spines are actually the same spine and \(C_{D}^{s}(a_{1})\setminus\{a^{s}\}\) and \(C_{D}^{u}(a_{1})\setminus\{a^{s}\}\) are disjoint, then the intersection \(a_{j}\) is a non-regular intersection. \(\{a^{u}\}\) do not intersect before the spine, then they bound a bi-asymptotic sector with this spine as one of the end points of the sector, so Lemma 3.6 ensures the existence of a spine in the interior of this sector that is, hence, distinct from the first spine (see Figure 24). 
If \(C^{s}_{D}(a_{1})\setminus\{a^{s}\}\) and \(C^{u}_{D}(a_{1})\setminus\{a^{u}\}\) intersect before the spine in the end, then either there is an intersection bounding a regular sector and we argument as in the case above to create two sectors with disjoint interiors, or there are only non-regular intersections and we create a sector between the last one before the spine and the spine. In both cases a second spine appears. Now assume that \(C^{s}_{D}(a_{1})\setminus\{a^{s}\}\) separates \(D\) but \(C^{u}_{D}(a_{1})\setminus\{a^{u}\}\) does not. If \(C^{s}_{D}(a_{1})\setminus\{a^{s}\}\) does not intersect \(C^{u}_{D}(a_{1})\setminus\{a^{u}\}\) in \(\operatorname{Int}D\), then \(C^{s}_{D}(a_{1})\setminus\{a^{s}\}\) forms a bi-asymptotic sector with a sub-arc of \(a^{u}\) that does not contain the spine in \(C^{u}_{D}(a_{1})\setminus\{a^{u}\}\). Then Lemma 3.6 ensures the existence of a spine in this sector that is, hence, distinct from the first spine (see Figure 14). If \(C^{s}_{D}(a_{1})\setminus\{a^{s}\}\) intersects \(C^{u}_{D}(a_{1})\setminus\{a^{u}\}\) at \(a_{i}\in\operatorname{Int}D\), and \(s(a_{2},a_{i},a_{1})\) and \(u(a_{2},a_{i},a_{1})\) bound a regular sector, then there is a spine at the interior of this sector that is distinct from the spine in \(C^{u}_{D}(a_{1})\setminus\{a^{u}\}\) (see Figure 10a). If \(s(a_{2},a_{i},a_{1})\) and \(u(a_{2},a_{i},a_{1})\) bound a non-regular sector, then the spine in \(C^{u}_{D}(a_{1})\setminus\{a^{u}\}\) is inside this sector, but since \(C^{s}_{D}(a_{1})\setminus\{a^{s}\}\) separates \(D\), it must intersect \(C^{u}_{D}(a_{1})\setminus\{a^{u}\}\) an additional time creating a sector that does not contain the spine in \(C^{u}_{D}(a_{1})\setminus\{a^{u}\}\) in its interior (see Figure 25). We are ready to prove our main theorem. Proof of Theorem 1.1.: Assume that \(f\) is a \(\operatorname{cw}_{F}\)-hyperbolic homeomorphism with a finite number of spines. If \(f\) is not \(cw_{2}\)-expansive, then for each \(\alpha\in(0,\delta)\) there exists \(x\in S\) such that \(\#(C^{s}_{\alpha}(x)\cap C^{u}_{\alpha}(x))\geq 3\). Using an order \(<\) in \(C^{s}_{\alpha}(x)\) we can choose three consecutive points \(a_{1},a_{2},a_{3}\in C^{s}_{\alpha}(x)\cap C^{u}_{\alpha}(x)\), that is, \(a_{1}<a_{2}<a_{3}\) and there are no points of \(C^{s}_{\alpha}(x)\cap C^{u}_{\alpha}(x)\) in \((a_{1},a_{2})\) and \((a_{2},a_{3})\). This ensures that the stable/unstable arcs connecting \(a_{1}\) to \(a_{2}\), and also \(a_{2}\) to \(a_{3}\), form bi-asymptotic Figure 24. Figure 25. sectors (this is not true in the case there are intersections in \((a_{1},a_{2})\) or \((a_{2},a_{3})\) as in Figure 23). If the by-asymptotic sector formed by the stable/unstable arcs from \(a_{1}\) to \(a_{2}\) is regular in \(a_{2}\), then the stable and unstable arcs from \(a_{2}\) to \(a_{3}\) form a bi-asymptotic sector with interior disjoint from the interior of the sector from \(a_{1}\) to \(a_{2}\). This ensures the existence of two distinct spines \(\alpha\)-close. If the intersection in \(a_{2}\) is not regular, then Proposition 4.1 ensures the existence of two distinct spines inside the non-regular sector. This proves that for each \(\alpha\in(0,\delta)\) there exist two distinct spines \(\alpha\)-close, and, hence, we obtain an infinite number of distinct spines for \(f\), contradicting the assumption. This proves that \(f\) is \(\mathrm{cw}_{2}\)-hyperbolic. Now we prove that \(\mathrm{cw}_{3}\)-hyperbolicity implies finiteness of the number of spines. 
This is the only step of the proof that we do not know how to carry out assuming only \(\mathrm{cw}_{F}\)-hyperbolicity. Let \(f\) be a \(\mathrm{cw}_{3}\)-hyperbolic homeomorphism and assume the existence of an infinite number of distinct spines. For each \(\alpha\in(0,\varepsilon)\) choose \(\delta_{\alpha}\in(0,\alpha)\) satisfying Lemmas 2.2, 2.4, and 2.5. Consider spines \(x_{1}\) and \(x_{2}\) such that there exists \(y\in C^{s}_{\alpha}(x_{1})\cap C^{u}_{\alpha}(x_{2})\) with \[\operatorname{diam}s(x_{1},y;x_{1})<\frac{\delta_{\alpha}}{4}\quad\text{and}\quad\operatorname{diam}u(x_{2},y;x_{2})<\frac{\delta_{\alpha}}{4}.\] Note that \(y\) is not a spine, since it is contained in the local stable continuum of a spine. Lemma 3.9 ensures the existence of long bi-asymptotic sectors close to \(C^{s}_{\varepsilon}(x_{1})\) and \(C^{u}_{\varepsilon}(x_{2})\) intersecting in four distinct points (see Figure 26). Since this can be done for any \(\alpha>0\), it follows that \(f\) is not \(\mathrm{cw}_{3}\)-hyperbolic, a contradiction.

## Acknowledgements

Bernardo Carvalho was supported by Progetto di Eccellenza MatMod@TOV grant number PRIN 2017S35EHN, and by CNPq grant number 405916/2018-3. Rodrigo Arruda and Alberto Sarmiento were also supported by Fapemig grant number APQ-00036-22.
2303.04851
Lexical Complexity Prediction: An Overview
The occurrence of unknown words in texts significantly hinders reading comprehension. To improve accessibility for specific target populations, computational modelling has been applied to identify complex words in texts and substitute them for simpler alternatives. In this paper, we present an overview of computational approaches to lexical complexity prediction focusing on the work carried out on English data. We survey relevant approaches to this problem which include traditional machine learning classifiers (e.g. SVMs, logistic regression) and deep neural networks as well as a variety of features, such as those inspired by literature in psycholinguistics as well as word frequency, word length, and many others. Furthermore, we introduce readers to past competitions and available datasets created on this topic. Finally, we include brief sections on applications of lexical complexity prediction, such as readability and text simplification, together with related studies on languages other than English.
Kai North, Marcos Zampieri, Matthew Shardlow
2023-03-08T19:35:08Z
http://arxiv.org/abs/2303.04851v1
# Lexical Complexity Prediction: An Overview ###### Abstract. The occurrence of unknown words in texts significantly hinders reading comprehension. To improve accessibility for specific target populations, computational modelling has been applied to identify complex words in texts and substitute them for simpler alternatives. In this paper, we present an overview of computational approaches to lexical complexity prediction focusing on the work carried out on English data. We survey relevant approaches to this problem which include traditional machine learning classifiers (e.g. SVMs, logistic regression) and deep neural networks as well as a variety of features, such as those inspired by literature in psycholinguistics as well as word frequency, word length, and many others. Furthermore, we introduce readers to past competitions and available datasets created on this topic. Finally, we include brief sections on applications of lexical complexity prediction, such as readability and text simplification, together with related studies on languages other than English. Keywords:**Wer An ML model trained to identify complex words would recognize the word "_folly_" within the original extract as being complex. Such models would come to this decision based on a number of engineered or inferred features. For instance, these models would likely consider the word "_folly_" as being archaic, as having a low frequency within everyday speech, as being unfamiliar to its target demographic or to a general populace, as being acquired later during adolescence, and so forth. Having identified "_folly_" as being complex, these models may then pass this information downstream so that it can be simplified to "_foolishness_" resulting in the simplification shown in Table 1. Readers may then use this simplification to better understand the meaning of the original sentence or the target word. Alternatively, the simplification of "_folly_" to "_foolishness_" may serve to improve machine translation, since _foolishness_ is likely to have a more synonymous equivalent in a target language than "_folly_" [170]. Another use case of identifying complex words is for authorship identification, whereby identifying the number of complex words within a text can serve as a means of measuring vocabulary richness which has traditionally been used as a linguistic fingerprint, hence authorship marker [1]. The task of identifying complex words is commonly referred to as Complex Word Identification (CWI) [115]. In recent years, CWI has been extended to Lexical Complexity Prediction (LCP) [146; 148]1. This survey introduces the reader to LCP by providing a comprehensive overview of LCP literature, with a particular focus on the work carried out in the last 10 years that has primarily dealt with English; however, research investigating other languages has also been included and their contributions acknowledged2. Footnote 1: In this paper, we will be using LCP as the overarching term and CWI specifically when we refer to the binary task of complexity prediction (Section 4) [146; 148]. Footnote 2: We hope that this paper helps to encourage the ongoing development of LCP systems for other languages (Section 9.2) This survey comes at a time of unprecedented demand for LCP research motivated by recent developments in education technology and accessibility, such as the widespread use of virtual learning platforms in distance learning [107]. 
It also comes at a time of diversification, with LCP interacting with other topics in NLP, such as machine translation [170] and authorship identification [1; 156]. To the authors' knowledge, this survey fills a gap in the current LCP literature. It provides new researchers, as well as those who are already familiar with the field, with the most up-to-date key references, main research questions, advancements, and baselines needed to develop LCP further. This survey has the following structure. Section 2 gives prior definitions of complexity and explains what complexity is, what difficulty is in relation to complexity, and what is meant by the term _complex_ in LCP literature. Section 3 briefly describes the origin of complexity prediction research within lexical simplification. Section 4 outlines the different types of lexical complexity prediction, ranging from comparative, binary, continuous, and personalized to predicting the complexity of multi-word and numerical expressions. It also discusses whether systems designed for a target demographic outperform those for a generic population as well as whether predicting the lexical complexity of multi-word expressions is advantageous for LCP (Sections 4.4.1 and 4.5.1). Section 5 presents the evaluation metrics used to measure the performance of LCP systems, such as accuracy, precision, recall, F1-score, G-score, mean absolute error, mean squared error, Pearson's correlation, and Spearman's rank. Section 6 details the international competitions that challenged participating teams with the development of LCP systems: CWI-2016 [115], CWI-2018 [185], ALexS-2020 [192], and LCP-2021 [147]. Section 7 provides a historical overview of the models used for LCP, including feature engineering approaches, neural networks \begin{table} \begin{tabular}{l c c c c c} \hline Original: & **Folly** & is & set & in & great & dignity \\ Simplified: & **Foolishness** & is & set & in & great & dignity \\ \hline \end{tabular} \end{table} Table 1. Example extract with an identified complex word from the CompLex dataset [146].The complex word and its simplified version are in bold. to state-of-the-art transformer-based models. It also describes the best linguistic features for predicting lexical complexity along with the effect including context has on LCP systems' performance (Sections 7.1.4 and 7.4.1). Section 8 demonstrates LCP's place within the text simplification pipeline and several of its use cases and applications. Section 9 gives an overview of the English datasets and resources used for LCP together with several studies that have investigated languages other than English. It also discusses whether transfer learning is possible for predicting lexical complexity across multiple languages (Section 9.2.7). Section 10 ends by briefly outlining the future of LCP research, including its future opportunities and challenges. ## 2. Defining Complexity Within Linguistics, there exists two approaches to defining complexity, when being used to describe the "_complexity_" of a target word: (1) absolute, and (2) relative. ### Absolute Complexity Absolute complexity, otherwise known as objective complexity [47, 122], refers to a form of complexity that is established by the objective linguistic properties of a word [33, 122]. These linguistic properties include morpho-syntactic, semantic, as well as phonological factors that make a word appear to be complicated, advanced, or convoluted in comparison to a simpler alternative. 
For instance, having a high number of morphemes, the presence of derivational or inflectional affixes, having multiple meanings, or having multiple vowels or diphthongs, are all characteristic of absolute complexity [33, 114, 122]. ### un-believ-able _or_ engag-ed The words, "_unbelievable_" and "_engaged_" both contain two or more morphemes. The word "_engaged_" also has the diphthong e within its first morpheme: /eniget/ which is known to cause production errors for language learners [109, 141]. "_Engaged_" also has multiple meanings, with one referring to the act of being involved in an activity, and another being pledged to be married [105]. When used in ambiguous contexts, polysemous words can be troublesome as they hinder a sentence's readability with an example being: "_Do you know if he is available as I think he is engaged?_" [95]. Words, such as "_unbelievable_" and "_engaged_", are therefore words with a high degree of absolute complexity since their linguistic properties make them hard to reproduce or understand. ### Relative Complexity Relative complexity, also know as agent-related complexity [47, 122] or simply referred to as "_difficulty_" [33], refers to a type of complexity that is informed by the individual experience or psycholinguistic factors of the individual. For instance, experiences such as the cognitive load, or demand, acquisition difficulty, along with an individual's level of familiarity associated with a particular word or typography, may determine a word's level of relative complexity [33, 114, 122]. ### Likewise _or_ gothic Chen and Xiao [37] and Liu et al. [93] show that capitalized words are hard for Chinese learners of English to decipher and therefore are cognitively demanding. This is since they have less variance in their overall shape as well as less variance between the shape and size of their individual letters in comparison to words presented entirely in lowercase or Chinese characters which differ greatly in their form: LIKEABLE versus likeable or (Mandarin for likeable or popular). Words in reference to a particular art, culture, pop-culture, or historical group are also hard for second-language learners to acquire, especially if no cognate or similar cultural knowledge is available in their native language [157; 187]. Words, such as "_LIKEABLE_" and "_gothic_", are subsequently words with a high degree of relative complexity as factors more associated with the individual, such as typographical unfamiliarity or lack of cultural knowledge, make these words hard to decipher. ### Complexity in LCP Within LCP research, a more generalized notion of complexity is used. In most cases, the term "_complex_" is simply used as a "synonym for difficulty" [100] and is specifically applied to the word-level, hereby referred to as lexical complexity or complexity. In this field of research, complexity therefore refers to the difficulty an individual may have in acquiring, understanding, or reproducing a particular target word which is often a result of a target word's linguistic properties as well as factors belonging to the individual. Take the following words for example: \[\textbf{unbelievable}\textit{or}\ \textbf{gothic} \tag{3}\] Both "_unbelievable_" and "_gothic_" have been rated as having a neutral to high degree of lexical complexity within LCP research, regardless of the type of complexity they exhibit, be it either relative, absolute, or both [97]. 
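The absolute-complexity properties of Section 2.1 are straightforward to operationalise, whereas the relative factors above depend on the individual reader. As a small illustration, the sketch below computes a handful of surface (absolute) signals for a target word: word length, a crude syllable estimate, and the presence of common derivational affixes. The affix lists and the syllable heuristic are purely illustrative and are not drawn from any particular LCP system.

```python
import re

# Illustrative affix lists; real systems would use a morphological analyser.
PREFIXES = ("un", "dis", "re", "pre", "mis")
SUFFIXES = ("able", "ible", "ness", "ment", "tion", "ed", "ing")

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def absolute_complexity_features(word: str) -> dict:
    """Surface-level (absolute) complexity signals for a single word."""
    w = word.lower()
    return {
        "length": len(w),
        "syllables": count_syllables(w),
        "has_prefix": w.startswith(PREFIXES),
        "has_suffix": w.endswith(SUFFIXES),
    }

if __name__ == "__main__":
    for word in ["unbelievable", "engaged", "set"]:
        print(word, absolute_complexity_features(word))
```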
As such, LCP adopts defining characteristics from absolute and relative complexity as determining a word's generalized level of complexity. This generalized notion of complexity is used throughout this paper when referring to the prediction of lexical complexity. ## 3. Origin of Complexity Prediction Predicting the lexical complexity of a target word originated as a sub-task of lexical simplification (LS) [155]. LS aims to replace complex words and expressions with simpler alternatives whilst maintaining the meaning of the original text as exemplified within Table 1[115]. To achieve this, LCP is used by a LS system for two purposes: (1) to identify complex words that are in need of simplification, and (2) to rank the suitability of simpler alternatives. Devlin and Tait [53] and Carroll et al. [35] were the first to adopt an LCP precursor within their LS systems' pipelines. They used WordNet [106] as well as Kucera-Francis's frequency norms, calculated using the Oxford Psycholinguistic Database [177], to rank their synonymous and simplified word candidates on what they believed to be their level of complexity. By doing so, their systems provided the most appropriate simplifications for their target complex words allowing for the creation of easier to read texts for aphasic readers; LCP's place within the text simplification (TS) pipeline is described in greater detail within Section 8.2. LS-2012 [155] is arguably the first shared-task that contained an LCP element. It tasked five participating teams to design systems to "rank a set of [candidate] words, from the simplest to the most difficult" [152]. Participating teams took into consideration a variety of features to conduct complexity prediction. The most common of these features being simplified word candidates' frequency [11; 92; 152], n-grams [92; 152], morpho-syntactic characteristics including context [11; 77], and psycholinguistic properties [77]. \begin{table} \begin{tabular}{c|c c} \hline **Target Word** & **Rank** & **Candidate Replacement** \\ \hline Folly & \#1 & Foolishness \\ & \#2 & Recklessness \\ & \#3 & Silliness \\ & \#4 & Craziness \\ & \#5 & Stupidity \\ \hline \end{tabular} \end{table} Table 2. Examples of candidate replacements generated by an LS system. ## 4. Types of Complexity Prediction ### Comparative Complexity Using LCP to rank words in terms of their complexity gives rise to a unique type of complexity prediction: comparative complexity. This type of complexity prediction provides a value that is used to distinguish whether a target word is more or less complex than another target word. As a result, comparative complexity prediction is most often found as a sub-task of LS, rather than its own stand-alone task [152; 155]. For instance, several studies [23; 77; 112; 117] have trained various models at comparative complexity prediction with aim of improving LS. Gooding et al. [66] investigated the effect that comparative judgement labelling had on inter-annotator agreement. They discovered that annotators tasked with ranking the complexity of several target words presented in context agreed more consistently on their chosen labels than compared with annotators tasked with purely identifying complex words without ranking. With a higher rate of inter-annotator agreement comes a higher quality of complexity label, since the true complexity of a target word is more likely to be captured. 
As a result, systems trained on such data, or that likewise make comparative judgements, can be highly effective at distinguishing between complex and non-complex words. ### Binary Complexity From 2012 to 2018, complexity prediction research primarily focused on binary complexity prediction. Binary complexity prediction is what is referred to as complex word identification (CWI). CWI is the task of assigning a target word with a binary complexity value of either 1, marking that word as complex, or 0, denoting that word as non-complex. CWI is therefore unlike comparative complexity prediction as it purely identifies complex words rather than making comparative judgements or ranking the complexity of simplified word candidates. Shardlow [142] was the first to treat CWI as a standalone task separate from LS. He experimented with a support vector machine (SVM) for CWI and detailed the construction of a binary CWI dataset (Section 9.1.1) together with the impact several features had on his CWI system's performance (Section 7). CWI-2016 [115] was the first shared-task that challenged teams directly with binary CWI. This shared-task increased the popularity of complexity prediction research (Section 6.1). However, CWI's modeling as a binary classification task presented a few shortcomings during CWI-2016 [115]. The most notable is that CWI systems were unable to accurately and consistently classify target words on the decision boundary, being those words with an uncertain and often debated level of complexity [193]. #### 4.2.1. Issue with Binary Complexity Studies have demonstrated that since lexical complexity is subjective and dependent on an individual's experience and a-priori knowledge, binary CWI is prone to low inter-annotator agreement [97; 193]. Annotators from different demographics, such as first language or region, have different opinions of what classifies as a complex word, with perceived complexity often changing on an individual-to-individual basis [97; 193]. It is this disagreement in whether a word is either a complex or a non-complex word during the annotation process, that creates target words with an uncertain level of complexity that degrades CWI performance. \begin{table} \begin{tabular}{l c c c c c} \hline Extract: & **Fully** & is & set & in & great & **dignity** \\ \hline Complexity Value: & 1 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 3. Example of a sentence annotated with binary complexity values by a CWI classifier taken from the CompLex dataset [146]. Target words of interest are in bold. Take the words "_frontier_" and "_Milwaukee_" displayed in Table 4 as an example. Within the Word Complexity Lexicon [97] that used a 6-point likert scale ranging from very simple (1) to very complex (6), "_frontier_" was given non-complex labels by 4 annotators and complex labels by 3 annotators, whereas "_Milwaukee_" was labeled with non-complex and complex labels 3 times each respectively. Averaging these words' labels we are left with average complexity values that depict an uncertain, or neutral level of complexity given their proximity to the median threshold of 3.5. Converting these annotations to binary complexity values is therefore problematic. The target word "_frontier_" would no longer be considered as being neutral, but rather as being complex as its average complexity value is now over that of the median threshold. This is regardless of the fact that the majority of the labels assigned to "_frontier_" are non-complex. 
The target word "_Milwaukee_", on the other hand, is a word on the decision boundary, meaning that it can be either labeled as non-complex or complex by a CWI classifier even though its typography may be evidently complex to those whom are unfamiliar with North American loanwords or proper nouns 3. Being trained on such examples that have been potentially mislabeled results in CWI systems misclassifying unseen target words. For instance, features used to distinguish non-complex words may be inevitably associated with complex words or vice-versa. This, in turn, hinders overall CWI performance [146, 193]. Footnote 3: It is important to mention that Maddela and Xu [97] may have attempted to avoid such neutral labeling by recruiting an uneven number of annotators, being 11 in total. However, not all annotators labeled each instance. ### Continuous Complexity LCP was introduced to deal with target words with an uncertain level of complexity along with target words on the decision boundary [97, 146]. Unlike CWI, LCP alternatively provides a continuous complexity value that is not used to assign a binary complex or non-complex label. Instead, LCP models complexity on a continuum with varying degrees of difficulty with which it then attempts to predict. For instance, it assigns target words with a complexity label ranging from very easy to very hard that are linked directly to certain thresholds: very easy (0), easy (0.25), neutral (0.5), difficult (0.75), or very difficult (1). By modeling complexity on a continuum, LCP provides a more fine-grained representation of the complexity of a target word as it allows for the prediction of more than two levels of difficulty [146]. For example, the \begin{table} \begin{tabular}{c|c c c c c c c|c c c} \hline \hline \multicolumn{10}{c}{**Annotations**} \\ \hline **Target Word** & **A** & **B** & **C** & **D** & **E** & **F** & **G** & Avg. & BC & BC Label \\ \hline frontier & **3** & 4 & 5 & **3** & 4 & **3** & **3** & 3.57 & 1 & Complex \\ Milwaukee & 3 & 4 & 4 & 5 & 3 & 2 & N/A & 3.5 & 0 or 1 & Unknown \\ \hline \hline \end{tabular} \end{table} Table 4. Example of annotator disagreement of the complexity of two target words annotated using a 6-point likert scale. Annotations (or labels) ranged from very simple (1), moderately simple (2), simple (3), to complex (4), moderately complex (5) or very complex (6) with the issue being the even distribution between simple and complex labels. Target words, annotations, and complexity values were taken from the Word Complexity Lexicon [97]. **BC** refers to binary complexity value. Annotations of interest are in bold. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Extract: & **Folly** & is & set & in & great & **dignity** \\ \hline BC & 1 & 0 & 0 & 0 & 0 & 0 \\ CC & 0.57 & 0 & 0.18 & 0 & 0.15 & 0.42 \\ \hline \hline \end{tabular} \end{table} Table 5. Example of a sentence annotated with binary complexity values (**BC**), and continuous complexity values (**CC**) by a LCP regressor taken from the CompLex dataset [146]. Target words of interest are in bold. target word "_folly_" can be more accurately predicted as a neutral to difficult complex word, whereas the target word "_diguity_" is no longer incorrectly classified as being entirely non-complex (Table 5). An LCP system is thus a linear regressor rather than a binary classifier and for this reason can classify target words that were problematic for prior CWI systems, including such words as "_frontier_" and "_Milwaukee_". 
It is worth mentioning that LCP was not the first to predict lexical complexity as a continuous value. Probabilistic complexity prediction was also a regression based task. However, different from LCP, probabilistic complexity prediction used continuous complexity values to make binary predictions [185]. This meant that its continuous complexity values were not used to predict varying degrees of complexity like LCP, but rather to indicate the probability of that target word being either complex or non-complex. CWI-2018 [185] developed systems for binary as well as probabilistic complexity prediction. It was the first shared-task that moved away from binary CWI and subsequently laid the foundations for what is known as LCP. CWI-2018 [185] is described in more detail within Section 6.2. ### Personalized Complexity Complexity prediction researchers have also been interested in personalizing lexical simplification [89; 163]. Lee and Yeung [89] argued that prior LCP systems are unable to account for "variations in vocabulary knowledge among their users" [89], including other forms of idiosyncrasies, such as cross-linguistic influence 4. Several other researchers [89; 163; 195] have also suggested that the previous "one size-fits-all" approach to LCP fails to accurately model varying perceptions of lexical complexity and as a result, personalized CWI was introduced [89; 163; 195]. This approach creates personalized CWI systems that cater for the individual user or a specific target demographic. These systems are engineered with, or are built to learn, user demographic features that they use to make predictions on an individual basis. These demographic features may include language proficiency, "native language, race, job, age, ethnicity, or education" [89; 163; 195]. Footnote 4: Cross-linguistic influence being defined as the effects a bilingual speaker’s first language (L1) has on their second language (L2) production, hence complexity assignment [182] #### 4.4.1. Is personalized complexity prediction worthwhile? Personalized complexity prediction systems have been found to outperform LCP systems designed for a generic population when tasked with predicting the lexical complexities of a target demographic. Zeng et al. [195] discover that demographic features, such as native language, race, job, and so on, can improve CWI performance when predicting the complexity of medical terminology. Tack et al. [163] created a system designed to predict how well learners of French understood the meaning of a French word by incrementally training their system on features representative of their user's lexical competency. Lee and Yeung [89] and Tack [161] have since implemented personalized CWI models trained on language proficiency and native language [89; 161]. Both studies demonstrated their personalized CWI systems as outperforming their non-personalized baseline models. Tack [161] also included contextual features and found that her combined personalized and contextual model outperformed other models that did not take demographic or contextual features into consideration. Personalized complexity is therefore a promising area of complexity prediction research as it is seen to outperform more generalized approaches. Further details regarding a personalized LS dataset are presented in Section 9.1.4. ### Multi-word Expressions LCP as well as other types of complexity prediction are not restricted to predicting the complexity values of single words. 
Multi-word expressions (MWEs) have also been studied and their complexity values predicted [147; 185]. However, there exists little research into the complexity prediction of MWEs. #### 4.5.1. Is predicting the lexical complexity of multi-word expressions advantageous? According to Gooding et al. (2017), assigning complexity values to both single words and MWEs would undeniably improve the performance of LCP systems and, as a consequence, the performance of other downstream NLP-related tasks, such as LS. Gooding et al. (2017) provide "_ballot stuffing_" as an example. For instance, if complexity values were assigned individually to "_ballot_" and then to "_stuffing_", this MWE would either not be simplified, as individually "_ballot_" and "_stuffing_" may not be considered to be complex words, or simplified into an expression that would be "nonsensical or semantically different" (Krishnan et al., 2017), such as "_ballot filling_" or "_vote stuffing_". Another example can be seen in Table 6. As shown in Table 6, "_great_" and "_dignity_", when taken into consideration separately are not considered to be complex words. However, if these two words were presented to an annotator as one MWE, they may subsequently have been assigned a higher combined complexity value resulting in them as being identified as complex. In turn, a LS system may then provide a more appropriate simplification, such as "_pride_", that would further improve the readability of this extract. For this reason, the CompLex dataset (Lorda et al., 2017) provides 1800 MWEs with preassigned complexity values. LCP-2021 (Lorda et al., 2017) was the first shared-task that challenged teams to develop LCP systems to predict the complexity values of single words and MWEs as two separate sub-tasks. LCP-2021 (Lorda et al., 2017) is described further within Section 6.4. ### Numerical Complexity Complexity prediction research has also included the identification and simplification of complex numerical expressions. Complex numerical expressions refer to "dates, measurements, quantities, percentages, or ratios" (Kolle et al., 2017), that children, individuals with poor numeracy, or a learning disability may find to be difficult to interpret (Lorda et al., 2017; Kryrych et al., 2017; Krych et al., 2017). These numerical expressions can be presented either numerically, for instance, "_25%_", "_25_" or "\(\frac{1}{4}\), or lexically, as is the case for "_twenty five percent_", "_greater than 25_", or "_a quarter_" (Kolle et al., 2017). The purpose of numerical complexity prediction is to identify which numerical expressions are considered complex and are therefore in need of simplification for a specific target demographic. Rello et al. (2017) conducted an eye-tracking study to gauge the cognitive load associated with numerical expressions when presented as digits compared to when presented as lexical items. They discovered that digits were easier to read for people with dyslexia than compared to words describing numerical expressions. Bautista and Saggion (Kolle et al., 2017) have since created a rule-based system for automatically identifying and simplifying complex numerical expressions in Spanish. They hand-crafted numerous rules that utilized regular expressions to identify and then simplify complex numerical expressions within 59 sentences. Their system achieved an F1-score of 0.93 on a manually annotated gold-standard dataset and was subsequently considered to have an acceptable level of performance. Bautista et al. 
(2018) later incorporated this system within a more generic TS model.

\begin{table} \begin{tabular}{l l l l l l} \hline Original: & **Folly** & is & set & in & great dignity \\ Simplified: & **Foolishness** & is & set & in & great dignity \\ MWE Simplified: & **Foolishness** & is & set & in & **pride** \\ \hline \hline \end{tabular} \end{table} Table 6. Example extract with an identified and simplified MWE taken from the CompLex dataset [146]. Complex words are in bold.

## 5. Evaluation Metrics The performance of complexity prediction systems is measured using a variety of evaluation metrics. These evaluation metrics depend on the task, with the most common tasks being: (1) binary classification performed by prior CWI systems [115; 155; 185], or (2) regression conducted by LCP systems [147], as described within Section 4. The following evaluation metrics were used in the international competitions listed within Section 6. ### Evaluating CWI Systems The performance of binary CWI systems was normally measured using accuracy, precision, recall, F1-score, and G-score. Accuracy is simply the fraction of correct predictions made over the total number of observations within the dataset, precision is "the fraction of positive predictions made that are correct" [70], whereas recall is "the fraction of the truly positive instances that the classifier recognizes" [70]. **F1-Score.** F1-score is the harmonic average of the precision and recall scores [70]. It is subsequently far more informative for evaluating CWI performance as it penalizes those systems that demonstrate either low precision and recall or a high imbalance between the two [70]. Per class F1-scores are then used to calculate macro and weighted F1-scores for all systems. Macro F1-score being the arithmetic mean of all per-class F1-scores, and weighted F1-score being the mean of all per-class F1-scores whilst taking into consideration the number of actual occurrences of each class within the dataset 5. F1-score is calculated using the equation below (Equation 4). Footnote 5: See Hackeling [70] for further details regarding macro and weighted F1-scores as well as how to calculate accuracy, precision, and recall. \[F1=2\frac{Precision\cdot Recall}{Precision+Recall} \tag{4}\] Finally, in CWI-2016 [115], the organizers used G-scores which, unlike F1-score, take into account accuracy and recall rather than precision and recall. ### Evaluating LCP Systems Recent LCP systems designed to predict continuous instead of binary complexity values are commonly evaluated using mean absolute error, mean squared error, Pearson's Correlation, and Spearman's Rank. **Mean Absolute Error.** Mean absolute error (MAE) is the average absolute difference between the predicted observations and the actual observations made. It is calculated using the following equation (Equation 5). \[MAE=\frac{\sum_{i=1}^{n}|y_{i}-x_{i}|}{n} \tag{5}\] where \(n\) is the total number of observations, \(i\) is the current observation, \(y\) is the predicted observation, and \(x\) is the actual observation seen. The closer an MAE value is to zero, the greater the system's performance. **Mean Squared Error.** Mean squared error (MSE) is the average squared difference between the predicted observations and the actual observations made. MSE is used to understand the variance and the bias of the predicted observations. Variance refers to the spread of the predicted observations. Bias refers to the spread of the predicted observations compared to that of the actual observations.
MSE is produced by the following equation (Equation 6). \[MSE=\frac{\sum_{i=1}^{n}(y_{i}-x_{i})^{2}}{n} \tag{6}\] where \(n\) is once again the total number of observations, \(i\) is the current observation, \(y\) is the predicted observation, and \(x\) is the actual observation seen. An MSE closer to zero may indicate the presence of fewer outliers within the provided dataset. **Pearson's Correlation.** Pearson's Correlation (R) was the primary means of evaluation in LCP-2021 (see Section 6.4).

## 6. International Competitions

### CWI-2016 at SemEval

Most teams who participated in the shared-task used simple probabilistic models trained on features such as n-grams, word frequency, and word length. The approaches used by the top-3 systems in CWI-2016, being PLUJAGH (Krishnan et al., 2017), LTG (Tang et al., 2017), and MAZA (Krishnan et al., 2017), also relied on probabilistic classifiers and on the aforementioned features. The F1-scores achieved by the top-3 systems were 0.353, 0.312, and 0.308 respectively, which were considered rather low compared to the baselines and the post-competition analysis presented in Zampieri et al. (2017). According to Zampieri et al. (2017), this indicated that CWI-2016 was a particularly challenging task due to the data annotation protocol and the training/test split, since 40 times more testing data was available compared to the training data.

### CWI-2018 at BEA

The second edition of the CWI shared-task,7 referred to as CWI-2018, was organized at the Workshop on the Innovative Use of NLP for Building Educational Applications (BEA). CWI-2018 was a multilingual shared-task featuring datasets containing English, French, German, and Spanish data. A total of three tracks were available, namely English, German, and Spanish monolingual, with a fourth additional track being made available at a later date. Furthermore, training and testing data from the multi-domain _CWIG3G2_ dataset (Krishnan et al., 2017) was available for each initial language. The fourth track was the French multilingual track where only a French test set was available and the participants had to use the data made available for the other three languages to make predictions in French (Section 9.2.6)8. Footnote 7: [https://sites.google.com/view/cwisharedtask2018/](https://sites.google.com/view/cwisharedtask2018/). The CWI-2018 datasets were split into training, development, and testing partitions. The English dataset contained 27,299 instances for training, 3,328 for development, and 4,252 for testing. The Spanish dataset featured 13,750 instances for training, 1,622 for development, and 2,233 for testing. The German dataset included 6,151 for training, 795 for development, and 959 for testing. Finally, the French dataset only included a testing partition with 2,251 instances. The three main new aspects of CWI-2018 compared to CWI-2016 were: (1) its multilingual nature compared to the English-only CWI-2016, (2) the presence of both target single words and multiple consecutive words, and (3) two sub-tasks, one modelled as a binary classification task, and one modelled as a probabilistic classification task. CWI-2018 received submissions by 12 teams in multiple task and track combinations. At the end of the competition, 10 teams wrote system description papers presented at the BEA workshop. In Table 10 (Appendices), we present the approaches by teams who submitted systems to the CWI-2018 English binary classification task and who also wrote system description papers.
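The metrics defined in Section 5 and reported throughout these competitions can be implemented in a few lines. The sketch below follows Equations 4–6 directly and computes Pearson's Correlation with the standard sample formula; the toy predictions and gold labels are invented purely for illustration.

```python
import numpy as np

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Equation 4: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mae(pred: np.ndarray, gold: np.ndarray) -> float:
    """Equation 5: mean absolute error."""
    return float(np.mean(np.abs(pred - gold)))

def mse(pred: np.ndarray, gold: np.ndarray) -> float:
    """Equation 6: mean squared error."""
    return float(np.mean((pred - gold) ** 2))

def pearson(pred: np.ndarray, gold: np.ndarray) -> float:
    """Sample Pearson correlation between predictions and gold labels."""
    return float(np.corrcoef(pred, gold)[0, 1])

if __name__ == "__main__":
    # Invented toy values, purely for illustration.
    gold = np.array([0.10, 0.35, 0.57, 0.42, 0.80])
    pred = np.array([0.15, 0.30, 0.50, 0.55, 0.70])
    print(f1_score(tp=40, fp=10, fn=20))
    print(mae(pred, gold), mse(pred, gold), pearson(pred, gold))
```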
An observed trend was that more teams tried deep neural networks in CWI-2018 compared to CWI-2016, a trend also observed in other areas of AI and NLP research (Section 7). In CWI-2018's binary classification task, being sub-task 1, the organizers reported the performance from all teams in each of the three domains, namely _News, WikiNews_, and _Wikipedia_. As discussed in the CWI-2018 report [185], the performance obtained by all teams on the News domain was generally substantially higher than the performance obtained in the two other domains.

### ALexS-2020 at SEPLN

ALexS-2020 [192], referring to the lexical analysis shared-task at the International Conference of the Spanish Society for Natural Language Processing (SEPLN), was the first shared-task to look at CWI for Spanish educational texts. The shared-task included a Spanish dataset consisting of 9,175 words, with 723 of these words being identified by 430 student annotators as complex. These words were taken from transcripts of academic videos in Spanish made within the University of Guayaquil, Ecuador. Teams were challenged with creating a system to automatically identify which of these 9,175 words were labeled as complex. Three teams participated at ALexS-2020. Each team was presented with the entire dataset, with only the total number of complex words being revealed. As such, no development or training partitions were provided. This encouraged the development of several models as shown within Table 7. The performances achieved at ALexS-2020 were considered to be poor. The best performing system by UDLAP [160] attained a macro F1-score of 0.272, whereas the best performing systems of Vicomtech [197] and HULAT [5] achieved macro F1-scores of 0.176 and 0.164 respectively. These low performances indicated the overall difficulty of the task, since not being presented with a training or development set led to the teams having no idea what was considered to be characteristic of a complex word within the particular domain of Spanish educational texts.

### LCP-2021 at SemEval

The 2021 Lexical Complexity Prediction Task [147], referred to as LCP-2021, was also held at SemEval and attracted 58 teams across its two sub-tasks as shown within Table 11 (Appendices). The dataset [146] was developed using crowdsourcing. 10,800 instances were selected from three corpora covering the Bible [39], biomedical articles [82] and Europarl [16]. LCP-2021's dataset contained single words (9,000 instances) and MWEs (1,800 instances). The MWEs were limited to pairs of nouns, or adjective-noun collocations. The annotated tokens were presented in context to both the original annotators and the participating teams. This meant that the complexity assignments were not only for the token, but instead for the token in its contextual usage. Multiple instances of tokens were included in different contexts, each receiving differing contextual complexity assignments. As such, systems that took context into account fared well in the final evaluation. The organizers split the dataset into trial, train and test sets, stratifying the data for the token type, token instance, complexity and genre. This meant that even distributions of MWEs and single words were available in each subset as well as an even distribution across genres. Complexity labels were also evenly distributed between the subsets with each having a similar spread of labels.
The repeated occurrences of tokens were grouped together in each subset, such that no subset shared any tokens with another subset to prevent information bleed between subsets. The shared-task allowed participants to submit to one of two sub-tasks. The first sub-task permitted systems to only predict the complexity values of the single word instances within the CompLex dataset [146]. The second sub-task asked participants to predict the complexity values for the entire dataset, forcing them to develop a methodology for adapting their single word models to the MWE use case. The organizers did not evaluate solely on MWEs due to the smaller size of the subset. All data was collected via CodaLab and the systems were ranked according to their Pearson's Correlation with the held-back gold standard labels on the test sets. \begin{table} \begin{tabular}{l l l} \hline \hline **Team** & **Classifiers** & **Features** \\ \hline **UDLAP** & Threshold-based & General lexicon, specialized lexicon of internet-related terms, n-grams, frequency. \\ **Vicomtech** & Gaussian Mixture Models (GMM) & Lemma length, lemma frequency in subject documents, number of synsets in WordNet, lemma frequency in domain corpora, lemma probability in domain corpora, word frequency in Wikipedia and word probability in Wikipedia. \\ **HULAT** & Support Vector Machine & Word length, a boolean determining whether only capital letters were used, a boolean determining a target words inclusion in an easy-to-read lexicon, Word2Vec vectors and BERT vectors. \\ \hline \hline \end{tabular} \end{table} Table 7. Systems submitted to the ALexS–2020 in alphabetical order as summarized by [192]. Several of the top-ranking systems for LCP-2021's sub-task 1 used transformer-based models [167]. However, systems that used hand-crafted features [108, 168, 181] also performed well with the top performing system [183] in this category having achieved third place on the official ranking table. This is discussed further within Section 7.4. Sub-task 2 saw fewer participants than sub-task 1 (37 teams in total). Systems used similar models to those in sub-task 1, with the key difference being the strategy for combining MWEs. Feature-based systems were able to average the features [118, 150, 181] or predictions [108] for each token in an MWE to give the overall value. Deep learning based systems were typically able to encode the MWE as part of their existing training scheme by supplying the transformer architecture with two encoded tokens instead of one. ## 7. Approaches to Predicting Lexical Complexity in English Texts Various ML models have been used for LCP. These range from support vector machines (SVMs), decision trees (DTs), random forests (RFs), neural networks to state-of-the-art transformers, such as BERT [52], RoBERTa [94] and ELECTRA [40]. Many of these models have also been used in unison to form ensemble-based models. Prior to more recent transformer-based models, ensemble-based models that utilized multiple DTs, RFs, or neural networks, were state-of-the-art in predicting lexical complexity [115, 185]. This section describes in detail the various models used for LCP. It demonstrates the evolution of LCP systems by providing their model's architecture and performance. ### Machine Learning Classifiers #### 7.1.1. **Support Vector Machines.** SVMs are statistical classifiers. They use labeled training data and engineered features to predict the class of unseen inputs [44, 142]. SVMs are well suited for binary classification. 
They achieve exceptional performance when there exists a clear distinction between two classes. SVMs work less well when dealing with multiple classes or a large number of features as this reduces the uniqueness of each class. SVMs were popular within early LCP research which focused on binary complexity prediction [77, 155]. Jauhar and Specia [77] were one of the first to adopt an SVM for complexity prediction. They trained their SVM on three types of features: morphological, contextual, and psycholinguistic. Morphological features were generated through the use of character n-grams. Contextual features were obtained through a bag-of-words approach, whereby n-grams were used to select neighbouring words. Psycholinguistic features were in relation to a target word's degree of concreteness, imageability, familiarity, and age-of-acquisition. Their SVM outperformed a prior baseline CWI model trained on word frequencies. Shardlow [142] created a complex word corpus (the CW Corpus) consisting of 731 complex words in context [143] (Section 9.1.1). He then experimented with a variety of simplification techniques, including the use of a SVM for binary complexity prediction. His SVM was trained on several features. These features being word frequency, syllable count, word senses, and synonyms associated with the target word. His SVM achieved a higher recall over its precision. This indicated that his SVM was good at identifying complex words, yet often missclassified non-complex words as being complex. It was subsequently prone to the word boundary misclassification problem that is associated with binary CWI systems (Section 4.2.1). Kuru [86] was interested in the use of Glove word embeddings [124] for capturing the contextual information of a target word. Building on Jauhar and Specia [77]'s bag-of-words approach in extracting contextual information, Kuru [86] investigated how effective Glove word-embeddings, or vectors representations, were at CWI when used as features. They trained two SVM models, referred to as AIKU native and AIKU native1, which they submitted to CWI-2016 [115]. The first model: AIKU native, was trained on the "word embedding of the target word and its substrings as features" [86]. The second model: AIKU native1, was trained on the word embedding of the target word, its substrings, as well as the embeddings of the target word's neighbouring words. They discovered that both of their models performed equally well having attained matching G-scores of 0.545 at CWI-2016 [115]. This led Kuru [86] to conclude that contextual information, such as a target word's neighbouring words, was not a useful feature in improving the CWI performance of a SVM model. Sanjay et al. [140] experimented with Word2vec word embeddings alongside statistical, POS-tag, and similarity features. They trained four SVM models. Their first model was trained on Word2vec word embeddings. Their second model was trained on Word2vec word embeddings, word length, number of syllables, ambiguity count, and frequency. Their third model was trained on Word2vec word embeddings and the similarities between the target word and its neighbouring words. Their fourth model was trained on all of the above features, taking into consideration word embeddings, along with statistical and contextual features. The fourth model was found to be the best. Submitted as AmritaCEN (w2vecSim) to CWI-2016, it achieved a F1-score of 0.109 and a G-score of 0.547 [115, 140]. 
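The SVM systems above share a common recipe: extract a small set of hand-crafted lexical features for each target word and feed them to a margin-based classifier. A minimal sketch of that recipe is given below, assuming scikit-learn; the training words, frequency values, and feature set are illustrative and do not reproduce any of the cited systems.

```python
import re
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy training data: (word, frequency per million, binary complexity label).
TRAIN = [
    ("folly", 2.1, 1), ("dignity", 14.0, 1), ("set", 880.0, 0),
    ("great", 650.0, 0), ("recklessness", 0.8, 1), ("is", 9000.0, 0),
]

def features(word: str, freq: float) -> list:
    """Simple statistical features: length, syllable estimate, frequency."""
    syllables = max(1, len(re.findall(r"[aeiouy]+", word.lower())))
    return [len(word), syllables, freq]

X = [features(w, f) for w, f, _ in TRAIN]
y = [label for _, _, label in TRAIN]

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)

# Predict complexity labels for unseen words (frequencies are invented).
print(clf.predict([features("foolishness", 1.5), features("in", 8000.0)]))
```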
It comes as no surprise that, given their reliance on word embeddings and contextual information, Sanjay et al. [140]'s AmritaCEN (w2vecSim) and Kuru [86]'s AIKU (native1) have both achieved similar performances. However, an interesting observation is that Sanjay et al. [140]'s fourth model with the addition of POS-tags: AmritaCEN (w2vecSimPos), performed less well. This would suggest that POS-tags are less important for CWI than previously theorized. This is supported by the performance of POS-tags as a feature for LCP as shown by Desai et al. [51].

#### 7.1.2. **Decision Trees.** DTs make predictions based on a set of learned sequential or hierarchical rules housed in decision nodes, or leaves. They apply a top-down approach, filtering labeled data through various decision nodes, or branches, until that data is separated as accurately as possible according to class. As such, DTs are often found to surpass the performance of SVMs at LCP [115]. This may be due to DTs being better suited to dealing with features that overlap between classes, given their reliance on learned rules rather than prototypical features, such as support vectors. Throughout CWI-2016, as detailed in Section 6.1, the most common and arguably the most successful CWI systems consisted of either a DT or a Random Forest (RF) model [115]. This marked LCP's transition to DTs and RFs. These models maintained state-of-the-art status until LCP-2021 [147]. This is partly due to these models being trained on a greater number of varied and unique features related to lexical complexity. The use of these additional features was inspired by Shardlow [142], Jauhar and Specia [77], and others' success at surpassing previous baseline performances. It is also partly due to the use of DTs and RFs within ensemble-based models; this is described in greater detail within Section 7.2. Choubey and Pateria [38] investigated the performance of both an SVM and a DT at CWI. They discovered that their "SVM seemed to be less effective for CWI" [38, 115]. Their SVM attained an F1-score of 0.179 and a G-score of 0.508, whereas their DT produced an F1-score of 0.181 and a G-score of 0.529 [38]. They reasoned that their SVM's slightly worse performance was due to it having "overlapping decision boundaries" [38]. Again, this refers to the decision boundary misclassification problem that is commonly faced by CWI systems (Section 4.2.1). The systems submitted by Quijada and Medero [129], referred to as team HMC, were among the top performing systems at CWI-2016 [115, 129]. One of HMC's systems consisted of a DT, known as HMC-DecisionTree25, whereas another consisted of a regression tree (RT), named HMC-RegressionTree05. These models outperformed their SVM counterpart, with the DT model achieving an F1-score of 0.298 and a G-score of 0.765. Both models were set to have a maximum depth of three, meaning that only three decision nodes, or rules, were learned. These rules were learned from several inputted features. These features belonged to two main categories: statistical, and psycholinguistic 9. Their statistical features included unigram and lemma frequencies, word, stem and lemma length, probability of a word's character sequence, and lastly, number of synsets, whereas their psycholinguistic features included age-of-acquisition, perceived word concreteness and the number of differing pronunciations associated with a target word [129].
They claimed that their models' success was due to their use of corpus-based features, especially their use of unigram and lemma frequencies. #### 7.1.3. **Random Forests** RFs consist of multiple DTs. Each DT is trained on a random subset of the training data. From their limited input, each DT then learns a sequence of hierarchical rules for classification. A RF's final output is generated through a plurality voting system. Since each DT only observes a small fraction of the training data, it results in RFs being less prone to overfitting. Each DT learns to distinguish its inputted classes without making sweeping generalizations across the entire dataset. This means that each DT becomes specialized at identifying the distinguishing features of its limited input. Pooling these DTs together subsequently makes for a RF that is more adaptable to unseen data than a stand-alone DT. A RF is, therefore, better suited at dealing with a large dataset with a large number of features compared to a single DT. Ronzano et al. (Ronzano et al., 2019) submitted a RF to CWI-2016 that outperformed other DT models (Kosseim et al., 2018). Their RF, referred to as TALN (RandomForest_WEI), was taken from the Weka machine learning framework (Kosseim et al., 2018). Being a RF, it consisted of several DTs trained on multiple features, many of which being similar to the features used by the two HMC systems (Kosseim et al., 2018). However, like other models submitted to CWI-2016, additional features were also exploited, such as contextual features (Kosseim et al., 2018). These contextual features took into consideration the position of the target word within a sentence, the number of tokens within that sentence, and the frequencies of both the target word and its context words within the British National Corpus (BNC) (Kosseim et al., 2018; Kosseim et al., 2018) and the 2014 English Wikipedia Corpus (Kosseim et al., 2018) 10. The use of such contextual features, together with its RF architecture, may explain TALN's superior performance in comparison to HMC's DT and RT models (Kosseim et al., 2018). TALN (RandomForest_WEI) achieved an F1-score of 0.268 and a G-score of 0.772. This was respectively -0.02 less than the F1-score and +0.006 better than the G-score achieved by the best performing HMC system (Kosseim et al., 2018; Ronzano et al., 2019). Footnote 10: The presence of low or high frequency context words was believed to be an indicator of a target word’s degree of complexity. If on average, a target word was surrounded by more highly frequent context words, then that target word was believed to be non-complex, whereas if it were surrounded by less frequent words, then that target word was believed to be complex. Zampieri et al. (Zampieri et al., 2019) created a CWI system, referred to as MACSAAR (RFC), with a particular focus on Zipfian features. Zipf's Law implies that words that appear less frequently within a text are longer and as a result are likely to be considered more complex than words that are more frequent and shorter (Kosseim et al., 2018; Kosseim et al., 2018). To test this assumption, they trained a SVM, RF, and nearest neighbor classifier (NNC) using a variety of Zipfian features. These features included word frequency, word and sentence length, and the sum probabilities of the character trigrams belonging to the target word or to the sentence. Their RF model was their best performing model. 
It attained a F1-score of 0.270 and a G-score of 0.754 at CWI-2016 (Kosseim et al., 2018) giving it a greater F1-score of +0.002, yet an inferior G-score of -0.018 compared to TALN (Ronzano et al., 2019). Per their model's performance, Zampieri et al. (Zampieri et al., 2019) concluded that Zipfian features are good baseline indicators of lexical complexity. Davoodi and Kosseim (Davoodi and Kosseim, 2019) experimented with several models for CWI-2016 (Kosseim et al., 2018). These models were a naive bayes, a neural network, a DT, and a RF. Their best performing model was their RF, referred to as CLacEDLK (CLacEDLK-RF_0.6). This model was trained on several features. Davoodi and Kosseim (Davoodi and Kosseim, 2019) had a particular interest in psycholinguistic features, namely abstractness. They believed there existed a correlation between "the degree of abstractness of a word and its perceived complexity"11(Davoodi and Kosseim, 2019). They developed two RF models. Their first RF had a threshold of 0.5, whereas their second RF had a threshold of 0.6. This meant that for a target word to be classified as being complex, these RFs' sub-DTs' output would have on average a complexity value above 0.5 for their first RF and above 0.6 for their second RF. Their second RF was found to outperform their first by a G-score of +0.028. As such, having a higher threshold for complexity assignment would appear to improve CWI performance. #### 7.1.4. What are the best linguistic features for predicting lexical complexity? The SVMs, DTs, and RFs described above have all so far utilized a common set of features that can be separated into four categories: statistical, morpho-syntactic, psycholinguistics, and contextual. Work by Desai et al. [51], Tack [161], as well as Shardlow et al. [148] have since demonstrated that such statistical features, such as word length, word frequency and syllable count, psycholinguistic features, including prevalence (average familiarity), age-of-acquisition, and concreteness, together with contextual features, the likes of character or word-level n-grams, continue to be good predictors of lexical complexity. Recently, Desai et al. [51] went as far as to rank the effectiveness of several features using a RF trained on the CompLex dataset [146] (Section 6.4). They discovered that prevalence, age-of-acquisition, and concreteness achieved the first, second, and third best performances respectively and POS-tags and prior complexity labels achieved the worst performances. However, apart from the use of character-level bigrams, Desai et al. [51] failed to investigate the effect contextual features would have had on their model's performance. Contextual features have also been exploited in ensemble-based models, neural networks, and state-of-the-art transformers. The impact of these models' use of contextual features is discussed in Section 7.4.1. ### Ensemble-based Models A RF is an ensemble-based model. An ensemble-based model is any model that is made up of multiple sub-models and that produces a final output through some form of plurality voting. These sub-models can be of the same type, as is the case for an RF, or of differing types. The main advantages of ensemble-based models are brought about through their diversity. An ensemble-base model can utilize the strengths of various models, be it either SVMs, DTs, RFs, neural networks, or even transformers, whilst simultaneously mitigating the disadvantages associated with using only one type of model. 
These diversity advantages have made ensemble-based models state-of-the-art for LCP. However, throughout the years, differing combinations of sub-models have been used. From CWI-2016 [115] to CWI-2018 [185], the best performing ensemble-based models consisted of a combination of DTs, RFs, or neural networks. Since LCP-2021, this has changed. State-of-the-art ensemble-based models now consist of various transformers (Section 7.3.1).

Malmasi and Zampieri [100] built upon the use of multiple DTs, hence a RF, for binary CWI. They adopted a meta-classifier architecture. A meta-classifier architecture is a unique type of ensemble-based model. It "is generally composed of an ensemble of base classifiers that each make predictions for all of the inputted data" [100]. These base classifiers then input their output into a second set of classifiers. This second set of classifiers, or meta-classifiers, take as features the output of the first set of base classifiers. They then produce their own output through "a plurality voting process" [100]. Malmasi and Zampieri [100] submitted two ensemble-based models to CWI-2016: MAZA A and MAZA B [115]. Both of these models' base classifiers were decision stumps, which differ from DTs in that they are trained on a single feature and subsequently only have one decision node, thus giving them the appearance of a tree stump rather than of an entire tree. Bootstrap aggregation was then applied to the output of each decision stump. This bagged output was then inputted into a second level of meta-classifiers consisting of "200 bagged decision trees" [100]. MAZA B was trained using additional contextual features that were not utilized by MAZA A [100]. These contextual features were also different from those used by other aforementioned systems. Together with word frequencies, MAZA B also incorporated two types of probability scores as contextual features. The first were conditional probabilities: the probability of a target word appearing next to its neighbouring one or two words. The second were joint probabilities: the probability of a target word occurring in conjunction with its surrounding words within a sentence. As such, MAZA B was found to outperform MAZA A. It achieved an F1-score 0.116 greater than that of MAZA A [100, 115]. Malmasi and Zampieri [100] attributed this superior performance to MAZA B's use of contextual features, highlighting the degree to which they believed context influences a word's perceived level of complexity12.

Footnote 12: Malmasi and Zampieri [100] would appear to contradict Kuru [86], as Kuru [86] found context to be uninfluential on his SVM's performance. Early LCP research debated the importance of context. However, context is now more firmly believed to be an influential factor within current LCP literature [112, 183] (See Section 7.4.1 for further details).

Choubey and Pateria [38] constructed two ensemble-based models for CWI-2016 [115]. The first, referred to as GARUDA (HSVM&DT), had a meta-classifier architecture which comprised five SVMs and five DTs. In this model, the SVMs were the base classifiers tasked with the binary classification task of CWI. Its second set of meta-classifiers were its DTs. These meta-classifiers identified whether the predictions made by its SVMs were correct or incorrect. Choubey and Pateria [38]'s second ensemble-based model contained twenty SVMs. Unlike their first model, their second model did not employ meta-classifiers.
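The meta-classifier architecture described above, single-feature decision stumps feeding a second layer of bagged decision trees, can be sketched with scikit-learn's stacking utilities. This is only a structural illustration with toy data, not a reproduction of MAZA A/B or GARUDA.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, BaggingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.tree import DecisionTreeClassifier

def stump_on(col):
    """A decision stump (depth-1 tree) trained on a single feature column."""
    pick = FunctionTransformer(lambda X, col=col: np.asarray(X)[:, [col]])
    return make_pipeline(pick, DecisionTreeClassifier(max_depth=1))

base = [("length", stump_on(0)), ("freq", stump_on(1)), ("cond_prob", stump_on(2))]
meta = BaggingClassifier(DecisionTreeClassifier(), n_estimators=200, random_state=0)
model = StackingClassifier(estimators=base, final_estimator=meta, cv=3)

# Toy features: [word length, frequency, conditional probability given neighbours]
X = np.array([[4, 500.0, 0.30], [12, 1.0, 0.01], [5, 320.0, 0.25],
              [14, 0.4, 0.02], [6, 210.0, 0.22], [13, 0.9, 0.015]])
y = np.array([0, 1, 0, 1, 0, 1])
model.fit(X, y)
print(model.predict(np.array([[11, 2.0, 0.03]])))
```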
In this second model, each of the 20 SVMs was instead tasked with predicting the labels of the entire training set. The best performing SVMs then had the most impact in calculating the model's final output labels through a performance-oriented voting system. Interestingly, their first ensemble-based model was found to perform worse than individual SVM or DT models, whereas their second ensemble-based model achieved average performance. They blamed this poor performance on the "overlapping decision boundaries" [38] of their SVM sub-models. This once again demonstrates the inferiority of SVMs for CWI compared to other models.

The SV000gg systems, created by Paetzold and Specia [115], were the best performing systems submitted to CWI-2016 [115, 116]. Paetzold and Specia [115] adopted ensemble-based models that utilized a variety of sub-models. They believed that model diversity would result in greater CWI performance. They experimented with ensemble-based models whose sub-models ranged from a lexicon-based model and a threshold-based model to SVMs, DTs, RFs, and other machine learning classifiers. Their lexicon-based model identified whether a target word was a complex or a non-complex word by searching for that word within a given dictionary of pre-labeled lexemes. Their threshold-based model separated complex and non-complex words by checking whether a target word exceeded a certain threshold on a particular feature that was found to be a defining characteristic of that word type; see Section 8.2 for more information regarding lexicon-based and threshold-based approaches to predicting lexical complexity. The predictions made by their diverse set of sub-models were counted and then used to determine the system's final output through hard or soft voting. As such, there were two versions of the SV000gg system: Hard SV000gg and Soft SV000gg. Hard SV000gg used hard voting to produce the final output label by counting how many times in total the target word was labeled as being either complex or non-complex by all of its contained sub-models. Soft SV000gg used a form of performance-oriented soft voting. Traditional soft voting generates a summed confidence estimate of how likely a target word is to belong to a particular class. The final label assigned to this word is then derived from this summed confidence estimate. Performance-oriented soft voting determines the final label of a target word by examining the performances of each sub-model "over a certain validation set such as precision, recall, and accuracy" [116]. The most common label produced by the sub-models with the highest overall performance is then chosen as the final output label. Soft SV000gg achieved the best performance with an F1-score of 0.246 and a G-score of 0.774. Hard SV000gg attained slightly worse F1- and G-scores of 0.235 and 0.773 respectively. However, Hard SV000gg still outperformed all of the other systems submitted to CWI-2016 in regards to its G-score, including those mentioned above [115]. As a result, both models demonstrated the superiority of diverse ensemble-based models for binary CWI in comparison to other models.

Gooding and Kochmar [64] were inspired by the performance of prior ensemble-based models at CWI-2016 [115]. Their system, referred to as Camb, ranked first on both of CWI-2018's sub-tasks: binary CWI and probabilistic complexity prediction, when dealing with English monolingual data (Section 6) [185].
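The contrast between hard and soft voting can be illustrated with scikit-learn's VotingClassifier, using generic sub-models and toy features. Note that SV000gg's soft variant was performance-oriented rather than probability-based, so this sketch only approximates the idea.

```python
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Toy [word length, corpus frequency] features with invented binary labels.
X = [[4, 500.0], [12, 1.0], [5, 320.0], [14, 0.4], [6, 90.0], [13, 0.9]]
y = [0, 1, 0, 1, 0, 1]

subs = [("lr", LogisticRegression()),
        ("dt", DecisionTreeClassifier(max_depth=2)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0))]

hard = VotingClassifier(subs, voting="hard").fit(X, y)  # majority of predicted labels
soft = VotingClassifier(subs, voting="soft").fit(X, y)  # argmax of summed probabilities

print(hard.predict([[11, 2.0]]), soft.predict([[11, 2.0]]))
```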
Camb used a boosting classifier, AdaBoost, with 5000 estimators, followed by a RF bootstrap aggregation model [64]. They experimented with differing sub-models, each trained on a set of given features similar to those used by prior CWI systems [115]. They concluded that an ensemble-based model that combines both AdaBoost and a RF with equal weights consistently produced the best performance [64]. Aroyehun et al. [14] experimented with the tree learner model provided by KNIME [21], along with other combinations of DTs, RFs, and gradient boosted tree learners, for CWI-2018's sub-task 2: probabilistic complexity prediction [14, 115]. They found that their KNIME tree learner model obtained good results when set to contain 600 models. It achieved a mean macro F1-score of 0.818 across the three datasets provided by CWI-2018 (Section 6). Therefore, Gooding and Kochmar [64] and Aroyehun et al. [14] have demonstrated that ensemble-based models achieve good performance at binary as well as probabilistic complexity prediction.

### Neural Networks

Deep learning is highly popular within NLP and Computational Linguistics, having achieved state-of-the-art performance in various NLP-related tasks [54, 180]. Neural networks attempt to mimic human learning by artificially replicating the neuroplasticity of the human brain. They achieve this by manipulating weight values (synaptic strength) between nodes (neurons) that contain characteristic information, or learned features, related to the input (or environmental experience, as is the case with the human brain). These weight values are adjusted through a loss function applied after each epoch, or iteration. This process is repeated until these weight values are fully optimized and the optimum output is produced. Neural networks can be either supervised or unsupervised. This means that they can learn such characteristic information, or features associated with a complex word, independently. However, within LCP research, neural networks have consistently under-performed in comparison to other, more traditional feature engineered models, such as DTs or RFs. This is especially true when such traditional models have been combined within ensemble-based models [115, 185]. It was not until the introduction of continuous complexity prediction in the form of probabilistic complexity (Section 4.3) that some neural networks were shown to perform well, and on occasion on par with more traditional models [14, 185].

Gillin [60] was one of the first to investigate the performance of a recurrent neural network (RNN) at binary CWI. Within their RNN, they included a gated recurrent unit (GRU). A GRU is designed to safeguard against the vanishing gradient problem. The vanishing gradient problem arises during back-propagation, when the neural network adjusts its weights in accordance with the loss on its current prediction. It refers to the gradient of the loss becoming excessively small over time, thus inhibiting the weight values of earlier nodes from being accurately updated [60, 61]. This impairs a neural network's ability to retain information learned at earlier stages. A GRU counters this problem by acting as a "memory" device [60]. It controls what new information should be learned, what prior information should be remembered, and what previous information should be forgotten when updating a weight value. Gillin [60] created an RNN model with a GRU as well as an ensemble-based model with a meta-classifier architecture.
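For concreteness, a minimal PyTorch sketch of a GRU-based binary complexity classifier is given below. It is purely illustrative, with an assumed character-level input, and does not reproduce Gillin's Sensible systems.

```python
import torch
import torch.nn as nn

class GRUWordClassifier(nn.Module):
    """Toy GRU that scores a target word (as a character sequence) for complexity."""
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, char_ids):                       # char_ids: (batch, seq_len)
        _, h_n = self.gru(self.embed(char_ids))        # h_n: (1, batch, hidden_dim)
        return torch.sigmoid(self.out(h_n.squeeze(0))) # (batch, 1) complexity prob.

# Encode a word as ASCII code points and score it (untrained, so output is arbitrary).
word = torch.tensor([[ord(c) for c in "folly"]])
model = GRUWordClassifier()
print(model(word))
```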
Gillin's ensemble-based model, referred to as Sensible (Combined), was built up of five RNNs as base classifiers and a single RF as a meta-classifier. Out of all of the neural network models submitted to CWI-2016, their RNN model with a GRU, referred to as Sensible (baseline), achieved the best performance [60, 115]. Nevertheless, in comparison to other, more traditional models, Sensible (baseline) performed poorly. It attained an F1-score of 0.140 and a G-score of 0.646. Gillin [60] claimed it was the small size of CWI-2016's training set that caused their RNN model to perform less well than expected (Section 6.1). Aroyehun et al. [14] were the first to experiment with a convolutional neural network (CNN) for binary CWI. A CNN is different from a RNN. It contains an additional convolutional layer that takes as input the output of its first layer and then transforms said input before passing it onto a further layer. However, CNN models lack the temporal capabilities of an RNN with an embedded GRU. Regardless of this limitation, the CNN introduced by Aroyehun et al. (2018), referred to as NLP-CIC-CNN, slightly outperformed their ensemble-based model, consisting of various KNIME tree learners, on one out of the three datasets provided by CWI-2018 (Kurakin et al., 2018) (Section 7.2). It attained a macro F1-score of 0.855 and an accuracy rating of 0.863. This surpassed the macro F1-score and accuracy achieved by their ensemble-based model by +0.003 and +0.004 respectively.

Hartmann and dos Santos (2018) compared models that adopted feature engineering to neural networks at CWI-2018 (Kurakin et al., 2018). They trained a variety of models, such as DTs, Gradient Boosting, Extra Trees, AdaBoost and XGBoost methods, on numerous features, including statistical features such as word length, number of syllables, numbers of senses, hypernyms and hyponyms, along with n-gram log probabilities; again, these were similar to the features previously used by prior CWI systems (see Tables 9 & 10). These models were compared to a shallow neural network that used word embeddings, and a Long Short-Term Memory (LSTM) language model capable of handling the vanishing gradient problem through its use of a forget gate along with an additive gradient structure, paralleling the use of a GRU. For binary CWI (Kurakin et al., 2018), Hartmann and dos Santos (2018)'s feature engineered XGBoost model outperformed their neural network models. It attained an F1-score of 0.8606, whereas their shallow neural network and LSTM models achieved F1-scores of 0.8467 and 0.8173 respectively. Nevertheless, for CWI-2018's second sub-task of probabilistic complexity prediction, their LSTM model, referred to as NILC, was superior to all of the other models, having achieved an F1-score of 0.588. Their feature engineered XGBoost model and their shallow neural network model achieved less impressive F1-scores of 0.2978 and 0.2958 respectively. Both Aroyehun et al. (2018) and Hartmann and dos Santos (2018), therefore, proved the viability of using neural networks for probabilistic complexity prediction.

#### 7.3.1. **Transformers**

The best performing systems of LCP-2021 (Kurakin et al., 2018) used transformer-based models. Transformer-based models were introduced to overcome the limitations associated with prior neural networks, such as RNNs and LSTM models (Kurakin et al., 2018; Li et al., 2018; Vaswani et al., 2017).
Vaswani et al. (2017) outline several advantages of transformers, namely their self-attention mechanism and their ability to more effectively capture long-term dependencies. Just Blue, by Yaseen et al. (2018), achieved the highest Pearson's Correlation (0.7886) at LCP-2021's sub-task 1 (Kurakin et al., 2018). It was inspired by the prior state-of-the-art performance of ensemble-based models together with the recent headway made by transformers in various NLP-related tasks (Kurakin et al., 2018). Just Blue consisted of an ensemble of BERT (Kurakin et al., 2018) and RoBERTa (Kurakin et al., 2018) transformers. This system contained two BERT models as well as two RoBERTa models. BERT1 and RoBERTa1 were fed target words, whereas BERT2 and RoBERTa2 were fed the target words' corresponding sentences, hence context. These models then predicted the lexical complexities of their inputted target words or sentences, and their outputted complexity values were combined by weighted averaging. The word-level models had a weight of 80% and the sentence-level models had a weight of 20%. This meant that the complexity of target words was considered to be more important than the complexity of their surrounding words. However, each sentence was still taken into consideration when calculating the weighted average, as prior studies have shown context to be an influential factor in continuous complexity prediction (Kurakin et al., 2018; Vaswani et al., 2017). Once a weighted average was returned by each set of models, BERT and RoBERTa, Just Blue's final output was produced as a simple average of these returned weighted averages. Yaseen et al. (2018) experimented with different models as well as different weight splits between their target word and sentence level inputs. They discovered that, among SVM, RF, BERT, and RoBERTa models, along with a BERT and RoBERTa hybrid model, the BERT and RoBERTa hybrid achieved the highest performance. They also found that, between a 90/10, an 80/20, and a 70/30 split between target word and sentence level input, an 80/20 weight split in favor of the target word produced the most accurate complexity values. As such, Just Blue's success is likely a result of its diverse ensemble of varying models, as well as its use of, but not over-reliance on, a target word's context.

DeepBlueAI, developed by Pan et al. [123], achieved second place at LCP-2021's sub-task 1 and first place at sub-task 2 [123; 147]. It attained a Pearson's Correlation of 0.7882 for sub-task 1 and a Pearson's Correlation of 0.8612 for sub-task 2. It used a variety of pre-trained language models, such as the transformers BERT [52], RoBERTa [94], ALBERT [87], and ERNIE [196]. DeepBlueAI was subsequently an ensemble-based model that used model stacking with five layers. All of its aforementioned transformers were utilized within its first layer. Its second layer then adjusted the transformers' hyperparameters. It manipulated dropout, the number of hidden layers, and the loss function. The third layer then conducted 7-fold cross-validation to check for overfitting or selection bias, with the fourth layer having adopted training strategies such as data augmentation and pseudo-labelling. Data augmentation is the training strategy of adding new data to a training set by copying and slightly modifying existing data; in this instance, data from CWI-2018 was used, and for sub-task 2, data from sub-task 1 was used after having gone through "synonym replacement, random insertion, random swap, and random deletion" [123; 176].
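Two of the quoted augmentation operations, random swap and random deletion, can be sketched as simple token-level edits. This is an illustrative approximation rather than DeepBlueAI's exact procedure; synonym replacement would additionally require a thesaurus or lexical database.

```python
import random

def random_swap(tokens, n=1):
    """Swap two randomly chosen token positions, n times."""
    tokens = list(tokens)
    for _ in range(n):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Drop each token with probability p, keeping at least one token."""
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

sentence = "the committee reached a unanimous verdict".split()
print(random_swap(sentence))
print(random_deletion(sentence, p=0.2))
```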
Pseudo-labelling is the training strategy of predicting labels for unlabeled data and then adding the newly labeled data back into the training set. The fifth layer contained DeepBlueAI's final estimator in the form of a simple linear regression model. This estimator returned the final predicted complexity values (\(\hat{y}\)) through the following equation (Equation 9): \[\hat{y}=\sum_{j=1}^{N}W_{j}\hat{y}_{j} \tag{9}\] where \(N\) is the total number of transformers with different hyperparameters, \(W_{j}\) is the weight of each transformer, and \(\hat{y}_{j}\) is each transformer's predicted complexity value. Pan et al. [123] attributed their model's good performance in both sub-tasks to its use of multiple transformers and training strategies. With model diversity also being an influential factor in Just Blue's high performance [183], it would appear that current state-of-the-art LCP systems consist of an ensemble of differing transformer-based models.

RG_PA, created by Rao et al. [130], was the second highest performing system at LCP-2021's sub-task 2, having achieved a Pearson's Correlation of 0.8575 [130; 147]. Unlike Just Blue [183] and DeepBlueAI [123], it did not contain an ensemble of diverse transformers. Instead, RG_PA consisted of a single RoBERTa attention-based model. It used Byte-Pair Encoding (BPE) to first tokenize all of its inputted sentences. BPE compresses a given sentence so that its most frequent character pairs, or bytes, are replaced with a single character. This shortens the inputted sentence into a sequence of character representations that help to mitigate the out-of-vocabulary problem 13. Each of their RoBERTa's hidden layers applied token pooling, which creates a vector representation of a target word based on the average of all of the token embeddings of that target word found throughout the training set. The attention weight between the target vector and context tokens, i.e. context words, is then calculated, and the returned context vector is concatenated with the target vector. The concatenated vector representation of each target word is then used to predict the complexity values of the unseen words within the test set. Its use of BPE, together with its use of concatenated context and target word vectors, may explain RG_PA's high performance in sub-task 2, despite it not being an ensemble-based model.

Footnote 13: The out-of-vocabulary problem refers to the problem that arises when a model is presented with a word that was not observed within its training set.

### Other State-of-the-Art Models

The third best performing system at LCP-2021's sub-task 1 deviated from the use of transformer-based models [108]. Mosquera [108] took a more traditional feature engineering approach to sub-task 1. Much like prior CWI systems, Mosquera [108] utilized a combination of lexical, contextual, and semantic features (Section 4.2). However, unlike previous CWI systems, these features were extensive, with 51 features in total being used to rate lexical complexity. These features included SUBTLEX features, word etymology, and several readability indices. SUBTLEX features are those features that are embedded within film subtitles, such as the number of films whose subtitles depict the word in lowercase, target word frequency per million subtitled words, as well as the percentage of films where the target word appeared within the SUBTLEX-US corpus (Kriz et al., 2017).
Features related to a word's etymology included the number of Greek or Latin affixes belonging to the target word, and readability index features included the Flesch score.

## 8. Use Cases and Applications

LCP has many potential use cases and applications [115; 147; 185]. LCP systems can be utilized within a variety of assistive technologies, such as computer-assisted language learning (CALL) applications or intelligent tutoring systems (ITSs), to improve the readability of given texts (Section 8.1). This is most often achieved by implementing text simplification (TS) that benefits from a LCP component.

### Improving Readability

CALL is the use of any computer-related technology, be it a word processing document, social media, or another online medium, for language learning. ITSs are "computer learning environments designed to help students master difficult knowledge and skills" [68]. CALL applications subsequently include ITSs that specialize in language learning and have been found to improve second language (L2) acquisition [166]. These applications include multiple designs, are based on differing pedagogical practices, and allow for varying degrees of learner-computer interaction [8]. A common approach among CALL applications is to simplify a text to make it more accessible for the L2 learner [191; 133]. Alhawiti [7] states that TS can be beneficial to language learners and, therefore, an ITS or CALL application that incorporated TS would likewise be beneficial. This is since TS has been found to increase literacy [125] as well as advance the vocabulary development of L2 learners [162; 133]. Rets and Rogaten [133] tested 37 participants on their ability to memorize and process the ideas presented within two texts: (1) an authentic text, and (2) a simplified text with less complex vocabulary and syntax. Memory was measured by asking the participants to rewrite the observed texts, whereas text processing was gauged through the use of eye tracking. Participants were found to achieve greater memorization and were shown to fixate less on the simplified text than on the authentic text. This led Rets and Rogaten [133] to conclude that TS results in better textual comprehension, which correlates with a greater learning potential [128; 133]. ITSs that use TS are not restricted to aiding L2 learners. TS improves the readability of texts and thus enhances the literacy development of other target demographics. TS may help an ITS designed for people diagnosed with autism "by reducing the amount of figurative expressions in a text" [151]. It may also increase the effectiveness of ITSs created for people with dyslexia or aphasia. This is by replacing long words with short words, or substituting words with challenging character combinations for those which are easier to identify [131; 35]. ITSs developed for children may likewise use TS in order to reduce the amount of high-level jargon, or uncommon words, within a text [49]. TS is, therefore, useful in improving the vocabulary and literacy development of L2 learners [7], people with autism [151], dyslexia [131; 35], or aphasia [35], as well as children [49]. However, Crossley et al. [46] present arguments for and against the use of simplified texts within L2 classrooms, with Gooding [63] pointing out that the usefulness of simplified texts may vary between target demographics and in some instances may be inferior to alternative reading strategies.
Despite this, throughout the years TS systems have assessed lexical complexity in a number of ways.

### LCP's Place in the Text Simplification Pipeline

Prior to the LCP systems outlined within Section 7, TS assessed lexical complexity through several approaches: (1) a simplify everything approach, (2) a threshold-based approach, and (3) a lexicon-based approach [120]. However, each of these approaches had limitations that led to the development of more dynamic LCP systems.

#### 8.2.1. **Simplify Everything.**

The simplify everything approach simplified all of the words within a given text [53]. This approach subsequently had no means of identifying complex words. Instead, systems that adopted this approach often used a form of comparative complexity prediction to compare and find the most suitable word replacements for every single word within a provided text. A disadvantage of this approach is that not all words are in need of simplification [144]. As such, systems that adopted this approach often simplified already easy-to-understand words into equally easy-to-understand alternatives that were not as well suited as the original word for that particular context (Sutskever et al., 2017; Krizhevsky et al., 2018). The simplify everything approach was therefore found to produce ungrammatical and nonsensical simplifications.

#### 8.2.2. **Threshold-Based**

Threshold-based approaches required the presence of a feature over a set value in order for a target word to be identified as complex. Systems that adopted this approach often used a single feature-threshold, such as having \(x\) number of characters, or \(x\) frequency in a certain corpus, as a means of gauging the complexity of a target word (Krizhevsky et al., 2018). However, this approach was found to be insufficient for identifying all instances of complex words within a given text. For instance, Bott et al. (Bott et al., 2017) and Shardlow (Shardlow, 2018) discovered that using word length as a standalone feature-threshold failed to identify complex words which were uncharacteristically short, whilst incorrectly classifying simple words that were over 5 characters long. An example is incorrectly classifying _folly_ as non-complex yet _foolishness_ as complex, since the former may be considered a short word and the latter a long word. As such, reliance on a single, or sometimes multiple, feature-thresholds lost popularity as an accurate means of assessing lexical complexity.

#### 8.2.3. **Lexicon-Based**

Lexicon-based approaches utilized a predefined list of words as a means of distinguishing between complex and non-complex words within a given text (Krizhevsky et al., 2018). Systems that adopt lexicon-based approaches are often found to perform well in identifying complex words for their intended target demographic or domain. However, when identifying complex words for individuals outside of their intended target population or domain, lexicon-based approaches perform less well (Krizhevsky et al., 2018). For example, FACILITA (Shardlow, 2018) is designed to distinguish between Portuguese complex and non-complex words for Brazilian children using three dictionaries: (1) one consisting of frequent words extracted from Brazilian newspapers, (2) one containing concrete words, and (3) one housing simple words that were believed to be "common to youngsters" (Krizhevsky et al., 2018). FACILITA is very effective in helping young low literacy readers in Brazil.
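A minimal sketch contrasting the threshold-based and lexicon-based approaches described above; the length threshold and the toy lexicon are invented for illustration.

```python
SIMPLE_LEXICON = {"ship", "shelter", "house", "foolish"}   # toy pre-labelled simple words

def threshold_cwi(word, max_len=5):
    """Threshold-based CWI: label a word complex if it has more than max_len characters."""
    return len(word) > max_len

def lexicon_cwi(word):
    """Lexicon-based CWI: label a word complex unless it appears in the simple-word lexicon."""
    return word.lower() not in SIMPLE_LEXICON

for w in ["folly", "foolishness"]:
    print(w, "threshold:", threshold_cwi(w), "lexicon:", lexicon_cwi(w))
# The length threshold labels "folly" non-complex and "foolishness" complex,
# mirroring the failure case noted in Section 8.2.2, whereas the toy lexicon
# can flag the short but absent word "folly" as complex.
```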
However, FACILITA may be less helpful for other demographics, such as second language learners, individuals suffering from a reading disability, or older individuals with low literacy. This is since the words used to make FACILITA's predefined dictionaries may not be considered as easy to understand for these demographics as they were for Brazilian children.

### Other Use Cases

LCP can aid other downstream NLP-related tasks, such as machine translation (Sutskever et al., 2017) and authorship identification (Bauer et al., 2017; Krizhevsky et al., 2018), and is also likely to be beneficial to other downstream tasks in the future. Two alternative use cases of LCP are exemplified in the following sections (8.3.1 to 8.3.2).

#### 8.3.1. **Machine Translation**

Before TS shifted to improving the readability of texts, its primary focus was to aid machine translation (MT) (Bauer et al., 2017). MT is the task of automatically translating a source language into a target language (Sutskever et al., 2017). MT systems are limited by the lack of parallel corpora that contain identical texts in more than one language. MT systems are also hindered by the morpho-syntactic complexities of the languages that they are tasked to translate. Studies have proven that TS can aid MT (Bauer et al., 2017; Krizhevsky et al., 2018; Sutskever et al., 2017). TS achieves this by reducing the ambiguity of the inputted texts in the source language (Sutskever et al., 2017). For instance, by replacing complex words in the source language with simpler alternatives, it increases the probability of an MT system finding a suitable translation in the target language.

\begin{table} \begin{tabular}{c c} \hline \hline \multicolumn{2}{c}{Sentence} \\ Original & A dozen Chinese fishing **boats** had taken **refuge** in a lagoon of Huangyan Island \\ Simplified & A dozen Chinese fishing **ships** had taken **shelter** in a lagoon of Huangyan Island \\ \hline \hline \end{tabular} \end{table} Table 8. Example of a simplified sentence shown in Stajner and Popovič (Sutskever et al., 2017). Target words of interest are in bold.

Stajner and Popovic (170) demonstrated that a TS system that utilized both LS and syntactic simplification components improved the performance of an English-to-Serbian MT system. Their system, being assessed on the adequacy (meaning preservation) and fluency (grammatical correctness) of its output, achieved this by translating simplified sentences rather than translating the original sentences directly. According to Stajner and Popovic (170), the simplified sentence shown in Table 8 resulted in an English-to-Serbian translation that was both easier to understand and more grammatically correct to a group of Serbian annotators than a translation of the original sentence. Without an LCP component, the words _boats_ and _refuge_ may not have been recognized as being complex and, as a consequence, would not have been simplified, resulting in a less accurate translation. This demonstrates that the inclusion of an LCP component within the TS pipeline can improve MT.

#### 8.3.2. **Authorship Identification.**

Authorship identification is the task of identifying the author of a given text (26). A text's vocabulary richness is a common feature used for authorship identification. Vocabulary richness is used to capture an individual's linguistic fingerprint, in other words, their idiolect. It is normally measured through the use of the type-token ratio (TTR).
The TTR is "a simple ratio between the number of types and tokens within a text" (Ross et al., 2017). The TTR, therefore, shows the diversity of a author's vocabulary. It has been used in such situations as helping to differentiate between authors of highly similar texts (Srivastava et al., 2017) as well as to identify the author of online messages (Srivastava et al., 2017). LCP provides an additional measurement of vocabulary richness. Adding to the TTR, it provides an average lexical complexity marker that depicts, on average, how complex the author writes. Average lexical complexity can be inputted into an authorship identification system as a feature that may, in turn, enhance its performance. Tanguy et al. (164) experimented with such a feature, in the form of morphological (lexical) complexity, for the authorship identification of various extracts taken from fictional books. Alternative examples are using lexical complexity to differentiate between authors belonging to different time-periods, authors with different levels of education, or authors of different ages; under the assumption that discrepancies exist between their writing styles. For instance, past literature may contain vocabulary considered to be more archaic and complex than modern literature, individuals with a higher level of education may use more jargon-related and complex words than those with a lower level of education, and adults may use more unfamiliar and less common words than children. ## 9. Resources ### Additional English Datasets and Resources The shared-tasks of CWI-2016 (Srivastava et al., 2017), CWI-2018 (Srivastava et al., 2018), and LCP-2021 (Srivastava et al., 2018), tested participating teams on three datasets that have since contributed significantly to LCP research (Section 6). Nevertheless, these are not the only influential datasets that contain words with lexical complexity ratings. All of the current LCP datasets that deal with English and that are known to the authors' are provided in Table 12 located within the Appendices. Apart from the CWI-2016, CWI-2018, and the CompLex datasets already discussed within Section 6, the remaining datasets are introduced throughout the following sections (9.1.1 to 9.1.4). #### 9.1.1. **Cw Corpus.** The CW Corpus contains 731 complex words in context (Srivastava et al., 2018). It was constructed using wikipedia edits. Edits are often made to Wikipedia entries in order to simplify their vocabulary. Using Wikipedia's edit history, it is possible to see the simplified edit as well as the original text. To determine which of these edits contained true lexical simplifications, Shardlow (Srivastava et al., 2018) looked at the editor's comments for the word "simple", and calculated Tf-idf vector representations to check for lexical discrepancies between the original and simplified texts. Those texts which were found to contain true lexical simplifications, were then subject to a set of further tests to guarantee the validity of the CW corpus. Hamming distance was calculated to ensure that only one word differed between the original and simplified texts. Reality and inequality checks were conducted to make sure that the target words were known yet different English words, and not just variations of the same word. Lastly, non-synonym pairs were discarded and simplified candidate words were verified. Through these series of checks, 731 complex words were provided with context. #### 9.1.2. Horn et al. [74] Horn et al. 
Horn et al. [74] created a corpus of 25,000 simplified word candidates for comparative complexity prediction. They acquired 50 annotators. These annotators were required to live in the US in an attempt to control their English proficiency. They were asked to give a simpler alternative for each target complex word within 500 sentences. They achieved this by using Amazon's Mechanical Turk (MTurk), which is popular for NLP-related tasks [74]. Similar to Shardlow [143], the sentences presented to the annotators were taken from a sentence-aligned Wikipedia corpus. This corpus provided original and simplified Wikipedia entries of the same texts. On average, annotators provided 12 differing simplifications per target word. This makes the corpus introduced by Horn et al. [74] a valuable resource for investigating comparative complexity.

#### 9.1.3. Word Complexity Lexicon

Maddela and Xu [97] recognized the limitations of prior CWI datasets, namely the limitations associated with using binary complexity labels rather than continuous complexity values (Section 4.2.1) [97]. As a response to these limitations, they constructed the Word Complexity Lexicon (WCL). The WCL is a dataset made up of "15,000 English words with word complexity values assessed by human annotators" [97]. These 15,000 words were the most frequent 15,000 words found within the Google 1T Ngram Corpus [28]. Their assigned word complexity values were continuous, since these values were assigned by 11 non-native yet fluent English speakers using a six-point Likert scale. They assigned each word a value between 1 and 6, with 1 denoting that word as being very simple, and 6 defining that word as being very complex. To determine the final complexity value of each word, complexity values were averaged. Complexity values that deviated by more than 2 from the mean of the rest of the ratings were discarded from the final average. This improved the WCL's inter-annotator agreement to 0.64. The remaining disagreements between the annotators were believed to be due to the differing characteristics of their native languages, hence caused by cross-linguistic influence.

#### 9.1.4. Personalized LS Dataset

Lee and Yeung [89] constructed a dataset of 12,000 words for personalized CWI. These words were ranked on a five-point Likert scale. 15 learners of English, who were native Japanese speakers, were tasked with rating the complexity of each of the 12,000 words. The five labels that they could choose from ranged from (1) "never seen the word before" to (5) "absolutely know the word's meaning" [89]. Lee and Yeung [89] converted these multi-labeled ratings into binary labels. They considered words ranked 1 to 4 as being complex, and words ranked 5 as being non-complex. However, their use of a multi-labeled Likert scale means that this dataset can be used for continuous complexity prediction. The 15 annotators chosen for data annotation were split into two groups of English proficiency. Thus, two subsets of the dataset were created: the low English proficiency subset and the high English proficiency subset. The low English proficiency subset was annotated by learners who knew less than 41% of the 12,000 words. The high English proficiency subset was annotated by learners who knew more than 75% of the 12,000 words. As such, the Personalized LS Dataset [89] is an ideal resource for future personalized LCP research.
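Returning to the WCL's aggregation step (Section 9.1.3), the outlier-filtered averaging can be sketched as follows, with invented annotator ratings.

```python
def aggregate_wcl(ratings, max_dev=2.0):
    """Average Likert ratings, discarding any rating that deviates by more than
    max_dev from the mean of the remaining ratings (outlier removal)."""
    kept = []
    for i, r in enumerate(ratings):
        others = ratings[:i] + ratings[i + 1:]
        if abs(r - sum(others) / len(others)) <= max_dev:
            kept.append(r)
    return sum(kept) / len(kept)

# Eleven hypothetical annotators rating a word on the 1-6 scale:
print(aggregate_wcl([2, 2, 3, 2, 1, 2, 3, 2, 2, 6, 2]))   # the lone 6 is discarded -> 2.1
```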
### Lexical Complexity Prediction in Languages Other than English

Since CWI-2018 [185] (Section 6.2), LCP for other languages has begun to receive more attention in the form of monolingual, multilingual, and cross-lingual LCP [55, 184]. Monolingual LCP refers to the task of predicting the complexity values of words in a single language. Multilingual LCP refers to the task of creating a LCP system that can be trained on and used to predict the lexical complexities of multiple languages. Cross-lingual LCP refers to the task of training a LCP system on one or multiple languages and then using that system to predict the lexical complexities of a language previously unseen within the training set.

#### 9.2.1. French, Spanish and German

As previously mentioned in Section 6.2, the CWI-2018 shared-task at BEA (Peng et al., 2018) contained datasets in French, Spanish, and German. It was discovered that systems generally performed well across these languages, with high performance in one language correlating with high performance in another. The organizers of CWI-2018 saw this as evidence in support of cross-lingual LCP (Section 9.2.6). Billami et al. (2018) were interested in the perceived lexical complexity of French words and, as a result, created the ReSyf lexicon. This lexicon contains French synonyms that have been ranked according to their reading difficulty using a SVM ranker trained on the Manulex resource (Soler et al., 2018). Gari Soler et al. (2018) investigated the performance of word embeddings at predicting the lexical complexity of French words. They discovered that word embeddings outperformed statistical features, such as word length, number of phonemes, or log frequency, when these were used in isolation. However, when these statistical features were used in unison, they outperformed word embeddings. Other studies interested in French, such as Tack et al. (2018) and Tack (2018), have already been described in the Personalized Complexity Section (4.4). The ALexS-2020 (Ale et al., 2020) shared-task and its submitted systems (Billami et al., 2018; Sorel et al., 2018; Sorel et al., 2018) sought to predict Spanish lexical complexity and have been introduced within Section 6.3. Merejildo (2018) has since detailed the construction of a Spanish CWI corpus. A group of 40 native-speaking Spanish university students were tasked with identifying which words they believed to be complex within 3,887 academic texts. Merejildo (2018) conducted feature extraction on the identified complex words and found that word length and frequency were common markers of Spanish lexical complexity. Apart from several researchers that have participated in CWI-2018 (Peng et al., 2018) or that have later utilized the CWI-2018 dataset (Billami et al., 2018; Sorel et al., 2018), little stand-alone research has been conducted on German LCP.

#### 9.2.2. **Chinese**

Lee and Yeung (2018) created a SVM designed to identify Chinese complex words. Their monolingual LCP model was then further developed by Yeung and Lee (2018). They tasked eight learners of Chinese to rank 600 Chinese words using a five-point Likert scale. If the annotator assigned a complexity value of 1 to 3, then that word was labeled as complex. If, however, the word was assigned a complexity value of 4 or 5, then that word was labeled as being either challenging or non-complex respectively.
Their SVM classifier was trained on a number of features parallel to those of Lee and Yeung (2018). These were the target word's ranking in a Chinese proficiency test known as the Hanyu Shuiping Kaoshi (Hanyu, 2018), along with word length, word frequency in the Chinese Wikipedia Corpus (Lee and Yeung, 2018), and character and word frequency in the Jinan Corpus of Learner Chinese (Yeung and Lee, 2018). They discovered that their logistic regression models outperformed their prior SVM (Lee and Yeung, 2018). They also found that their model was better at predicting the lexical complexities of their annotators with low Chinese L2 proficiency compared to those with high Chinese L2 proficiency.

#### 9.2.3. **Japanese**

Nishihara and Kajiwara (2011) used a SVM to predict the lexical complexities of Japanese words. They created a new dataset that expanded upon the Japanese Education Vocabulary List (JEV). JEV contains 18,000 Japanese words divided into three levels of difficulty: easy, medium, or difficult. Nishihara and Kajiwara (2011) also rated the complexity of words from Japanese Wikipedia, the Tsukuba Web Corpus (Billami et al., 2018), and the Corpus of Contemporary Written Japanese (Yeung and Lee, 2018). This increased the size of their dataset to 40,605 Japanese words. They trained a monolingual SVM to predict the level of complexity associated with each target word. To achieve this, they used a variety of features that were also used by prior English CWI systems, such as POS tags, character and word frequencies, and word embeddings. However, they discarded other popular features, such as word length, due to the typological and morphological differences between English and Japanese. Unlike English, Japanese "is composed of three types of characters: Hiragana, Katakana, and Kanji" (Nishihara and Kajiwara, 2011). The characters Hiragana and Katakana are considered simple characters, whereas Kanji are ideographic and are therefore considered more difficult to interpret. As such, in Japanese, the proportion of complex to simple characters within a word is a good indicator of a word's complexity. Nishihara and Kajiwara (2011) concluded that the use of such language-specific features was responsible for their model's good performance.

#### 9.2.4. Swedish

Smolenska (2017) experimented with a variety of models for binary CWI: SVM, RF, naive Bayes, gradient boosting, logistic regression, and stochastic gradient descent models. These models were tested on one of two datasets consisting of Swedish words labeled with complexity ratings. The first dataset contained 4,305 Swedish words marked with labels from the Common European Framework of Reference (CEFR). These labels ranged from A1, elementary proficiency, to C2, advanced proficiency. The second dataset consisted of 4,238 manually extracted Swedish words from a variety of dictionaries and textbooks that were also labeled with CEFR ratings. Whilst evaluating the quality of the two datasets, Smolenska (2017) discovered that the second dataset correlated better with the judgements of two human evaluators. Results showed that the RF model achieved the best performance on this dataset, having been trained on a number of features, including morpho-syntactic, contextual, conceptual, and frequency based features.

#### 9.2.5. Multilingual LCP
Sheang (2018) saw the advantages of adopting a feature engineering approach as well as a CNN model for multilingual CWI. As a result, Sheang (2018) developed a semi-supervised CNN model trained on word embeddings and common CWI features, such as word frequency, word length, syllable and vowel count, term frequency, POS tags, syntactic dependency, and stop words. Being trained on the English, Spanish, and German datasets of CWI-2018 (Smolenska, 2017) (Section 6.2), this multilingual model was found to outperform the best performing model of CWI-2018 (Smolenska, 2017) on the Spanish and German datasets. Aprosio et al. (2017) created a LCP system that caters for the native language of the user. As previously discussed within Section 4.4, an annotator's or user's native language influences their perception of lexical complexity through what is known as cross-linguistic influence. As such, their system was designed with the ability to identify the false friends as well as the cognates between the user's native language and the language of the annotated or inputted text. False friends are "those pairs of words in two different languages that are similar in form but semantically divergent" (Aprosio et al., 2017). Cognates, on the other hand, are those pairs of words with the same meaning and similar spelling in two or more languages. Their system first identified those words within the inputted text that may be considered cognate. It achieved this by taking into consideration three similarity metrics: XXDICE (Smolenska, 2018), Normalized Edit Distance (Datta et al., 2018), and Jaro/Winkler (Jaro and Winkler, 2018). Once potential cognates had been identified, their system used an SVM to classify which of these cognates may, in fact, be false friends. Their SVM was trained on the cosine similarity between the candidate words, and the cosine similarity between these words' synonyms. Those words which were found to be false friends were labeled as complex, whereas those words which were considered to be cognates and not false friends were labeled as non-complex. Thus, by taking the language of the user into consideration, Aprosio et al. (2017) created a LCP system that can recognize and exploit language-dependent features to improve its performance. Aprosio et al. (2017) is, therefore, another good example of personalized LCP.

#### 9.2.6. Cross-Lingual LCP

Finnimore et al. (2017) continued working on CWI-2018's sub-task 1 (Smolenska, 2017) (Section 6.2). They focused on the development of a cross-lingual CWI model, with a particular focus on discovering which monolingual or multilingual CWI features would also perform well in a cross-lingual setting. They discarded previous features that they believed to be language-dependent, hence not transferable from one language to another. They state that the use of word-level n-grams is an example of such a language-dependent feature, since word-level n-grams denote the unique collocations of a particular language. Instead, Finnimore et al. (2017) experimented with a variety of features that they believed to be cross-lingual. These features were the number of syllables, tokens, and complex punctuation marks, along with the sentence length and character-level probabilities associated with a target word.
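A rough sketch of extracting the kind of language-independent features listed above. The syllable count here is a crude vowel-cluster heuristic and the punctuation set is an arbitrary choice, so this is for illustration only.

```python
import re

def cross_lingual_features(target, sentence):
    """Crude, language-independent features for a target word in its sentence."""
    syllables = max(1, len(re.findall(r"[aeiouyàèéìòùáéíóúäöü]+", target.lower())))
    tokens = sentence.split()
    punct = len(re.findall(r"[;:,\-()\[\]]", sentence))
    return {
        "syllables": syllables,            # vowel-cluster approximation
        "target_length": len(target),
        "sentence_tokens": len(tokens),
        "complex_punctuation": punct,
    }

print(cross_lingual_features("gobernante", "El gobernante firmó el tratado; luego partió."))
```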
Finnimore et al. (2017) found that training their linear regression model on languages that belonged to the same language family as the target language improved its macro F1-score. However, the inclusion of languages unrelated to that of the target language had the opposite effect. Overall, their cross-lingual model achieved good performance. They go on to state that this is remarkable given its relatively simplistic set of features, thus proving the viability of cross-lingual LCP. Bingel and Bjerva [24] provide further evidence in favor of cross-lingual LCP. Their cross-lingual CWI system achieved the best F1-score in predicting the lexical complexities of an unseen language, being French. Consistent with other high performing CWI systems, it was an ensemble-based model that contained "a number of RFs as well as feed-forward neural networks with hard parameter sharing" [24]. Their RFs were trained on a number of features, whereby they discovered that word length and frequency were good cross-lingual predictors of lexical complexity. Zaharia et al. [189] experimented with several transformer-based models, such as Multilingual BERT (mBERT) [126] and XLM-RoBERTa [41], for cross-lingual CWI. Both mBERT and XLM-RoBERTa are multilingual masked language models that are pretrained on numerous languages. mBERT is pretrained on "Wikipedia pages of 100 languages with a shared word piece vocabulary" [126]. XLM-RoBERTa is also pretrained on 100 languages, yet with more data [41]. Zaharia et al. [189] tested these models' performance on the WikiNews datasets provided by CWI-2018 [185]. They found that XLM-RoBERTa was the best performing model. It achieved a higher F1-score than mBERT on the WikiNews datasets when tasked with predicting the lexical complexities of unseen German or French target words, its F1-scores being 0.02 and 0.04 higher respectively than those achieved by mBERT. They attributed XLM-RoBERTa's superior performance to its larger pretrained multilingual corpus [41, 189].

#### 9.2.7. Is transfer learning possible for predicting lexical complexity across multiple languages?

The studies detailed in Section 9.2.6 provide evidence in favor of transfer learning for cross-lingual LCP. Numerous features, for instance the number of syllables, tokens, complex punctuation marks, and sentence length, have been proven to work well when trained on one language and then used to predict lexical complexities in another [24, 55]. Models such as mBERT and XLM-RoBERTa have also been shown to achieve good performances for cross-lingual LCP [189]. With the availability of LCP datasets in high-resource languages (Section 9) and with LCP research gaining traction in languages other than English (Section 9.2), we suspect cross-lingual LCP will become increasingly popular.

## 10. Summary

This paper has presented an overview of LCP research with a specific focus on research conducted on English. It has defined what is meant by "_complexity_" within LCP and has described the types of computational modelling applied to its prediction, such as comparative, binary, continuous, and personalized complexity (Sections 2 to 4). It has provided the evaluation metrics used to evaluate LCP performance and has discussed the international shared-tasks that have inspired the creation of numerous LCP systems (Sections 5 to 6): CWI-2016 [115], CWI-2018 [185], and LCP-2021 [147].
It has explained the architecture, development, and evolution of these LCP systems, ranging from feature engineering approaches and neural networks to the most recent state-of-the-art transformer-based models, whilst discussing relevant research questions within the field (Section 7). It has presented various use cases and applications of LCP, including for other NLP-related tasks such as machine translation (Section 8.3.1) and authorship identification (Section 8.3.2). It has collected and summarized English datasets (Section 9.1) and has also briefly presented work on languages other than English (Section 9.2).

### Opportunities and Challenges

There now exists an unprecedented demand for LCP research. With distance learning becoming ever more popular and with LCP being a precursor to other NLP-related tasks, the future of LCP research would appear to be promising. LCP-2021 [147] has shown the superiority of transformer-based models for LCP, especially when a diverse set of transformers is used to form an ensemble-based model [123, 183]. CWI-2018, along with other studies [24, 55, 189], has proven that cross-lingual LCP is viable. LCP is now being conducted for languages other than English [88, 111, 153, 161, 185, 192]. As such, we expect to see ensemble-based models with a diverse set of transformers being used for multilingual and cross-lingual LCP. Furthermore, personalized LCP calls for the development of LCP systems with the ability to predict the complexity assignments made by an individual or a specific target demographic, rather than by a generalized population [89, 163, 195]. We expect such personalized LCP systems to become popular, as their datasets are likely to contain more consistent complexity ratings due to there being less disagreement among their annotators. Research questions investigating such areas as the effect of including context on LCP performance, as well as the advantages of complexity prediction for multi-word expressions, are other avenues of LCP research that have likewise been shown to aid LCP [67, 183]. We therefore also believe that context and MWEs will continue to be taken into consideration by future LCP systems. Future LCP research, however, is not without its challenges. A current lack of available data may have already led to some cases of overfitting, with models being unable to generalize their predictions across multiple domains or target populations. In addition, dataset quality has previously been put into question, whereby the use of a small pool of annotators, an irregular train/test split, or high levels of inter-annotator disagreement may have led to unreliable complexity labels [193]. To overcome these challenges, we stress the importance of further research into continuous and personalized complexity prediction that takes into consideration context and MWEs, along with the implementation of transfer-learning models for under-resourced languages.

###### Acknowledgements.

The authors would like to thank Richard Evans for the valuable suggestions and feedback provided. We further thank the anonymous ACM CSUR reviewers for their insightful feedback.
2303.15784
Ideograph: A Language for Expressing and Manipulating Structured Data
We introduce Ideograph, a language for expressing and manipulating structured data. Its types describe kinds of structures, such as natural numbers, lists, multisets, binary trees, syntax trees with variable binding, directed multigraphs, and relational databases. Fully normalized terms of a type correspond exactly to members of the structure, analogous to a Church-encoding. Moreover, definable operations over these structures are guaranteed to respect the structures' equivalences. In this paper, we give the syntax and semantics of the non-polymorphic subset of Ideograph, and we demonstrate how it can represent and manipulate several interesting structures.
Stephen Mell, Osbert Bastani, Steve Zdancewic
2023-03-28T07:52:50Z
http://arxiv.org/abs/2303.15784v1
# Ideograph: A Language for Expressing and Manipulating Structured Data

###### Abstract

We introduce Ideograph, a language for expressing and manipulating structured data. Its types describe kinds of structures, such as natural numbers, lists, multisets, binary trees, syntax trees with variable binding, directed multigraphs, and relational databases. Fully normalized terms of a type correspond exactly to members of the structure, analogous to a Church-encoding. Moreover, definable operations over these structures are guaranteed to respect the structures' equivalences. In this paper, we give the syntax and semantics of the non-polymorphic subset of Ideograph, and we demonstrate how it can represent and manipulate several interesting structures.

## 1 Introduction

Structured data is ubiquitous: lists, trees, graphs, relational databases, and syntax trees are just a few of the structures that underpin computer science. We often want to perform operations on such objects in ways that both respect and leverage their structure. For instance, we might wish to aggregate the elements of a bag (multiset). We could represent bags as lists and fold over them as lists, but this provides no guarantee that the result is invariant to the order. Or, we might wish to manipulate syntax trees of programs. We could represent variables as names or de Bruijn indices [8], but in either case operations on the representation must be shown to respect the binding structure. Other, similar circumstances arise often in practice. Yet, there are surprisingly few formalisms for actually defining such structures, much less for defining invariant-respecting operations over them. Relational database schemas define bags of records, with certain additional structure (most notably, foreign key constraints between tables). Most widely used programming languages, like C, Java, and Python, and data interchange formats, like Google's Protocol Buffers, have limited type systems, supporting at most product, sum, and function types, but not supporting the graph structures and constraints that would be required to define bags or syntax trees with variable binding. Dependently-typed languages, like Coq, are capable of imposing complex constraints, but even simple data structures, like syntax trees with binding structure, have proven tricky to deal with in practice [5]. As a result, we resort to implementing ad-hoc solutions. For aggregating over bags, we can separately prove that our function is invariant to order, and thus is truly a function over bags rather than lists. For manipulating syntax trees, we can separately prove that our substitution operation is capture-avoiding. However, this must be done for each new structure and operation. We want a general formalism for representing and manipulating a rich class of structures. Graphs are a common formalism for encoding many kinds of data, but they don't capture everything. For example, binary trees are "graphs", but they have more structure: they have two distinct kinds of nodes ("branch" and "leaf"), two kinds of edges ("left child" and "right child"), and the requirements that (1) each branch has one left child and one right child, and (2) that every node except the root has one parent. The formalism of graphs also does not account for manipulations: given a binary tree whose leaves are themselves tagged with other binary trees, we might want to collapse this tree of trees into a single tree.
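As an informal aside (not part of Ideograph itself), this "collapse a tree of trees" operation can be sketched in ordinary Python by representing binary trees by their folds, in the Church-encoding style discussed below.

```python
# Trees represented by their folds (Church-encoding style): a tree is a function
# that takes a "branch" handler and a "leaf" handler and applies them throughout.
def leaf(x):
    return lambda br, lf: lf(x)

def branch(left, right):
    return lambda br, lf: br(left(br, lf), right(br, lf))

# A binary tree whose leaves are tagged with other binary trees:
tree_of_trees = branch(leaf(branch(leaf(1), leaf(2))), leaf(leaf(3)))

# Collapsing it into a single tree is one fold: rebuild branches, and splice each
# leaf's tagged subtree in place of the leaf.
collapsed = tree_of_trees(branch, lambda subtree: subtree)

# Folding again, e.g. to count the leaves of the collapsed tree:
print(collapsed(lambda a, b: a + b, lambda _label: 1))   # -> 3
```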
While a good starting point, graphs _per se_ are not a precise enough formalism to capture these structures and operations. Church-encodings [23, 6] in polymorphic lambda calculus can precisely express many such structures, and they provide a natural notion of structure-respecting manipulation. For example, the type \(\forall X.\ (X\to X\to X)\to(Y\to X)\to X\) encodes exactly binary trees whose leaves are labeled with elements of \(Y\). (Roughly, the two arguments correspond to the two kinds of nodes in binary trees: \(X\to X\to X\) corresponds to branch nodes, with two tree-children and one parent; \(Y\to X\) corresponds to leaf nodes with one \(Y\)-child and one parent.) Further, Church-encodings of structures are themselves functions, corresponding to generalized fold operations: to use a term, you provide one function per constructor, and each occurrence of a constructor in the term is replaced by the corresponding function call. This allows the manipulation of terms in a structure-respecting way. However, standard Church-encodings [6] are over heterogeneous term algebras, but bags, relational database schemas, and syntax trees with variable binding are not term algebras. Finally, these encodings are not canonical, e.g., \(\forall X.\ (Y\to X)\to(X\to X\to X)\to X\) also encodes binary trees. As the complexity of encoded structures increases, the number of equivalent encodings may increase combinatorially. In this work, we leverage the complementary strengths of these two approaches, building a language called Ideograph, where both the terms and the types are graph-structured. By having a calculus, we are able to precisely encode many structures and define structure-respecting operations over them. By having terms that are graphs rather than trees, we are able to capture a richer set of structures. By having types that are graphs, we are able to eliminate many redundant encodings of structures. We begin by using examples to describe the terms (Section 2.1), operational semantics (Section 2.2), types (Section 2.3), and a well-formedness condition (Section 2.4), followed by the formalism (Section 2.5). We then present representations of several data structures in the language (Section 3) and demonstrate the manipulation of such structures (Section 4). We conclude with discussions of related work (Section 5) and future work (Section 6). For clarity and concision, we omit polymorphism from this presentation. ## 2 Ideograph Ideograph is a means of expressing and composing structured graphs. Its terms are "structured" in the sense of having distinct kinds of edges and nodes. Nodes have "ports", and edges connect nodes via these ports. Though we introduce additional constructs to support computation and polymorphism, the core idea is to substitute copies of a graph for certain nodes in another graph. Because the formal definitions of the syntax and semantics have many moving parts and are opaque without context, we begin by stepping through the examples in Figure 1, which demonstrate the key aspects of Ideograph. The formalism is presented in Section 2.5. ### Terms A term in Ideograph, henceforth an _ideogram_, consists of a set of _boxes_ (\(\mathcal{B}\), depicted as rounded rectangles e.g. in Figure 1), _nodes_ (\(\mathcal{N}\), gray circles or rectangles), and _ports_ (\(\mathcal{P}\), small triangles and squares, hollow or filled), with several relations among these different objects. 
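As a side note on the Church-encodings discussed above, the binary-tree example can be transcribed into Haskell (a minimal sketch of ours, assuming the RankNTypes extension). ChurchTree' is the reordering mentioned above: a distinct type in the lambda calculus, even though it encodes the same trees.

```haskell
{-# LANGUAGE RankNTypes #-}

-- forall X. (X -> X -> X) -> (Y -> X) -> X: binary trees with Y-labelled leaves.
type ChurchTree y = forall x. (x -> x -> x) -> (y -> x) -> x

-- The same structure with the constructor arguments swapped: one of the
-- combinatorially many equivalent encodings that Ideograph's graph-shaped
-- types are intended to identify.
type ChurchTree' y = forall x. (y -> x) -> (x -> x -> x) -> x

-- A term is itself a generalized fold: one function per constructor.
example :: ChurchTree Int
example branch leaf = branch (leaf 1) (branch (leaf 2) (leaf 3))

-- Instantiating the fold to count the leaves.
countLeaves :: ChurchTree y -> Int
countLeaves t = t (+) (const 1)
```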
The boxes, nodes, and ports reside in other boxes (the relation \(R_{R}\), depicted by nesting), with the boxes forming a tree. Each port is either a _receiver_ (\(\mathcal{P}_{-}\), hollow) or a _provider_ (\(\mathcal{P}_{+}\), filled) and is for either a _resource_ (\(\mathcal{P}_{R}\), square) or a _constructor_ (\(\mathcal{P}_{C}\), triangle). Ports are typically _attached_ (\(R_{A}\), depicted by contact) to a node or a box. We will introduce other relations between these objects as they arise in the following examples. Figure 4 and Figure 7 give illustrations annotated with these objects and relations. Terms are also subject to a well-formedness condition that will be discussed in Section 2.4. The comprehensive formalism is deferred to Section 2.5. Figure 1: Terms in Ideograph, along with their types, and analogues of each in a generic functional programming language. Though there are multiple Ideograph types that could represent the same functional type, these should provide the right intuition. More precisely, Ideograph’s type system is linear in the sense of Girard [12], and here we translated standard function types \(X\to Y\) as \(!(X\multimap Y)\). This makes function-typed arguments reusable, while other arguments are linear. However, there are other translations [15], and this was a purely expository choice. For simplicity of presentation, we only use one primitive (\(\mathsf{x}\)) in the types, so the types given are not necessarily the principal types of their terms. The formalism allows labeling resource fields with additional primitives. Simple functions.Consider the identity function (Example A in Figure 1). As an ideogram, it is a box with two resource ports: one resource provider port (solid square), analogous to the input x of the functional analogue; and one resource receiver port (hollow square), analogous to the function output. Because the identity function returns its input as its output, there is a wire between the two resource ports, which is captured by the bijective _resource wiring relation_ (\(R_{WR}\), depicted with thick lines) between resource provider and receiver ports. Example B is slightly more complicated. It has two resource provider ports for the two arguments, x and y, and two resource receiver ports for the two outputs, the left and right sides of the tuple. The two wires indicate which input gets returned as which output. Because the resource wiring relation is bijective, there are no terms of this type analogous to returning the pairs (x, x) or (y, y). This makes resources linear. Calling functions.Example C is the same as Example A, but with the addition of a single, unused constructor provider port (depicted as a filled triangle, sometimes just called a "constructor"), corresponding to the unused argument f with type X -> X. This lack of use is allowed because constructors are not linear in the way that resources are. Examples D and E are more interesting, as they actually use the constructor. Nodes are analogous to function invocations, and so in Example D, we have one node corresponding to the one call to f. To capture that the node was constructed by the constructor provider port, the port and node are associated by the _constructor usage relation_ (\(R_{CU}\), depicted with a thin, possibly branching line; the branching structure is not meaningful, and exists to improve readability). Each node must be associated with exactly one constructor, but constructors can be associated with any number of nodes. 
In Example E, the constructor is used to construct two nodes, analogous to the two calls to f. Nodes can have associated ports in the same way that boxes can. In Examples D and E, the nodes each have one resource receiver port, corresponding to the input to the f call, and one resource provider port, analogous to the output. In Example D, the wire on the left corresponds to passing the input, x, to the call to f, and the wire on the right is analogous to returning the output of f. In Example E, the wires correspond to passing x to the first call to f, passing its output to the second call to f, and finally returning its output. Note that nodes and boxes have opposite views of provider and receiver: when calling a function (analogous to a node), the function receives the input and provides the output; when defining a function (analogous to a box), the context provides the input and receives the output. Passing and returning functions.Example F is like Example D, but instead of f taking a single argument of type X, it also takes an argument of type X -> X. This is analogous to the constructor receiver port (hollow triangle) attached to the node. That we are passing the identity function corresponds to the nested box, a copy of Example A, that is connected to the constructor receiver port by the _constructor argument relation_ (\(R_{CA}\)). This relation is a bijection between constructor receiver ports and boxes (excluding the top-level box). This is the first example with non-trivial box-residence structure: there are two boxes, one (depicted as inner) being the child of the other (depicted as outer). Example G shows how a function can return a function: the function-typed output is analogous to the constructor receiver port, which, as in Example F, must be connected to a box. Note that the constructor usage relation can sometimes cross box boundaries, analogous to lexical scoping for functions: in Example G, the constructor provider port corresponding to \(\mathsf{f}\) is used in the inner box. Formally, the constructor usage relation can associate nodes to constructor provider ports in the same box or one that is higher in the residence tree. Let-bindings.So far we have only seen values. To have terms that can take operational steps, we also have let-bindings (\(\mathcal{D}\), depicted by the contact of a constructor receiver port and a constructor provider port). In Example H, the box connected to the receiver port is analogous to the body of the let-binding, fun y => f (f y), and the two nodes connected to the provider port are analogous to the instances of \(\mathsf{g}\). Each let-binding must be attached to exactly one constructor receiver port and one constructor provider port. Each port must be associated with exactly one box, node, or let-binding. Each let-binding also has a type, discussed in Section 2.3. ### Operational Semantics Recall that the core idea of Ideograph is to substitute terms for the nodes of other terms. The let-binding construct connects a box (the binding's body) to some nodes (the occurrences of the binding's bound variable). The single reduction rule of Ideograph is the substitution of the body for the occurrence nodes. (With polymorphism, there is also a type-level let-binding, and there is a second reduction rule for substituting at the type level.) Consider Figure 2 (i). The let-binding is analogous to the definition of \(\mathsf{g}\), sequencing two nodes, each analogous to a call to \(\mathsf{f}\). 
This let-binding is then used to construct two nodes which are themselves sequenced. Stepping takes the contents of the box (blue) and places a copy of it at each occurrence node (orange). This intuition is depicted in Figure 2 (ii), but requires a bit of clean-up. Each adjacent pair of resource receiver and provider ports is replaced with a wire. The \(\mathsf{f}\) nodes in the body of \(\mathsf{g}\) remain as \(\mathsf{f}\) nodes even after substitution, though we now have four of them. Finally, we erase the let-binding, to get Figure 2 (iii). Though capturing the core idea of substituting terms for nodes, the previous example does not have constructor ports on the let-binding. Figure 3 does. We again copy the body of the let-binding, \(\mathsf{h}\), for its occurrence nodes, and then we erase the let-binding and replace each pair of resource receiver and provider ports with a wire. Crucially, the pair of constructor receiver and provider ports becomes a new let-binding. This means that the term can continue stepping. (Indeed, Figure 3 (iii) is the same term as Figure 2 (i).) For a formal definition of the operational semantics and a formalization of this example, see Section 2.5. Figure 2: (i) and (iii) depict ideograms and their functional analogues. (i) evaluates to (iii) in one step, by inlining the let-binding. (ii) is not a term, but depicts how inlining is done. Colors indicate analogies. ### Types and Correspondences Doing substitution as outlined above poses a challenge: how do we know the correspondence between the ports on the box and the ports on a node? Figure 2 assumes that the left ports and right ports on the nodes correspond to the left port and right port on the box, respectively. The primary role of types in Ideograph is to make this correspondence precise. We now define a _type_ and, between components of a type (\(\mathcal{I}\), \(\mathcal{F}\), defined shortly) and components of a term (\(\mathcal{B}\), \(\mathcal{N}\), \(\mathcal{P}\), \(\mathcal{D}\)), a _correspondence relation_. The counterparts of the ports in a term are the _fields_ (\(\mathcal{F}\), depicted the same as ports) of a type, and a correspondence relation maps each port to at most one field. In a term, ports are attached to a box or a node, whereas in a type, fields reside in an _interface_ (\(\mathcal{I}\), depicted as dotted, rounded rectangles), and a correspondence relation maps each box and each node to at most one interface. The residence of fields in an interface, as well as the nesting of interfaces, is captured by the _residence relation_ (\(R_{R}\), depicted by containment), much like it is for terms. Correspondences must be consistent, in that if a box or node corresponds to an interface, then the ports attached to the box or node must correspond bijectively to the fields in the interface. Like ports, fields are either received (\(\mathcal{F}_{-}\)) or provided (\(\mathcal{F}_{+}\)) and are for either constructors (\(\mathcal{F}_{C}\)) or resources (\(\mathcal{F}_{R}\)). Constructor and resource ports correspond to constructor and resource fields, respectively. When attached to nodes, receiver and provider ports correspond to receiver and provider fields, respectively. However, when attached to boxes, this is reversed: receivers correspond to providers and providers correspond to receivers. Constructor fields are bijectively associated with interfaces that reside at the same level (\(R_{I}\), depicted by contact). 
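For reference, the functional analogue of the reduction in Figure 2 can be written out directly; the Haskell sketch below is our transcription of the paper's own analogues, with type signatures added, showing the term before and after the let-binding for g is inlined.

```haskell
-- Analogue of Figure 2 (i): a let-binding g, whose body chains two calls
-- to f, used twice.
beforeStep :: (a -> a) -> a -> a
beforeStep f x = let g y = f (f y) in g (g x)

-- Analogue of Figure 2 (iii): the binding inlined, leaving four calls to f
-- and no let-binding.
afterStep :: (a -> a) -> a -> a
afterStep f x = f (f (f (f x)))
```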
Correspondences must also be consistent with respect to \(R_{I}\), in that if a field is associated with an interface, ports corresponding to the field must only be connected to boxes (via \(R_{CA}\)) and nodes (via \(R_{CU}\)) that correspond to that interface. Finally, in each interface, there is a _connectivity relation_ (\(R_{C}\)) between fields, covered in Section 2.4. Example.In the illustrations, the depiction of correspondences is somewhat implicit, via the positioning of ports (either left, right, top, bottom, top-left, top-right, bottom-left, or bottom-right). In Figure 4, the correspondence \((a,A)\) is depicted by placing both the port and the field on the left of their containers, whereas \((b,B)\) is on the top left and \((c,C)\) is on the right. Since \(\mathbf{Y}\) is associated with \(B\) and \(\mathbf{y}\) is constructed by \(b\), the definition of correspondence relation forces Figure 3: (i) and (iii) depict ideograms and their functional analogues. (i) evaluates to (iii) in one step, by inlining the let-binding. (ii) is not a term, but depicts how inlining is done. Colors indicate analogies. Figure 7 contains a formally annotated version of this example. the correspondence \((\mathbf{y},\mathbf{Y})\). The placement of \(d\) on the left of the node and \(D\) on the left of the interface depicts the correspondence \((d,D)\), and similarly for \((e,E)\) on the bottom and \((f,F)\) on the right. The correspondence \((e,E)\) forces \((\mathbf{x},\mathbf{X})\), and then the left and right positionings depict \((g,G)\) and \((h,H)\). Types internal to terms.Recall the problem of associating ports between the body and occurrences of a let-binding: the solution is to give each let-binding a type, and then, for the body and each occurrence of the let-binding, give a correspondence with the type. In order to do so, terms themselves must contain types, which is accomplished via an _internal type-fragment graph_ (\(\mathcal{T}\), not depicted), containing the unions of the vertices and edges of zero or more types. In particular, its residence relation (\(R_{R}(\mathcal{T})\), not depicted), may be a forest rather than a tree. The interfaces in \(\mathcal{I}(\mathcal{T})\) that are roots of this forest are in bijection (\(R_{DI}\)) with the let-bindings. Finally, the _internal correspondence_ (\(R_{DC}\), depicted via relative port positioning) is a correspondence relating the body and occurrences of each let-binding with its associated interface. Now each port on an occurrence node is associated with a port on the body box, since they correspond to a shared field in the internal types. Types external to terms.While the components deriving from let-bindings participate in the internal correspondence, those deriving from the root box of the term do not. Given a type \(T\) and a term \(t\), \(C\) is an _external correspondence between \(t\) and \(T\)_ if \(C\) is a correspondence, if \(C\) relates the root of \(t\) to the root of \(T\), and if \(C\) is disjoint from \(R_{DC}\). When a term is paired with an external correspondence, every box, node, and port (except ports directly attached to let-bindings) corresponds to exactly one interface or field. Figure 4: An annotated illustration (i) and formalization (ii) of the type from Figure 1 F. An annotated illustration (iii) and formalization (iv) of the term from Figure 1 F. The correspondence between them (v), implicitly depicted via field and port positioning. \(\{(a,\{b,c\})\}\) is shorthand for \(\{(a,b),(a,c)\}\). 
See Figure 8 for the descriptions of all components of the formalism. Canonicity of terms.A term \(t\) at a type \(T\) in a functional language translates to, not just an Ideograph term, but the pair of an Ideograph term \((\!(t)\!)_{G}\) and an external correspondence \((\!(t)\!)_{C}\) between \((\!(t)\!)_{G}\) and \((\!(T)\!)\). Consider Figure 1 B. In a traditional functional language, this type has two linear terms: id := fun x, y => (x, y) and swap := fun x, y => (y, x). In Ideograph, there is only one term, which is shown, and is equal to \((\!(\!(\!(\!(\!(\!(\!(\!(\!(\!(\!(\!(\!(\!(\!(\!( (\! \! \ function and then returning the result of an independent function call; I is valid, analogous to a function that returns its input as its output. The type for the bottom row does not have a perfect analogue in functional programming, but corresponds roughly to (X -> 1) * X: a pair of a continuation accepting an X and a value of type X. D is valid, analogous to using the continuation and the value separately; F is valid, analogous to passing the value to the continuation; H is valid, analogous to constructing the continuation and value with separate function calls; J is invalid, analogous to returning the argument eventually passed to the continuation as the right side of the pair. Both E and J have prohibited cycles. Figure 6: Types (A, B) and fragments of terms (C, D, E, F, G, H, I, J). The interface in A corresponds to the nodes in C and E and to the boxes in G and I. The interface in B corresponds to the nodes in D and F and to the boxes in H and J. The dashed lines are not part of the term, but reflect \(R_{C}\) between the fields of the type, shown between the corresponding ports. Note that because dual types are used for boxes, the dashed lines in G, H, I, and J are the compliment of \(R_{C}\). E and J are ill-formed terms because of the cycles highlighted in orange. Figure 7: An annotated step of the operational semantics at type (i), from term (ii) to term (iv) (shown previously in Figure 3). An illustration of an intermediate step, which is not a term (iii). The partial formalizations of the before term (v) and the after term (vi), and their correspondences \(C\) to the type in (i). The omitted pieces of the formalizations are analogous to those in Figure 4. The internal types \(\mathcal{T}\), usually not depicted, are shown here. Term (ii) contains an internal interface \(\mathbf{X}\), which is the type of the let-binding \(\mathbf{y}\). Stepping at \(\mathbf{y}\) substitutes the contents of \(\mathbf{x}\) (the body of \(\mathbf{y}\)) for \(\mathbf{w}\) (the occurrence of \(\mathbf{y}\)) and copies \(\mathbf{W}\) to \(\mathbf{V}\) (shown in (iii)). Finally, the pairs of resource ports \(f\), \(v\) and \(x\), \(h\) are replaced with wire, while \(g\), \(w\), and \(\mathbf{V}\) are attached to a fresh let-binding, \(\mathbf{o}\), yielding term (iv). ### Formalism We now provide a precise formulation of Ideograph. We suggest referring to Figures 4 and 7 to ground definitions as they are introduced. **Definition 1** (fragment graphs).: We define _type-fragment graphs_ and _term-fragment graphs_ in Figure 8. Each consists of several sets of vertices and several edge relations with conditions. **Definition 2** (types and terms).: A type-fragment graph is a _type_ if \(R_{R}\) has a single root interface. A term-fragment graph is a _term_ if \(R_{R}\) has a single root box. 
**Definition 3** (cographs).: The set of _cographs on \(\mathcal{V}\)_ is the smallest set of symmetric, irreflexive graphs on vertices \(\mathcal{V}\) that contains the singleton graphs and is closed under complement and disjoint union. Intuitively, this is the set of formulas on atoms \(\mathcal{V}\) that can be formed with conjunction and disjunction, quotienting out associativity and commutativity. **Definition 4** (wire-safety).: Assume a constructor usage relation \(R_{CU}\), a constructor wiring relation \(R_{WC}\), and a pair \((n,c)\in R_{CU}\). Let \(c\) reside in \(b_{c}\) and \(n\) reside in \(b_{n}\), where \(b_{n}\sqsubseteq b_{c}\). If \(b_{n}\neq b_{c}\), let \(b^{\prime}_{n}\) be the box residing directly in \(b_{c}\) such that \(b_{n}\sqsubseteq b^{\prime}_{n}\sqsubset b_{c}\), and let \(c^{\prime}\) be the constructor receiver port (in \(b_{c}\)) associated with \(b^{\prime}_{n}\). \(R_{CU}\) is _wire-safe_ for \(R_{WC}\) if, for all \((n,c)\in R_{CU}\), either \(b_{n}=b_{c}\) or \((c,c^{\prime})\in R_{WC}\). **Definition 5** (correspondences).: Given a type \(T\) and a term \(t\), a relation \(C\in(\mathcal{B}\rightharpoonup\mathcal{I})\otimes(\mathcal{N}\rightharpoonup \mathcal{I})\otimes(\mathcal{P}\rightharpoonup\mathcal{F})\) is a _correspondence_ if the following hold: (1) If \(b\in\mathcal{B}\) (or \(n\in\mathcal{N}\)) corresponds to \(\iota\in\mathcal{I}\), then the correspondence relation is bijective between the ports attached to \(b\) (or \(n\)) and the fields of \(\iota\). (2) If \(p\in\mathcal{P}_{C}\) corresponds to \(f\in\mathcal{F}_{C}\), then the box associated with \(p\) corresponds to the interface associated with \(f\). (3) Constructor and resource ports are associated to constructor and resource fields, respectively. (4) If \(p\) corresponds to \(f\), their receiver and provider kinds are the same if \(p\) is attached to a node and opposite if \(p\) is attached to a box. **Definition 6** (let-binding correspondences).: A correspondence \(C\) is a _correspondence for \(d\in\mathcal{D}\)_ if the box of \(d\) (via \(R_{A}\) and \(R_{CA}\)) and nodes of \(d\) (via \(R_{A}\) and \(R_{CU}\)) correspond to the interface of \(d\) (via \(R_{DI}\)). Distinct let-binding correspondences must cover disjoint sets of term components. **Definition 7** (external correspondences).: Given a type \(T\) and a term \(t\), we say that a correspondence relation \(C\) is an _external correspondence_ between \(t\) and \(T\) if \(C\) relates the root box of \(t\) with the root interface of \(T\) and \(C\) is disjoint from \(R_{DC}\). **Remark**.: Given a type \(T\), a term \(t\), and an external correspondence \(C\) between \(t\) and \(T\), let \(C^{*}\coloneqq C\cup R_{DC}\). Every \(n\in\mathcal{N}\) and \(b\in\mathcal{B}\) occurs exactly once in \(C^{*}\). For every \(p\in\mathcal{P}\), either it is attached to some \(d\in\mathcal{D}\) and does not occur in \(C^{*}\), or it occurs exactly once in \(C^{*}\). **Definition 8** (term equality).: Given a type \(T\), terms \(t_{1}\) and \(t_{2}\), and correspondences \(C_{1}\) between \(t_{1}\) and \(T\) and \(C_{2}\) between \(t_{2}\) and \(T\), we say that \((t_{1},C_{1})\) is \(T\)_-equal_ to \((t_{2},C_{2})\) if, fixing a concrete labeling of vertices to yield \(\widehat{T}\), \(\widehat{t_{1}}\), \(\widehat{t_{2}}\), \(\widehat{C_{1}}\), and \(\widehat{C_{2}}\), there exists some relabeling \(h\) of the vertices in \(\widehat{t_{2}}\) such that \((\widehat{t_{1}},\widehat{C_{1}})=(h(\widehat{t_{2}}),h(\widehat{C_{2}}))\). 
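Definition 3 can also be read constructively. The Haskell sketch below (ours, for orientation only) generates cographs from singletons by disjoint union and complement and computes the induced edge relation; it assumes the two vertices queried actually occur in the cograph.

```haskell
-- A constructive reading of Definition 3: cographs are built from
-- singleton graphs by disjoint union and complement.
data Cograph v = Single v
               | DisjointUnion (Cograph v) (Cograph v)
               | Complement (Cograph v)

vertices :: Cograph v -> [v]
vertices (Single v)          = [v]
vertices (DisjointUnion g h) = vertices g ++ vertices h
vertices (Complement g)      = vertices g

-- The induced edge relation: symmetric and irreflexive by construction.
edge :: Eq v => Cograph v -> v -> v -> Bool
edge _ u w | u == w          = False   -- irreflexivity
edge (Single _) _ _          = False   -- a singleton graph has no edges
edge (DisjointUnion g h) u w = edge g u w || edge h u w   -- no edges across the two halves
edge (Complement g) u w      = not (edge g u w)           -- complement flips every non-loop edge
```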
**Definition 9** (substitution).: Assume a type \(T\), term \(t\), and correspondence \(C\) between \(t\) and \(T\). Given \(b\in\mathcal{B}\) and \(n\in\mathcal{N}\), where \(b\) resides in some \(b_{0}\in\mathcal{B}\) and \(n\) is at or below \(b_{0}\) in the residence forest, and given \(\iota\in\mathcal{I}(\mathcal{T})\) and correspondences \(C_{b}\subseteq R_{DC}\) between \(b\) and \(\iota\) and \(C_{n}\subseteq R_{DC}\) between \(n\) and \(\iota\), we define the _substitution of \((b,C_{b})\) for \((n,C_{n})\) at \(\iota\)_ to be the result if we: (1) Delete \(n\) (from \(\mathcal{N}\) and all relations). (2) For each component residing in \(b\), make a fresh copy residing in the box that contained \(n\), also making appropriate copies in \(R_{DC}\) and \(C\). (3) For each port \(p_{b}\) attached to \(b\), let \(p^{\prime}_{b}\) be the fresh copy of \(p_{b}\), and let \(C^{\prime}_{bp}\subseteq R_{DC}\) be the portion relevant to \(p^{\prime}_{b}\). Note that, for each \(f\) residing in \(\iota\), we now have a \(p^{\prime}_{b}\) that is fresh and a \(p_{n}\) that used to be attached to \(n\), and that they are a received/provided pair. Let \(C_{np}\subseteq C_{n}\) be the part relevant to \(p_{n}\). (4) For each resource field \(f\), \(p^{\prime}_{b}\) was wired to some \(p^{\prime\prime}_{b}\) and \(p_{n}\) was wired to some \(p^{\prime}_{n}\). Erase \(f\), \(p^{\prime}_{b}\), and \(p_{n}\) and add \((p^{\prime\prime}_{b},p^{\prime}_{n})\) to the wiring relation. (5) For each constructor field \(f\), create a new let-binding \(d\), and attach \(p^{\prime}_{b}\) and \(p_{n}\) to it. For the \(\iota_{f}\in\mathcal{I}(\mathcal{T})\) associated with \(f\), make a fresh copy of its subtree in \(\mathcal{I}(\mathcal{T})\) and let the root of the copy be \(\iota^{\prime}_{f}\). Change \(C^{\prime}_{bp}\) and \(C_{np}\) to refer to \(\iota^{\prime}_{f}\). Add \((d,\iota^{\prime}_{f})\) to \(R_{DI}\). **Definition 10** (inlining of let-bindings).: Assume a type \(T\), term \(t\), and correspondence \(C\) between \(t\) and \(T\). Given a let-binding \({d\in\mathcal{D}}\), let the associated interface be \(\iota\), the attached constructor receiver port be \(p_{-}\), the attached constructor provider port be \(p_{+}\), the argument to \(p_{-}\) be the box \(b\), the set of nodes produced by \(p_{+}\) be \(N\), the subset of \(R_{DC}\) relevant to \(b\) be \(C_{b}\), and for each \({n\in N}\), the subset of \(R_{DC}\) relevant to \(n\) be \(C_{n}\). Define the _inlining of \(d\)_ to be the result if we: (1) For each \({n\in N}\), substitute \((b,C_{b})\) for \((n,C_{n})\) at \(\iota\). (2) Delete \(d\), \(\iota\), \(p_{-}\), \(p_{+}\), and \(b\). **Definition 11** (reduction).: Given a type \(T\), terms \(t_{1}\) and \(t_{2}\), and correspondences \(C_{1}\) between \(t_{1}\) and \(T\) and \(C_{2}\) between \(t_{2}\) and \(T\), we say that \((t_{1},C_{1})\)\(T\)_-reduces_ to \((t_{2},C_{2})\) if there exists a \({d\in\mathcal{D}(t_{1})}\) such that the result of inlining \(d\) in \((t_{1},C_{1})\) is \(T\)-equal to \((t_{2},C_{2})\). **Definition 12** (descent of components).: A _component_ is a box, node, port, or let-binding. A component \(c_{1}\)_is a child of_\(c_{2}\) if \(c_{1}\) is attached (related in \(R_{A}\)) to \(c_{2}\), if \(c_{1}\) is the constructor argument (related in \(R_{CA}\)) of \(c_{2}\), or if \(c_{1}\) is a constructor usage (related in \(R_{CU}\)) of \(c_{2}\). For the transitive closure of this relation, \(c_{1}\)_descends from_\(c_{2}\). 
Note that the descent relation forms a forest, separate from the residence forest \((R_{R})\), and note that every component descends either from a let-binding or one of the roots of the residence forest. **Definition 13** (well-formedness).: Assume a type \(T\), a term \(t\), and an external correspondence \(C\) between \(t\) and \(T\). Let \(W\) be the relation on ports \({R_{WR}\cup R_{WC}}\). Define the relation \(F\) on ports such that \((p_{1},p_{2})\in F\) if either: (1) \(p_{1}\) and \(p_{2}\) are attached to the same \({d\in\mathcal{D}}\); (2) let \(c\) be the nearest common ancestor box or node of \(p_{1}\) and \(p_{2}\) in the descent relation, let \(p_{1}^{\prime},p_{2}^{\prime}\) be the ancestors of \(p_{1},p_{2}\) (respectively) attached to \(c\), and let \(f_{1}^{\prime},f_{2}^{\prime}\) be the fields corresponding to \(p_{1}^{\prime},p_{2}^{\prime}\) (respectively); if there is no such \(c\), \((p_{1},p_{2})\notin F\); otherwise, if \(c\) is a node, then \((p_{1},p_{2})\in F\) if \((f_{1}^{\prime},f_{2}^{\prime})\in R_{C}\); if \(c\) is a box, then \((p_{1},p_{2})\in F\) if \((f_{1}^{\prime},f_{2}^{\prime})\notin R_{C}\). Now, we say \((T,t,C)\) is _well-formed_ if for every cycle taking alternating edges in \(F\) and \(W\), there is some pair of ports \(p_{1},p_{2}\) in this cycle such that \((p_{1},p_{2})\in F\) but the edge \((p_{1},p_{2})\) is not part of the cycle. (This is directly inspired by the chorded-acyclic R&B-cograph condition from [26], and extended to handle the nesting of interfaces.) ## 3 Expressing Data ### Binary Trees Now we will see how Ideograph represents data, using unlabeled binary trees as an example. In polymorphic lambda calculus, they are represented by the type \(\forall X.(X\to X\to X)\to(1\to X)\to X\): the first argument, \(X\to X\to X\), is the (arity-2) branch constructor, and the second Figure 9: The type of unlabeled binary trees (i). A term of that type (ii). The representation of that term as a Church-encoding in a generic functional language (iii). A term that is ill-formed as an unlabeled binary tree (iv). The dashed lines are not part of the term, but are \(R_{C}\) between the fields of the type, shown between the corresponding ports. The illegal cycle is marked in orange. argument, \(1\to X\), is the (arity-1) leaf constructor (taking unit, since the trees are unlabeled). Dropping polymorphism, this corresponds to the Ideograph type in Figure 9 (i). The top-right constructor field represents branch nodes, with resource ports for one parent (top), one left child (bottom-left), and one right child (bottom-right); the bottom-right constructor field represents leaf nodes, with a resource port for one parent (top). The remaining resource port (top-left) corresponds to the root of the tree. Figure 9 (ii) and (iii) represent the same binary tree, with two branch nodes and three leaf nodes. The connectivity relation and well-formedness condition rule out terms like in Figure 9 (iv). Linearity ensures that there is a single tree: additional trees would have no resource port to serve as their root, and such terms would be ruled out by the bijectivity of the resource wiring relation. ### Directed Multigraphs As an example of a structure that is not a term algebra, and thus lacks a traditional Church-encoding [6], consider directed multigraphs. They are specified in Ideograph by the type in Figure 10 (i), with the top-right field corresponding to "vertices" and the bottom-right field corresponding to "edges". 
Figure 10 (ii) shows a multigraph with three vertices and three edges and its representation as a term. Because vertices may be associated with any number of edges, vertex nodes have a constructor port rather than a resource port. Edges have two ports, for receiving constructors from their source and target vertices. Here, constructor provider ports are shown directly connected to constructor receiver ports, which is not formally allowed--we abuse notation for clarity, and mean that the receiver port is attached to a box which contains a single node constructed by the provider port. Figure 10 (iii) shows a similar multigraph with three vertices, but with six edges. There is a related type for representing bags, which are essentially multigraphs without edges. ### Untyped Lambda Calculus Closed terms in untyped lambda calculus also do not form a term algebra. They have three kinds of nodes--application, abstraction, and variable--but every variable node must somehow be associated with an abstraction above it. Figure 10: The type of directed multigraphs (i). Two terms and the directed multigraphs they represent, (ii) and (iii). To improve readability, the constructor usage relation is depicted with the labels “v” and “e” rather than lines. Figure 11 (i) shows their type in Ideograph, with the bottom-right constructor field representing application, having resource fields for a parent (top), a left child (bottom-left), and a right child (bottom-right), and the top-right constructor field representing abstraction, having resource fields for a parent (top) and a child (bottom), and a constructor port for "variable nodes referring to this abstraction" (left). Rather than starting with a single constructor for "variable" nodes, each time an abstraction node is constructed, a new "variable" constructor appears. The connectivity relation on the interface of abstractions prevents variables from occurring above their binder, as in Figure 11 (iii): the edge between the "variable" constructor port and the "parent" port prohibits this, while the lack of edge between the "variable" constructor port and the "child" port allow variables to occur in the body of an abstraction. In contrast, Figure 11 (iv) lacks the variable-parent edge and thus does admit this term. This representation is closely related to parametric higher-order abstract syntax (phoas) [29, 7], which leverages parametric polymorphism to represent variable binding. Computation over our representation, like phoas, respects the binding structure of terms, allowing the implementation of single-step beta-reduction and providing capture-avoiding substitution for free. ## 4 Manipulating Data Now we show how to manipulate such data structures. The lack of polymorphism in this simplified presentation forces a simple example, since it is not clear how to write many functions over Church-encodings without instantiating the universal quantifier with complex types. Though the full version of Ideograph can represent much richer functions, the following example should Figure 11: The type of closed terms in untyped lambda calculus (i). A term representing “\(\lambda x.\lambda y.\,y\)\(x\)” (ii). A term corresponding to “\(x\) (\(\lambda x.x\))” (iii). Another type, differing from (i) by a single connectivity edge (iv). The connectivity relation from (i) is overlaid with dashed lines on the corresponding ports in (ii) and (iii). The cycle marked in orange makes the external correspondence between (i) and (iii) ill-formed. 
There is a well-formed external correspondence between (iii) and (iv). Figure 12: The type of functions from directed multigraphs to directed multigraphs (i). The term of that type that replaces each edge in the input with a pair of edges (ii). provide the right intuition. Recall the representation of directed multigraphs from Section 3.2. The term in Figure 12 (ii) behaves like a function that takes a directed multigraph as input and doubles each edge. Figure 13 depicts the evaluation of this function on the directed multigraph with three vertices and three edges from Figure 10 (ii). We emphasize that, in most programming languages, manipulating a graph involves manipulating a structure with labeled nodes--be it an adjacency matrix or a list of pairs of node indices--which makes it possible to write functions that are dependent on the labeling, and thus not truly functions over graphs. Ideograph cannot do that: it merely replaces each node of a structure (in this case both vertices and edges are nodes) with some pattern, as with Church-encodings; in this case, each vertex is replaced by a single vertex, and each edge by two edges of the same orientation. Unfortunately, this is conservative: there are legitimate functions on graphs that we cannot express, including the function that returns the number of vertices as a Church-numeral. Specifically, Ideograph has two distinct types that resemble the natural numbers, which are roughly "lists of units" and "bags of units"; we can write the function that counts the number of vertices as a bag of units, but we cannot write the function that converts a bag of units to a list of units, even though it would be sound. Though we could add a primitive function to accomplish this, we might wish to define it internally. Characterizing and enlarging the set of functions that can be represented is left for future work. ## 5 Related Work Linear logic.Ideograph is closely related to linear logic [12]. Most presentations of linear logic use the rules of "contraction", "weaking", and "dereliction" for exponentials, but Andreoli's equivalent dyadic system [4] instead uses a rule called "adsorption", which is very reminiscent of our nodes and our constructor usage relation. An obvious difference is that our propositions are graphs, not trees, allowing us to quotient out certain type equivalences, like the ordering of products. The type equivalences that we quotient out are similar to the _provable type isomorphisms_ for intuitionistic type systems [9]. This leads us to conjecture that Ideograph is Figure 13: The term that passes an instance of “x” to a call to “f” (i). The result of inlining in (i) the definitions of “x” and “f” (ii). The result of inlining in (ii) the let-bindings “v1” and “e1” (iii). “x” is a let-binding whose body is the directed multigraph in Figure 10 (ii), and “f” is a let-binding whose body is the function in Figure 12 (ii). (iii) is the term from Figure 10 (iii). -polymorphic (second-order propositional) multiplicative exponential linear logic with the mix rule, but with these type isomorphisms quotiented out. Proof nets and interaction nets.Our work is closely related to proof nets [12], and, in particular, their extension, interaction nets [16]. 
There are two key differences between our work and interaction nets: (1) In interaction nets, each symbol (roughly our "node") has a _principal_ port, which is used in reduction; in our work, nodes do not have privileged ports, and reduction proceeds exclusively by substituting definitions (of types or terms) for their occurrences. (2) In interaction nets, the set of symbols and their associated ports must be fixed ahead of time; in our work, the symbol set is not fixed, with occurrences of symbols potentially adding fresh symbols to the set. There is work on representing lambda calculus terms using interaction nets [20, 19, 22]. This work uses explicit "duplication" and "erasure" symbols, whereas exponentials ("constructors" in our terminology) are a central piece of our formalism. A key advantage of that work is improved reduction performance on some benchmarks, facilitated by the sharing of subterms [19]. We hope to evaluate our system on their benchmark in future work. There are versions of both proof nets [12] and interaction nets [17, 18] that represent exponentials with "boxes", and we expect these to be closely related to Ideograph, though our types are graphs rather than trees. Functional programming.There are several key differences between Ideograph and more traditional functional programming languages like OCaml, Haskell, and Rust. (1) Ideograph does not have primitive inductive datatypes, instead using an analogue of Church-encodings. (2) Ideograph is both pure and strongly-normalizing and does not prescribe an evaluation order. (3) Ideograph is linear in the sense of Girard [12], whereas Rust and Linear Haskell lack exponentials, Linear Haskell has separate non-linear types, and Rust is affine. (4) Most languages have functions implicitly return a single value, but boxes ("functions") in Ideograph explicitly name their zero or more outputs, similar to out-parameters in C and similar languages. (5) Ideograph has a natural interpretation as graph substitution, even if a textual formalism were preferred for writing programs. Polymorphic lambda calculus.There are three key differences between (polymorphic) Ideograph and polymorphic lambda calculus (System F): Ideograph is conjectured to be a canonical version of polymorphic multiplicative exponential linear logic (pmell) with the mix rule; pmell is the classical counterpart to polymorphic intuitionistic linear logic (pill); and pill is the linear counterpart to System F. The presentation here is not polymorphic, and so corresponds to intuitionistic linear logic and the simply-typed lambda calculus. In terms of ability to express data types, we expect Ideograph and pmell to be the same. However, the canonicity of Ideograph means that e.g. for a directed multigraph \(g\), where in pmell there is a different (fully-normalized) term for each labeling of the vertices and edges of \(g\), in Ideograph there is a unique (fully normalized) term representing \(g\). We are unsure of how linearity and classicality affect the ability to express data (i.e. the set of types and their fully-normalized terms). Linearity and classicality have established effects on the computational behavior of languages: linear calculi are often able to explicitly distinguish call-by-value and call-by-name [15], and classicality allows the expression of constructs like call/cc [14]. 
Data structures that form heterogeneous term algebras can be procedurally Church-encoded into System F types [6], but structures like directed multigraphs and lambda calculus terms are not term algebras. Graph representations of programming languages.There is work representing existing programming languages, in particular lambda calculus, as graphs [11, 27, 13]. A key difference of our work is that we are not trying to represent existing programming languages for the purposes of, e.g. optimizing compilation. Rather we want to represent data structures (of which syntax trees happen to be one) and pure computations over them. As a result, we are not concerned with effects or evaluation order, with which much of this work contends. Graph representations of types.There is work on representing formulas in multiplicative linear logic as undirected graphs [3, 26]. Our type system is closely related to these when we do not use exponentials or second-order propositional quantifiers. (Note that we adopt the edge convention opposite of theirs for \(\otimes\) and \(\mathcal{Y}\).) Representations of graphs.There is work representing graph structures in pure functional programming languages via recursive binders [21]. While our system is linearly typed and theirs is not, we expect them to be closely related and hope to pursue the connection in future work. Graph programming languages.There are several graph programming languages, including GROOVE [25], GP-2 [24], and LMNtal [28]. All such systems we are aware of have a notion of a rewrite rule, which matches a subgraph and replaces it with some other subgraph. In contrast, our system has only one reduction rule, which is analogous to beta reduction. LMNtal lacks a type system, whereas types are a core part of Ideograph. HyperLMNtal [30], which extends LMNtal with hyperedges, has been used to encode lambda calculus terms. In contrast to our use of constructors, they connect a binder to all of its variable occurrences via a single hyperedge. Ideograph is a variant of "labeled port graphs" [10], a formalism where edges connect to nodes at "ports", which has been used to represent programs. Parametric higher-order abstract syntax.There is work on encoding syntax with variable binding using functions in the meta-language. In particular, [29] and [7] leverage parametric polymorphism to encode exactly the closed terms in several lambda calculi and allow only structure-respecting operations. Their encodings, when translated into the types of Ideograph, correspond very closely to the type we presented in Section 3.3. However, Ideograph can represent languages where variables must be used exactly once, and parametric higher-order abstract syntax cannot. Moreover, our goal is to represent structures beyond just syntax. ## 6 Future Work There are several avenues for future work. We must first prove standard properties about Ideograph, including subject reduction and strong normalization. We would then like to prove that term-equality is graph isomorphism-complete, to characterize the structures that can be represented by the types, and to formally establish the connection to linear logic. Finally, we plan to develop an implementation, which we expect to be fairly straightforward due to the simplicity of the operational semantics. 
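As a reference point for the comparison with parametric higher-order abstract syntax in Sections 3.3 and 5, the usual phoas-style datatype can be sketched in Haskell as follows; this is an outline of the encoding from the cited work [29, 7], not part of Ideograph, and the example term is the one from Figure 11 (ii).

```haskell
{-# LANGUAGE RankNTypes #-}

-- phoas: binders are meta-language functions, and the variable type v is
-- kept abstract, so terms can neither inspect nor forge variables.
data Term v = Var v
            | App (Term v) (Term v)
            | Lam (v -> Term v)

-- Closed terms quantify over v.
newtype Closed = Closed (forall v. Term v)

-- The term  \x. \y. y x  of Figure 11 (ii).
example :: Closed
example = Closed (Lam (\x -> Lam (\y -> App (Var y) (Var x))))
```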
## Acknowledgements We would like to thank Ian Mackie, Kazunori Ueda, and two anonymous reviewers for their valuable feedback, as well as Lawrence Dunn, Harrison Goldstein, Eleftherios Ioannidis, Nick Rioux, and Lucas Silver for reading early drafts. This work is funded in part by NSF Awards CCF-1910769 and CCF-1917852.
2307.13602
Fortaleza: The emergence of a network hub
Digitalisation, accelerated by the pandemic, has brought the opportunity for companies to expand their businesses beyond their geographic location and has considerably affected networks around the world. Cloud services are better accepted nowadays, and it is foreseen that this industry will grow exponentially in the following years. With more distributed networks that need to support customers in different locations, the model of a single server in big financial centres has become outdated, and companies tend to look for alternatives that will meet their needs; this seems to be the case with Fortaleza, in Brazil. With several submarine cable connections available, the city has stood out as a possible hub for different regions, and this is what this paper explores. Making use of real traffic data through looking glasses, we established a latency classification that ranges from exceptionally low to high and analysed 800 latencies from Roubaix, Fortaleza and Sao Paulo to Miami, Mexico City, Frankfurt, Paris, Milan, Prague, Sao Paulo, Santiago, Buenos Aires and Luanda. We found that non-developed countries have a strong dependence on the United States to route Internet traffic. Despite this, Fortaleza proves to be an alternative for serving different regions with relatively low latencies.
Eric Bragion, Habiba Akter, Mohit Kumar, Minxian Xu, Ahmed M. Abdelmoniem, Sukhpal Singh Gill
2023-06-28T14:55:38Z
http://arxiv.org/abs/2307.13602v1
# Fortaleza: The Emergence of a Network Hub ###### Abstract Digitalisation, accelerated by the pandemic, has brought the opportunity for companies to expand their businesses beyond their geographic location and has considerably affected networks around the world. Cloud services are better accepted nowadays, and it is foreseen that this industry will grow exponentially in the following years. With more distributed networks that need to support customers in different locations, the model of a single server in big financial centres has become outdated, and companies tend to look for alternatives that will meet their needs; this seems to be the case with Fortaleza, in Brazil. With several submarine cable connections available, the city has stood out as a possible hub for different regions, and this is what this paper explores. Making use of real traffic data through looking glasses, we established a latency classification that ranges from exceptionally low to high and analysed 800 latencies from Roubaix, Fortaleza and Sao Paulo to Miami, Mexico City, Frankfurt, Paris, Milan, Prague, Sao Paulo, Santiago, Buenos Aires and Luanda. We found that non-developed countries have a strong dependence on the United States to route Internet traffic. Despite this, Fortaleza proves to be an alternative for serving different regions with relatively low latencies. Cloud Data Centre, Latency, Network Hubs, Fortaleza, Roubaix, Networking ## I Introduction There were 484 submarine cables connecting all continents, except Antarctica, in July 2021 [1], and they are responsible for the transport of 99 per cent of international data traffic [2]. It is already known that the pandemic has accelerated digitalisation even in poor countries, and internet traffic is expected to jump from 2.4 exabytes per day in 2016 to 7.7 exabytes this year, a figure 135 times the one registered in 2005 [3]. The need for digital content has, consequently, also impacted the data centre industry, more precisely cloud computing services. The amount spent on cloud infrastructure was higher than the amount spent on-premises, $130 billion and $90 billion, respectively, in 2020 [4], and the large-scale adoption of cloud solutions that year was due to the needs that COVID-19 imposed [5]. Numbers show that 92 per cent of companies have a multi-cloud strategy, that 82 per cent have public and private clouds, and that the majority of enterprises (83 per cent) spend more than $1.2 million per year on cloud solutions, an increase of 11 per cent compared to the previous year [5]. However, despite the optimism, there are some points of attention. The same report identified that cloud costs exceeded budgets by, on average, 24 per cent, and that at least 30 per cent of the total cost was considered waste in 2020 [5]. Because of this, optimising the existing use of the cloud, and consequently saving money, leads the list of priorities in 2021 - it is the fifth year in a row in which this topic is listed as priority number one. Economic groups, such as the European Union and Mercosur, have made it easier to trade across markets, and being digital also means that more areas can be explored and new revenue streams might be created. China, the United States (US), and the United Kingdom (UK) are the main countries taking advantage of online transactions, and it is estimated that, this year, almost 20 per cent of worldwide purchases will be made online [6]. 
Even without taking into consideration the logistical challenges of delivering goods, providing a smooth digital experience is also mandatory for a business's success. While in the past networks were mainly centred in global financial centres, now there is a dispersed network that relies more on indirect connections [7]. Cities like Fortaleza, in Brazil, and Marseille, in France, for example, are standing out because of their strategic locations when it comes to global communications and have already established themselves as important indirect connections to financial centres in their regions. The importance of latency has increased over the years, and several companies see it as a critical factor for their business. Studies have shown that the bounce probability increases in proportion to the page load time on websites: the longer a page takes, the more likely the user is to give up [8]. For instance, if a website takes more than five seconds to load, 74% of users will not continue with the intended task [9]. Moreover, it is estimated that Amazon, which has several business units, including cloud services, sees a 2% increase in conversion on its website for every second of improvement in speed [9]. Another industry whose business has been directly affected by connectivity is e-gaming. While in the 90s online games were restricted to small groups competing mostly over LANs, it is expected that in four years the game streaming and eSport industry will be worth $3.5 billion, and that Latin America will be a key region in terms of viewers, with an estimated audience of 130 million people [10]. With an expected global audience of 1 billion people by 2025 [10], better and more reliable networks are required, as well as smooth communication between servers and users. ### _Motivation and contributions_ As the world gets more and more digital, there is an increased need for low latency to deliver a better user experience [27]. While big organisations have the resources to make use of robust Content Delivery Networks, in general, small enterprises still need to take a close look at their spending and optimise it as much as possible [28]. With countries on different continents speaking the same language, but content production still concentrated in just a few of them, what was delivered physically before now needs to be transmitted digitally. As the internet is a network of networks and lacks a central organisation that unites information from all the parties involved, it is important to identify alternatives outside big centres and to have access to information for better decision-making. This paper aims to analyse Fortaleza as an international hub connecting Brazil, North America and Africa and to evaluate whether, overall, the latency among those regions is within acceptable ranges that will result in good user experiences. Nowadays, the city has the largest number of submarine cable connections in the world [11], 16 in total, with routes to North and South America, Europe and Africa, which puts the area at an advantage when it comes to the availability of different routes. Furthermore, the region has been investing massively in network infrastructure: in a public-private initiative, an optical fibre network of 8,000 kilometres was created in Ceara, the state in which Fortaleza is located, connecting major cities in an attempt to provide high-speed internet access to all public bodies and most of the urban population in the region [12]. 
Specifically, we want to understand: * The communication cost between some non-developed countries in those areas; * The communication cost among developed countries; * The communication cost with the most populated city in Latin America. By identifying and analysing communication costs in terms of latency, we believe this paper will be useful to different stakeholders when planning infrastructure and content distribution. The main contributions of this project are: * To help governments understand the communication costs between different regions and design policies that will promote better connectivity; * To demonstrate that different providers might offer different service level agreements (SLAs) based on their networks and, consequently, varied user experiences; * To show wholesale companies the importance of looking glasses for their clients. ### _Article Organisation_ The first section of this paper introduced the reader to the topic, talked about the motivation and contributions and, now, the organisation of this research. The remaining content is organised as follows: Section II: findings from previous works are presented, as there are important facts that were taken into consideration in this study. It is interesting to notice that the diversity of the authors cited, located in North America, Europe and Asia, shows, once again, that latency is a global concern. Section III: addresses the methodology applied in this study, from the creation of classification groups and formats to the tools used and the data analysed. Section IV: the experimental results are presented by geographic region. Section V: concludes this paper and presents possibilities for future work. 
The emergence of new connection areas and the dominance of some nations have also been highlighted in the academic world [26][27][28]. It is estimated that a considerable amount of Internet traffic passes through the US, a position that will probably remain unchanged for years to come due to the lack of agreements between different networks and the country's geographic location [7]. Despite this, when it comes to networks and connections, the dependency on big cities has decreased over the years, although they are still considered important, and indirect connections through other centres are increasing [7]. Laboratory studies are known to be a good approach to simulate real-life scenarios and identify solutions to issues [29]. However, a laboratory is a controlled environment, and it might miss unforeseen circumstances that can affect networks [26]. Field network testing, as used in this study, gives the opportunity to make use of an existing infrastructure that might also be used by enterprises to deliver their services and/or products [31]. Thus, the likelihood that the experiment results are closer to real-life scenarios is higher [30]. When comparing different services/applications and networks, it is important to establish a standard measurement that can be used interchangeably across all models of sampling, regardless of what happens at the application level; past works, for example, defined levels of online game players and how they are affected by latency [14]. Despite this, none of the studies found determined latency ranges and their level of acceptance for general use, and we address this in this paper. This has also allowed us to identify possible bottlenecks and suggest scenarios that would be more valuable for the study case. Another point of interest is having the US as a central hub of the Internet. Although previous studies have mapped the number of backbones in different regions [7], and this might be assumed when we see the distribution of submarine cables around the world, for example, it is important to analyse some packet routes to have clear evidence of this. ## III Methodology Currently, in general, the market has not established what low, medium and high latency should be, and this changes considerably when the object of study is gaming. For the purpose of comparison, this paper uses the ranges shown in TABLE I, which are based on a survey by a well-known technology company [23]. Ideally, for a better user experience, the latency should be in the exceptionally low, low or average range. To analyse the effectiveness of one data centre covering more than one geographic region, beyond considering its location, we need to measure the latency between different starting points across the globe and the same destinations, as this gives us a figure for the response time of any command between them. As previous works have stated that latency might affect users differently [14][17], this study focuses on four regions (North and South America, Europe and Africa) that are the target areas of a start-up called Latudio - a language learning app that will be available in seven languages - which might be considered as a parameter for other use cases. Currently, the company has a single server in Roubaix, France - AMD Ryzen 7 3700 PRO, 8c/16t, 3.6 GHz/4.4 GHz, 64 GB ECC 2666 MHz, 2 \(\times\) 960 GB SSD NVMe, 1 Gbps outgoing bandwidth, 10 Gbps incoming bandwidth - and is experiencing relatively high latency to all regions but Europe, as shown in TABLE II. 
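To make the comparison criteria concrete, the helper below maps a measured latency to the classification of TABLE I (below 20 ms exceptionally low, 21-49 ms low, 50-100 ms average, above 100 ms high). The function and its handling of values falling between the listed ranges are a simplification for illustration only, not part of any provider tool.

```python
def classify_latency(latency_ms: float) -> str:
    """Map a measured latency in milliseconds to the classification of TABLE I."""
    if latency_ms < 21:
        return "Exceptionally low"
    if latency_ms < 50:
        return "Low"
    if latency_ms <= 100:
        return "Average"
    return "High"

# Example: the current Roubaix server seen from a South American source city.
print(classify_latency(213))  # -> "High"
print(classify_latency(24))   # -> "Low"
```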
From the 10 cities, six have high latency when communicating with the server in France, which shows the company needs at least one more server covering a different geographic region. Our simulation uses realistic data, making use of looking glasses made available by telecommunication providers, and uses the ping command to estimate the latency between the locations. To increase the level of accuracy when recording measured latencies, since it is not possible to foresee queuing delays [20], we repeated the test 10 times during different periods. Individually, a ping command sends 4-5 packets to the destination IP, and the average of those packets is what we have computed. In order to compare the current latency (Roubaix, France) with a possibly more favourable location, for this study we have chosen Fortaleza, in Brazil, as one of the source locations for the tests. We also compared the numbers gathered in Fortaleza with the latency registered in Sao Paulo, which has the largest population and data traffic in the country [24]. This will help us to understand the likelihood of having indirect connections to big centres when a balance among low latency, content availability across different continents and regions, and cost-effectiveness is needed. In total, including the current provider in France, the networks of six companies were analysed (HostIDC, Aloo Telecom, FDC, Globenet, OVH and HostDime), which resulted in 800 latencies registered. Except for the server in Roubaix, which is currently used by Latudio and considered as a parameter in this study, all the other organisations were randomly selected based on two criteria: the need to have a public looking glass tool and presence in the cities researched. Footnote 1: _Informacoes da Rede_ (2021). [Tool] Available at: [https://lg.hostick.com.be/](https://lg.hostick.com.be/). (Accessed: 11 August). Footnote 2: _Looking Glass_ (2021). [Tool] Available at: [http://lg.aloetetlecom.com.br/](http://lg.aloetetlecom.com.br/). (Accessed: 11 August). Footnote 3: _Looking Glass_ (2021). [Tool] Available at: [https://www.fdeservers.net/locking-glass](https://www.fdeservers.net/locking-glass). (Accessed: 11 August). Footnote 4: _IPv4 and IPv6 Looking Glass_ (2021). [Tool] Available at: [http://lg.globenet.net/lg-cgi](http://lg.globenet.net/lg-cgi). (Accessed: 11 August). One known IP in each of the cities studied - Miami (US), Mexico City (Mexico), Frankfurt (Germany), Paris (France), Milan (Italy), Prague (Czech Republic), Sao Paulo (Brazil), Santiago (Chile), Buenos Aires (Argentina) and Luanda (Angola) - was used as the destination for the tests and, except where stated, the latency is the average registered in milliseconds (ms). Besides the average latency, among the 10 latencies registered for each city in each of the networks, we identified the highest and lowest values and computed the difference between them, resulting in the average latency variation. In the cities where the latency variation surpassed 200 milliseconds, we analysed the networks individually to identify the existence of any discrepancy. Within this group, when the lowest and highest latencies registered had a significant difference, we ran the _traceroute_ command to identify what might be causing such a discrepancy. 
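The measurement procedure just described can be approximated by the short sketch below: each test issues one ping (whose reported value is already the average of the 4-5 packets sent), the test is repeated 10 times, and the average latency and the latency variation (highest minus lowest value) are computed per destination. The destination address, the parsing of the Linux ping output, and the local invocation of ping are illustrative assumptions; the numbers in this study were gathered through the providers' looking-glass web tools rather than from a local shell.

```python
import re
import statistics
import subprocess

def ping_avg_ms(host: str, count: int = 5) -> float:
    """Run one ping and return the average round-trip time in ms (Linux iputils format)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    # Summary line looks like: rtt min/avg/max/mdev = 10.1/12.3/15.2/1.1 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if match is None:
        raise RuntimeError(f"could not parse ping output for {host}")
    return float(match.group(1))

def measure(host: str, repetitions: int = 10) -> dict:
    """Repeat the test, as in the study, and derive average latency and variation."""
    samples = [ping_avg_ms(host) for _ in range(repetitions)]
    return {
        "host": host,
        "avg_ms": statistics.mean(samples),
        "variation_ms": max(samples) - min(samples),  # highest minus lowest latency
    }

if __name__ == "__main__":
    # Hypothetical destination; the study used one known IP per city.
    print(measure("198.51.100.10"))
```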
With the IPs through which the packets passed on the route, we used a "Where is My IP Location" tool 7 that consolidates location information from five different sources, which gives us more precise evidence to determine each device's location and, thus, understand whether the route taken had any impact on the latency. Footnote 7: _Where is My IP Location?_ (2021). [Tool] Available at: [https://www.iplocation.net/](https://www.iplocation.net/). (Accessed: 11 August). ## IV Experimental results ### _Communication with Europe_ Communication across nations in Europe has the best indexes in this study when it comes to both latency and latency variation, and the results in TABLE II. show that the data centre in Roubaix serves the entire region well. On average, latency is within the exceptionally low and low ranges, between 4 and 24 milliseconds (TABLE III. and Fig. 1), with almost no latency variation (TABLE IV. and Fig. 2) - Milan had a 1 millisecond variation, which we can consider insignificant for any impact on the user experience. \begin{table} \begin{tabular}{|c|c|} \hline _Latency_ & _Classification_ \\ \hline \(<\) 20 ms & Exceptionally low \\ \hline 21 to 49 ms & Low \\ \hline 50 to 100 ms & Average \\ \hline \(>\) 100 ms & High \\ \hline \end{tabular} \end{table} TABLE I: Latency range and classification On the other hand, any server located in South America is not a good option to provide services to Europe, as the latency to all countries is considered high, ranging from 164 to 213 milliseconds. If it were a matter of choosing between Fortaleza and Sao Paulo, Fortaleza has the lowest latency to all destinations in Europe. However, this route also has the highest latency variation registered (7 to 15 milliseconds) when compared to the other two. Conversely, although Sao Paulo to Europe has the highest latency to all destinations when compared to Fortaleza, this route has a better latency variation (2 to 5 milliseconds). Another interesting fact is that, even though Frankfurt is the third furthest city from Fortaleza and Sao Paulo (TABLE V. ), the German city had the lowest latency registered among all the European cities when the source was one of the two cities, which might indicate that better routes/agreements are available. ### _Communication with North America_ When it comes to cross-country communication, Miami has the lowest latency from all cities studied (Fortaleza, Roubaix and Sao Paulo), which might support the theory that the United States is a worldwide Internet hub. Having the shortest distance to Miami, as TABLE V. shows, Fortaleza also has the lowest latency, with 79 milliseconds. Even though Roubaix is 499 miles further away from Miami than Sao Paulo, both latencies are almost the same (113 and 114 milliseconds, respectively), which shows a better connection between Europe and the United States. This is also seen in the latency variation: Roubaix to Miami is the only intercontinental route that had zero latency variation. On the other hand, even though Fortaleza to Miami has the lowest latency, it also has the biggest latency variation (TABLE IV. and Fig. 2). Despite being a neighbour of the United States, Mexico does not take full advantage of being close to an international hub when it comes to networks. 
The latency from all three cities (Fortaleza, Roubaix and Sao Paulo) to Mexico City was in the high range (above 150 milliseconds) and the latency variation was between 27 and 67 milliseconds, with the South American source cities showing the most significant variation (TABLE IV. and Fig. 2). ### _Communication with South America_ Communication with Santiago and Buenos Aires has the highest latency across all cities studied, and also the highest latency variation - a difference of up to 642 milliseconds between the lowest and highest latencies (TABLE IV. and Fig. 2). Although Sao Paulo, Santiago and Buenos Aires are on the same continent and have the shortest distances from one another, as shown in TABLE V., the communication between Roubaix, in France, and Santiago has a lower latency than from Sao Paulo or Fortaleza. The same trend is seen in the latency variation: while from Roubaix to Santiago there is a variation of 255 milliseconds, Sao Paulo and Fortaleza have more than twice this figure (TABLE IV. and Fig. 2). Despite the poor performance in communication with Santiago, the South American cities performed better when the destination was Buenos Aires: Sao Paulo had the lowest latency with 253 milliseconds, followed by Fortaleza (285 milliseconds) and Roubaix (308 milliseconds). Regardless, all the latencies measured are in the high latency range, which means the user might be impacted by the poor performance. Among the three cities used as destinations (Sao Paulo, Santiago and Buenos Aires), Sao Paulo has the lowest latency from both Fortaleza and Roubaix. While the latency from Roubaix is in the high latency range, the one from Fortaleza is within the low classification group, which might indicate the city is an alternative to serve more than one geographic region. Sao Paulo as a destination also performed well in terms of latency variation, with the highest difference being 7 milliseconds. Curiously, the latency variation is lower between Roubaix and Sao Paulo than from Fortaleza, which registered a variation 4.5 times greater than the one from Europe (TABLE IV. and Fig. 2). The latency variation between Roubaix and Sao Paulo was almost the same as the variation computed within Sao Paulo, with 1 millisecond of difference. To better understand why the latency and latency variation to Santiago and Buenos Aires are so high, we ran the _traceroute_ command from the cities with the highest average latency variation registered to identify the route the packets passed through. Fig. 3 shows the traceroute from Sao Paulo to Buenos Aires. When we analysed the geolocation of the IPs, we identified that the packets went from Sao Paulo to the United States, where they travelled around several cities, and then came back to Sao Paulo again. Only after this journey were the packets sent to Buenos Aires, in Argentina. Clearly, the high latency is due to the journey to the United States and back to the same location from which the packets were originally sent. When analysing the networks' performance individually, one of them stands out for having an average latency to Buenos Aires that is less than half the general average for Sao Paulo (117 milliseconds), with the lowest latency being 35 milliseconds, which shows that, if needed, companies have a good route available between the two cities. The route from Fortaleza to Santiago also showed a dependency on the United States, as Fig. 4 shows. 
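The hop-by-hop geolocation used to attribute routes such as those in Fig. 3-6 to specific countries can be approximated along the lines of the sketch below. The traceroute output format, the destination IP, and the use of a single free geolocation endpoint (instead of the five-source aggregator mentioned in Section III) are illustrative assumptions only.

```python
import json
import re
import subprocess
import urllib.request

def traceroute_hops(host: str) -> list:
    """Run traceroute and extract the hop IPs (Linux numeric output assumed)."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True).stdout
    return re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", out, flags=re.MULTILINE)

def country_of(ip: str) -> str:
    """Geolocate one hop; ip-api.com is a stand-in for the aggregator used in the study."""
    with urllib.request.urlopen(f"http://ip-api.com/json/{ip}", timeout=10) as resp:
        return json.load(resp).get("country", "unknown")

if __name__ == "__main__":
    for hop in traceroute_hops("203.0.113.5"):  # hypothetical destination IP
        print(hop, country_of(hop))
```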
The packets go from Fortaleza to the United States, where they also pass through several points of connection, and then go straight to Santiago, without passing through Brazil this time. ### _Communication with Africa_ The communication with Luanda is the only case in which both Fortaleza and Sao Paulo have a better latency variation than Roubaix. When comparing Fortaleza and Roubaix as the sources, the latter has a latency variation 12 times greater than the former (TABLE III. and Fig. 1). Such variation from Fortaleza was noticed only within the country (Fortaleza to Sao Paulo) and in the communication with developed nations - all latency variations with non-developed countries were greater than 200 milliseconds. Despite the good performance in latency variation, the latencies to Luanda from the three source cities are within the high latency range. On average, Fortaleza had the best performance with 150 milliseconds, which is almost the same latency as to Mexico City, followed by Roubaix (197 milliseconds) and Sao Paulo (226 milliseconds). If we compare just the average latencies of this study, we can say that Luanda is better connected to other nations than Santiago and Buenos Aires. When each network was analysed individually, there were two data centres in Fortaleza that had latencies to Luanda of around half the average. To better understand this, we ran the traceroute command on one network that had a low latency and on another one with high latency, as Fig. 5 and Fig. 6 show. As with Santiago and Buenos Aires, high latency is experienced when the packets are sent to North America. As Fig. 5 shows, the data travels from Fortaleza to the United States, where it goes around some cities, to then finally be routed to Luanda. On the other hand, a direct route between the two countries significantly reduces the latency to almost half the average. Fig. 6 shows that the packets travel from Fortaleza to Luanda without the need to pass through any other city. ## V Conclusions and future work We measured the latency between different locations, having Roubaix, Fortaleza and Sao Paulo as the starting points and Miami, Mexico City, Frankfurt, Paris, Milan, Prague, Sao Paulo, Santiago, Buenos Aires and Luanda as the destinations, to analyse whether Fortaleza can be considered a hub connecting Brazil, North America and Africa. We also wanted to understand the latency between some non-developed countries, developed countries and specific cities. It was clear that any server in South America is not a good option in terms of latency when communicating with Europe, and that the server in Roubaix has an exceptionally low or low latency to any of the European countries studied. Although the latency between Roubaix and Miami is within the high latency range, it might still be considered for business purposes, as there is no latency variation and it had the second-best latency. The communication with non-developed countries proved to be challenging, mainly because of the dependency on the United States to route packets to the final destinations. This strongly affected not just the latency but also the latency variation, with over 600 milliseconds of variation registered between the lowest and highest averages computed. Some routes, such as Fortaleza to Luanda and Sao Paulo to Buenos Aires, stood out because some specific networks have considerably lower latency than the average, mainly due to more direct connections between the two points [26]. 
Finally, when it comes to cost efficiency (one data centre used for different regions), content availability in different geographic locations and relatively low latency, Fortaleza has shown to be a good option to serve big centres in Brazil, the United States (through Miami) and Africa (through Luanda), with the caveat that the network to be used must be analysed beforehand to verify its effectiveness. In 1980, a line proposed by Willy Brandt divided the world into two: rich nations in the Northern hemisphere and poor countries in the South. When transposing the Brandt Line to the distribution and efficiency of IP networks, similar results are found: * Countries in the North are better connected; * Communication between Northern nations has almost no latency variation in most cases; * Connections in the South are more unstable; * Latency is higher in the South. Further studies are planned to analyse why the connection agreements between non-developed nations are so poor and/or non-existent, and how the dependency on the United States might affect global communications if something goes wrong. Moreover, the analysis of the connections between Fortaleza and other destinations might indicate other routes that data centres in the city can cover satisfactorily. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgments We wish to thank Latudio's co-founders, Mark Shimada and Vitek Rozkovec, HostDime's data centre manager, Lucas Montaroios, the managing director at Interxion France, Fabrice Coquio, and the product coordinator at Angola Cables, Edivan Silva, for their availability and insights pre-research regarding submarine networks and global communication. This work is partially funded by the Chinese Academy of Sciences President's International Fellowship Initiative (Grant No. 2023VTC0006), the National Natural Science Foundation of China (No. 62102408), and the Shenzhen Industrial Application Projects of undertaking the National key R & D Program of China. Fig. 3: Traceroute Sao Paulo to Buenos Aires going through the US. Fig. 4: Traceroute Fortaleza to Santiago going through the US. Fig. 5: Traceroute Fortaleza to Luanda going through the US. Fig. 6: Traceroute Fortaleza to Luanda without going through the US.
2306.04057
High hard X-ray polarization in Cygnus X-1 confined to the intermediate hard state: evidence for a variable jet component
Cygnus X-1, the well-known accreting black hole system, exhibits several observational features hinting at an intricate interplay between the accretion disk, its atmosphere known as the corona and the putative relativistic jet. It has been extensively studied using all available observational methods, including using the newly available technique of sensitive X-ray polarimetry. X-ray polarization characteristics are distinct for coronal and jet emissions. The low X-ray polarization measured below $\sim$100 keV is understood as arising from the corona. In contrast, the high polarization measurements reported above $\sim$400 keV required a separate jet-dominated spectral component, which spectroscopy does not demonstrate conclusively. Here we report precise polarization measurements in the 100-380 keV region made during three different sub-classes of spectral states of the source using the CZTI instrument onboard {\em AstroSat}. A high polarization (23$\pm$4 \%) is found mainly in the Intermediate Hard State of the source, and the energy-resolved measurements smoothly connect the coronal and the jet regimes. When high polarization is observed, the simultaneous spectral data hints at a separate power law component above 100 keV. We examine the possible sources of this energy-dependent high polarization in Cygnus X-1.
Tanmoy Chattopadhyay, Abhay Kumar, A. R. Rao, Yash Bhargava, Santosh V. Vadawale, Ajay Ratheesh, Gulab Dewangan, Dipankar Bhattacharyay, Mithun N. P. S., Varun Bhalerao
2023-06-06T23:14:38Z
http://arxiv.org/abs/2306.04057v2
High hard X-ray polarization in Cygnus X-1 confined to the intermediate hard state: evidence for a variable jet component ###### Abstract Cygnus X-1, the well-known accreting black hole system, exhibits several observational features hinting at an intricate interplay between the accretion disk, its atmosphere known as the corona and the putative relativistic jet. It has been extensively studied using all available observational methods, including using the newly available technique of sensitive X-ray polarimetry. X-ray polarization characteristics are distinct for coronal and jet emissions. The low X-ray polarization measured below \(\sim\)100 keV is understood as arising from the corona. In contrast, the high polarization measurements reported above \(\sim\)400 keV required a separate jet-dominated spectral component, which spectroscopy does not demonstrate conclusively. Here we report precise polarization measurements in the 100-380 keV region made during three different sub-classes of spectral states of the source using the CZTI instrument onboard _AstroSat_. A high polarization (23\(\pm\)4 %) is found mainly in the Intermediate Hard State of the source, and the energy-resolved measurements smoothly connect the coronal and the jet regimes. When high polarization is observed, the simultaneous spectral data hints at a separate power law component above 100 keV. We examine the possible sources of this energy-dependent high polarization in Cygnus X-1. X-rays: individual (Cygnus X-1) -- X-rays: binaries -- techniques: polarimetric ## 1 Introduction Cygnus X-1, a high-mass X-ray binary (HMXB) system, is one of the earliest known X-ray sources, harboring a 21.2\(\pm\)2.2 solar-mass black hole in a 5.6-day orbit with a \(40.6^{+7.7}_{-7.1}\) solar-mass star, and located at a distance of \(2.22^{+0.18}_{-0.17}\) kpc from us (Miller-Jones et al., 2021). Unlike most other X-ray sources, Cygnus X-1 is persistent and has been extensively studied across almost the entire electromagnetic spectrum over the last five decades (Sunyaev & Truemper, 1979; Ebisawa et al., 1996; Gierlinski et al., 1997; Cui et al., 1997; Di Salvo et al., 2001; Stirling et al., 2001; McConnell et al., 2002; Gallo et al., 2003; Fender et al., 2006; Cadolle Bel et al., 2006; Wilms et al., 2007; Rahoui et al., 2011; Jourdain et al., 2012b; Russell & Shahbaz, 2014; Jourdain et al., 2014; Zanin et al., 2016; Lubinski et al., 2020). The source displays state transitions between the thermal disk dominated soft state and the hard state with a power-law dominated spectrum. It is also detected in radio wavelengths, thought to originate in relativistic jets (Stirling et al., 2001; Fender et al., 2006). Cygnus X-1 is one of the brightest X-ray sources in the hard state, and the hard X-ray emission is attributed mainly to Compton scattering from a hot corona. Some studies indicate an additional component in the hard state spectrum (Cadolle Bel et al., 2006; Jourdain et al., 2014) which has been interpreted as power-law emission from an optically thin jet (Rahoui et al., 2011). Detailed modeling of the broadband spectral energy distribution (SED) of Cygnus X-1 in the hard state requires consideration of jet emission to account for the soft-gamma ray observations (Zdziarski et al., 2014). 
It has been suggested that under certain conditions, the jet emission may contribute significantly in hard X-rays as well (Malyshev et al., 2013; Russell and Shahbaz, 2014; Kantzas et al., 2020), similar to a few other black hole sources (Vadawale et al., 2001; Markoff et al., 2001; Vadawale et al., 2003). However, the extent to which the jet emission can contribute to hard X-rays continues to be debated (Zdziarski et al., 2014). Hard X-ray polarization measurements offer a unique possibility to distinguish between emissions arising in the corona and the jet. However, hard X-ray polarization measurements are challenging to carry out, and so far only weak hints of polarization in hard X-rays are available (Chattopadhyay, 2021). The first attempt to explore the polarization properties of Cygnus X-1 dates back to the 1970s, when a Bragg polarimeter onboard the Eighth Orbiting Solar Observatory (_OSO 8_) placed an upper limit of a few percent at 2.6 keV (Long et al., 1980). Subsequently, there have been attempts to measure the polarization of the source in hard X-rays, both in the coronal regime (a few tens of keV to \(\sim\)100 keV) and in the suspected jet regime (above 100 keV) (see Chattopadhyay, 2021, for a summary). Recently, Krawczynski et al. (2022) reported a precise measurement of the polarization of Cygnus X-1 in the hard state using the Imaging X-ray Polarimetry Explorer (_IXPE_) in the 2-10 keV band. They found a polarization fraction of 4.0\(\pm\)0.2 % with an increasing trend in polarization with energy. The polarization angle is -20.7\(\pm\)1.4\({}^{\circ}\) (from the local north towards northeast in clockwise direction) and aligns with the outflowing radio jet. These results suggest that the X-ray coronal plasma is extended in the plane of the accretion disk. The IBIS and SPI instruments onboard the INTErnational Gamma-Ray Astrophysics Laboratory (_INTEGRAL_) independently measured high polarization for this source at \(\sim\)65 % with a polarization angle of 224\({}^{\circ}\) at energies above 400 keV (Laurent et al., 2011; Jourdain et al., 2012). These results were interpreted as indicating a jet origin of the photons, further corroborated by the spectroscopic analysis showing two distinct spectral components: a thermal Comptonization component at energies below 200 keV and a power law component beyond 200 keV, supposedly due to synchrotron radiation from the jet. However, Zdziarski et al. (2014) modeled the wide-band spectral energy distribution of Cygnus X-1 spanning from radio to MeV and suggested that, for a realistic set of model parameters, the contribution of the jet emission in X-rays is likely negligible. Later observations by the Polarized Gamma-ray Observer (_PoGO_+), a dedicated balloon-borne hard X-ray polarimeter sensitive in 19-181 keV, found the source to be unpolarized in the hard state. They placed an upper limit of 5.6 % at a position angle of 154\(\pm\)31\({}^{\circ}\), similar to what _IXPE_ found (Chauvin et al., 2018). They also estimated upper limits for polarization from the jet component of around 5-10 % (Chauvin et al., 2019). These findings enhance the tension between the low energy polarization measurements and the high polarization found above 250 keV by _INTEGRAL_, which requires a synchrotron emission component. A detailed polarimetric study of the source in the 100-500 keV region (the energy range in which the coronal and the jet components could have similar contributions) can confirm any separate jet component in the hard X-rays. 
Since the radio emission, believed to be originating from the jet, is known to change between different spectral states of Cygnus X-1, it is essential to have hard X-ray polarization measurements in different spectral states to decipher the underlying emission mechanisms. However, such state-dependent hard X-ray polarization measurements have not been possible so far. The Cadmium Zinc Telluride Imager (CZTI) is a moderately sensitive hard X-ray polarimeter in the 100-380 keV energy range. The polarization information is obtained by accurately identifying the Compton scattered events in the CZTI plane, which modulate the azimuthal angle distribution if the incident radiation is polarized. The capability of CZTI as a polarimeter has been demonstrated both in the laboratory before the launch of _AstroSat_ (Vadawale et al., 2015; Chattopadhyay et al., 2014) and in space with the measurement of the polarization of the Crab (Vadawale et al., 2018). Polarization measurements for a large sample of Gamma-ray Bursts (GRBs) have also been reported by Chattopadhyay et al. (2019) and Chattopadhyay et al. (2022). The CZTI polarimetry range (100-380 keV) bridges the gap between the _PoGO_+ and _INTEGRAL_ measurements and, therefore, can contribute significantly to understanding the emission mechanism in this energy range. If there is indeed a transition in the emission mechanism from corona to jet, that can be effectively probed by studying the energy-resolved polarization properties of the source with CZTI. For this study, we made three long targeted observations of Cygnus X-1 after the source transitioned to the hard state (hereafter ID2992, ID4646, and ID5146). We did a detailed polarization analysis of these three observations using the Compton events in CZTI. In Sec. 2, we give the details of the observations along with their spectral state determination. The polarization results are discussed in Sec. 3, followed by a brief description of the spectroscopic analysis and results in Sec. 4. In Sec. 5, we discuss the results in the context of the coronal and jet contributions to the global emission of Cygnus X-1 in different spectral states. ## 2 AstroSat observation of Cygnus X-1 Since the launch of _AstroSat_, Cygnus X-1 has been observed on several occasions. Many of them, however, are of short exposure, and some are in the soft state with very low hard X-ray flux, not suitable for polarization analysis. Hence, during the last few observation cycles, three long (\(>\)200 ks) observations (_AstroSat_ observations ID2992, ID4646, and ID5146), triggered by the transition of the source from soft to hard state, were undertaken. Details of the source and blank sky observations used for polarization analysis are given in Table 1. To identify the specific subclass of the spectral state, we followed a method described by Lubinski et al. (2020). Spectral analysis was carried out in the 30-100 keV energy range by fitting a powerlaw to the orbit-wise spectra, and a distribution of the orbit-wise fitted spectral index and flux (22-100 keV) was obtained for each of the three observations. From the distributions, we classify ID2992, ID4646, and ID5146 as Intermediate Hard State (IMH), Intermediate Soft State (IMS), and Pure Hard state (PH), respectively, as shown in Figure 1. Hereafter, we denote these observations as IMH2992, IMS4646, and PH5146, respectively. Details of the method for spectral state determination can be found in supplementary material A. 
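As a schematic illustration of this classification step, the helper below maps an orbit-wise (power-law index, 22-100 keV flux) pair to one of the six sub-states of Lubinski et al. (2020), using the index boundaries quoted in appendix A and the 75\(\times\)10\({}^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\) flux boundary between the hard and soft regimes. The decision structure and the handling of indices outside the tabulated ranges are a simplification for illustration, not code from the CZTI pipeline.

```python
def spectral_state(gamma: float, flux_22_100: float) -> str:
    """Classify an orbit by its 30-100 keV power-law index (gamma) and its
    22-100 keV flux, given in units of 1e-10 erg/cm^2/s."""
    if flux_22_100 > 75.0:        # hard (high-flux) regime: PH / TH / IMH
        if gamma <= 1.78:
            return "PH"
        if gamma <= 1.93:
            return "TH"
        return "IMH"              # indices above 2.29 are not tabulated; simplification
    else:                         # soft (low-flux) regime: IMS / TS / PS
        if gamma <= 2.29:
            return "IMS"
        if gamma <= 2.65:
            return "TS"
        return "PS"

# The average values from Table 1 reproduce the states assigned in the text:
print(spectral_state(2.06, 90.96))   # ID2992 -> "IMH"
print(spectral_state(2.14, 65.47))   # ID4646 -> "IMS"
print(spectral_state(1.75, 126.66))  # ID5146 -> "PH"
```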
## 3 Polarization analysis results We carried out a detailed polarization analysis for these three observations, following the steps described in Vadawale et al. (2018). Details of the CZTI polarization measurement methodology can be found in supplementary section B. The Azimuthal Scattering Angle Distribution (ASAD) and the contour plots for all the three observations are shown in Figure 2 (a: PH5146, b: IMS4646, c: IMH2992). The fitted modulations in ASAD for the PH and IMS states are low and not constrained even though the estimated minimum detectable polarizations (MDPs) for PH5146 and IMS4646 are low (\(<\)10 %). We also estimate the Bayes factor, which provides a statistical confirmation of the detection of polarization by comparing a sinusoidal polarized model to an unpolarized constant model fitted to the data. The low Bayes factors (\(<\)3, as described in appendix B), measured in both the cases, indicate no statistically significant detection of polarization in these two observations. Analysis of IMH2992, on the other hand, shows statistically significant polarization, with measured polarization fraction of 23\(\pm\)4 % in 100-380 keV, implying greater than 5\(\sigma\) detection for 1 parameter of interest at 68 % confidence level. The observed polarization angle projected in the sky plane is 236\(\pm\)12\({}^{\circ}\), which agrees with the _INTEGRAL_ results. The angle is \(\sim\)90\({}^{\circ}\) away from the _IXPE_ measured polarization angle in 2-10 keV. The contour plot on the right side of the figure shows that the polarization degree and the angle are well constrained at 68, 95, and 99 % confidence levels. The Bayes factor is also high (\(\sim\)733), confirming very high statistical significance. With such high detection significance for IMH2992, we explored the energy dependence of the polarization. Figure 2 (b), (d), and (e) show the modulation curves for IMH2992 in three energy ranges: 100-175, 175-230, and 230-380 keV. The signal in 100-175 keV is found to be unmodulated (Bayes factor \(<\)1). The signals at higher energies (175-230 keV and 230-380 keV), on the other hand, are found to be polarized (26\(\pm\)6 % at 228\(\pm\)13\({}^{\circ}\) and 39\(\pm\)9 % at 239\(\pm\)11\({}^{\circ}\) respectively) at \(>\)4\(\sigma\) level (Bayes factor of 33 and 254, respectively). We measure upper limits of polarization for the data in the first energy bin (100-175 keV) of IMH2992 and for the other two observations in 100-380 keV (see the rightmost column of table 1). Figure 3 shows the polarization fraction and angle of Cygnus X-1 in different spectral states (denoted by different symbols) from all available measurements till date (in different colors), including the _AstroSat_ CZTI measurements presented here as blue data points. Figure 1: Spectral states of Cygnus X-1 for the three _AstroSat_ CZTI observations. We fit the 30-100 keV CZTI mask-weighted spectrum for each orbit and measure the flux in 22-100 keV. The distribution of the fitted power law indices and the flux values are shown here. Each data point represents one orbit (\(\sim\)90 min long). The vertical lines are the power law index boundaries separating different spectral states, whereas the horizontal line separates the hard and the soft states as defined by Lubinski et al. (2020), thus segregating the index-flux plane into six spectral states - PH: pure hard, TH: transitional hard, IMH: intermediate hard, IMS: intermediate soft, TS: transitional soft, PS: pure soft. 
We find that ID5146 is a pure hard state (PH5146), whereas ID2992 and ID4646 belong to intermediate hard (IMH2992) and intermediate soft (IMS4646) states, respectively. The typical errors are shown in the legend. \begin{table} \begin{tabular}{l c c c c c c} \hline Observation ID & Exposure & RA/DEC & Duration & Power law & Flux [3] & PF/PA [4] \\ & (ks) & & & index [2] & \(\times 10^{-10}\) ergs/cm\({}^{2}\) & \\ & (ks) & & & (30-100 keV) & (22-100 keV) & \\ \hline 9000002992 & 333 & 299/35 & 15-21/06/2019 & 2.06\(\pm\)0.01 & 90.96\(\pm\)0.37 & 23\(\pm\)4 \%, 236\(\pm\)11\({}^{\circ}\) (100-380 keV) \\ & & & & & \(<\)15 \% (100-175 keV) \\ & & & & & 26\(\pm\)6 \%, 228\(\pm\)12\({}^{\circ}\) (175-230 keV) \\ & & & & & 39\(\pm\)9 \%, 239\(\pm\)11\({}^{\circ}\) (230-380 keV) \\ \hline 900004646 & 228 & 299/35 & 16-21/08/2021 & 2.14\(\pm\)0.02 & 65.47\(\pm\)0.36 & \(<\)12 \% (100-380 keV) \\ \hline 9000005146 & 336 & 299/35 & 15-23/05/2022 & 1.75\(\pm\)0.01 & 126.66\(\pm\)0.29 & \(<\)7 \% (100-380 keV) \\ \hline 900002210 [1] & 207 & 204/38 & 03/07/2018 & — & — & — \\ \hline \end{tabular} \end{table} Table 1: Summary of the Cygnus X-1 and the blank sky observations Figure 2: Results of polarization analysis with the left column showing results for the whole observations in 100-380 keV and right column showing the results for the energy resolved polarization analysis for IMH2992. Azimuthal Scattering Angle Distributions (ASAD) are shown on the left in each figure set. The sinusoidal fit is shown as a solid blue line, and 100 random MCMC iterations are shown as faint grey lines. The contour plots for polarization angle (in the detector plane - Det. Polarization angle) and polarization fraction for 68, 95, and 99 % confidence levels are shown in the right panels. X-ray emissions during PH5146 and IMS4646 are unpolarized or polarized at low levels in 100-380 keV, whereas in IMH2992, the emission is polarized at 23 % with polarization angle 236\({}^{\circ}\) in the sky projected plane. For IMH2992, the first energy bin is unpolarized. The measured polarization fractions (and angles) for the other two energy bins are 26\(\pm\)6 % (228\({}^{\circ}\)) and 39\(\pm\)9 % (239\({}^{\circ}\)), respectively. data of Cygnus X-1 encompass a few years of observation in total, we denote the results as averaged hard state, supposedly consisting of both pure hard and intermediate states. It can be seen that the CZTI measurements smoothly bridge the gap between the corona-dominated low energy (\(<\) 100 keV) measurements of low polarization and the high polarization measured by _INTEGRAL_. ## 4 Spectroscopic analysis results To investigate the spectral signatures of the polarization signal present in IMH, we undertake spectroscopic analysis of the source for these three observations. The broadband X-ray spectrum of the source can be typically characterized as a combination of a thermal accretion disk and emission from a Comptonizing medium, with the Comptonised emission often showing structure; e.g two components with different optical depths (Makishima et al., 2008) and different electron temperatures (Basak et al., 2017) or hybrid distribution of electrons (Zdziarski et al., 2017) in the hard state of the source. Since the effect of the polarized component is limited to X-rays beyond 100 keV, we need to investigate the hard X-ray spectrum in detail with minimal dependence of model on the biases from lower energies. 
Therefore, for spectral analysis, we focus only on modeling the CZTI spectrum in 30-190 keV and do not include data in softer X-rays from the other two _AstroSat_ instruments: the Soft X-ray Telescope (SXT) and the Large Area X-ray Proportional Counter (LAXPC). Torii et al. (2011) have investigated the hard X-ray spectrum (10-400 keV) of Cygnus X-1 in the low hard state at multiple epochs and are able to model the emission with Comptonization with reflection from cold matter with compPS (Poutanen & Svensson, 1996). Thus we use a similar formalism to model the CZTI spectrum in 30-190 keV. For each observation ID, we filter the raw event files following the standard CZTI data analysis pipeline procedure and generate clean event files. From the clean event files, we generate the background-subtracted source spectrum using an improved mask-weighting technique with updated calibration (Mithun et al., in prep), which is implemented in the cztbindata module of the CZTI data analysis pipeline version 3.0 and the associated CALDB1. Footnote 1: [http://astrosat-ssc.iucaa.in/cztiData](http://astrosat-ssc.iucaa.in/cztiData) In the top panel of Figure 4, we show the inherent differences in the shape of the spectrum in the three states, by computing the ratio of the respective spectrum to the Crab spectrum. This removes the instrumental effects and allows for comparison of the spectral slopes in a model-independent manner. The PH5146 state has the highest flux and shows the characteristics of thermal Comptonization spectra with a cutoff at \(\sim\)100 keV. In the other two states (IMH2992 and IMS4646), there is an indication of the cutoff energy being lower and the emergence of a power law component. Since we are restricted to energies above 30 keV, for spectral modeling, we fix all parameters that affect the spectrum below this energy (i.e. disk temperature and reflection parameters) to their typical values (Makishima et al., 2008) and assume a spherical geometry of the Comptonizing medium (Makishima et al., 2008; Torii et al., 2011). Thus the only parameters we constrain are the optical depth of the medium, the electron temperature, and the normalization. We find that all three observations can be described adequately with hard Comptonization (\(\chi^{2}\) of 101.1, 99.5 and 157.1 for 84 degrees of freedom in IMH2992, IMS4646, and PH5146 respectively) with the parameters consistent with those reported by Torii et al. (2011). The fitted parameter values are given in Table 2 in appendix C. The electron temperature in IMH is higher (\(\sim\)170 keV) than that observed in IMS (\(\sim\)130 keV) or PH (\(\sim\)84 keV), while the optical depths follow a reverse trend. Confidence intervals of the parameters are determined by Markov Chain Monte Carlo sampling of the parameter space using 50 walkers running for 10000 steps (after burning the initial 2000 steps before convergence). Figure 3: Polarization fraction (top panel) and angle (bottom panel) of Cygnus X-1 in different spectral states (Pure hard, Intermediate hard, Intermediate soft, Averaged hard, and Soft states) from all the available measurements – IXPE (Krawczynski et al., 2022), IBIS (Laurent et al., 2011) and SPI (Jourdain et al., 2012) on board _INTEGRAL_, _OSO-8_ (Long et al., 1980), PoGO+ (Chauvin et al., 2018) and _AstroSat_-CZTI. The polarization angle is measured from the local north to northeast in an anti-clockwise direction for all the instruments. 
When plotted against the observed energy of measurement, we see an apparent increase in the polarization fraction and a swing in the polarization angle at higher energies. The two distinct polarization angle distributions suggest different origins for the radiation below and above \(\sim\)200 keV. Laurent et al. (2011) have reported the presence of a powerlaw-like component in the INTEGRAL spectrum of Cygnus X-1 in its hard state. We test for the presence of a similar component in the CZTI spectra by including a powerlaw with a fixed slope and a variable normalization. The addition of the component does not change the fit statistics (reduced \(\chi^{2}\) is \(\lesssim\)1), but in the case of the IMH observation, it causes a significant change in the electron temperature. We keep the slope of the powerlaw component tied at 1.6 as it is representative of the typical synchrotron jet observed in the INTEGRAL observation (Laurent et al., 2011), and since we only want to test the presence of the component, computing its significance by estimating its normalization is sufficient. The model decomposition for the individual observations (for the second model: tbabs*(compps+powerlaw)) is shown in Figure 4. The model parameters are noted in Table 2 in appendix C. We note that the spectral modeling of the IMH state allows inclusion of a powerlaw component with reasonable constraints on its normalization, though with the inclusion of the powerlaw component, the electron temperature becomes similar to that of the IMS and PH states. The IMS and PH states allow inclusion of the power-law component, but with a much lower normalization that is consistent with zero within a few standard deviations. Based on the spectroscopic analysis, we conclude that there is a degeneracy in the spectral information in IMH2992, with both the Comptonization and Comptonization + powerlaw models being able to describe the spectrum. The latter configuration aligns better with the polarization results if we assume that the powerlaw component is the main contributor to the observed polarization at \(>\) 175 keV. Figure 4: The mask-weighted _AstroSat_-CZTI spectra normalized to Crab are shown in the top panel. The bottom panel shows the unfolded spectra fitted with tbabs*(compps+powerlaw), for the three observations. The compps corresponds to Comptonization and powerlaw corresponds to an additional spectral component. Here the index (\(\Gamma\)) of the powerlaw component is kept frozen in the analysis (see text for more details). The complete set of fitted spectral parameters for both the models, tbabs*(compps+powerlaw) and tbabs*compps, is given in Table 2 in appendix C. The measured relative contributions of the powerlaw (1.4\(\times\)10\({}^{-9}\) erg/cm\({}^{2}\) in 100-175 keV, 8.4\(\times\)10\({}^{-10}\) erg/cm\({}^{2}\) in 175-230 keV, and 9.2\(\times\)10\({}^{-10}\) erg/cm\({}^{2}\) in 230-380 keV) to the total flux (3.9\(\times\)10\({}^{-9}\) erg/cm\({}^{2}\) in 100-175 keV, 1.6\(\times\)10\({}^{-9}\) erg/cm\({}^{2}\) in 175-230 keV, and 1.4\(\times\)10\({}^{-9}\) erg/cm\({}^{2}\) in 230-380 keV) are consistent with the flux contributions expected from the observed polarization results within 1\(\sigma\) scatter of each other, assuming a 50 % maximum polarization from the synchrotron flux (more details in appendix C). ## 5 Summary and Discussions In this paper, we report new polarization measurements of Cygnus X-1 using the _AstroSat_-CZTI instrument in 100-380 keV. 
Polarization measurements were done in three different spectral states - pure hard (PH), intermediate hard (IMH), and intermediate soft states (IMS). In the PH and IMS states, we did not see any evidence of polarization (upper limit of \(\sim\)10 %). However, the IMH state was seen to have polarized emission with a fraction of 22 %, measured with more than 5\(\sigma\) significance at an angle around 236\({}^{\circ}\) (local north to east in anti-clockwise direction). Energy resolved analysis shows that the polarization increases with energy from no polarization at energies \(<\)175 keV to \(\sim\)40 % polarization at higher energies. It has been known for a long time that the spectral shape observed at high energies for Cygnus X-1 requires multiple components. The two distinct polarization angles measured at low and high energies (see Figure 3) strongly indicate the existence of a distinct spectral component at high energies, with an origin different from that in the putative corona (Cadolle Bel et al., 2006; Jourdain et al., 2014; Rahoui et al., 2011). The high polarization seen here strongly links this component to synchrotron radiation in an ordered magnetic field, possibly from the base of the jet. Further, finding high polarization confined only to the IMH state provides further clues to the origin of jets in Cygnus X-1. In the hard state, the different sub-classes (PH, IMH, and IMS) are configurations of the accretion disk dictated by accretion rate, location of disk truncation, and the strength of outflows and jets. IMH state is fascinating because, in this spectral state, maximum radio flux variation has been observed (Lubinski et al., 2020), and strong jets are expected to be formed. We also detect high polarization in this state. The evolution of polarization fraction with energy in this state along with the spectral analysis results, therefore, favors a scenario where the coronal and jet emission mechanisms co-exist in 100-380 keV energy range and intersect around 200 keV. In the PH and IMS spectral states, on the other hand, there is no evidence of polarization from the jet component, either in the energy-integrated or in the energy-resolved analysis. However, it is to be noted that steady radio emission is seen in both these states, although the flux and its variation are found to be low (Lubinski et al., 2020). This suggests that the jet component may be present in all three states with similar polarization properties, but in the PH and the IMS states, the X-ray emission is dominated by the Corona all the way to \(\sim\)400 keV. Synchrotron process in an ordered magnetic field in a jet represents the most probable way to produce highly polarized emission. However, as emphasized by Zdziarski et al. (2014), high polarization levels in X-rays (e.g. seen in _INTEGRAL_) require extreme conditions that are unlikely to occur in a steady-state jet. Recently, Russell and Shahbaz (2014) attempted multi-wavelength SED modeling of Cygnus X-1 spectral and polarimetric data (radio, IR, optical, and _INTEGRAL_ data in X-rays), based on synchrotron emission in an ordered magnetic field of a steady-state jet. They failed to explain the high-energy polarization angles (\(\sim\)60\({}^{\circ}\) away from the jet) reported by _INTEGRAL_, while the optical and IR emissions are polarized in the direction of the jet. 
One expects the intrinsic polarization angle to be wavelength independent when a single electron population in an optically thin jet is responsible for emission across the electromagnetic spectrum. These contradictions suggest an alternative possibility that the observed high polarization may result from some peculiar transient phenomena occurring mainly in the IMH state of the source. We speculate below a possible scenario. In transient black hole binaries, it is observed that sources traverse well-defined paths of state transitions starting from the low hard state (with the indication of a steady jet as evidenced by the radio emission) and then make a state transition to the soft State (Belloni et al., 2005). This transition, quite often, passes through the IMH state, and it is even suggested that during this transition, the source passes through a specific 'jet-line' where super-luminal jet ejections are observed to take place (Fender et al., 2004). Cygnus X-1 is a high-mass X-ray binary, and, in contrast to the black hole transients, it makes slow transitions and spends several days in each sub-state. It is quite conceivable that we are seeing the 'jet-line' transition in slow motion during the IMH state of Cygnus X-1. Hence, many of the assumptions of the steady-state jets, which failed to explain the high polarization and the different PA, may not be valid during a transient jet. Exploring transient jet formation, based on the constraints presented in this work, could enable us to understand the intricate disk-jet connection in black hole sources. Observationally, a detailed time-resolved multi-wavelength observation of Cygnus X-1 in the IMH state would be instrumental in understanding this enigmatic source. This publication uses data from the _AstroSat_ mission of the Indian Space Research Organisation (ISRO), archived at the Indian Space Science Data Centre (ISSDC). CZT-Imager is built by a consortium of institutes across India, including the Tata Institute of Fundamental Research (TIFR), Mumbai, the Vikram Sarabhai Space Centre, Thiruvananthapuram, ISRO Satellite Centre (ISAC), Bengaluru, Inter University Centre for Astronomy and Astrophysics, Pune, Physical Research Laboratory, Ahmedabad, Space Application Centre, Ahmedabad. Contributions from the vast technical team from all these institutes are gratefully acknowledged. Specifically, we would like to thank M. K. Hingar, A. P. K. Kutty, M. H. Patil, S. Sinha and Y. K. Arora (TIFR) for the CZT- Imager hardware fabrication; and K. S. Sarma, K. H. Navalgund, R. Pandiyan and K. Subbarao (ISAC) for project management and mission operation. The continued support from M. Annadurai and A. S. Kirankumar is gratefully acknowledged. ## Appendix A Determination of Spectral States CZTI data consists of a time-tagged event list with a time resolution of 20 \(\mu\)s which include the information of the CZTI quadrant, CZT detector module ID, pixel number, and PHA value for each event. CZTI data reduction pipeline takes this event list as an input and generate standard data products like light curves and spectra. For generation of background subtracted spectrum and light curves, CZTI analysis pipeline makes use of mask-weighting technique where the background is measured simultaneously considering the pixels' open fractions and effective areas. In order to identify the spectral state class of ID2992, ID4646, and ID5146, we followed the same technique prescribed by Lubinski et al. (2020). 
They have done a detailed spectral analysis of Cygnus X-1 using _INTEGRAL_ data spanning over fifteen years and categorised the states into hard and soft regimes based on the hard X-ray flux in 22-100 keV: hard state for flux above 75\(\times\)10\({}^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\) and soft state for flux below this value. Each regime is further categorised into pure, transitional and intermediate, totaling six states - pure hard (PH, \(\Gamma\leq 1.78\)), transitional hard (TH, \(1.78\leq\Gamma\leq 1.93\)), hard intermediate (IMH, \(1.93\leq\Gamma\leq 2.29\)), soft intermediate (IMS, \(1.93\leq\Gamma\leq 2.29\)), transitional soft (TS, \(2.29\leq\Gamma\leq 2.65\)), and pure soft (PS, \(\Gamma>2.65\)) states - based on the clustering of the data in the spectral index and flux density diagram. Spectral analysis similar to that of Lubinski et al. (2020) requires analysis of hourly or sub-hourly data (0.5-2 hours). Since each orbit of _AstroSat_ lasts for \(\sim 96\) minutes, we proceeded with spectral analysis of orbit-wise data. Each of the three observations is divided into separate orbit-wise cleaned event files by applying the orbit-wise Good Time Interval (GTI), which is obtained using a program written in Interactive Data Language (IDL). We exclude the South Atlantic Anomaly (SAA) regions for each orbit. The orbit-wise event files are then used to obtain the spectral and response files from the standard CZTI data reduction pipeline. The spectral analysis is carried out using xspec (Arnaud, 1996) for each orbit's files. The spectrum for each observation is fitted with a powerlaw model. The spectral indices for the orbits in 30-100 keV and the computed model flux in 22-100 keV obtained from the spectral fitting are plotted in Figure 1 for the three observations. The average values of the spectral indices and flux are given in table 1. Based on these values, we determine that ID5146 is a PH state, ID2992 is an IMH state, and ID4646 is an IMS state. ## Appendix B X-ray Polarimetry with CZT-Imager In the CZTI, polarization is estimated from the azimuthal scattering angle distribution (ASAD) of the Compton scattered photons (see Bernard et al., 2022; Chattopadhyay, 2021; Lei et al., 1997, for the Compton scattering technique details). The CZTI consists of a large pixelated detector plane (geometric area of 976 cm\({}^{2}\)) with a pixel size of 2.5 mm \(\times\) 2.5 mm and 5 mm thickness, and possesses considerable Compton scattering efficiency above 100 keV, making it suitable for Compton scattering polarimetry in 100-380 keV. The on-board electronics preserve simultaneous multi-pixel events and enable time-tagged transmission of individual events, thus providing polarization information on a routine basis. Here we briefly describe the polarization analysis steps (Vadawale et al., 2018). ## Selection of Compton Events The first step of the polarization analysis is to select valid Compton events. For each of the three individual observations, we removed the intervals of high background before and after the South Atlantic Anomaly passage in each orbit. We also removed all events from pixels that were classified as noisy or spectroscopically bad. Then, we extracted the double pixel events satisfying the Compton criteria, e.g., detected within a 20 \(\mu\)s time window in two adjacent pixels, with the energetics of the events satisfying Compton kinematics. For details, see Chattopadhyay et al. (2014); Vadawale et al. (2015). The Compton events are then used to obtain the source ASAD. 
## Background Subtraction It is important to consider an appropriate blank sky observation for the measurement of background. Since the CZTI mask and other support structures become increasingly transparent at energies beyond 100 keV, a bright X-ray source at a large off-axis angle, even up to 80\({}^{\circ}\), can interfere with the true background. The blank sky observations were taken from a region where the Crab and Cygnus X-1 are out of the open field of view of the CZTI. We also need to consider the effect of the earth X-ray albedo, which constitutes a large fraction of the hard X-ray background in the low earth orbit. Since CZTI is at the corner of the spacecraft with most instruments present only at one side of CZTI and albedo background comes from one side of the spacecraft, this may lead to an asymmetry in the background azimuthal scattering angle distribution. In order to minimize this effect, the blank sky observations were selected such that the relative orientation of the spacecraft during the background measurement is the same (\(\pm 5^{\circ}\)) as that during the source measurement. The same data cleaning process and Compton criteria are implemented to generate the background ASAD. Prior to the background ASAD subtraction, an important point to consider is that the background count rate changes within the duration of observation with a stable periodic nature. This results from the inclined orbit of _AstroSat_ where some of the orbits pass through the outskirts of the South Atlantic Anomaly giving an increase in count rate when the spacecraft is in these regions of the orbit. Because of the rotation of the earth, this phase of high count rate reappears every \(\sim\)24 hours and this has been seen in other _AstroSat_ instruments also (Antia et al., 2022) apart from CZTI (Kumar et al., 2021; Kumar et al., 2022). To correct for this effect, we try to match the phases of orbital variation of the count rate during background and Cygnus X-1 observations using a cross-correlation method (Kumar et al., 2021) and identify the common or phase-matched regions. These phase matched regions are used to correct for the long-term variation in data before background subtraction. ## Modulation Curve Fitting In the next step, we fit the ASAD to obtain polarization fraction and angle. Because of the non-uniformity in the solid angles subtended by the surrounding edge and corner pixels to the central scattering pixel, we see an unequal count rates in the edge and corner pixels, which is first corrected by normalizing it with an ASAD for 100 % unpolarized radiation in the same energy range. For Gamma-ray bursts, this is typically obtained from the Geant4 (Agostinelli et al., 2003) Monte Carlo simulation (Chattopadhyay et al., 2019, 2022). However, for ON-axis sources, the unpolarized ASAD is best obtained from the observed source azimuthal distribution by averaging the edge and corner pixels separately (Vadawale et al., 2015, 2018). The geometry-corrected modulation curves are fitted by a sinusoidal function, \(A\cos 2(\phi-\phi_{0}+\pi/2)+B\), to estimate the polarization angle in the detector plane (\(\phi_{0}\)) and the modulation amplitude (\(\mu=A/B\)). Errors on the raw ASAD of source and background observations are computed individually based on counting statistics and propagated to calculate the errors on the background-subtracted ASAD. 
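As a rough illustration of this fitting step (the actual analysis estimates the parameters with MCMC, as described next), a simple least-squares version on a synthetic eight-bin ASAD might look like the sketch below. The bin counts, their errors, and the value of \(\mu_{100}\) are placeholders, not CZTI data.

```python
import numpy as np
from scipy.optimize import curve_fit

def modulation(phi, A, B, phi0):
    """Sinusoidal ASAD model: A*cos(2*(phi - phi0 + pi/2)) + B."""
    return A * np.cos(2.0 * (phi - phi0 + np.pi / 2.0)) + B

# Eight azimuthal scattering-angle bins (one per surrounding pixel), in radians.
phi_bins = np.deg2rad(np.arange(0.0, 360.0, 45.0))

# Hypothetical geometry-corrected, background-subtracted ASAD counts and errors.
counts = np.array([980.0, 1040.0, 1010.0, 955.0, 985.0, 1045.0, 1005.0, 950.0])
errors = np.sqrt(counts)

popt, pcov = curve_fit(modulation, phi_bins, counts, sigma=errors,
                       p0=[30.0, counts.mean(), 0.0], absolute_sigma=True)
A, B, phi0 = popt
mu = abs(A) / B          # fitted modulation amplitude
mu_100 = 0.4             # illustrative placeholder for the Geant4-derived value
print(f"PA_det = {np.degrees(phi0) % 180:.1f} deg, PF = {mu / mu_100:.2f}")
```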
To estimate the values of the fitting parameters (A, B, \(\phi_{0}\)) and the uncertainties on them, we perform MCMC simulations for a large number (1 million) of iterations. For each iteration, the posterior probability is estimated based on randomly sampled model parameter values. At the end of the evolution chain, the modulation factor and polarization angle are estimated from the best-fit values of the parameters (\(A,B\) and \(\phi_{0}\)), while the uncertainties on them are computed from the distribution of the posterior probabilities of the parameters. The polarization fractions for each of these observations are estimated by normalizing the fitted \(\mu\) by the modulation factor expected for 100 % polarized radiation (\(\mu_{100}\)), which is obtained from Geant4 simulations in which an identical process for the selection of Compton events is followed. In order to confirm that an observation is statistically polarized, we estimate the Bayes factor for the sinusoidal model (M\({}_{1}\), for polarized photons) and a constant model (M\({}_{2}\), unpolarized photons) as the ratio of the marginal likelihoods of M\({}_{1}\) to M\({}_{2}\) (for more details, see Chattopadhyay et al., 2019). In the cases where the Bayes factor is greater than 3, we estimate the polarization fraction and angle from the fitted parameters (for example, for IMH2992, the Bayes factor is \(>\)3 in the full energy range and in 175-230 keV and 230-380 keV). If the Bayes factor is estimated to be less than 3 (e.g., PH5146, IMS4646, and IMH2992 in 100-175 keV), we estimate a polarization upper limit. ## Appendix C Spectral fit results The spectral data for the three observations (IMH2992, IMS4646, and PH5146) were fitted with two different models - tbabs*compps and tbabs*(compps+powerlaw). In Table 2, the fitted parameter values are summarized for the two models. From the fitted power law and Comptonization parameters for IMH2992, we computed the contribution of the power law to the total flux in the 100-175, 175-230, and 230-380 keV bands, respectively. Assuming a synchrotron origin of the power law, we computed the expected polarization fraction as a function of energy, assuming a maximum 50 % polarization fraction, and compared it with the observed polarization fractions as shown in Figure 5.
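The comparison above amounts to a standard dilution estimate. One plausible way to write the computation (this exact formula is our reading of the text, not quoted from it) is \[\mathrm{PF}_{\rm exp}(E)\simeq\Pi_{\rm max}\,\frac{F_{\rm powerlaw}(E)}{F_{\rm powerlaw}(E)+F_{\rm compps}(E)},\qquad\Pi_{\rm max}=0.5,\] i.e., a power-law component polarized at the assumed maximum of 50 % is diluted by the (assumed unpolarized) Comptonized flux in each of the 100-175, 175-230, and 230-380 keV bands, and the resulting \(\mathrm{PF}_{\rm exp}(E)\) is what is compared with the measured fractions in Figure 5.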
2306.03738
Finitistic Spaces with the Orbit Space FP^n x S^m
Let G = S^d, d = 0, 1 or 3, act freely on a finitistic connected space X. This paper gives the cohomology classification of X if a mod 2 or rational cohomology of the orbit space X/G is isomorphic to the product of a projective space and sphere FP^n x S^m, where F = R, C or H, respectively. For a free involution on X, a lower bound of covering dimension of the coincidence set of a continuous map f: X -> R^k is also determined.
Anju Kumari, Hemant Kumar Singh
2023-06-06T14:55:38Z
http://arxiv.org/abs/2306.03738v1
# Finitistic spaces with the orbit space \(\mathbb{FP}^{n}\times\mathbb{S}^{m}\) ###### Abstract. Let \(G=\mathbb{S}^{d}\), \(d=0,1\) or \(3\), act freely on a finitistic connected space \(X\). This paper gives the cohomology classification of \(X\) if a mod \(2\) or rational cohomology of the orbit space \(X/G\) is isomorphic to the product of a projective space and sphere \(\mathbb{FP}^{n}\times\mathbb{S}^{m}\), where \(\mathbb{F}=\mathbb{R},\mathbb{C}\) or \(\mathbb{H}\), respectively. For a free involution on \(X\), a lower bound of covering dimension of the coincidence set of a continuous map \(f:X\rightarrow\mathbb{R}^{k}\) is also determined. Key words and phrases: Free action; Finitistic space; Leray-Serre spectral sequence; Gysin sequence; Euler class 2010 Mathematics Subject Classification: Primary 55T10; Secondary 57S99 This paper is supported by the Science and Engineering Research Board (Department of Science and Technology, Government of India) with reference number EMR/2017/002192. classification of \(X\) is discussed for (i) \(X/G=L_{p}^{2n+1}\) or \(\mathbb{CP}^{n}\) and \(G=\mathbb{Z}_{p},p\) a prime or \(\mathbb{S}^{1}\)[20], and for \(X/G=\mathbb{HP}^{n}\) and \(G=\mathbb{S}^{3}\)[11]. We contribute to this question by determining the cohomology classification of a finitistic space \(X\) equipped with a free action of \(G=\mathbb{Z}_{2},\mathbb{S}^{1}\) or \(\mathbb{S}^{3}\) such that the orbit space \(X/G\) is a mod \(2\) or rational cohomology \(\mathbb{FP}^{n}\times\mathbb{S}^{m}\), where \(\mathbb{F}=\mathbb{R},\mathbb{C}\) or \(\mathbb{H}\), respectively. These results describe the converse of some results for \(G=\mathbb{Z}_{2}\) or \(\mathbb{S}^{1}\) actions proved by Dotzel et al. [5], and for \(G=\mathbb{S}^{3}\) actions proved in [10]. For a free \(\mathbb{Z}_{2}\)-space \(X\) and any space \(Y\), the coincidence set of a continuous map \(f:X\to Y\) is defined as \(A(f)=\{x\in X|f(x)=f(gx)\text{ for each }g\in\mathbb{Z}_{2}\}\). The classical Borsuk-Ulam Theorem states that for a continuous map \(f:\mathbb{S}^{n}\to\mathbb{R}^{k}\), the coincidence set \(A(f)=\{x\in\mathbb{S}^{n}|f(x)=f(-x)\}\) is nonempty, where \(\mathbb{S}^{n}\) is equipped with the antipodal action. Munkholm [14] shows that for a map \(f:\mathbb{S}^{n}\to M\), the topological dimension of the coincidence set \(A(f)\) is greater than or equal to \(n-k\), where \(M\) is a compact \(k\)-dimensional topological manifold, \(n>k\). Biasi et al. [1] extend this result to generalized manifolds. In this paper, we determine a lower bound on the covering dimension of the coincidence set \(A(f)\) of a continuous map \(f:X\to\mathbb{R}^{k}\) if \(H^{*}(X/G;\mathbb{Z}_{2})=\mathbb{Z}_{2}[a,b]/\langle a^{n+1},b^{2}\rangle\), where \(\deg a=1\) and \(\deg b=m\). ## 2. Preliminaries Let \(G\) be a compact Lie group and \(G\to E_{G}\to B_{G}\) be the universal principal \(G\)-bundle, where \(B_{G}\) is the classifying space. Suppose \(G\) acts freely on a finitistic space \(X\). The associated bundle \(X\hookrightarrow(X\times E_{G})/G\to B_{G}\) is a fibre bundle with fibre \(X\). Put \(X_{G}=(X\times E_{G})/G\). The bundle \(X\hookrightarrow X_{G}\to B_{G}\) is called the Borel fibration. Then there exists the Leray-Serre spectral sequence for the Borel fibration \(X\stackrel{{ i}}{{\hookrightarrow}}X_{G}\stackrel{{ \pi}}{{\rightarrow}}B_{G}\) which converges to \(H^{*}(X_{G})\) as an algebra, with \(E_{2}^{k,l}=H^{k}(B_{G};\mathcal{H}^{l}(X;R))\).
If \(B_{G}\) is simply connected, then the system of local coefficients on \(B_{G}\) is simple and the \(E_{2}\)-term becomes \[E_{2}^{k,l}=H^{k}(B_{G};R)\otimes H^{l}(X;R).\] We recall some results which are needed to prove our results: **Proposition 2.1**.: Let \(X\stackrel{{ i}}{{\hookrightarrow}}X_{G}\stackrel{{ \pi}}{{\rightarrow}}B_{G}\) be the Borel fibration. Suppose that the system of local coefficients on \(B_{G}\) is simple; then the edge homomorphisms \[\begin{array}{c}H^{k}(B_{G})\cong E_{2}^{k,0}\longrightarrow E_{3}^{k,0} \longrightarrow\cdots\longrightarrow E_{k}^{k,0}\longrightarrow E_{k+1}^{k,0} =E_{\infty}^{k,0}\subset H^{k}(X_{G}),\text{ and }\\ H^{l}(X_{G})\longrightarrow E_{\infty}^{0,l}=E_{l+1}^{0,l}\subset E_{l}^{0,l }\subset\cdots\subset E_{2}^{0,l}\cong H^{l}(X)\end{array}\] are the homomorphisms \[\pi^{*}:H^{k}(B_{G})\to H^{k}(X_{G})\text{ and }i^{*}:H^{l}(X_{G})\to H^{l}(X).\] For details about spectral sequences, we refer the reader to [12]. Let \(h:X_{G}\to X/G\) be the map induced by the \(G\)-equivariant projection \(X\times E_{G}\to X\). Then \(h\) is a homotopy equivalence [4]. All the cohomologies are Cech cohomology with coefficients in \(R\), where \(R=\mathbb{Q}\) or \(\mathbb{Z}_{2}\). Note that \(X\sim_{R}Y\) means \(H^{*}(X;R)\cong H^{*}(Y;R)\). For \(R=\mathbb{Q}\) and for \(G=\mathbb{S}^{1}\) or \(\mathbb{S}^{3}\), we assume that the associated sphere bundle \(G\hookrightarrow X\to X/G\) is orientable. **Proposition 2.2**.: ([17, 9]) Let \(G=\mathbb{S}^{1}\) or \(\mathbb{S}^{3}\) act freely on a finitistic space \(X\). If \(H^{i}(X;R)=0\) for all \(i>n\), then \(H^{i}(X/G;R)=0\) for all \(i>n\). Now, we recall the Gysin sequence of sphere bundles. **Proposition 2.3**.: Let \(G=\mathbb{S}^{d}\), \(d=0,1\) or \(3\), act freely on a finitistic space \(X\). The Gysin sequence of the sphere bundle \(G\hookrightarrow X\overset{p}{\rightarrow}X/G\) is: \[\cdots\to H^{i}(X/G)\overset{p_{i}^{*}}{\longrightarrow}H^{i}(X) \overset{\rho_{i}}{\longrightarrow}H^{i-d}(X/G)\overset{\cup}{\longrightarrow }H^{i+1}(X/G)\overset{p_{i+1}^{*}}{\longrightarrow}H^{i+1}(X)\rightarrow\cdots\] which starts with \[0\longrightarrow H^{d}(X/G)\overset{p_{d}^{*}}{\longrightarrow}H^{d}(X) \overset{\rho_{d}}{\longrightarrow}H^{0}(X/G)\overset{\cup}{\longrightarrow }H^{d+1}(X/G)\overset{p_{d+1}^{*}}{\longrightarrow}H^{d+1}(X)\longrightarrow\cdots\] where \(\cup:H^{i}(X/G)\to H^{i+d+1}(X/G)\) maps \(x\mapsto x\cup u\) and \(u\in H^{d+1}(X/G)\) denotes the Euler class of the sphere bundle. It is easy to observe that \(p_{i}^{*}\) is an isomorphism for \(0\leq i\leq d-1\) for \(d=1\) or \(3\). In this paper, we have considered finitistic spaces. These spaces were introduced by R. G. Swan [21] and have been discovered as relevant spaces for the study of cohomological aspects of transformation groups [2]. Recall that a space is said to be finitistic if it is paracompact, Hausdorff and every open cover of it has a finite-dimensional open refinement. All compact spaces and all finite-dimensional paracompact spaces are examples of finitistic spaces. Note that the space \(X=\prod\limits_{n=1}^{\infty}\mathbb{S}^{n}\times\mathbb{R}^{k}\) is an example of a finitistic space which is neither compact nor of finite covering dimension. Recall that \(H^{*}(\mathbb{RP}^{n}\times\mathbb{S}^{m};R)=R[a,b]/\langle a^{n+1},b^{2}\rangle\), where \(\deg a=1\) and \(\deg b=m\). ## 3. Main Theorems
Let \(G=\mathbb{S}^{d}\), \(d=0,1\) or \(3\), act freely on a finitistic space \(X\) having the mod \(2\) or rational cohomology of the product of spheres \(\mathbb{S}^{(d+1)n+d}\times\mathbb{S}^{m}\). It has been proved that one of the possibilities for the orbit space \(X/G\) is the mod \(2\) or rational cohomology \(\mathbb{FP}^{n}\times\mathbb{S}^{m}\), where \(\mathbb{F}=\mathbb{R},\mathbb{C}\) or \(\mathbb{H}\), respectively. Using techniques of the Gysin sequence of sphere bundles and the Leray-Serre spectral sequence of the Borel fibration, we discuss the converse of these statements. These results also describe the cohomology classification of a finitistic connected free \(G\)-space \(X\) whose orbit space is a product of a projective space and a sphere. First, we discuss free actions of \(G=\mathbb{S}^{3}\). **Theorem 3.1**.: Let \(G=\mathbb{S}^{3}\) act freely on a finitistic connected space \(X\) with \(X/G\sim_{R}\mathbb{HP}^{n}\times\mathbb{S}^{m}\), where \(R\) is \(\mathbb{Q}\) or \(\mathbb{Z}_{2}\). Then the cohomology algebra of \(X\) with coefficients in \(R\) is isomorphic to the cohomology algebra of one of the following: 1. \(\mathbb{S}^{m}\times\mathbb{S}^{4n+3}\); 2. \(\mathbb{S}^{3}\times\mathbb{S}^{m}\times\mathbb{HP}^{n}\); 3. \(\mathbb{HP}^{n}\times\mathbb{S}^{7}\) and \(m=4\); 4. \(R[x,y]/\langle x^{n+1},y^{4}\rangle\), where \(\deg x=4\), \(\deg y=3\), \(m=6\), \(\beta(y)=x\) and \(R=\mathbb{Z}_{2}\), where \(\beta:H^{3}(X/G)\to H^{4}(X/G)\) denotes the Bockstein homomorphism associated with the coefficient sequence \(0\to\mathbb{Z}_{2}\to Z_{4}\to\mathbb{Z}_{2}\to 0\). Proof.: It is clear that \(H^{i}(X/G)\cong H^{i}(X)\) for \(i=0,1\) and \(2\), and \(H^{i}(X)=0\) for all \(i>m+4n+3\). We consider the different possibilities for the Euler class in the Gysin sequence: **Case(a):** When \(\cup:H^{0}(X/G)\to H^{4}(X/G)\) is trivial. First, assume that \(m>4n\). In this case, \(\cup:H^{i}(X/G)\to H^{i+4}(X/G)\) is trivial for all \(i\geq 0\). For \(0\leq i<n\) and \(k=0,m\), we get that \(\rho_{k+4i+3}\) and \(p_{k+4i+4}^{*}\) are isomorphisms. This implies that \(H^{k+4i+3}(X)\cong H^{k+4i+4}(X)\cong R\). Suppose that \(\{u_{4i+3}\}\) and \(\{x^{i+1}\}\) denote bases for \(H^{4i+3}(X)\) and \(H^{4i+4}(X)\), respectively, where \(\rho_{4i+3}(u_{4i+3})=a^{i}\) and \(p_{4}^{*}(a)=x\). Note that \(H^{i}(X)=0\) for \(i\neq 4n+3\) and \(4n<i<m\). If \(m\neq 4n+3\), then by the exactness of the Gysin sequence, \(H^{4n+3}(X)\cong H^{m}(X)\cong R\) with bases \(\{u_{4n+3}\}\) and \(\{y\}\), respectively; and if \(m=4n+3\) then \(H^{4n+3}(X)\cong R\oplus R\) with basis \(\{u_{4n+3},y\}\), where \(\rho_{4n+3}(u_{4n+3})=a^{n}\) and \(p_{m}^{*}(b)=y\). Also, \(H^{m+4n+3}(X)\cong R\). Let \(\{v_{m+4i+3}\}\) be a basis for \(H^{m+4i+3}(X)\) with \(\rho_{m+4i+3}(v_{m+4i+3})=a^{i}b\) for all \(0\leq i\leq n\). Clearly, \(H^{j}(X)=0\) for all \(k\leq j\equiv k+1\) or \(k+2(\text{mod }4)<k+4n\); and \(k=0,m\); and \(j\neq 4n+3\). Note that \(u_{3}^{2}=0\). Now we assume that \(m\leq 4n\). We consider the four cases \(m\equiv j(\text{mod }4),0\leq j\leq 3\). \(\mathbf{m\equiv 0}\)**(mod 4):** Let \(m=4i_{0}\) for some \(i_{0}\leq n\). In this case, \(H^{i}(X)=0\) for \(i\equiv 1(\text{mod }4)\) or \(i\equiv 2(\text{mod }4)\). Also, \(\rho_{4i-1}\) and \(p_{4i}^{*}\) are isomorphisms for all \(i\geq 0\).
This implies that for each \(0\leq i<i_{0}\), we have \(H^{4i}(X)\cong R\) and \(H^{4i+3}(X)\cong R\) with bases \(\{x^{i}\}\) and \(\{u_{4i+3}\}\), respectively, and for \(i_{0}\leq i\leq n\), \(H^{4i}(X)\cong R\oplus R\) and \(H^{4i+3}(X)\cong R\oplus R\) with bases \(\{x^{i},x^{i-i_{0}}y\}\) and \(\{u_{4i+3},v_{4i+3}\}\), respectively, where \(p_{m}^{*}(b)=y,p_{4}^{*}(a)=x,\rho_{4i+3}(u_{4i+3})=a^{i}\) and \(\rho_{4i+3}(v_{4i+3})=a^{i-i_{0}}b\). Also, for \(1\leq i\leq i_{0}\), \(H^{4n+4i}(X)\cong R\) and \(H^{4n+4i+3}(X)\cong R\) with bases \(\{x^{n+i-i_{0}}y\}\) and \(\{v_{4n+4i+3}\}\), respectively, where \(\rho_{4n+4i+3}(v_{4n+4i+3})=a^{n+i-i_{0}}b\). Clearly, \(u_{3}^{2}=0\). Thus, \[H^{i}(X)=\begin{cases}R&\text{ if }j\leq i\equiv 0\text{ or }3(\text{mod }4)\leq m-1+j,j=0\text{ or }4n+4\\ R\oplus R&\text{ if }m\leq i\equiv 0\text{ or }3(\text{mod }4)\leq 4n+m+3\\ 0&\text{ otherwise.}\end{cases}\] \(\mathbf{m\equiv 1}\)**(mod 4):** Let \(m=4i_{0}+1\) for some \(i_{0}<n\). For \(i\neq i_{0}+j,1\leq j\leq n+1\), we get \(p_{4i}^{*}\) are isomorphisms. This implies that \(H^{4i}(X)\cong R\) with basis \(\{x^{i}\}\) for \(0\leq i\leq i_{0}\), where \(p_{4}^{*}(a)=x\). For \(i_{0}+1\leq i\leq n\), \(H^{4i}(X)\cong R\oplus R\) with basis \(\{x^{i},v_{4i}\}\); and for \(n+1\leq i\leq i_{0}+n+1\), \(H^{4i}(X)\cong R\) with basis \(\{v_{4i}\}\), where \(\rho_{4i}(v_{4i})=a^{i-i_{0}-1}b\). It is clear that \(H^{4i+2}(X)=0\) for all \(i\geq 0\); \(H^{4i+3}(X)\cong R\cong H^{m+4i}(X)\), for all \(0\leq i\leq n\), with bases \(\{u_{4i+3}\}\) and \(\{x^{i}y\}\), where \(\rho_{4i+3}(u_{4i+3})=a^{i}\) and \(p_{m}^{*}(b)=y\); and \(H^{4i+3}(X)=H^{4i+1}(X)=0\) otherwise. Obviously, \(u_{3}^{2}=0\). We have \[H^{i}(X)=\begin{cases}R&\text{ if }j\leq i\equiv 0(\text{mod }4)\leq j+m-1,j=0 \text{ or }4n+4,\text{ or }\\ &\text{ }j\leq i\equiv j(\text{mod }4)\leq j+4n,j=3\text{ or }m\\ R\oplus R&\text{ if }m<i\equiv 0(\text{mod }4)\leq 4n\\ 0&\text{ otherwise.}\end{cases}\] \(\mathbf{m\equiv 2}\)**(mod 4):** Let \(m=4i_{0}+2\) for some \(i_{0}<n\). We get \(\rho_{4i+3}\) and \(p_{4i}^{*}\) are isomorphisms, for all \(i\geq 0\). Therefore, for \(0\leq i\leq n\), \(H^{4i+3}(X)\cong H^{4i}(X)\cong R\) with bases \(\{u_{4i+3}\}\) and \(\{x^{i}\}\), respectively, and for \(i>n\), \(H^{4i+3}(X)=H^{4i}(X)=0\), where \(\rho_{4i+3}(u_{4i+3})=a^{i}\) and \(p_{4}^{*}(a)=x\). Also, we have \(H^{4i+1}(X)=0\), for \(i\neq i_{0}+j,1\leq j\leq n+1\), and for \(i_{0}+1\leq i\leq i_{0}+n+1\), \(H^{4i+1}(X)\cong H^{4i-2}(X)\cong R\) with bases \(\{v_{4i+1}\}\) and \(\{x^{i-i_{0}-1}y\}\) where \(\rho_{4i+1}(v_{4i+1})=a^{i-i_{0}-1}b\) and \(p_{m}^{*}(b)=y\). Consequently, \[H^{i}(X)=\begin{cases}R&\text{ if }j\leq i\equiv j\text{ or }j+3(\text{mod }4)\leq j+4n+3,j=0\text{ or }m\\ 0&\text{ otherwise.}\end{cases}\] \(\mathbf{m\equiv 3(mod\ 4)}\)**:** Let \(m=4i_{0}+3\) for some \(i_{0}<n\). Then \(H^{4i}(X)=0\) for \(i>n\), and \(H^{4i}(X)\cong R\) with basis \(\{x^{i}\}\), for \(0\leq i\leq n\), where \(p_{4}^{*}(a)=x\). Also, \(H^{4i+1}(X)=0\) for all \(i\geq 0\). By the exactness of the Gysin sequence, for \(i_{0}+1\leq i\leq i_{0}+n+1\), \(H^{4i+2}(X)\cong R\) with basis \(\{v_{4i+2}\}\), where \(\rho_{4i+2}(v_{4i+2})=a^{i-i_{0}-1}b\), and \(H^{4i+2}(X)=0\), otherwise. 
Now, for \(0\leq i<i_{0}\), \(H^{4i+3}(X)\cong R\) with basis \(\{u_{4i+3}\}\); for \(i_{0}\leq i\leq n\), \(H^{4i+3}(X)\cong R\oplus R\) with basis \(\{u_{4i+3},x^{i-i_{0}}y\}\); and for \(n<i\leq i+i_{0}\), \(H^{4i+3}(X)\cong R\) with basis \(\{x^{i-i_{0}}y\}\), where \(\rho_{4i+3}(u_{4i+3})=a^{i}\) and \(p_{m}^{*}(b)=y\). Clearly, \(u_{3}^{2}=0\). We have \[H^{i}(X)=\begin{cases}R&\text{ if }j\leq i\equiv j(\text{mod }4)\leq j+4n,j=0 \text{ or }m+3,\text{ or }\\ &\text{ }j+3\leq i\equiv 3(\text{mod }4)\leq j+m-4,j=0\text{ or }4n+4\\ R\oplus R&\text{ if }m\leq i\equiv 3(\text{mod }4)\leq 4n+3\\ 0&\text{ otherwise.}\end{cases}\] Consider the Leray-Serre spectral sequence for the Borel fibration \(X\xleftrightarrow{i}X_{G}\stackrel{{\pi}}{{\rightarrow}}B_{G}\) which converges to \(H^{*}(X_{G})\) as an algebra and \(E_{2}^{k,l}=H^{k}(B_{G};R)\otimes H^{l}(X;R)\) for all \(k,l\geq 0\). The possible nontrivial differentials in the Leray-Serre spectral sequence for the Borel fibration \(X\xleftrightarrow{i}X_{G}\stackrel{{\pi}}{{\rightarrow}}B_{G}\) are \(\{d_{4r}\}_{r\geq 1}\). By the edge homomorphisms and the fact that \(p_{j}^{*}=i^{*}\circ h^{*}\) for all \(j\geq 0\), we get \(d_{r}(1\otimes x)=0=d_{r}(1\otimes y)\) for all \(r\geq 0\). Note that \(u_{4i+3}\) and \(v_{m+4i+3}\) are not in image of \(p_{4i+3}^{*}\) and \(p_{m+4i+3}^{*}\), respectively, so their image must be nonzero under some differential for all \(0\leq i\leq n\). Consequently, \(x^{i}u_{3}\neq 0\) and \(x^{i}u_{3}y\neq 0\), for all \(i\). Obviously, \(x^{n+1}=y^{2}=0\). For \(m>4n\), it is clear that \(u_{4i+3}=\alpha_{i}x^{i}u_{3}\) and \(v_{m+4i+3}=\beta_{i}x^{i}u_{3}y\) for some nonzero elements \(\alpha_{i},\beta_{i}\) in \(R\) and \(0\leq i\leq n\). In particular, for \(m=4n+3\), \(x^{n}u_{3}\) can not be equal to any multiple of \(y\) and so \(u_{4n+3}\) is generated by \(\{y,x^{n}u_{3}\}\). Now, suppose \(m\leq 4n\). If \(m=4i_{0}\), for some \(i_{0}\leq n\) then \(u_{4i+3}=\alpha_{i}x^{i}u_{3}\) for \(0\leq i<i_{0}\) and \(v_{4i+3+m}=\beta_{i}x^{i}u_{3}y\) for \(n+1-i_{0}\leq i\leq n\), where \(\alpha_{i}\)'s and \(\beta_{i}\)'s are nonzero elements in \(R\). We observe that \(x^{i}u_{3}\) can not be equal to any multiple of \(x^{i-i_{0}}u_{3}y\), for all \(i_{0}\leq i\leq n\). Therefore, the elements \(u_{4i+3}\) and \(v_{4i+3}\) are generated by \(\{x^{i}u_{3},x^{i-i_{0}}u_{3}y\}\) for each \(i_{0}\leq i\leq n\). If \(m=4i_{0}+1\) for some \(i_{0}<n\) then the elements \(x^{i}u_{3}y\) cannot be equal to any multiple of \(x^{4i+3+m}\) for each \(0\leq i\leq n-i_{0}+1\). Consequently, \(v_{4i+m+3}\) is generated by \(\{x^{i}u_{3}y,x^{4i+m+3}\}\) and \(u_{4i+3}\) is generated by \(\{x^{i}u_{3}\}\) for all \(0\leq i\leq n\). If \(m=4i_{0}+2\) for some \(i_{0}<n\) then we have \(u_{4i+3}=\alpha_{i}x^{i}u_{3}\) and \(v_{m+4i+3}=\beta_{i}x^{i}u_{3}y\) for some nonzero elements \(\alpha_{i},\beta_{i}\) in \(R\) and \(0\leq i\leq n\). In this case, \(u_{3}^{2}\) may be both zero or nonzero. If \(m=4i_{0}+3\) then \(v_{m+4i+3}=\alpha_{i}x^{i}u_{3}y\), where \(\alpha_{i}\in R\), for all \(0\leq i\leq n\). As \(x^{i}u_{3}\) can not be equal to \(x^{i-i_{0}}y\), for all \(i_{0}\leq i\leq n\), therefore, \(u_{4i+3}\) is generated by \(\{x^{i}u_{3},x^{i-i_{0}}y\}\), for all \(0\leq i\leq n\). For \(m\neq 3\), obviously \(u_{3}^{2}=0\). If \(u_{3}^{2}\neq 0\), for \(m=3\), then \(d_{4}(1\otimes v_{6})=0\), a contradiction. Therefore, \(u_{3}^{2}=0\). 
The cohomology algebra of \(X\) is given by \[H^{*}(X)=R[x,u_{3},y]/\langle x^{n+1},u_{3}^{2},y^{2}\rangle,\] where \(\deg x=4\), \(\deg u_{3}=3\) and \(\deg y=m\). This realizes the possibility (ii). If \(u_{3}^{2}\neq 0\) then \(m\) must be \(6\) and \(u_{3}^{2}=\alpha y\) for some \(\alpha\) nonzero in \(R\). By the commutativity of cup product, we get \(2u_{3}^{2}=0\). This implies that \(R\) cannot be \(\mathbb{Q}\). Therefore, \[H^{*}(X)\cong R[x,u_{3}]/\langle x^{n+1},u_{3}^{4}\rangle,\] \(\deg x=4\) and \(\deg u_{3}=3\) only when \(R=\mathbb{Z}_{2}\). By the properties of Steenrod squares \(Sq^{3}(y)=y^{2}\neq 0\). As \(3\) is not a power of \(2\), we get \(Sq^{3}=Sq^{2}\circ Sq^{1}+Sq^{1}\circ Sq^{2}\). This gives that \(Sq^{1}(y)=x\). Note that \(Sq^{1}\) is the Bockstein homomorphism associated with the coefficient sequence \(0\rightarrow\mathbb{Z}_{2}\to Z_{4}\rightarrow\mathbb{Z}_{2}\to 0\). This realizes the possibility (iv) of the Theorem. **Case(b):** When \(\cup:H^{0}(X/G)\to H^{4}(X/G)\) maps \(1\) to \(ca\) for some \(c\neq 0\) in \(R\). First, we suppose that \(m>4n\). For \(0\leq i<n\) and \(j=0,m\), we get \(\cup:H^{4i+j}(X/G)\to H^{4i+4+j}(X/G)\) are isomorphisms; \(\rho_{4i+3+j}\) and \(p_{4i+4+j}^{*}\) are trivial homomorphisms. Consequently, \(H^{4i+3+j}(X)=H^{4i+4+j}(X)=0\). Also, \(H^{4i+1+j}(X)=H^{4i+2+j}(X)=0\) except for \(H^{4n+3}(X)\). Note that \(H^{i}(X)=0\) if (\(4n<i<m\) and \(i\neq 4n+3\)) and \(H^{m+4n+3}(X)\cong R\) with basis \(\{u_{m+4n+3}\}\), where \(\rho_{m+4n+3}(u_{m+4n+3})=a^{n}b\). For \(m\neq 4n+3\), \(\rho_{4n+3}\) and \(p_{m}^{*}\) are isomorphisms. Consequently, \(H^{4n+3}(X)\cong R\cong H^{m}(X)\) with bases \(\{u_{4n+3}\}\) and \(\{y\}\), respectively, where \(p_{m}^{*}(b)=y\) and \(\rho_{4n+3}(u_{4n+3})=a^{n}\). If \(m=4n+3\) then \(H^{4n+3}(X)\cong R\oplus R\) with basis \(\{u_{4n+3},y\}\), where \(\rho_{4n+3}(u_{4n+3})=a^{n}\) and \(p_{m}^{*}(b)=y\). Also, \(H^{m+4n+1}(X)=H^{m+4n+2}(X)=0\). For \(m\neq 4n+3\), we get \[H^{i}(X)=\begin{cases}R&\text{ if }i=0,4n+3,m,m+4n+3\\ 0&\text{ otherwise}\end{cases}\] and for \(m=4n+3\), we get \[H^{i}(X)=\begin{cases}R\oplus R&\text{ if }i=4n+3\\ R&\text{ if }i=0,m+4n+3\\ 0&\text{ otherwise.}\end{cases}\] For \(m\leq 4n\), we get the same cohomology groups as in \(m>4n\). Now, we compute the cohomology ring structure of \(X\). In the Leray-Serre spectral sequence for the Borel fibration \(X\hookrightarrow X_{G}\overset{\pi}{\to}B_{G}\), \(d_{4r^{\prime}}(1\otimes u_{4n+3})\neq 0\) for some \(r^{\prime}>0\) and \(d_{4r}(1\otimes y)=0\) for all \(r\geq 0\). Firstly, suppose that \(m=4i_{0}\) and \(r^{\prime}=n-i_{0}+1\), where \(1\leq i_{0}\leq n\), then \(E_{4n-m+5}^{k,4n+3}=0,E_{4n-m+5}^{k,q}=E_{2}^{k,q}\) for all \(k\geq 0\) and \(q=0,m+4n+3\). Also, \(E_{4n-m+5}^{k,m}=E_{2}^{k,m}\) for all \(k\leq 4n-m\) and trivial otherwise. Since \(G\) acts freely on \(X\), therefore, \(d_{m+4n+4}(1\otimes u_{m+4n+3})=ct^{n+i_{0}+1}\otimes 1\) for some \(c\neq 0\) in \(R\). Then \(E_{\infty}^{4k,q}=R\) for all \((4k\leq m+4n,q=0)\), \((4k\leq 4n-m,q=m)\) and trivial otherwise. This implies that \(t\otimes 1\in E_{2}^{4,0}\) and \(1\otimes y\in E_{2}^{0,m}\) are permanent cocycles. Then by edge homomorphism there exist \(u\in E_{\infty}^{4,0}\) and \(w\in E_{\infty}^{0,m}\) corresponding \(t\otimes 1\) and \(1\otimes y\) respectively with \(\pi^{*}(t)=u\). We have \(w^{2}=u^{n+i_{0}+1}=u^{n-i_{0}+1}w=0\). 
This implies that \[\operatorname{Tot}E_{\infty}^{*,*}\cong\mathbb{R}[u,w]/\langle w^{2},u^{n+i_ {0}+1},u^{n-i_{0}+1}w\rangle,\] where \(\deg u=4\text{ and }\deg w=m\). Then there exist an element \(v\in H^{m}(X_{G})\) corresponding to \(w\in E_{\infty}^{0,m}\) such that \(i^{*}(v)=y\). We have \(u^{n-i_{0}+1}v=\alpha u^{n+1}\) and \(v^{2}=\beta u^{2i_{0}}+\gamma u^{i_{0}}v\), where \(\alpha,\beta,\gamma\in R\) and \(\beta=0\) if \(m>4n\). So, the ring cohomology of \(X/G\) is given by \(H^{*}(X_{G})\cong R[u,v]/\langle u^{n-i_{0}+1}v-\alpha u^{n+1},v^{2}-\beta u^ {2i_{0}}-\gamma u^{i_{0}}v,u^{n+i_{0}+1}\rangle\) where \(\deg u=4,\deg v=m\) and \(\alpha,\beta,\gamma\in R\), \(\beta=0\) if \(m>4n\), which is a contradiction. So, \(r^{\prime}\) must be \(n+1\). This implies that \(d_{4n+4}(1\otimes yu_{4n+3})\neq 0\). Consequently, \(u_{4n+m+3}=\alpha yu_{4n+3}\) for some \(\alpha\neq 0\) in \(R\). Obviously, \(y^{2}=0\), and \(u_{4n+3}^{2}=0\) for \(m\not\in\{4n+3,8n+6\}\). If \(m=4n+3\) then \(u_{4n+3}^{2}\neq\beta u_{4n+m+3}\) for any \(\beta\) in \(R\) and we get \(u_{4n+3}^{2}=0\). If \(m=8n+6\) then \(u_{4n+3}^{2}\) may be both zero or nonzero. Thus, \[H^{*}(X,R)=R[y,u_{4n+3}]/\langle y^{2},u_{4n+3}^{2}\rangle,\] where \(\deg y=m\) and \(\deg u_{4n+3}=4n+3\). This realizes possibility (i). If \(u_{4n+3}^{2}\neq 0\) then \(u_{4n+3}=\alpha y\) for some nonzero element \(\alpha\in R\). Thus, the cohomology algebra of \(X\) is given by \(R[u_{4n+3}]/\langle u_{4n+3}^{4}\rangle\), where \(\deg u_{4n+3}=4n+3\). By [8, Theorem 4L.9], this cohomology algebra is not possible for \(R=\mathbb{Z}_{2}\) and by the commutativity of cup product this cohomology algebra is also not possible for \(R=\mathbb{Q}\). **Case(c):** When \(\cup:H^{0}(X/G)\to H^{4}(X/G)\) maps \(1\) to \(cb\), where \(c\neq 0\) in \(R\). In this case \(m\) must be \(4\), and we get \(H^{4i}(X)\cong R\) with basis \(\{x^{i}\}\), where \(p_{4}^{*}(a)=x\). Clearly, \(H^{3}(X)=H^{4i+1}(X)=H^{4i+2}(X)=0\) for all \(i\geq 0.\) As \(\ker\cup:H^{4i}(X/G)\to H^{4i+4}(X/G)\) is generated by \(a^{i-1}b\), we have \(H^{4i+3}(X)\cong R\) with basis \(\{u_{4i+3}\}\), where \(\rho_{4i+3}(u_{4i+3})=a^{i-1}b\) for all \(1\leq i\leq n+1.\) We must have \(d_{4}(1\otimes u_{7})=0\) and \(d_{8}(1\otimes u_{7})\neq 0.\) Consequently, \(u_{4i+7}=c_{i}x^{i}b_{7}\) for some nonzero \(c_{i}\in R,\) for all \(1\leq i\leq n.\) Therefore, \(H^{*}(X)\cong R[x,u_{7}]/\langle u_{7}^{2},x^{n+1}\rangle,\) where \(\deg x=4\) and \(\deg u_{7}=7.\) This realizes the possibility (iii). **Case(d):** When \(\cup:H^{0}(X/G)\to H^{4}(X/G)\) maps \(1\) to \(ca+c^{\prime}b,\) where \(c,c^{\prime}\neq 0\) in \(R.\) In this case also \(m\) must be \(4\) and \(H^{4}(X)\cong R\) with basis \(\{x\}\), where \(p_{4}^{*}(a)=x\). 
By the exactness of the Gysin sequence, we get \(H^{3}(X)=H^{4i+1}(X)=H^{4i+2}(X)=0\) for all \(i\geq 0.\) As \(\cup:H^{4i}(X/G)\to H^{4i+4}(X/G)\) is an isomorphism, we get \(H^{4i+3}(X)=H^{4i+4}(X)=0\) for all \(0<i<n.\) Note that \(H^{4n+4}(X)=0\) and \(\ker\cup:H^{4n}(X/G)\to H^{4n+4}(X/G)\) is generated by \(\{a^{n-1}b-\frac{c}{c^{\prime}}a^{n}\}.\) This implies that \(H^{4n+3}(X)\cong R\) with basis \(\{u_{4n+3}\}\), where \(\rho_{4n+3}(u_{4n+3})=a^{n-1}b-\frac{c}{c^{\prime}}a^{n}.\) Obviously, \(H^{4n+7}(X)\cong R\) with basis \(\{u_{4n+7}\}\), where \(\rho_{4n+7}(u_{4n+7})=a^{n}b.\) Using similar arguments as in case(b), \(H^{*}(X)\cong R[x,u_{4n+3}]/\langle u_{4n+3}^{2},x^{2}\rangle,\) where \(\deg x=4\) and \(\deg u_{4n+3}=4n+3.\) This realizes the possibility (i). Next, we discuss similar result for circle actions with the orbit space product of a complex projective space and sphere: **Theorem 3.2**.: Let \(G=\mathbb{S}^{1}\) act freely on a finitistic connected space \(X\) with \(X/G\sim_{R}\mathbb{CP}^{n}\times\mathbb{S}^{m}\), where \(R\) is \(\mathbb{Q}\) or \(\mathbb{Z}_{2}.\) The cohomology algebra of \(X\) with coefficients in \(R\) is isomorphic to the cohomology algebra of one of the following: * \(\mathbb{S}^{m}\times\mathbb{S}^{2n+1};\) * \(\mathbb{S}^{1}\times\mathbb{S}^{m}\times\mathbb{CP}^{n};\) * \(\mathbb{RP}^{2n+1}\times\mathbb{S}^{m};\) * \(R[x,y]/\langle x^{n+1},y^{2}+\alpha x^{3}\rangle,\) where \(\deg x=2,\deg y=3,m=2,\)\(\alpha\in R\) and \(\alpha=0\) for \(R=\mathbb{Q}.\) Proof.: As \(X\) is connected, \(H^{0}(X/G)\cong H^{0}(X),\) and clearly, \(H^{i}(X)=0\) for all \(i>m+2n+1.\) We consider the following cases: **Case(a):** When \(\cup:H^{0}(X/G)\to H^{2}(X/G)\) is trivial. First assume that \(m>2n.\) In this case, \(\cup:H^{i}(X/G)\to H^{i+2}(X/G)\) is trivial for all \(i\geq 0.\) Then \(\rho_{k+2i+1}\) and \(p_{k+2i+2}^{*}\) are isomorphisms for all \(0\leq i<n\) and \(k=0,m.\) This implies that \(H^{k+2i+1}(X)\cong H^{k+2i+2}(X)\cong R.\) Suppose that \(\{u_{2i+1}\}\) and \(\{x^{i+1}\}\) denotes the bases for \(H^{2i+1}(X)\) and \(H^{2i+2}(X),\) respectively, where \(\rho_{2i+1}(u_{2i+1})=a^{i}\) and \(p_{2}^{*}(a)=x.\) Also, \(H^{m+2n+1}(X)\cong R.\) Let \(\{v_{m+2i+1}\}\) be basis for \(H^{m+2i+1}(X)\) with \(\rho_{m+2i+1}(v_{m+2i+1})=a^{i}b\) for all \(0\leq i\leq n.\) Note that \(H^{i}(X)=0,\) for all \(2n+1<i<m.\) If \(m\neq 2n+1,\) then by the exactness of Gysin sequence, \(H^{2n+1}(X)\cong H^{m}(X)\cong R\) with bases \(\{u_{2n+1}\}\) and \(\{y\},\) respectively; and if \(m=2n+1\) then \(H^{2n+1}(X)\cong R\oplus R\) with basis \(\{u_{2n+1},y\},\) where \(\rho_{2n+1}(u_{2n+1})=a^{n}\) and \(p_{m}^{*}(b)=y.\) Now, suppose that \(m\leq 2n.\) We consider two cases when \(m\) is even or odd: **m is odd**: Let \(m=2i_{0}+1\) for some \(i_{0}<n.\) In this case, \(\cup:H^{2i}(X/G)\to H^{2i+2}(X/G)\) is trivial for all \(i\geq 0.\) Then \(\rho_{2i-1}\) and \(p_{2i}^{*}\) are isomorphisms for all \(0\leq i\leq i_{0}.\) This implies that \(H^{2i-1}(X)\cong H^{2i}(X)\cong R\) with bases \(\{u_{2i-1}\}\) and \(\{x^{i}\},\) respectively, where \(\rho_{2i-1}(u_{2i-1})=a^{i-1}\) and \(p_{2}^{*}(a)=x.\) For \(i_{0}+1\leq i\leq n,\)\(H^{2i}(X)\cong H^{2i-1}(X)\cong H^{2n+1}(X)\cong R\oplus R\) with bases \(\{x^{i},v_{2i}\},\)\(\{yx^{i-i_{0}-1},u_{2i-1}\}\) and \(\{yx^{n-i_{0}},u_{2n+1}\},\) respectively; and for \(n<i\leq i_{0}+n,\) we have \(H^{2i}(X)\cong H^{2n+m+1}(X)\cong H^{2i+1}(X)\cong R\) with bases \(\{v_{2i}\},\)\(\{v_{2n+m+1}\}\) and 
\(\{yx^{i-i_{0}}\},\) respectively, where \(\rho_{2i}(v_{2i})=a^{i-i_{0}-1}b,\)\(\rho_{2i+1}(u_{2i+1})=a^{i}\) and \(p_{m}^{*}(b)=y.\) **m is even:** Let \(m=2i_{0}\) for some \(i_{0}\leq n.\) We get that \(\rho_{2i-1}\) and \(p_{2i}^{*}\) are isomorphisms for all \(i\geq 0.\) This implies that for each \(0\leq i<i_{0},\)\(H^{2i}(X)\cong H^{2i+1}(X)\cong R\) with bases \(\{x^{i}\}\) and \(\{u_{2i+1}\},\) respectively, and for \(i_{0}\leq i\leq n,\) we get \(H^{2i}(X)\cong H^{2i+1}(X)\cong R\oplus R\) with bases \(\{x^{i},x^{i-i_{0}}y\}\) and \(\{u_{2i+1},v_{2i+1}\},\) respectively, where \(p_{2}^{*}(a)=x,p_{m}^{*}(b)=y,\rho_{2i+1}(u_{2i+1})=a^{i}\) and \(\rho_{2i+1}(v_{2i+1})=a^{i-i_{0}}b.\) Also, for \(1\leq i\leq i_{0},\)\(H^{2n+2i}(X)\cong H^{2n+2i+1}(X)\cong R\) with bases \(\{x^{n+i-i_{0}}y\}\) and \(\{v_{2n+2i+1}\},\) respectively, where \(\rho_{2n+2i+1}(v_{2n+2i+1})=a^{n+i-i_{0}}b.\) Finally, for all \(m\leq 2n,\) we have \[H^{i}(X)=\begin{cases}R&\text{ if }j\leq i\leq m+j-1,j=0\text{ or }2n+2,\\ R\oplus R&\text{ if }m\leq i\leq 2n+1\\ 0&\text{ otherwise.}\end{cases}\] Now, we compute the cohomology ring structure of \(X.\) In the Leray Serre spectral sequence for the Borel fibration \(X\overset{i}{\hookrightarrow}X_{G}\overset{\pi}{\rightarrow}B_{G},\) it is easy to observe that \(d_{r}(1\otimes x)=0=d_{r}(1\otimes y)\) for all \(r\geq 0,\) and the images of \(u_{2i+1}\) and \(v_{m+2i+1}\) must be nonzero under some differential, for all \(0\leq i\leq n.\) Consequently, \(x^{i}u_{1}\neq 0\) and \(x^{i}u_{1}y\neq 0.\) Obviously, \(x^{n+1}=y^{2}=0.\) For \(m>2n\), it is clear that for \(0\leq i\leq n\), \(u_{2i+1}=\alpha_{i}x^{i}u_{1}\) and \(v_{m+2i+1}=\beta_{i}x^{i}u_{1}y\) for some nonzero elements \(\alpha_{i},\beta_{i}\) in \(R\). In particular, for \(m=2n+1\), \(x^{n}u_{1}\) can not be equal to any multiple of \(y\) and so \(u_{2n+1}\) is generated by \(\{y,x^{n}u_{1}\}\). Now, suppose \(m\leq 2n\). If \(m=2i_{0}+1\) for some \(i_{0}<n\) then \(u_{2i+1}=\alpha_{i}x^{i}u_{1}\) for \(0\leq i<i_{0}\) and \(v_{m+2i+1}=\beta_{i}x^{i}u_{1}y\) for \(n-i_{0}\leq i\leq n\), where \(\alpha_{i}\)'s and \(\beta_{i}\)'s are nonzero elements in \(R\). As \(x^{i}u_{1}\) can not be equal to \(x^{i-i_{0}}y\), for all \(i_{0}\leq i\leq n\), therefore, \(u_{2i+1}\) is generated by \(\{x^{i}u_{1},x^{i-i_{0}}y\}\), for all \(i_{0}\leq i\leq n\). Also, \(x^{i}u_{1}y\) can not be equal to any multiple of \(x^{i+i_{0}+1}\), so \(v_{m+2i+1}\) is generated by \(\{x^{i+i_{0}+1},x^{i}u_{1}y\}\), for all \(0\leq i<n-i_{0}\). If \(m=2i_{0}\), for some \(i_{0}\leq n\) then \(u_{2i+1}=\alpha_{i}x^{i}u_{1}\) for \(0\leq i<i_{0}\) and \(v_{2i+m+1}=\beta_{i}x^{i}u_{1}y\) for \(n-i_{0}+1\leq i\leq n\), where \(\alpha_{i}\)'s and \(\beta_{i}\)'s are nonzero elements in \(R\). We observe that \(x^{i}u_{1}\) can not be equal to any multiple of \(x^{i-i_{0}}u_{1}y\), for all \(i_{0}\leq i\leq n\). Thus, the elements \(u_{2i+1}\) and \(v_{2i+1}\) are generated by \(\{x^{i}u_{1},x^{i-i_{0}}u_{1}y\}\). Note that \(u_{1}^{2}\) may be both zero or nonzero. If \(u_{1}^{2}=0\) then \(X\sim_{R}\mathbb{S}^{1}\times\mathbb{CP}^{n}\times\mathbb{S}^{m}\). If \(u_{1}^{2}\neq 0\), then by the commutativity of cup product, \(R=\mathbb{Z}_{2}\) and \(X\sim_{\mathbb{Z}_{2}}\mathbb{RP}^{2n+1}\times\mathbb{S}^{m}\). This realizes possibilities (ii) and (iii) of the theorem. **Case(b):** When \(\cup:H^{0}(X/G)\to H^{2}(X/G)\) maps \(1\) to \(ca\) for some \(c\neq 0\) in \(R\). First, suppose that \(m>2n\). 
In this case, \(\rho_{2i+1+j}\) and \(p_{2i+2+j}^{*}\) are trivial for all \(0\leq i<n\) and \(j=0\) or \(m\). Consequently, \(H^{2i+1+j}(X)=H^{2i+2+j}(X)=0\). Note that \(H^{i}(X)=0\) for \(2n+1<i<m\) and \(H^{m+2n+1}(X)\cong R\) with basis \(\{u_{m+2n+1}\}\), where \(\rho_{m+2n+1}(u_{m+2n+1})=a^{n}b\). Thus, for \(m\neq 2n+1\), \(\rho_{2n+1}\) and \(p_{m}^{*}\) are isomorphisms. This implies that \(H^{2n+1}(X)\cong H^{m}(X)\cong R\) with bases \(\{u_{2n+1}\}\) and \(\{y\}\) respectively. If \(m=2n+1\) then \(H^{2n+1}(X)\cong R\oplus R\) with basis \(\{u_{2n+1},y\}\), where \(\rho_{2n+1}(u_{2n+1})=a^{n}\) and \(p_{m}^{*}(b)=y\). It is easy to see that, for \(m\leq 2n\), the cohomology groups and generators are the same as above. In the Leray-Serre spectral sequence for the Borel fibration \(X\hookrightarrow X_{G}\stackrel{{\pi}}{{\rightarrow}}B_{G}\), \(d_{2r^{\prime}}(1\otimes u_{2n+1})\neq 0\) for some \(r^{\prime}>0\) and \(d_{2r}(1\otimes y)=0\) for all \(r\geq 0\). Let if possible \(m=2i_{0}\) and \(r^{\prime}=n-i_{0}+1\), where \(1\leq i_{0}\leq n\). Then as done in previous theorem, we get \(H^{*}(X_{G})\cong R[u,v]/\langle u^{n-i_{0}+1}v-\alpha u^{n+1},v^{2}-\beta u^{ 2i_{0}}-\gamma u^{i_{0}}v,u^{n+i_{0}+1}\rangle\) where \(\deg u=2,\deg v=m\) and \(\alpha,\beta,\gamma\in R\), \(\beta=0\) if \(m>2n\), which is a contradiction. Therefore, \(r^{\prime}\) must be \(n+1\). This implies that \(d_{2n+2}(1\otimes yu_{2n+1})\neq 0\). Consequently, \(u_{m+2n+1}=\alpha yu_{2n+1}\) for some \(\alpha\neq 0\) in \(R\). Obviously, \(y^{2}=0\) and \(u_{2n+1}^{2}=0\) for \(m\not\in\{2n+1,4n+2\}\). If \(m=2n+1\) then \(u_{2n+1}^{2}\neq\alpha^{\prime}u_{m+2n+1}\) for any \(\alpha^{\prime}\) in \(R\) and so \(u_{2n+1}^{2}=0\). If \(m=4n+2\) then \(u_{2n+1}^{2}\) may be both zero or nonzero. If \(u_{2n+1}^{2}=0\) then \(H^{*}(X)=R[y,u_{2n+1}]/\langle y^{2},u_{2n+1}^{2}\rangle\), where \(\deg y=m\) and \(\deg u_{2n+1}=2n+1\). This realizes possibility (i) of the theorem. If \(u_{2n+1}^{2}\neq 0\) then \(u_{2n+1}=\beta y\) for some nonzero element \(\beta\) in \(R\). So, we get \(H^{*}(X)=R[u_{2n+1}]/\langle u_{2n+1}^{4}\rangle\), where \(\deg u_{2n+1}=2n+1\) and \(m=4n+2\). By the commutativity of cup product this is not possible for \(R=\mathbb{Q}\). Also, by [8, Theorem 4L.9], it is not possible for \(R=\mathbb{Z}_{2}\). **Case(c):** When \(\cup:H^{0}(X/G)\to H^{2}(X/G)\) maps \(1\) to \(cb\), where \(c\neq 0\) in \(R\). In this case, \(m\) must be \(2\) and \(H^{1}(X)=0\). As \(H^{2i-1}(X/G)=0\) for all \(i\geq 0\), therefore, we have \(H^{2i-2}(X)\cong\operatorname{im}p_{2i-2}^{*}\) and \(H^{2i+1}(X)\cong\ker\{\cup:H^{2i}(X/G)\to H^{2i+2}(X/G)\}\) with bases \(\{x^{i}\}\) and \(\{u_{2i+1}\}\), respectively, where \(\rho_{2i+1}(u_{2i+1})=a^{i-1}b\) and \(p_{2}^{*}(a)=x\) for all \(1\leq i\leq n+1\). In the Leray-Serre spectral sequence, \(d_{2r}(1\otimes x)=0\) for all \(r\geq 0\) and \(d_{2r^{\prime}}(1\otimes u_{3})\neq 0\) for some \(r^{\prime}>0\). If \(r^{\prime}=1\) then \(H^{*}(X_{G})\cong R[u,v]/\langle u^{n+2},v^{n+1},uv\rangle\), where \(\deg u=\deg v=2\), a contradiction. Therefore, \(r^{\prime}\) must be \(2\), and we get \(x^{i}u_{3}\neq 0\) for all \(1\leq i\leq n\). This implies that \(u_{2i+3}=\alpha_{i}x^{i}u_{3}\) for some nonzero \(\alpha_{i}\in R\). We have \(u_{3}^{2}=\alpha x^{3}\) for some \(\alpha\in R\). By the commutativity of cup product, we get \(2u_{3}^{2}=0\). Therefore, for \(R=\mathbb{Q}\), \(\alpha\) must be zero. 
Therefore, \(H^{*}(X)\cong R[x,u_{3}]/\langle x^{n+1},u_{3}^{2}-\alpha x^{3}\rangle\), where \(\deg x=2\), \(\deg u_{3}=3\) and \(\alpha=0\) for \(R=\mathbb{Q}\). This realizes possibility (iv). **Case(d):** When \(\cup:H^{0}(X/G)\to H^{2}(X/G)\) maps \(1\) to \(ca+c^{\prime}b\), where \(c,c^{\prime}\neq 0\) in \(R\). We have \(m=2\) and \(\cup:H^{2i}(X/G)\to H^{2i+2}(X/G)\) is an isomorphism, for all \(0<i<n\). By the exactness of Gysin sequence, \(H^{2i-1}(X)=H^{2i+2}(X)=0\) for all \(0<i\leq n\); and \(H^{2}(X)\cong H^{2n+1}(X)\cong H^{2n+3}(X)\cong R\) with bases \(\{x\}\), \(\{u_{2n+1}\}\) and \(\{u_{2n+3}\}\), respectively, where \(p_{2}^{*}(a)=x\), \(\rho_{2n+1}(u_{2n+1})=a^{n-1}b-\frac{c}{c^{\prime}}a^{n}\) and \(\rho_{2n+3}(u_{2n+3})=a^{n}b\). Hence, \(H^{*}(X)\cong R[x,u_{2n+1}]/\langle u_{2n+1}^{2},x^{2}\rangle\), where \(\deg x=2\) and \(\deg u_{2n+1}=2n+1\). This realizes possibility (i). Finally, we classify a finitistic space \(X\) equipped with a free involution and the orbit space a product of real projective space and sphere: **Theorem 3.3**.: Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic connected space \(X\) with \(X/G\sim_{\mathbb{Z}_{2}}\mathbb{RP}^{n}\times\mathbb{S}^{m}\). Then the cohomology algebra of \(X\) is one of the following: 1. \(\mathbb{S}^{m}\times\mathbb{S}^{n}\); 2. \(\mathbb{Z}_{2}[x,y]/\langle x^{n+1},y^{2}+\alpha x^{2}\rangle\), where \(\deg x=\deg y=1,m=1\) and \(\alpha\in\mathbb{Z}_{2}\); 3. \(\mathbb{Z}_{2}[y]/\langle y^{4}\rangle\), where \(\deg y=n\), \(m=2n\) and \(n=1,2\) or \(4\). Proof.: Clearly, \(H^{0}(X/G)\cong H^{0}(X)\) and \(H^{i}(X)=0\) for all \(i>m+n\). It is easy to see that Euler class of the \(0\)-sphere bundle \(X\to X/G\) must be nontrivial. Now, we consider the following cases: **Case(a):** When \(\cup:H^{0}(X/G)\to H^{1}(X/G)\) maps \(1\) to \(a\). First, we suppose that \(m\geq n\). For \(m\neq n\), we get \(\cup:H^{k+i}(X/G)\to H^{k+i+1}(X/G)\) is an isomorphism for all \(1\leq i<n\) and \(k=0\) or \(m\). This implies that \(\rho_{k+i}\) and \(p_{k+i+1}^{*}\) are trivial homomorphism. Consequently, \(H^{k+i}(X)=0\) and \(H^{n}(X)\cong H^{m}(X)\cong R\) with bases \(\{u_{n}\}\) and \(\{y\}\) respectively, where \(\rho_{n}(u_{n})=a^{n}\) and \(p_{m}^{*}(b)=y\). Note that \(H^{i}(X)=0\) for \(n<i<m\), \(H^{m+n}(X)\cong R\) with basis \(\{u_{m+n}\}\), where \(\rho_{m+n}(u_{m+n})=a^{n}b\). For \(n=m\), \(H^{n}(X)\cong R\oplus R\) with basis \(\{u_{n},y\}\), where \(\rho_{n}(u_{n})=a^{n}\) and \(p_{m}^{*}(b)=y\). Next, suppose that \(m<n\). We get \(\cup:H^{i}(X/G)\to H^{i+1}(X/G)\) is an isomorphism for all \(1<i<n+m\) and \(i\neq m-1,n\). Note that the map \(\cup:H^{m-1}(X/G)\to H^{m}(X/G)\) is injective with \(\operatorname{im}\cong R\) with basis \(\{a^{m}\}\), and for \(i=n\) it is surjective with \(\ker\cong R\) with basis \(\{a_{n}\}\), where \(\rho_{n}(u_{n})=a^{n}\). This implies that \(H^{i}(X)=0\) for \(1<i<n+m\) and \(i\neq m,n\), and \(H^{m}(X)\cong H^{n}(X)\cong R\) with basis \(\{y\}\) and \(\{u_{n}\}\), respectively, where \(p_{m}^{*}(b)=y\) and \(\rho_{n}(u_{n})=a^{n}\). Also, \(H^{n+m}\cong R\) with basis \(\{u_{n+m}\}\), where \(\rho_{n+m}(u_{n+m})=a^{n}b\). Now, we compute the cohomology ring structure of \(X\). In the Leray-Serre spectral sequence for the Borel fibration \(X\hookrightarrow X_{G}\stackrel{{\pi}}{{\rightarrow}}B_{G}\), \(d_{r^{\prime}}(1\otimes u_{n})\neq 0\) for some \(r^{\prime}>0\) and \(d_{r}(1\otimes y)=0\) for all \(r\geq 0\). Let it possible for \(n>m\), \(r^{\prime}=n-m+1\). 
Then we get \(H^{*}(X_{G})\cong R[u,v]/\langle u^{n-m+1}v-\alpha u^{n+1},v^{2}-\beta u^{2m}- \gamma u^{m}v,u^{n+i_{0}+1}\rangle\) where \(\deg u=1,\deg v=m\) and \(\alpha,\beta,\gamma\in R\), \(\beta=0\) if \(m>n\), which is a contradiction. Therefore, \(r^{\prime}\) must be \(n+1\). This implies that \(d_{n+1}(1\otimes yu_{n})\neq 0\). Consequently, \(u_{m+n}=yu_{n}\). Obviously, \(y^{2}=0\), and \(u_{n}^{2}=0\) for \(m\not\in\{n,2n\}\). If \(m=n\) then \(u_{n}^{2}\neq u_{m+n}\) and so \(u_{n}^{2}=0\). If \(m=2n\) then \(u_{n}^{2}\) may be both zero or nonzero. If \(u_{n}^{2}=0\) then \(H^{*}(X)=\mathbb{Z}_{2}[y,u_{n}]/\langle y^{2},u_{n}^{2}\rangle\), where \(\deg y=m\) and \(\deg u_{n}=n\). If \(u_{n}^{2}\neq 0\) then the cohomology algebra of \(X\) is \(\mathbb{Z}_{2}[u_{n}]/\langle u_{n}^{4}\rangle\), where \(\deg u_{n}=n\) and \(m=2n\). By [8, Theorem 4L.9], \(n=1,2,4\) or \(8\). This realizes possibility (i) and (iii). **Case(b):** When \(\cup:H^{0}(X/G)\to H^{2}(X/G)\) maps \(1\) to \(b\). In this case, \(m\) must be \(1\) and \(\operatorname{im}p_{1}^{*}=\mathbb{Z}_{2}\) with basis \(\{p_{1}^{*}(a)\}\). Also, \(\operatorname{im}p_{i}^{*}\cong\operatorname{im}\rho_{i}\cong\mathbb{Z}_{2}\) with basis \(\{p_{i}^{*}(a^{i})\}\) and \(\{a^{i-1}b\}\), respectively, for all \(0<i\leq n\). Consequently, \(H^{i}(X)\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) with basis \(\{x^{i},u_{i}\}\), where \(\rho_{i}(u_{i})=a^{i-1}b\) and \(p_{1}^{*}(a)=x\). Obviously, \(H^{n+1}(X)\cong\mathbb{Z}_{2}\) with basis \(\{u_{n+1}\}\), where \(\rho_{n+1}(u_{n+1})=a^{n}b\). In the Leray-Serre spectral sequence, \(d_{r}(1\otimes x)=0\) for all \(r\geq 0\) and \(d_{2}(1\otimes u_{1})\neq 0.\) This implies that for \(1\leq i\leq n,\)\(x^{i}u_{1}\neq 0,\) and hence \(u_{i+1}=x^{i}u_{1}+\alpha_{i}x^{i+1}\) for some \(\alpha_{i}\in\mathbb{Z}_{2}.\) As \(d_{2}(1\otimes xu_{1})\neq 0,\) we have \(u_{1}^{2}=\alpha x^{2}\) for some \(\alpha\in\mathbb{Z}_{2}.\) Therefore, \(H^{*}(X)\cong\mathbb{Z}_{2}[x,u_{1}]/\langle x^{n+1},u_{1}^{2}+\alpha x^{2}\rangle,\) where \(\deg x=\deg u_{1}=1.\) This realizes possibility (ii). **Case(c):** When \(\cup:H^{0}(X/G)\to H^{2}(X/G)\) maps \(1\) to \(a+b.\) In this case also \(m\) must be \(1\) and \(\cup:H^{i}(X/G)\to H^{i+1}(X/G)\) is an isomorphism for all \(0<i\leq n.\) Consequently, \(H^{i}(X)=0\) for \(1<i<n.\) Note that \(\ker p_{1}^{*}\cong\operatorname{im}\rho_{n}\cong\mathbb{Z}_{2}\) with bases \(\{a+b\}\) and \(\{a^{n}+a^{n-1}b\},\) respectively. This implies that for \(n\neq 1,\)\(H^{1}(X)\cong H^{n}(X)\cong\mathbb{Z}_{2}\) with bases \(\{x\}\) and \(\{u_{n}\},\) respectively, where \(p_{1}^{*}(a)=x\) and \(\rho_{n}(u_{n})=a^{n}+a^{n-1}b.\) Obviously, \(H^{n+1}(X)\cong\mathbb{Z}_{2}\) with basis \(\{u_{n+1}\},\) where \(\rho_{n+1}(u_{n+1})=a^{n}b.\) In particular for \(n=1,\)\(H^{1}(X)\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{2},\)\(H^{2}(X)\cong\mathbb{Z}_{2}\) with bases \(\{x,u_{1}\}\) and \(\{u_{2}\},\) respectively, where \(p_{1}^{*}(a)=x,\rho_{1}(u_{1})=a+b\) and \(\rho_{2}(u_{2})=ab.\) We must have \(u_{n+1}=xu_{n}\) and \(u_{n}^{2}=0.\) Thus, \(X\sim_{\mathbb{Z}_{2}}\mathbb{S}^{1}\times\mathbb{S}^{n}.\) This realizes possibility (i). 
Now, we determine the covering dimension of the coincidence set \(A(f)\) of continuous maps \(f:X\rightarrow\mathbb{R}^{k},\) where \(X\) is a finitistic space equipped with a free involution and \(X/G\sim_{\mathbb{Z}_{2}}\mathbb{RP}^{n}\times\mathbb{S}^{m}.\) **Theorem 3.4**.: Let \(G=\mathbb{Z}_{2}\) act freely on a finitistic space \(X\) with \(X/G\sim_{\mathbb{Z}_{2}}\mathbb{RP}^{n}\times\mathbb{S}^{m},\) where \(n>4,m>1.\) If \(f:X\rightarrow\mathbb{R}^{k}\) is any continuous map, then \(cov.dim(A(f))\geq n-k\) for \(k\leq n.\) Proof.: By Theorem 3.3, Volovikov's index \(in(X)\)[22] is \(n.\) Note that \(in(A(f))\geq in(X)-k\)[23], so we have \(in(A(f))\geq n-k.\) As \(G\) acts freely on \(X\), it induces a free action on \(A(f).\) By the definition of \(in(X),\)\(H^{n-k}(B_{G})\to H^{n-k}(A(f)_{G})\) is injective. This implies that \(cohom.dim(A(f)_{G})\geq n-k,\) where \(cohom.dim\) denotes the cohomological dimension of a space. As \(A(f)/G\) and \(A(f)_{G}\) are homotopy equivalent, \(cohom.dim(A(f)/G)\geq n-k.\) Consequently, \(cohom.dim(A(f))\geq n-k\)[15, Proposition A.11]. The result follows from the fact that \(cov.dim(A(f))\geq cohom.dim(A(f)).\) ## 4. Examples Consider the standard free actions of \(G=\mathbb{S}^{d}\) on \(\mathbb{S}^{(d+1)n+d},\) for \(d=0,1\) or \(3,\) and the trivial action on \(\mathbb{S}^{m}\); then, under the diagonal action, \(G\) acts freely on \(\mathbb{S}^{(d+1)n+d}\times\mathbb{S}^{m}\) with the orbit space \(\mathbb{FP}^{n}\times\mathbb{S}^{m},\) where \(\mathbb{F}=\mathbb{R},\mathbb{C}\) or \(\mathbb{H},\) respectively. This realizes the possibility (i) of Theorems 3.1, 3.2 and 3.3. Similarly, if we take the free actions of \(G\) on \(\mathbb{S}^{2d+1}\) and the trivial action on \(\mathbb{F}\mathbb{P}^{n}\), then \(G\) acts freely on \(\mathbb{F}\mathbb{P}^{n}\times\mathbb{S}^{2d+1}\) with the orbit space \(\mathbb{F}\mathbb{P}^{n}\times\mathbb{F}\mathbb{P}^{1}\), respectively. This realizes the possibility (iii) of Theorem 3.1, when \(\alpha=0\), the possibility (iv) of Theorem 3.2 and the possibility (ii) of Theorem 3.3. If we take the free action of \(G=\mathbb{S}^{d}\) on itself, for \(d=1\) or \(3\), and the trivial action on \(\mathbb{F}\mathbb{P}^{n}\times\mathbb{S}^{m}\), where \(\mathbb{F}=\mathbb{C}\) or \(\mathbb{H}\), respectively, then \(G\) acts freely on \(\mathbb{S}^{d}\times\mathbb{F}\mathbb{P}^{n}\times\mathbb{S}^{m}\) with the orbit space \(\mathbb{F}\mathbb{P}^{n}\times\mathbb{S}^{m}\). This realizes the possibility (ii) of Theorems 3.1 and 3.2. Now, consider the free action of \(\mathbb{Z}_{4}\) on \(\mathbb{S}^{3}\subseteq\mathbb{C}^{2}\) defined by \((z_{1},z_{2})\mapsto(z_{1}e^{2\pi i/4},z_{2}e^{2\pi i/4})\). This induces a free involution on \(\mathbb{R}\mathbb{P}^{3}\) with the orbit space \(\mathrm{L}^{3}(4,1)\). We know that \(\mathrm{L}^{3}(4,1)\sim_{\mathbb{Z}_{2}}\mathbb{R}\mathbb{P}^{1}\times\mathbb{ S}^{2}\). Recall that if \(G=\mathbb{Z}_{2}\) acts freely on a finitistic space \(X\) with the mod \(2\) cohomology \(\mathbb{C}\mathbb{P}^{3}\), then the orbit space \(X/G\) is the mod \(2\) cohomology \(\mathbb{R}\mathbb{P}^{2}\times\mathbb{S}^{4}\)[16]. These examples realize the possibility (iii) of Theorem 3.3 for \(n=1\) and \(n=2\), respectively. **Remark.** Let \(X\) be a connected Hausdorff space equipped with a free action of \(G=\mathbb{Z}_{2}\). If the orbit space \(X/G=\mathbb{R}\mathbb{P}^{n}\times\mathbb{S}^{m}\) then \(X\) is homeomorphic to \(\mathbb{S}^{n}\times\mathbb{S}^{m}\).
The significance of Theorem 3.3 lies in the fact that if \(X/G\sim_{\mathbb{Z}_{2}}\mathbb{R}\mathbb{P}^{n}\times\mathbb{S}^{m}\) then \(X\) may have the mod \(2\) cohomology isomorphic to that of \(\mathbb{R}\mathbb{P}^{n}\times\mathbb{S}^{1}\) or \(\mathbb{S}^{n}\times\mathbb{S}^{1}\) for \(m=1\); and to that of \(\mathbb{S}^{4}\times\mathbb{S}^{2}\) or \(\mathbb{C}\mathbb{P}^{3}\) for \(m=4\) and \(n=2\).
2302.09084
Using the Gaia excess uncertainty as a proxy for stellar variability and age
Stars are known to be more active when they are young, resulting in a strong correlation between age and photometric variability. The amplitude variation between stars of a given age is large, but the age-variability relation becomes strong over large groups of stars. We explore this relation using the excess photometric uncertainty in Gaia photometry ($Var_{G}$, $Var_{BP}$, and $Var_{RP}$) as a proxy for variability. The metrics follow a Skumanich-like relation, scaling as $\simeq t^{-0.4}$. By calibrating against a set of associations with known ages, we show how $Var$ of population members can predict group ages within 10-20% for associations younger than $\simeq$2.5 Gyr. In practice, age uncertainties are larger, primarily due to finite group size. The index is most useful at the youngest ages ($<$100 Myr), where the uncertainties are comparable to or better than those derived from a color-magnitude diagram. The index is also widely available, easy to calculate, and can be used at intermediate ages where there are few or no pre- or post-main-sequence stars. We further show how $Var$ can be used to find new associations and test if a group of co-moving stars is a real co-eval population. We apply our methods on the Theia groups within 350 pc and find $\gtrsim$90% are inconsistent with drawing stars from the field and $\simeq$80% have variability ages consistent with those derived from the CMD. Our finding suggest the great majority of these groups contain real populations.
Madyson G. Barber, Andrew W. Mann
2023-02-17T19:00:01Z
http://arxiv.org/abs/2302.09084v2
# Using Gaia excess uncertainty as a proxy for stellar variability and age ###### Abstract Stars are known to be more active when they are young, resulting in a strong correlation between age and photometric variability. The amplitude variation between stars of a given age is large, but the age-variability relation becomes strong over large groups of stars. We explore this relation using the excess photometric uncertainty in _Gaia_ photometry (\(Var_{G}\), \(Var_{BP}\), and \(Var_{RP}\)) as a proxy for variability. The metrics follow a Skumanich-like relation, scaling as \(\simeq t^{-0.4}\). By calibrating against a set of associations with known ages, we show how \(Var\) of population members can predict group ages within 10-20% for associations younger than \(\simeq\)2.5 Gyr. In practice, age uncertainties are larger, primarily due to finite group size. The index is most useful at the youngest ages (\(<\)100 Myr), where the uncertainties are comparable to or better than those derived from a color-magnitude diagram. The index is also widely available, easy to calculate, and can be used at intermediate ages where there are few or no pre- or post-main-sequence stars. We further show how \(Var\) can be used to find new associations and test if a group of co-moving stars is a real co-eval population. We apply our methods to the Theia groups within 350 pc and find \(\gtrsim\)90% are inconsistent with drawing stars from the field and \(\simeq\)80% have variability ages consistent with those derived from the CMD. Our findings suggest the great majority of these groups contain real populations. Stellar ages, young star clusters, stellar rotation, stellar evolution Madyson G. Barber, Andrew W. Mann ## 1 Introduction Compared to most stars, we know the age of the Sun to better than 1% (Connelly et al., 2012). The tight age constraint comes from meteorites, rather than observations of the Sun's photosphere. Since meteorites from other stars are not available, we must rely on less precise techniques to age-date stars, such as chromospheric activity (e.g., Zhou et al., 2021; Kiman et al., 2021), rotation (e.g., Barnes, 2007; Curtis et al., 2020), or cooling tracks of brown dwarfs and white dwarfs (e.g., Kilic et al., 2019; Marley et al., 2021). Outside the Sun, stars with the most precise and reliable ages are usually in co-eval associations (Soderblom et al., 2014). Ages can then be estimated using the bulk properties of the cluster, such as the lithium abundances (e.g., Burke et al., 2004; Wood et al., 2022) or main-sequence turn-off (Conroy and Gunn, 2010), or from a subset of stars with more easily determined properties (e.g., asteroseismic pulsators; Grunblatt et al., 2021; Bedding et al., 2022). Precision astrometry from the _Gaia_ mission (Gaia Collaboration et al., 2016) has been invaluable for finding new stellar associations (e.g., Meingast et al., 2019; Moranta et al., 2022), sub-populations of known associations (e.g., Wood et al., 2022), and additional members of known populations (e.g., Gagne and Faherty, 2018; Roser and Schilbach, 2020). Identifying and finding members of sparse groups is still challenging. Galactic shear causes the group's velocity dispersion to grow with time (Dobbs and Pringle, 2013). Larson's laws also imply that groups with a larger spatial scale should exhibit a larger velocity spread (Larson, 1981), and the resulting velocity dispersion can exceed typical measurement uncertainties from _Gaia_.
Further, the more the population extends spatially, the greater the number of nearby field stars that will align with the group's kinematics by chance. To aid with search and selection, many studies add an additional requirement to select on, such as a color-magnitude diagram (CMD) position consistent with being pre-main-sequence (e.g. Kerr et al., 2021) or spectroscopic indicators of activity (e.g., Zerjal et al., 2021). These are often observationally expensive and/or only apply to a subset of stars. Thus, additional metrics would be invaluable when searching for young stellar associations. An activity metric that is already widely available would be particularly useful for mining all-sky surveys for young associations. Guidry et al. (2021) show that excess uncertainty in _Gaia_ photometry is an indicator of source variability. They use a metric for excess uncertainty (\(V_{G}\)) to identify white dwarfs on the ZZ Ceti instability strip. Barlow et al. (2022) use the same method to identify highly variable hot subdwarfs. The metric could be expanded to identify young stars out to the limits of _Gaia_. _Gaia_ photometry can achieve a precision of \(30\,\mathrm{mmag}\) per epoch and \(2\,\mathrm{mmag}\) total (\(G=19\)), with a typical target getting observations every few weeks (Hodgkin et al., 2021), more than sufficient to detect stellar variations expected from \(<\)\(1\,\mathrm{Gyr}\) stars (Rizzuto et al., 2017; Miyakawa et al., 2021). Starspot coverage is known to follow a Skumanich-like decrease with age (Morris, 2020). The relation between starspot coverage and (observed) stellar variability is complex due to both variations in stellar inclination and astrophysical variation between stars. However, the two should be strongly correlated over large collections of stars (Luger et al., 2021). In the youngest stars, stellar variability may be driven by effects other than starspots, such as accretion (Park et al., 2021) and dippers (Cody et al., 2014; Ansdell et al., 2016), but the overall variability is still expected to be stronger with decreasing age. Thus, \(V_{G}\) or a similar variability diagnostic could be used to provide age estimates for populations of stars. In this paper, we update the variability metric put forth in Guidry et al. (2021), including extending its use to all three _Gaia_ filters (Section 2). Using a set of stars in associations with well-determined ages (Section 3), we provide a relation between the distribution of \(Var\) for stars in a co-eval group and the age of the group (Section 4). We discuss the impact of additional effects, like the distance to the population and field star contamination, in Section 4.1. To highlight the power of \(Var\), we show how it can be used to assign ages to newly identified populations of stars, test if a candidate group of co-moving stars represents a real young population, and find new associations (Section 5). ## 2 Gaia Excess Variability _Gaia_ mean flux (PHOT_G_MEAN_FLUX or \(<G>\)) and its uncertainty (PHOT_G_MEAN_FLUX_ERROR or \(\sigma_{<G>}\)) are calculated as the weighted mean of the included observations and the uncertainty on that weighted mean (Evans et al., 2018; Riello et al., 2021). For a non-variable source and fixed instrumental noise, \(\sigma_{<G>}^{2}\) scales with the source flux and inversely with the number of observations (\(n_{obs,G}\)). Thus, a deviation above this scaling is a sign of astrophysical variation in the flux. Guidry et al.
(2021) take advantage of this to identify variable white dwarfs in _Gaia_ photometry, using a variability metric defined as: \[V_{G}\equiv\frac{\sigma_{<G>}}{<G>}\sqrt{n_{obs,G}}. \tag{1}\] A higher \(V_{G}\) would indicate a source with more flux variation than expected from noise alone. In practice, instrumental noise varies with source brightness. Guidry et al. (2021) handled this by subtracting out the baseline relation between \(V_{G}\) and \(G\). Our approach to remove the scaling with brightness was to use the fitted _Gaia_ photometric uncertainties tool1(Riello et al., 2021). These relations were derived empirically, and hence included a wide range of effects. The code provides a predicted magnitude uncertainty (\(\sigma_{G,p}\)) as a function of _Gaia_ \(G\) magnitude and the number of observations. We defined a new variability index we call \(Var_{G}\): Footnote 1: [https://github.com/gaia-dpci/gaia-dr3-photometric-uncertainties](https://github.com/gaia-dpci/gaia-dr3-photometric-uncertainties) \[Var_{G}=\log_{10}\Big{(}\frac{\sigma_{<G>}}{<G>}\Big{)}-\log_{10}[\sigma_{G,p}(G,n_{obs,G})], \tag{2}\] where the second term was the output of the fitted-uncertainty tool. We extended Equation 2 to the other two _Gaia_ photometric bands, yielding \(Var_{BP}\) and \(Var_{RP}\) with a simple substitution. We performed a quick demonstration that the revised metric works for young stars by checking the distribution of \(Var_{G}\) against position on the color-magnitude diagram (CMD), which we show in Figure 1. Stars that have the highest \(10\%\) of \(Var_{G}\) values are highlighted. As expected, these variable stars land preferentially in regions of the CMD where we see younger stars (e.g., pre-main-sequence regions for early-to-mid M dwarfs). ## 3 Target Selection Our goal was to find a set of groups for calibrating the relation between \(Var\) and age. To this end, we selected a set of co-eval populations (e.g., open clusters, moving groups, and star-formation regions) with well-determined ages and membership lists in the literature. As a comparison set and to test the effects of contamination, we also used a volume-limited sample of stars in the Solar neighborhood (random ages). We then selected the subset of stars in these groups or the field sample where \(Var\) is most effective. ### Young Associations We restricted our calibration sample of young associations to groups within 350 pc of the Sun. As discussed in Section 4, the \(Var\) index is distance dependent. We also found that groups past 350 pc tended to have smaller membership lists, more uncertain ages, and more discrepant ages between literature sources. We required groups to have at least 40 stars after all cuts on the membership list (described in Section 3.3). The method works for smaller samples of stars, but the larger uncertainties make such groups ineffective for calibration. The majority of our sample was taken from the sample of open clusters in Cantat-Gaudin et al. (2018) and Cantat-Gaudin et al. (2020). We added in several well-characterized clusters like 32 Ori (Luhman, 2022), as well as more diffuse groups like Psc-Eri2(Meingast et al., 2019) and \(\mu\) Tau (Gagne et al., 2020).
Footnote 2: Meingast-1

To sample the youngest ages, we added in the young associations Taurus-Auriga (Krolikowski et al., 2021), the three major groups in the Scorpius-Centaurus OB association (Upper Scorpius, Upper Centaurus-Lupus, and Lower Centaurus-Crux, Preibisch and Mamajek, 2008), the Chamaeleon complex (Cha I and Cha II; Luhman, 2007), and Corona-Australis (Galli et al., 2020). Earlier studies have shown that these associations are not single-aged populations. For example, Goldman et al. (2018) demonstrated that Lower Centaurus-Crux is comprised of at least four sub-populations with ages that differ by 1-3 Myr. However, this spread is comparable to or smaller than our assigned age uncertainties. The spread between sub-population ages was only a problem for Taurus-Auriga, where we opted to only include the youngest (\(<\)10 Myr) subgroups from Krolikowski et al. (2021). In total, we used 32 groups ranging in age from 3 Myr to 2.7 Gyr. Only one group was older than 1 Gyr (Ruprecht 147) and more than half the groups were less than 100 Myr. We list all selected associations in Table 1.

#### 3.1.1 Excluded groups

Our list of associations was meant to be representative of groups near the Sun, not complete. The most common reason to skip a group was that it did not satisfy the 40-member minimum. Since we only included stars with \(B_{P}-R_{P}<2.5\) (see Section 3.3), the full population size needed to be significantly larger. This minimum removed many young moving groups like Columba and low-mass clusters like Ursa Major and Platais 10. Some groups were excluded because of ambiguity in the assigned age or membership. For example, Alessi 13 (\(\chi^{1}\) For) has been assigned ages ranging from 30 Myr (Galli et al., 2021) to more than 500 Myr (e.g. Yen et al., 2018). This also led us to exclude some nearby moving groups (e.g., AB Dor, Carina-Near, and Argus), many of which have discrepant ages and membership lists in the literature (e.g., Mamajek, 2016). Newly identified groups from SPYGLASS (Kerr et al., 2021) have a sample selection that is problematic for our purposes. The initial selection included only pre-MS stars, so it was heavily biased towards late-type stars where \(Var\) is less effective (Figure 2). Their final selection had more FGK stars but suffered from higher contamination. SPYGLASS groups were also restricted to those \(<\)50 Myr, where we already had 14 groups in our calibration set. We did not include MELANGE (Tofflemire et al., 2021) and Theia (Kounkel et al., 2020) groups in our calibration set. The Theia groups contain real co-eval populations (Andrews et al., 2022), but many remain controversial (Zucker et al., 2022). Instead, we used the techniques discussed in this paper to test the existence of, and the ages assigned to, these sets of groups in Section 5.

Figure 1: A color-magnitude diagram of stars within 50 pc of the Sun (teal). Red points indicate those with \(Var_{G}\) values in the top 10% of the sample. These stars are preferentially high on the CMD for the early M dwarfs and along the zero-age main-sequence for the GK dwarfs, where we expect to see young stars.

#### 3.1.2 Assigning ages

Most of the groups used in our analysis had multiple age determinations in the literature. In order of priority, we adopted ages based on 1) the lithium depletion boundary, 2) an isochrone/CMD fit using eclipsing binaries or other benchmark stars, 3) an isochrone/CMD fit using _Gaia_ data, 4) an isochrone/CMD fit using other datasets.
We excluded references where no uncertainty was provided. When multiple sources with the same ranking above provided an age, we used the more precise analysis. The only deviation from this procedure was for Praesepe, for which Bossini et al. (2019) reported an unrealistic age uncertainty of only 3-4 Myr (better than 1%). Instead, we adopted the age from Cummings et al. (2018). The reference used for each association age is listed in Table 1. Cantat-Gaudin et al. (2020) derived ages using an artificial neural network run on the CMD from _Gaia_ data. Using a validation set of clusters, they estimated uncertainties were 10-20%, depending on the group size. We adopted the low end (10% uncertainties), as most groups considered here had sufficiently large membership lists. ### Field Sample As a comparison set and to test how field contamination impacts \(Var\) in a group, we used a sample of nearby field stars from the _Gaia_ catalog of nearby stars (Gaia Collaboration et al., 2021). We pulled stars from the'selected objects' within 50 pc (\(\pi>20\) mas). ### Star Selection We drew our sample of stars from the membership lists listed in Table 1 with the following cuts: * phot_g_mean_flux_over_error\(>30\) * phot_bp_mean_flux_over_error\(>20\) * phot_rp_mean_flux_over_error\(>20\) * parallax_over_error\(>20\) * Membership probability (if provided) \(>50\%\) * \(M_{G}<10\) or \(B_{P}-R_{P}>1\) * \(B_{P}-R_{P}<2.5\) The first five restrictions removed sources with unreliable photometry or membership. Many membership lists also used quality cuts similar to the first four, so this kept the stellar sample more homogeneous between groups. Field contamination has a weak impact on our findings (see Section 4.1). However, many lists contain sources with membership probability down to \(\simeq 0\%\), so a minimum cut was required. The sixth requirement removed any white dwarfs from the sample. As we show in Figure 2, \(Var\) becomes ineffective for mid-to-late M dwarfs. For red stars (\(B_{P}-R_{P}>2.5\)), the 50 pc sample had higher \(Var_{G}\) levels than Pleiades (112 Myr) and Praesepe (750 Myr). Further, three groups of similar ages had similar \(Var_{G}\) levels for stars bluer than \(B_{P}-R_{P}\simeq 2.5\), but the \(Var_{G}\) levels diverged past that. This was the major reason for the final (color) requirement. ## 4 Calibration For each filter, we used the 90\({}^{th}\) percentile (highest) \(Var\) value within an association. We also tested using Figure 2: Running median of \(Var_{G}\) as a function of color. The left plot shows four associations with a range of ages and the 50 pc sample, while the right shows three groups with similar (\(\simeq\)40 Myr) ages. Displayed uncertainties are the standard error on the median. Bin sizes have an equal number of stars within a group but not between groups. The left shows the expected sequence in age for FGK and early M dwarfs, i.e., the youngest groups have the highest \(Var_{G}\) values. But for mid-to-late M dwarfs, the two groups have lower \(Var_{G}\) values than the field sample. The right plot shows three groups of similar age, which match each other until about M2, after which at least one group diverges. the \(50^{th}\) (the median) and \(75^{th}\) percentile, both of which showed a strong correlation with age. We opted for the \(90^{th}\) because it showed the lowest scatter around a linear fit and exhibited a high resiliency to field-star contamination (see Figure 3 and discussion in Section 4.1). 
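To make the bookkeeping explicit, a minimal sketch (not the authors' code) of how \(Var_{G}\) from Equation 2 and its per-group \(90^{th}\) percentile might be computed is given below. The `sigma_pred` argument is a stand-in for the fitted _Gaia_ DR3 photometric-uncertainty tool, whose actual interface may differ, and the other inputs are assumed to be arrays of the _Gaia_ columns noted in the comments.

```python
import numpy as np

def var_g(mean_flux, mean_flux_error, n_obs, g_mag, sigma_pred):
    """Var_G of Equation 2 for arrays of stars.

    mean_flux, mean_flux_error : phot_g_mean_flux and its error
    n_obs                      : phot_g_n_obs
    g_mag                      : phot_g_mean_mag
    sigma_pred                 : assumed callable (g_mag, n_obs) -> predicted
                                 magnitude uncertainty of a non-variable star
                                 (a wrapper around the Gaia DR3 photometric-
                                 uncertainty tool; its real API may differ).
    """
    observed = np.log10(mean_flux_error / mean_flux)   # log10(sigma_<G> / <G>)
    predicted = np.log10(sigma_pred(g_mag, n_obs))     # log10(sigma_G,p(G, n_obs))
    return observed - predicted

def var_90(var_values):
    """90th-percentile Var of the stars in a candidate co-eval group."""
    return np.percentile(var_values, 90)
```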
We denote this value as \(Var_{90}(Var_{G,90},\,Var_{BP,90}\), and \(Var_{RP,90})\) to separate from \(Var\), which is the metric for a single star. We estimated uncertainties on \(Var_{90}\) for each group based on a bootstrap re-sampling of the association members. For this, we used scipy's bootstrap with the default settings. We assumed symmetric uncertainties for simplicity. We fit the relation between age and variability in log-log space-based both on previous work relating variability to age (e.g., Morris, 2020; Luger et al., 2021). The \(Var\) parameter is equivalent to a magnitude and hence was already a log of the flux variation. Age uncertainties roughly scaled with age, and we found the fit uncertainties were better modeled as a fractional error than an absolute error (favoring working in log space). This yielded a linear relation: \[\log_{10}(\text{age})\ [\text{Myr}]=m\times Var_{90}+b, \tag{3}\] where \(m\) and \(b\) were fit parameters. We fit this three times, one for each of the _Gaia_ bandpasses (\(Var_{G}\), \(Var_{BP}\), and \(Var_{RP}\)). Adding a second-order term in \(Var_{90}\) gave negligible improvement on the fit, but we explored adding a distance term (Section 4.1). We included a third fit parameter, \(\ln f\), to capture the intrinsic scatter in the relation. This could also be interpreted as underestimated uncertainties in the in \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ Name} & Age & Age & N\({}_{\text{stars}}\) & \multicolumn{1}{c}{Membership} & Distanceb \\ & (Myr) & Reference & & Reference & pc \\ \hline Taurus-Auriga & \(3.5\pm 2.5\) & Krolikowski et al. (2021) & 137 & Krolikowski et al. (2021) & 145 \\ Chamaeleon & \(4\pm 2\) & Luhman (2007) & 45 & Galli et al. (2021b) & 191 \\ Corona-Australis & \(6\pm 4\) & Galli et al. (2020) & 88 & Esplin \& Luhman (2022) & 151 \\ Upper Scorpius & \(11\pm 3\) & Pecaut et al. (2012) & 377 & Luhman \& Esplin (2020) & 144 \\ Upper Centaurus-Lupus & \(16\pm 1\) & Pecaut et al. (2012) & 169 & Damiani et al. (2019) & 175 \\ Lower Centarus Crux & \(17\pm 1\) & Pecaut et al. (2012) & 459 & Goldman et al. (2018) & 113 \\ UPK 422 & \(19\pm 2\) & Cantat-Gaudin et al. (2020) & 40 & Cantat-Gaudin et al. (2020) & 300 \\ 32 Ori & \(21\pm 4\) & Luhman (2022) & 46 & Luhman (2022) & 103 \\ UPK 640 & \(25\pm 3\) & Cantat-Gaudin et al. (2020) & 145 & Cantat-Gaudin et al. (2020) & 176 \\ Platais 8 & \(30\pm 3\) & Cantat-Gaudin et al. (2020) & 61 & Cantat-Gaudin et al. (2018) & 135 \\ NGC 2232 & \(38\pm 3\) & Binks et al. (2021) & 94 & Cantat-Gaudin et al. (2018) & 321 \\ NGC 2451A & \(44\pm 2\) & Bossini et al. (2019) & 121 & Cantat-Gaudin et al. (2018) & 192 \\ Collinder 135 & \(45\pm 5\) & Kovaleva et al. (2020) & 164 & Cantat-Gaudin et al. (2018) & 299 \\ IC 2602 & \(46^{+6}_{-5}\) & Dobbie et al. (2010) & 99 & Cantat-Gaudin et al. (2018) & 151 \\ Platais 9 & \(50\pm 5\) & Cantat-Gaudin et al. (2020) & 51 & Cantat-Gaudin et al. (2018) & 184 \\ IC 2391 & \(51^{+5}_{-4}\) & Nisak et al. (2022) & 78 & Cantat-Gaudin et al. (2018) & 151 \\ \(\mu\) Tau & \(60\pm 7\) & Gagné et al. (2020a) & 122 & Gagné & 155 \\ \(\alpha\) Persei & \(75^{+7}_{-7}\) & Galindo-Guil et al. (2022) & 318 & Cantat-Gaudin et al. (2018) & 174 \\ UPK 612 & \(100\pm 10\) & Cantat-Gaudin et al. (2020) & 141 & Cantat-Gaudin et al. (2020) & 229 \\ Pleiades & \(112\pm 5\) & Dahm (2015) & 391 & Cantat-Gaudin et al. (2018) & 136 \\ Blanco-1 & \(115\pm 10\) & Gaia Collaboration et al. 
(2018) & 237 \\ Psc-Eri/Meingast-1 & \(134\pm 7\) & Röser \& Schilbach (2020) & 581 & Rataenböck et al. (2020) & 131 \\ Platais 3 & \(208^{+122}_{-42}\) & Bossini et al. (2019) & 54 & Cantat-Gaudin et al. (2018) & 178 \\ M7 & \(224\pm 22\) & Cantat-Gaudin et al. (2020) & 771 & Cantat-Gaudin et al. (2018) & 280 \\ Alessi 9 & \(282^{+28}_{-29}\) & Cantat-Gaudin et al. (2020) & 118 & Cantat-Gaudin et al. (2020) & 209 \\ Group X & \(300\pm 50\) & Newton et al. (2022) & 132 & Tang et al. (2019); Newton et al. (2022) & 104 \\ NGC 7092 & \(310^{+74}_{-58}\) & Bossini et al. (2019) & 125 & Cantat-Gaudin et al. (2018) & 297 \\ Alessi 3 & \(631\pm 63\) & Cantat-Gaudin et al. (2020) & 171 & Cantat-Gaudin et al. (2018) & 279 \\ Hyades & \(650\pm 70\) & Martin et al. (2018) & 283 & Röser et al. (2019); Jerabkova et al. (2021) & 134 \\ Praesepe & \(700\pm 25\) & Cummings et al. (2018) & 422 & Cantat-Gaudin et al. (2018) & 185 \\ Coma Ber & \(750^{+50}_{-100}\) & Tang et al. (2018); Singh et al. (2021) & 98 & Tang et al. (2019) & 86 \\ Ruprecht 147 & \(2670^{+390}_{-350}\) & Torres et al. (2020) & 156 & Cantat-Gaudin et al. (2018) & 306 \\ \hline \end{tabular} \end{table} Table 1: Young Associations for Calibration put ages, but as we show in Section 4.1, the result was robust to changes in the input age uncertainties. In addition, this parameter acted as a lower limit on the age uncertainties achievable with the method. For our fit, we used a likelihood maximization in a Monte Carlo Markov Chain (MCMC) schematic with **emcee**e(Foreman-Mackey et al., 2013). For each of the three filters, we adopted uniform priors on all parameters with large bounds to prevent runaway walkers (\(f>0\) and \(-4<m<-1\)). We initialized the three parameters based on the results of least-squared fits for each filter. We then ran the chain using 30 walkers until it passed 50 times the autocorrelation time (sufficient for convergence, Goodman and Weare, 2010), typically \(\simeq 5,000\) steps. For the burn-in, we used 10% of the total number of steps, although the result was not sensitive to the choice of burn-in. Figure 4 shows the ages and \(Var_{90}\) values for all three filters with the best-fit relation and random draws from the MCMC. All parameters were well constrained with Gaussian errors with the expected covariance between the slope and Y-intercept terms (Figure 5). The best-fit parameters and uncertainties for all filters are listed in Table 2. All three metrics followed a Skumanich-like decay (\(\simeq t^{n}\)) with age. Inverting \(m\), we found \(n\) varies from \(-0.40\) to \(-0.45\), consistent with the similar relation using full light curves (\(n=-0.37\pm 0.16\); Morris, 2020). As can be seen in Figure 4, the fit had a narrow range of solutions. The uncertainty in the output age from this relation was instead dominated by the \(\ln f\) parameter. This implies a fundamental limit to the age precision of 14-18% when using this technique. ### Testing the relation The significant \(\ln f\) made clear that there are additional sources of variation in relation between \(Var_{90}\) and age. The missing variation may be related to the photometry (e.g., Poisson noise, _Gaia_'s outlier rejection), assumptions about the input (e.g., inaccurate age un Figure 4: Age of associations as a function of \(\mathrm{Var}_{G}\) (top), \(\mathrm{Var}_{Bp}\) (middle), and \(\mathrm{Var}_{Bp}\) (bottom) using the young associations listed in Table 1. Each association is colored by its distance from the Sun. 
The orange line represents the best fit for each filter, with 100 translucent orange lines showing randomly drawn sample fits from the MCMC posterior. The best-fit parameters are listed in Table 2.

Figure 3: The \(Var_{G,X}\) we would measure if a fraction of stars are field interlopers (contaminants), normalized to the value assuming no contamination. Three different values for \(X\) (50%, 75% and 90%) are shown as three colors. This was built by using member lists from the Pleiades (stars) and Lower Centaurus–Crux (circles), adding in nearby non-members and recomputing \(Var_{G,X}\). This assumes the original list has low contamination. For both groups, contamination has a weak effect (\(<20\%\)) on \(Var_{G,90}\).

certainties), and/or astrophysical effects (e.g., binarity and metallicity). Many of these cannot be studied in detail absent full light curves, but we explore some where we have the requisite data below.

**Distance:** As seen in Figure 4, there is a tendency for more distant (\(\gtrsim 250\) pc) groups to sit below the fit and for the closest groups (\(\lesssim 125\) pc) to sit above the fit. The result is that more distant groups had an older variability-based age and closer groups a younger one. This may be due to the fact that more distant targets are (statistically) fainter, making it harder to detect the same level of variability in the presence of Poisson noise. We tested the effective distance ranges in all three filters. Removing the distant groups, \(>250\) pc, did not significantly change the calibration and all parameters agreed within the uncertainties. The decrease in \(\ln f\) was insignificant. Similarly, removing the closest groups, \(<100\) pc, did not significantly affect the fit and all parameters agreed within uncertainties. We also explicitly fit a distance term of the form: \[\log_{10}(\text{age})\ [\text{Myr}]=m\times Var_{90}+a\times d+b, \tag{4}\] where \(d\) is the median distance (in parsecs) of the association members and \(a\) is an additional fit parameter. The output parameters are included in Table 2. For the \(G\)-band, \(a\) was consistent with 0 (2.9\(\sigma\)), but \(a\) was significant in the other two bands. The additional term suggests the inferred age shifts by about 0.1-0.2% per pc in each filter. The correction thus becomes comparable to the intrinsic scatter in the relation for the most distant (\(\gtrsim 300\) pc) or nearest (\(\lesssim\)100 pc) groups. The fits accounting for distance had significantly lower \(\ln f\) than those ignoring distance. For \(Var_{RP,90}\), the lower \(\ln f\) suggested a limiting precision of 9% (compared to 14% when ignoring distance). For this reason, we suggest using the relations accounting for distance.

**Binaries:** High renormalised unit weight error (RUWE; Lindegren et al., 2018) values (\(\gtrsim 1.2\)) are often used to signify binary systems (Pearce et al., 2019; Ziegler et al., 2020; Wood et al., 2022). More restrictive RUWE cuts will not remove all binaries, but should remove enough of them to see if binaries have a significant impact on the result.
\begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{ Parameter} & m & b & \(\ln f\) & \(a\) \\ \hline \(Var_{G,90}\) & \(-2.30\pm 0.10\) & \(3.928^{+0.092}_{-0.093}\) & \(0.178^{+0.044}_{-0.022}\) & \\ \(Var_{BP,90}\) & \(-2.29\pm 0.11\) & \(3.920^{+0.066}_{-0.008}\) & \(0.177^{+0.025}_{-0.023}\) & \\ \(Var_{RP,90}\) & \(-2.239\pm 0.092\) & \(4.170^{+0.096}_{-0.096}\) & \(0.141^{+0.025}_{-0.022}\) & \\ \(Var_{G,90}\), \(d\) & \(-2.40\pm 0.10\) & \(4.22^{+0.13}_{-0.14}\) & \(0.155^{+0.025}_{-0.021}\) & \(-0.0011^{+0.00039}_{-0.00038}\) \\ \(Var_{BP,90}\), \(d\) & \(-2.461^{+0.10}_{-0.098}\) & \(4.40^{+0.13}_{-0.14}\) & \(0.129^{+0.024}_{-0.022}\) & \(-0.00174\pm 0.00037\) \\ \(Var_{RP,90}\), \(d\) & \(-2.376^{+0.084}_{-0.082}\) & \(4.62\pm 0.12\) & \(0.089^{+0.023}_{-0.021}\) & \(-0.00167^{+0.00031}_{-0.00033}\) \\ \hline \end{tabular} \end{table} Table 2: MCMC Fit Parameters Figure 5: Corner plots of the parameters (slope, y-intercept, and missing uncertainty in the fit) from our MCMC model fits for \(Var_{G,90}\) (left), \(Var_{BP,90}\) (center), and \(Var_{RP,90}\) (right). The contour levels correspond to 1\(\sigma\), 2\(\sigma\), and 3\(\sigma\) of the points (from darkest to lightest). The fit parameters are Gaussian distributed, with the expected covariance between the slope and y-intercept. Plot made using corner (Foreman-Mackey, 2016). To test this, we added a RUWE cut of \(<1.3\) and an extreme cut of \(<1\). In both cases and for all filters, the \(m\) and \(b\) parameters agreed within \(1\sigma\). The \(\ln f\) parameter for the \(<1.3\) cut agreed with our original fit, but increased by \(>4\sigma\) for the \(<1\) cut. This may be because photometric variability can increase RUWE (Belokurov et al., 2020), as can the presence of a disk (Fitton et al., 2022). Thus, the tightest cut may be removing a subset of the most variable or youngest stars within a given population. Individual \(Var_{G,90}\) values changed by \(<1\sigma\) after applying the RUWE \(<1.3\) cut for all groups except Taurus-Auriga, which varied by \(3\sigma\) (most likely due to a high fraction of members with disks). Additionally, \(>\)70% of the \(Var_{G,90}\) values have smaller uncertainties before the RUWE \(<1.3\) cut was applied. We determine no RUWE cut is necessary, and applying one may negatively impact the resulting \(Var_{90}\) value. **Field-star contamination:** There are often stars with motions and positions coincident with a group, particularly for the most diffuse populations. To explore this, we added stars from our field star sample (described in Section 3.2) to two groups and measured the effect on \(Var_{90}\). For this test, we used Lower Centaurus-Crux (17 Myr) and Pleiades (112 Myr). These were selected because together they span a range of ages and both groups have membership lists with low contamination rates. We added stars to each group from the field population randomly, only requiring that the added stars pass the same data quality and color cuts as the membership list. We then re-measured \(Var_{G,90}\), as well as \(Var_{G,50}\) and \(Var_{G,75}\). The \(90^{th}\) percentile \(Var_{G}\) value was least sensitive to interloper contamination (Figure 3). Even at 30% contamination level, field interlopers cause the median \(Var_{G}\) to drop by about 20%, while the \(90^{th}\) percentile value dropped by only 5%. It took nearly a 75% contamination level to drop \(Var_{G,90}\) by \(\gtrsim\)20%. 
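As a rough illustration of this injection test (a sketch under stated assumptions, not the published analysis), one can draw interlopers from a field sample and track how the percentile statistics respond; the `member_var` and `field_var` arrays below are stand-ins for per-star \(Var_{G}\) values of a membership list and of the 50 pc field sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data for illustration only; real values would be Var_G computed
# for a cluster membership list and for the 50 pc field sample.
member_var = rng.normal(0.6, 0.3, size=400)
field_var = rng.normal(0.0, 0.3, size=5000)

def inject_field_stars(member_var, field_var, contamination):
    """Add randomly drawn field stars until they make up the requested
    fraction of the combined sample (drawn with replacement so that high
    contamination fractions are always reachable)."""
    n_field = int(round(len(member_var) * contamination / (1.0 - contamination)))
    drawn = rng.choice(field_var, size=n_field, replace=True)
    return np.concatenate([member_var, drawn])

# Fractional change of the 50th, 75th, and 90th percentiles with contamination
# (compare Figure 3).
clean = {p: np.percentile(member_var, p) for p in (50, 75, 90)}
for frac in (0.1, 0.3, 0.5, 0.75):
    mixed = inject_field_stars(member_var, field_var, frac)
    for p in (50, 75, 90):
        print(f"{frac:.2f}  {p}th  {np.percentile(mixed, p) / clean[p]:.2f}")
```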
We conclude that field contamination had a weak effect on the result, which was a major motivation for selecting the \(90^{th}\) metric. **Uncertainties from group size:** The limiting age precision from our relation is 9-16% (when including distance) or 14-18% (absent distance corrections). However, this ignores uncertainties in \(Var_{90}\) which can be larger than the intrinsic uncertainty in the relation for low-mass groups. To see how decreasing the sample size effects the final age uncertainty, we used the three largest calibration groups that span most of the age range: Lower Centaurus-Crux (17 Myr), Psc-Eri (137 Myr), and Praesepe (700 Myr). We randomly removed stars from each group, recalculated the \(Var_{90}\) (bootstrap) uncertainties, and propagated those to an uncertainty in age. We ignored uncertainties in the fit parameters and \(\ln f\). As we show in Figure 6, uncertainties in \(Var_{90}\) dominated the final age uncertainties for all bands and ages if the group has \(\lesssim 100\) stars (that pass all cuts). The effect was the strongest for Psc-Eri, where the age uncertainty from uncertainties in \(Var_{RP,90}\) and \(Var_{BP,90}\) do not drop below the calibration uncertainty even for samples \(\gtrsim 400\) stars. Figure 6 also makes clear that \(Var_{RP,90}\) is not necessarily the best metric. While it has the smallest \(\ln f\) value (Table 2), the \(Var_{RP,90}\) uncertainties are larger than those for \(Var_{G,90}\), likely due to higher SNR in _Gaia_\(G\) compared to \(R_{P}\). **Color cuts:** We included a color cut due to the metric becoming ineffective for mid-to-late M dwarfs (Figure 2). To test the effect of this decision on the calibration, we reran the fit using stars with \(B_{P}-R_{P}<3\) and again using stars with \(B_{P}-R_{P}<2\). We found that the redder color cut had an insignificant effect on the fit parameters, but increased \(\ln f\) by \(2\sigma\). As expected, the relation was diluted by the cooler M dwarfs where the metric is less effective. When using a bluer color cut, we found the fit parameters agreed with our original at \(1\sigma\), including \(\ln f\). The main difference between the bluer cut and our original was that individual \(Var_{G,90}\) measurements had larger uncertainties due to the smaller sample of stars in each group. **Impact of input age uncertainties:** Ages for the full sample were computed in an inhomogeneous way. This was unavoidable, as the methods used (and physics involved) to assign ages to older groups (e.g., main-sequence turn-off and asteroseismology of evolved stars) are subject to different systematics than methods that apply to younger stars (e.g., pre-main-sequence stars and lithium depletion boundary). Even in cases where the same method was used (e.g., CMD fitting), the choice of model and algorithm rarely matched between different analyses. Generally, ages for a given group agreed between source, but not necessarily the uncertainties. To explore the effect on the final relation, we reran the fit setting age uncertainties to zero. We found the fit parameters agree within \(1\sigma\); \(\ln f\) increases marginally (\(\simeq\)1\(\sigma\)). If we instead assumed the calibration set age uncertainties are underestimated, \(\ln f\) would be smaller. However, the change is insignificant; it dropped by only \(1\sigma\) when we doubled the input age uncertainties from those listed in Table 1. To get a change of \(\geq 3\sigma\) in \(\ln f\) required increasing input uncertainties by a factor of five. 
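For reference, a minimal sketch of the kind of log-probability used in such fits is shown below. It assumes the form of Equation 4 and one common treatment of the scatter term (adding \(e^{\ln f}\) in quadrature to the age uncertainty in \(\log_{10}\) space); the paper's exact parameterization of \(f\), and its handling of the \(Var_{90}\) uncertainties, may differ, so this is illustrative only.

```python
import numpy as np

def log_prior(theta):
    m, b, a, lnf = theta
    # The slope bound mirrors the -4 < m < -1 prior quoted in the text;
    # the remaining bounds are illustrative wide limits.
    if -4.0 < m < -1.0 and -10.0 < lnf < 1.0 and abs(a) < 0.1 and abs(b) < 20.0:
        return 0.0
    return -np.inf

def log_probability(theta, var90, dist_pc, log_age, log_age_err):
    """Gaussian likelihood in log10(age) for the distance-corrected relation."""
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    m, b, a, lnf = theta
    model = m * var90 + a * dist_pc + b            # Equation 4, log10(age / Myr)
    s2 = log_age_err**2 + np.exp(2.0 * lnf)        # scatter added in quadrature
    return lp - 0.5 * np.sum((log_age - model)**2 / s2 + np.log(2.0 * np.pi * s2))

# A function of this form can be passed to emcee.EnsembleSampler as log_prob_fn.
```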
We conclude that our results are insensitive to our assumptions about the group age uncertainties. ## 5 Application Here we highlight the utility of \(Var_{90}\) and the age-\(Var_{90}\) calibration by showing how they can be used to assess the assigned ages of newly identified groups, test if a young group is a real co-eval population instead of a collection of field stars with similar space velocities, and identify new young associations. ### Testing the ages of groups We drew a collection of the Theia groups (Kounkel et al., 2020) within 350pc that have at least 100 stars that pass the sample selection cuts (Section 3.3). In total, this included 59 groups with CMD-based ages from 16 Myr to 2.6 Gyr, comparable to our calibration sample. For each group, we calculated \(Var_{G,90}\), \(Var_{BP,90}\), and \(Var_{RP,90}\), converted that to an age estimate in each filter, and took the weighted mean and uncertainty of the three ages. Combining the three age estimates in this way may lead to underestimated uncertainties, as each fit was subject to some common systematics. However, the dominant uncertainty was due to scatter in \(Var_{90}\), and tests on the calibration sample suggested this simple combination was reasonable. Figure 7 compares our predicted ages to those from Kounkel et al. (2020), determined using the neural network Auriga. Auriga uses quantities derived from the photometry and parallaxes (the CMD), such as the ratio of high and low mass stars and the ratio of post-, main-, and pre-sequence stars. Of the 59 groups, 48 (80%) have variability ages 3\(\sigma\) consistent with those from Kounkel et al. (2020). Of the 11 discrepant groups, 8 are \(\gtrsim\)300 Myr with variability ages significantly higher than the Auriga-determined age. This can be seen in Figure 7 as an overdensity of points in the top-half of the age distribution sitting above the 1:1 line. Below \(\gtrsim\)300 Myr, there are a similar number of points on either side of the 1:1 line. This is, in part, because \(Var_{90}\) works better at younger ages. It is also likely that some Theia groups are field stars with coincident space motions, which we discuss in the next section. Most of the variability-based ages are _more precise_ than the isochronal ages, particularly at young ages. Of the 48 groups where the two ages agree, 26 (55%) have variability-based age uncertainties below the Auriga-based age uncertainties. For groups \(<100\) Myr, where \(Var_{90}\) works best, five of seven (70%) have smaller age uncertainties when using variability ages compared to the Auriga ages. We performed a similar test on the five published MELANGE groups. All but one predicted ages agreed Figure 6: The effects of group size on the final age uncertainties. We used three groups of various ages, Lower Centaurus–Crux (\(\sim\)17 Myr, top), Psc-Eri (\(\sim\)130 Myr, middle), and Praesepe (\(\sim\)750 Myr, bottom). The individual points show the resulting age uncertainty arising from uncertainties in \(Var_{90}\), calculated by removing stars from these associations. The lines show the age uncertainty from the fit parameters’ uncertainties (including \(\ln f\)). An optimal sample size would be where the uncertainty in \(Var_{90}\) is below the fit uncertainty, which is dependent on the filter used but is typically \(\sim\)200-250 stars. within 1\(\sigma\) to their reported values. 
The exception, MELANGE-3 had a 3.5\(\sigma\) older variability age (\(\simeq\)300 Myr) compared to the age derived from lithium and rotation (105 Myr; Barber et al., 2022). This may have been because the group lands at the distance limit of our calibration sample (326 pc) and has a high field contamination rate (\(\simeq\)50%; Barber et al., 2022). All associations we tested are listed in Table 3, including the literature age and variability-based age. ### Testing the validity of a group Automated machine-learning tools designed to find overdensities of stars (e.g., HDBSCAN; McInnes et al., 2017) run the risk of identifying collections of stars with similar velocities that are neither bound nor co-eval. Our results in Section 5.1 hint at this problem; there are Theia groups with variability ages higher than the CMD-based age, and many of these groups have variability ages similar to what we expect when drawing random field stars (\(\sim\)1 Gyr since we are using the 90th percentile of \(Var\)). Groups with variability levels closer to the local field stars than the values predicted by their age are unlikely to be real co-eval populations. We quantified this using a Bayes factor: \[K=\frac{P(Var_{90}|G)}{P(Var_{90}|F)}, \tag{5}\] where \(P(Var_{90}|G)\) is the probability of measuring the \(Var_{90}\) value given that the stars are drawn from a real population (with an assumed age), and \(P(Var_{90}|F)\) is the probability assuming stars are drawn from the field. We computed both terms assuming Gaussian distributions. We restricted our analysis to \(Var_{G,90}\), although the other bands gave similar results. The numerator term we calculated by propagating the assigned age into a predicted \(Var_{G,90}\) and uncertainty (accounting for age and fit uncertainties). For the denominator, we drew a random sample of stars, matching the group size, with distances within 0.1 mas of the group distance and satisfying all cuts from Section 3.3. We list the resulting \(K\) values for each group in Table 3. Four (of five) MELANGE groups and 53 (of 59) Theia groups we tested had strong evidence of being a real association (\(\log_{10}(K)>0.5\)). Four of the remaining Theia groups (Theia 514, 793, 1098, and 1532) and the one MELANGE group (MELANGE-2) were ambiguous (\(-0.5<\log_{10}(K)<0.5\)). These were cases where the variability was consistent with a field population, but the CMD age was also relatively old. MELANGE-2 is also the smallest group (32 stars), making this test challenging. The remaining two Theia groups (Theia 810 and 1358) have evidence for not being a real association (\(\log_{10}(K)<-0.5\)). Consistent with our findings in the previous section, all seven of ambiguous and unlikely groups are \(>300\) Myr and have variability ages above their CMD-based age, helping to explain the excess of points above the 1:1 line in Figure 7. ### Finding new associations In Figure 8, we can see potential of \(Var\) for searching for new associations. We first show all stars within the general area of Scorpius-Centarus (\(4<\pi<11\)) and satisfying the cuts from Section 3.3. A few of the denser regions show up, but not the overall structure. However, when we only include stars in the top 2% of \(Var_{G}\), the Sco-Cen population is clear. Further, many of the youngest groups (e.g., Corona Australis and Upper Scorpius) are the most prominent after applying the \(Var_{G}\) cut. One could have made a similar or better Sco-Cen member selection using _Gaia_ astrometry or CMD position. 
However, the benefit was that we were able to identify Sco-Cen and numerous sub-populations from excess noise in the _Gaia_ photometry **alone**. This would have worked even without a parallax cut; we only applied that to keep the sample size reasonable. One could therefore combine \(Var\) with positional, kinematic, and other age information to identify groups that are far more diffuse or otherwise challenging to identify and confirm purely from the traditional positional and kinematic information.

Figure 7: The predicted ages of the MELANGE groups (triangles) and Theia groups (circles) within 350 pc that have \(>100\) stars passing the sample cuts. The Theia groups with \(>100\) stars but \(<200\) stars are more transparent. A line showing agreement is included for reference. Groups are colored by the Bayes factor comparing the probability of being a bona fide group or a collection of field stars (Equation 5). A lower Bayes Factor (more red) indicates the association is more likely to be drawn from field stars.

## 6 Summary and Conclusions

### Summary of findings

Earlier work from Guidry et al. (2021) and Barlow et al. (2022) showed that one can use the excess flux uncertainty from _Gaia_ to identify variable white dwarfs and hot subdwarfs, respectively. Here we have extended this work to young stars. Specifically, we 1) modified the excess uncertainty metric using the median flux uncertainties provided by _Gaia_ (Riello et al., 2021), 2) showed that our new metric (\(Var\)) scales with age for FGK and early M dwarfs, 3) calibrated the relation between the 90th percentile of \(Var\) (\(Var_{90}\)) and age, and 4) demonstrated how the metric can be used to estimate the ages of young populations, confirm which young populations are real, and search for new young groups. Our results confirmed a correlation between stellar variability and age. Our calibrations in all bands, whether or not we included distance corrections, yield a Skumanich-like decay with age consistent with similar relations using full light curves (Morris, 2020). We found a narrow range of solutions in our calibration, and the uncertainty of the output is dominated by \(\ln f\). This suggests the scatter in the relation is astrophysical and that the fundamental age precision limit using the variability-age relations is \(\geq\)9%. The methods described here work best on populations younger than 500 Myr and those with \(>\)100 stars. This is particularly true for testing if a group is real; the probability of drawing a population of highly variable stars by chance is negligibly low.

### Are the Theia Strings real structures?

Kounkel and Covey (2019) constructed Theia strings by manually combining sets of groups (originally identified by HDBSCAN) with similar ages and coherent spatial and kinematic structure. Zucker et al. (2022) argued that the individual groups that make up the strings may be real populations but were unlikely to be part of a single bona fide structure. They primarily pointed out that each string has a high velocity dispersion, yielding a high virial mass and breakup timescales much shorter than the group ages. Manea et al. (2022) found a majority of Theia structures contain abundances more homogeneous than their local fields, noting that of the 10 strings and 8 compact groups tested, Theia 1415 was the only string (and group) they found to have a high abundance dispersion more closely matching local background stars. However, Zucker et al.
(2022) argued this could happen even by chance if many of the sub-components of the string are young populations, and that it does not require them to be part of a larger structure.

Figure 8: Stars in the region around the Sco-Cen OB association. The top panel shows all the stars with \(4<\pi<11\), an extremely generous cut that includes nearly all of Sco-Cen. The bottom shows the same parallax cut, but adding a requirement that the star is in the top 2% of \(Var_{G}\). Some of the youngest regions (e.g., Upper Scorpius in the top-left and Corona Australis in the bottom left) show quite clearly after a simple variability cut, as well as dense sub-groups like Lupus.

Initially, our results appeared to be in contrast with Zucker et al. (2022). We found the majority of Theia groups contain variability measurements consistent with their reported isochronal ages. Just considering the strings, the two age estimates matched in 36 of 45 cases (80%). It is unlikely these numbers would match so often if each string were comprised of many groups with varying ages. Exactly how unlikely depends on the age and age spread between subgroups, but if we assume purely random draws from the Theia group ages, then we would expect no matches over the 45 strings by chance alone. Further, most Theia strings passed our validity test. Only one of the strings (Theia 830) has strong evidence for not being a real association. Even assuming cases with ambiguous results (similar probability of being pulled from the field versus a real group) were not real, 39 of 43 (90%) of the Theia strings had strong evidence of being real. These results could be reconciled if some of the sub-groups are associated and some are not, and/or the strings contain some field contamination. Because \(Var_{90}\) is weakly impacted by contamination (Figure 3), most of the sub-groups within a string could be unassociated and we would still get an age consistent with the CMD-based age. However, Zucker et al. (2022) would find a high velocity dispersion even if just a few of the sub-groups were disconnected. If the unassociated groups were preferentially not real (field stars) or older than the main group, this may also help explain why, among the \(>300\) Myr groups, the variability ages were preferentially higher than the isochronal ages (Figure 7). A similar explanation is that each string is composed of multiple populations with similar but not identical ages and kinematics. An example is the Sco-Cen OB association, which is comprised of at least three, but probably many more, populations (e.g. Kerr et al., 2021; Luhman, 2022). These sub-groups are unbound and have slightly differing kinematics and ages (Wright and Mamajek, 2018). The velocity difference between parts of Sco-Cen can exceed 10 km s\({}^{-1}\) (Zerjal et al., 2023), similar to many of the Theia strings, and this dispersion would only grow with time. Sco-Cen would have broken apart by the age of the oldest Theia strings, but some strings are \(<50\) Myr and a denser equivalent of Sco-Cen may still show up for hundreds of millions of years.

### Benefits of Var

Our age-\(Var_{90}\) calibration can yield ages with \(\simeq\)10% precision, provided the population has a sufficient number of FGK and early M star members (\(\gg\)100). This is competitive with other methods, like isochrone fitting. We can see this in the comparison of our variability ages to the CMD-based ages from Kounkel et al. (2020); variability-based ages were often more precise, particularly below 200 Myr.
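As a concrete sketch of the procedure described in Section 5.1 (not the authors' pipeline), the snippet below converts a group's measured \(Var_{90}\) values and median distance into per-band ages with the distance-corrected coefficients from Table 2, and combines the bands with a simple inverse-variance weighted mean. The per-band uncertainty here uses only the intrinsic-scatter floor and ignores the \(Var_{90}\) measurement uncertainty that dominates for small groups, and the example input values are made up.

```python
import numpy as np

# Best-fit coefficients of Equation 4 from Table 2:
# log10(age / Myr) = m * Var_90 + a * d[pc] + b, with scatter floor "lnf".
FIT = {
    "G":  dict(m=-2.40,  b=4.22, a=-0.0011,  lnf=0.155),
    "BP": dict(m=-2.461, b=4.40, a=-0.00174, lnf=0.129),
    "RP": dict(m=-2.376, b=4.62, a=-0.00167, lnf=0.089),
}

def band_age_myr(var90, dist_pc, band):
    p = FIT[band]
    return 10.0 ** (p["m"] * var90 + p["a"] * dist_pc + p["b"])

def combined_age_myr(var90_by_band, dist_pc):
    """Inverse-variance weighted mean of the three per-band ages, taking the
    intrinsic-scatter floor as the fractional uncertainty of each band."""
    ages = np.array([band_age_myr(var90_by_band[b], dist_pc, b) for b in FIT])
    sigmas = np.array([FIT[b]["lnf"] for b in FIT]) * ages
    w = 1.0 / sigmas**2
    return float(np.sum(w * ages) / np.sum(w)), float(1.0 / np.sqrt(np.sum(w)))

# Made-up Var_90 values for a hypothetical group at 150 pc:
age, err = combined_age_myr({"G": 0.85, "BP": 0.90, "RP": 0.95}, 150.0)
```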
A major benefit of this method is the limited information needed. We are able to get quick age estimates using available _Gaia_ DR3 data, without the need to collect additional rotation period and lithium measurements. For example, the age for MELANGE-4 is based on lithium absorption, which requires multiple nights of observations (Wood et al., 2022). While not as precise, we calculated a similar age from _Gaia_ data alone (\(26^{+8}_{-5}\) Myr compared to \(27\pm 3\) Myr). The \(Var_{90}\) calibration is also independent of CMD- or abundance-based methods, meaning it can be combined with them to improve precision. Another example is MELANGE-1, which was identified using FriendFinder (Tofflemire et al., 2021) by selecting stars with similar positions and motions to a given target. The population showed weak evidence of spatial or kinematic overdensities, and required additional radial velocity, rotation period, and lithium measurements to confirm that the group is real and to measure its age. As we showed in Section 5, we obtained a consistent (but less precise) age and confirmed it is a real population from _Gaia_ data alone. While other fitting methods, such as isochrone fitting, rely on the distribution of pre-, post-, and main-sequence stars, \(Var_{90}\) works in age ranges where there are few or no pre-main-sequence or evolved stars (approximately 200-500 Myr). It is also independent of extinction (provided the stars are sufficiently bright). Lastly, the method will grow in effectiveness as _Gaia_ collects additional data and we can calibrate past 350 pc. While individual stars cannot be aged using this method, \(Var\) can be used as another metric for identifying high-probability group members. This is especially useful for diffuse groups with few pre-main-sequence stars (e.g., AB Dor). \(Var\) can be used to identify candidate young stars in the field, and other methods can be used to confirm membership.

### Limitations

The most obvious place where \(Var_{90}\) failed was MELANGE-3; \(Var_{90}\) suggested an age of 300\(\pm\)60 Myr, but the rotation, lithium levels, and CMD all indicate an age of 105\(\pm\)10 Myr (Barber et al., 2022). This discrepancy also stands out because the other disagreements for MELANGE and Theia ages were for \(>500\) Myr, where \(Var_{90}\) is less effective and groups are harder to distinguish from the field (less likely to be real structures). MELANGE-3 did still pass our validity test. For the best age estimates, the metric requires a sample size of at least 100 stars, while ages can be derived from a CMD with a handful of turn-off or pre-main-sequence stars. Turn-off stars are also available at far greater distances than 350 pc. The size limitation is also a problem for low-mass nearby groups like those from Moranta et al. (2022), the majority of which have fewer than 100 members. \(Var\) is ineffective for stars cooler than \(\simeq\)M3V (Figure 2). We suspect this is a mix of a few effects: 1) the _Gaia_ fitted uncertainties are calibrated mostly on FGK stars and do not include a color term, 2) M dwarfs are intrinsically fainter than FGK dwarfs, so the distance (brightness) effect discussed in Section 4.1 is stronger, and 3) M dwarfs are variable for longer and their variation may saturate below 100-500 Myr (Jackson et al., 2012; Kiman et al., 2021).

### Future work

The methods described here could be used to test which sub-groups of a given Theia string are co-eval.
For example, we could see if the \(Var\) distributions for each sub-group are consistent with being drawn from the same parent population (one single-aged group). A more complex mixture model would also let us test if the strings are consistent with a mix of a young population and field contaminants or multiple young populations. This could be done in conjunction with analysis of the kinematics and position (e.g., which groups combine together to yield a low velocity dispersion while maintaining a consistent \(Var\) distribution). It may be possible to recover \(Var\) as a useful metric for mid-to-late M dwarfs. One option is to re-calibrate _Gaia_ photometric uncertainty estimates including color as a parameter. Similarly, one could compare \(Var\) to the expected uncertainty for a set of stars of similar distance, brightness, and \(B_{P}-R_{P}\) color. This would effectively change \(Var\) to a metric that compares the photometric uncertainties to that of the median star of similar spectral type and apparent brightness. The number of associations with high-quality membership lists and well-determined ages decreases significantly past 350 pc. This made it challenging to calibrate the relation further. The reach of _Gaia_ data and new search tools are expanding the list of groups (e.g. Qin et al., 2022; He et al., 2022). More complete membership lists and more detailed age estimates for these groups would be invaluable to calibrate Equation 3 to 500 pc or beyond. Another route for improvement would be to use the \(BPRP\) spectra from _Gaia_. Stellar variability is stronger in some parts of the spectrum than others (e.g., around H\(\alpha\)). One could create synthetic photometry from the spectra (Gaia Collaboration et al., 2022) tuned to these wavelength regions, which should be more effective than broadband photometry alone. The authors wish to thank Halee and Bandit for their tireless efforts to interrupt zoom meetings between the two authors. We also thank Pa Chia Thao for her comments on the manuscript and the UNC Journal Club for discussing the Guidry et al. (2021) paper, which spawned the idea for this work. MGB and AWM were both supported by a grant from the NSF CAREER program (AST-2143763) and a grant from NASA's exoplanet research program (XRP 80NSSC21K0393). This research has made use of the tool provided by Gaia DPAC to reproduce the Gaia (E)DR3 photometric uncertainties described in the GAIA-C5-TN-UB-JMC-031 technical note using data in Riello et al. (2021). _Facilities: Gaia_ emcee, corner.py, matplotlib (Hunter, 2007), Astropy (Astropy Collaboration et al., 2013, 2018), numpy(Harris et al., 2020), scipy(Virtanen et al., 2020). \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{1}{c}{ Name} & Literature Age\({}^{a}\) & Age & Variability Age & Distance\({}^{b}\) & N\({}_{\rm stars}\)\({}^{c}\) & Bayes Factor & String? \\ & (Myr) & Reference & (Myr) & pc & & \(\log_{10}\)(K) & Y/N \\ \hline Theia 44 & 32\({}^{+13}_{-9}\) & Kounkel et al. (2020) & 46\({}^{+9}_{-7}\) & 127 & 109 & 27.4 & Y \\ Theia 115 & 45\({}^{+13}_{-9}\) & Kounkel et al. (2020) & 41\({}^{+9}_{-9}\) & 178 & 202 & 62.0 & Y \\ Theia 116 & 55\({}^{+17}_{-13}\) & Kounkel et al. (2020) & 61\({}^{+3}_{-9}\) & 226 & 599 & 221.6 & Y \\ Theia 120 & 37\({}^{+6}_{-6}\) & Kounkel et al. (2020) & 49\({}^{+11}_{-8}\) & 327 & 427 & 177.8 & Y \\ Theia 138 & 46\({}^{+7}_{-9}\) & Kounkel et al. 
(2020) & 70\({}^{+17}_{-12}\) & 359 & 189 & 67.5 & Y \\ \hline \end{tabular} \end{table} Table 3: Test Group Results \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Name} & Literature Age\({}^{\mathbf{a}}\) & Age & Variability Age & Distance\({}^{\mathbf{b}}\) & N\({}_{\rm stars}\)\({}^{\mathbf{C}}\) & Bayes Factor & String? \\ & (Myr) & Reference & (Myr) & pc & & log\({}_{10}\)(K) & Y/N \\ \hline Theia 160 & \(79^{+33}_{-23}\) & Kounkel et al. (2020) & \(106^{+29}_{-20}\) & 175 & 139 & 24.9 & Y \\ Theia 163 & \(100^{+32}_{-32}\) & Kounkel et al. (2020) & \(106^{+24}_{-17}\) & 318 & 474 & 136.5 & Y \\ Theia 164 & \(112^{+32}_{-25}\) & Kounkel et al. (2020) & \(80^{+20}_{-14}\) & 314 & 203 & 71.8 & Y \\ Theia 211 & \(275^{+91}_{-91}\) & Kounkel et al. (2020) & \(515^{+167}_{-164}\) & 215 & 141 & 6.4 & N \\ Theia 214 & \(158^{+50}_{-38}\) & Kounkel et al. (2020) & \(184^{+65}_{-40}\) & 218 & 144 & 28.3 & Y \\ Theia 215 & \(132^{+38}_{-29}\) & Kounkel et al. (2020) & \(71^{+19}_{-13}\) & 231 & 268 & 79.4 & Y \\ Theia 216 & \(107^{+31}_{-24}\) & Kounkel et al. (2020) & \(108^{+32}_{-21}\) & 230 & 220 & 47.3 & Y \\ Theia 219 & \(219^{+50}_{-41}\) & Kounkel et al. (2020) & \(384^{+93}_{-65}\) & 262 & 182 & 14.8 & Y \\ Theia 227 & \(191^{+91}_{-62}\) & Kounkel et al. (2020) & \(493^{+109}_{-79}\) & 329 & 764 & 37.6 & N \\ Theia 228 & \(138^{+40}_{-31}\) & Kounkel et al. (2020) & \(119^{+48}_{-28}\) & 329 & 102 & 22.9 & Y \\ Theia 303 & \(151^{+58}_{-42}\) & Kounkel et al. (2020) & \(177^{+48}_{-32}\) & 224 & 226 & 36.1 & Y \\ Theia 311 & \(209^{+86}_{-61}\) & Kounkel et al. (2020) & \(337^{+78}_{-56}\) & 288 & 264 & 23.9 & Y \\ Theia 370 & \(214^{+133}_{-82}\) & Kounkel et al. (2020) & \(129^{+53}_{-31}\) & 146 & 132 & 12.8 & N \\ Theia 430 & \(178^{+48}_{-49}\) & Kounkel et al. (2020) & \(305^{+95}_{-61}\) & 161 & 117 & 6.7 & Y \\ Theia 431 & \(234^{+146}_{-30}\) & Kounkel et al. (2020) & \(205^{+53}_{-36}\) & 167 & 176 & 20.0 & Y \\ Theia 433 & \(347^{+121}_{-100}\) & Kounkel et al. (2020) & \(820^{+201}_{-14}\) & 235 & 255 & 5.4 & Y \\ Theia 438 & \(316^{+130}_{-155}\) & Kounkel et al. (2020) & \(230^{+84}_{-15}\) & 263 & 106 & 12.6 & Y \\ Theia 506 & \(407^{+112}_{-112}\) & Kounkel et al. (2020) & \(177^{+47}_{-32}\) & 93 & 202 & 18.3 & Y \\ Theia 509 & \(269^{+94}_{-70}\) & Kounkel et al. (2020) & \(146^{+42}_{-29}\) & 145 & 240 & 21.3 & Y \\ Theia 514 & \(331^{+116}_{-86}\) & Kounkel et al. (2020) & \(863^{+267}_{-177}\) & 282 & 223 & -0.3 & N \\ Theia 515 & \(275^{+71}_{-57}\) & Kounkel et al. (2020) & \(450^{+139}_{-92}\) & 297 & 125 & 6.0 & N \\ Theia 516 & \(269^{+62}_{-50}\) & Kounkel et al. (2020) & \(318^{+111}_{-70}\) & 300 & 133 & 8.4 & Y \\ Theia 519 & \(389^{+79}_{-65}\) & Kounkel et al. (2020) & \(564^{+223}_{-132}\) & 343 & 120 & 2.7 & N \\ Theia 595 & \(513^{+195}_{-141}\) & Kounkel et al. (2020) & \(527^{+125}_{-90}\) & 113 & 322 & 3.4 & Y \\ Theia 599 & \(302^{+87}_{-68}\) & Kounkel et al. (2020) & \(906^{+207}_{-150}\) & 238 & 372 & 2.8 & Y \\ Theia 600 & \(269^{+70}_{-58}\) & Kounkel et al. (2020) & \(569^{+177}_{-113}\) & 260 & 143 & 6.8 & N \\ Theia 603 & \(234^{+82}_{-61}\) & Kounkel et al. (2020) & \(288^{+105}_{-64}\) & 287 & 110 & 7.3 & Y \\ Theia 605 & \(105^{+27}_{-22}\) & Kounkel et al. (2020) & \(154^{+48}_{-25}\) & 319 & 278 & 56.0 & Y \\ Theia 678 & \(339^{+186}_{-180}\) & Kounkel et al. (2020) & \(767^{+174}_{-215}\) & 144 & 427 & 2.9 & Y \\ Theia 683 & \(295^{+82}_{-82}\) & Kounkel et al. 
(2020) & \(421^{+114}_{-114}\) & 216 & 132 & 2.6 & Y \\ Theia 684 & \(355^{+73}_{-73}\) & Kounkel et al. (2020) & \(560^{+143}_{-97}\) & 216 & 141 & 5.3 & Y \\ Theia 685 & \(331^{+67}_{-56}\) & Kounkel et al. (2020) & \(758^{+166}_{-122}\) & 229 & 369 & 6.9 & Y \\ Theia 695 & \(178^{+41}_{-33}\) & Kounkel et al. (2020) & \(385^{+136}_{-84}\) & 304 & 104 & 9.4 & Y \\ Theia 786 & \(282^{+81}_{-63}\) & Kounkel et al. (2020) & \(253^{+104}_{-61}\) & 152 & 118 & 8.0 & Y \\ Theia 790 & \(324^{+93}_{-72}\) & Kounkel et al. (2020) & \(563^{+203}_{-124}\) & 200 & 107 & 3.0 & N \\ Theia 792 & \(339^{+78}_{-63}\) & Kounkel et al. (2020) & \(372^{+100}_{-68}\) & 211 & 142 & 8.2 & Y \\ Theia 793 & \(501^{+144}_{-112}\) & Kounkel et al. (2020) & \(1110^{+420}_{-257}\) & 241 & 109 & 0.4 & Y \\ Theia 796 & \(251^{+65}_{-52}\) & Kounkel et al. (2020) & \(425^{+158}_{-98}\) & 229 & 109 & 4.3 & Y \\ Theia
2306.14244
Largest and Least H-Eigenvalues of Symmetric Tensors and Hypergraphs
In tensor eigenvalue problems, one is likely to be more interested in H-eigenvalues of tensors. The largest H-eigenvalue of a nonnegative tensor or of a uniform hypergraph is the spectral radius of the tensor or of the uniform hypergraph. We find upper bounds and lower bounds (interlacing inequalities) for the largest H-eigenvalue of a principal subtensor of a symmetric zero diagonal tensor that is of even order or nonnegative, as well as lower bounds for the largest H-eigenvalue of a uniform hypergraph with some vertices or edges removed. We also investigate similar problems for the least H-eigenvalues. We give examples to verify the sharpness of the bounds or in some cases for uniform hypergraphs, we characterize the equality. Particularly, for a connected linear $k$-uniform hypergraph $G$ with $v\in V(G)$, we give a sharp lower bound for the spectral radius of $G-v$ in terms of the spectral radius of $G$ and the degree of $v$ and characterize the extremal hypergraphs, and show that the maximum spectral radius of the subhypergraphs with one vertex removed is greater than or equal to the spectral radius of the hypergraph minus one, which is attained if and only if it is a Steiner system $S(2,k,n)$.
Hongying Lin, Lu Zheng, Bo Zhou
2023-06-25T13:32:10Z
http://arxiv.org/abs/2306.14244v1
# Largest and Least H-Eigenvalues of Symmetric Tensors and Hypergraphs ###### Abstract In tensor eigenvalue problems, one is likely to be more interested in H-eigenvalues of tensors. The largest H-eigenvalue of a nonnegative tensor or of a uniform hypergraph is the spectral radius of the tensor or of the uniform hypergraph. We find upper bounds and lower bounds (interlacing inequalities) for the largest H-eigenvalue of a principal subtensor of a symmetric zero diagonal tensor that is of even order or non-negative, as well as lower bounds for the largest H-eigenvalue of a uniform hypergraph with some vertices or edges removed. We also investigate similar problems for the least H-eigenvalues. We give examples to verify the sharpness of the bounds or in some cases for uniform hypergraphs, we characterize the equality. Particularly, for a connected linear \(k\)-uniform hypergraph \(G\) with \(v\in V(G)\), we give a sharp lower bound for the spectral radius of \(G-v\) in terms of the spectral radius of \(G\) and the degree of \(v\) and characterize the extremal hypergraphs, and show that the maximum spectral radius of the subhypergraphs with one vertex removed is greater than or equal to the spectral radius of the hypergraph minus one, which is attained if and only if it is a Steiner system \(S(2,k,n)\). **Mathematics Subject Classification (2010)**: 15A69, 05C50, 05C65 **Key words:** H-eigenvalue, interlacing inequalities, symmetric tensor, uniform hypergraph, Steiner system Introduction Let \(\mathbb{R}\) be the field of real numbers and \(\mathbb{R}^{n}\) the \(n\)-dimensional real space. For positive integers \(k\) and \(n\), a (real) tensor (or hypermatrix) \(\mathcal{T}=(t_{i_{1}\ldots i_{k}})\) of order \(k\) and dimension \(n\) is a multidimensional array with entries \(t_{i_{1}\ldots i_{k}}\in\mathbb{R}\) for \(i_{j}\in[n]:=\{1,\ldots,n\}\) and \(j\in[k]\). An entry \(t_{i_{1}\ldots i_{k}}\) with \(i_{1}=\cdots=i_{k}=i\in[n]\) is a diagonal entry of \(\mathcal{T}\). A zero diagonal tensor is a tensor for which all diagonal entries are equal to zero. The tensor \(\mathcal{T}\) is symmetric if each entry \(t_{i_{1}\ldots i_{k}}\) is invariant with respect to all permutations of \(i_{1},\ldots,i_{k}\). A tensor is nonnegative if all its entries are nonnegative. For a tensor \(\mathcal{T}\) of order \(k\) and dimension \(n\), and an \(n\)-dimensional vector \(\mathbf{x}=(x_{1},\ldots,x_{n})^{\top}\), \(\mathcal{T}\mathbf{x}^{k-1}\) is defined as an \(n\)-dimensional vector whose \(i\)-th entry is \[(\mathcal{T}\mathbf{x}^{k-1})_{i}\equiv\sum_{i_{2},\ldots,i_{k}\in[n]}t_{ii_{ 2}\ldots i_{k}}x_{i_{2}}\cdots x_{i_{k}}\] for \(i\in[n]\), and \(\mathcal{T}\mathbf{x}^{k}\) is defined as the \(k\)-th degree homogeneous polynomial \[\mathcal{T}\mathbf{x}^{k}\equiv\sum_{i_{1},\ldots,i_{k}\in[n]}t_{i_{1}\ldots i _{k}}x_{i_{1}}\cdots x_{i_{k}}.\] **Definition 1.1**.: _[_19, 25_]_ _Let \(\mathcal{T}\) be a tensor of order \(k\) and dimension \(n\). For some complex \(\lambda\), if there is a nonzero vector \(\mathbf{x}\) such that_ \[\lambda x_{i}^{k-1}=(\mathcal{T}\mathbf{x}^{k-1})_{i},\] _i.e.,_ \[\lambda x_{i}^{k-1}=\sum_{i_{2},\ldots,i_{k}\in[n]}t_{ii_{2}\ldots i _{k}}x_{i_{2}}\cdots x_{i_{k}} \tag{1.1}\] _for \(i\in[n]\), then \(\lambda\) is called an eigenvalue of \(\mathcal{T}\), and \(\mathbf{x}\) is called an eigenvector of \(\mathcal{T}\) corresponding to \(\lambda\). 
Moreover, if both \(\lambda\) and \(\mathbf{x}\) are real, then we call \(\lambda\) an H-eigenvalue and \(\mathbf{x}\) an H-eigenvector of \(\mathcal{T}\)._ For more details on tensor eigenvalues and eigenvectors, we refer the readers to [5, 17, 27]. Let \(\mathcal{T}\) be a tensor of order \(k\) and dimension \(n\). The spectral radius of \(\mathcal{T}\) is the largest modulus of the eigenvalues of \(\mathcal{T}\), denoted by \(\rho(\mathcal{T})\). Suppose that there exists at least one H-eigenvalue of \(\mathcal{T}\). For instance, \(\mathcal{T}\) has at least one H-eigenvalue if \(\mathcal{T}\) is symmetric and \(k\) is even, see [25], or if \(\mathcal{T}\) is nonnegative, see Proposition 1.1 below. In this case, we denote by \(\lambda_{\max}(\mathcal{T})\) and \(\lambda_{\min}(\mathcal{T})\) the largest H-eigenvalue and the least H-eigenvalue of \(\mathcal{T}\), respectively. It is then evident that \(\lambda_{\min}(\mathcal{T})\leq\lambda_{\max}(\mathcal{T})\leq\rho(\mathcal{T})\), and if \(\mathcal{T}\) is nonnegative, then \(\lambda_{\max}(\mathcal{T})=\rho(\mathcal{T})\). In most cases, one is likely to be more interested in H-eigenvalues of tensors. **Definition 1.2**.: _[_10_]_ _A tensor \(\mathcal{T}=(t_{i_{1}\ldots i_{k}})\) of order \(k\) and dimension \(n\) is said to be weakly reducible if there is a nonempty proper subset \(I\subset[n]\) such that \(t_{i_{1}\ldots i_{k}}=0\) whenever \(i_{1}\in I\) and \(i_{j}\not\in I\) for at least one \(j\in\{2,\ldots,k\}\). Otherwise, it is weakly irreducible._ The classic Perron-Frobenius theorem has been extended to nonnegative tensors by the efforts of many scholars as follows, see [4, 10, 34, 35] with a unifying treatment in [11]. **Proposition 1.1**.: _[_10_]_ _For a nonnegative tensor \(\mathcal{T}\) of order \(k\) and dimension \(n\) with \(n,k\geq 2\), \(\rho(\mathcal{T})\) is an H-eigenvalue of \(\mathcal{T}\) with a positive H-eigenvector. If \(\mathcal{T}\) is weakly irreducible, then there is a unique positive H-eigenvector, up to a multiplicative constant, and moreover, if \(\lambda\) is an H-eigenvalue with a positive eigenvector, then \(\lambda=\rho(\mathcal{T})\)._ Let \(G\) be a \(k\)-uniform hypergraph with vertex set \(V(G)=[n]\) and edge set \(E(G)\), where \(n,k\geq 2\). For \(u\in V(G)\), denote by \(E_{u}(G)\) the set of edges containing \(u\) in \(G\). The degree of \(u\) in \(G\) is defined as \(d_{G}(u)=|E_{u}(G)|\); we also write \(d_{u}\) for \(d_{G}(u)\) if there is no confusion. For any two distinct vertices \(i\) and \(j\) of \(G\), we write \(i\sim j\) if there is an edge containing \(i\) and \(j\), and \(i\nsim j\) otherwise. A linear hypergraph is one in which every two distinct edges intersect in at most one vertex. **Definition 1.3**.: _[_6, 7_]_ _The adjacency tensor of \(G\) is defined as the symmetric, nonnegative tensor \(\mathcal{A}(G)=(a_{i_{1}\ldots i_{k}})\) of order \(k\) and dimension \(n\), where_ \[a_{i_{1}\ldots i_{k}}=\begin{cases}\frac{1}{(k-1)!}&\text{if }\{i_{1},\ldots,i_{k}\}\in E(G),\\ 0&\text{otherwise.}\end{cases}\] Note that the adjacency tensor of a uniform hypergraph is nonnegative, so it has at least one H-eigenvalue by Proposition 1.1. **Definition 1.4**.: _The spectral radius (or largest H-eigenvalue) of \(\mathcal{A}(G)\) is called the spectral radius (or the largest H-eigenvalue) of \(G\), denoted by \(\rho(G)\), and the least H-eigenvalue of \(\mathcal{A}(G)\) is called the least H-eigenvalue of \(G\), denoted by \(\lambda(G)\).
That is, \(\rho(G)=\rho(\mathcal{A}(G))=\lambda_{\max}(\mathcal{A}(G))\) and \(\lambda(G)=\lambda_{\min}(\mathcal{A}(G))\)._ If \(G\) is an ordinary graph, then \(\rho(G)\) and \(\lambda(G)\) are respectively the largest and the least eigenvalues of the adjacency matrix of \(G\) [1, 8, 21]. For a \(k\)-uniform hypergraph \(G\), by Proposition 1.1, \(\rho(G)\) is an H-eigenvalue of \(\mathcal{A}(G)\) with an associated nonnegative H-eigenvector, and moreover, if \(G\) is connected, then \(\mathcal{A}(G)\) is weakly irreducible [24], implying that there is a unique unit positive H-eigenvector corresponding to \(\rho(G)\). In this article, we say a vector \(\mathbf{x}\in\mathbb{R}^{n}\) is unit if \(\|\mathbf{x}\|_{k}^{k}:=\sum_{i\in[n]}|x_{i}|^{k}=1\). The approach of studying hypergraphs through tensors has been widely accepted, see, e.g., [3, 6, 7, 9, 13, 16, 20, 23, 24, 29]. It should be pointed out that other treatments of the spectral properties of hypergraphs may be found, see, e.g., [15]. There are many results on the bounds for the eigenvalues (particularly the largest one) of modified graphs by Rowlinson and coauthors, see, e.g., [2, 8, 28], where a modified graph is obtained from some given graph under small changes such as removing vertices or edges or moving certain edges. Li, Wang and Van Mieghem [18] presented a novel type of lower bound for the spectral radius of a graph when some vertices are removed. Van Mieghem et al. [32] gave bounds for the spectral radius of a graph when some edges are removed. In [33], an upper bound was established for the least eigenvalue of a graph when some vertices are removed. **Definition 1.5**.: _[_14, 25_]_ _For a tensor \(\mathcal{T}=(t_{i_{1}\ldots i_{k}})\) of order \(k\) and dimension \(n\), a principal subtensor \(\mathcal{T}[I]\) of \(\mathcal{T}\) with nonempty index set \(I\subseteq[n]\) is a tensor of order \(k\) and dimension \(|I|\) consisting of \(|I|^{k}\) elements defined by_ \[\mathcal{T}[I]=(t_{i_{1}\ldots i_{k}})\text{ with }i_{1},\ldots,i_{k}\in I.\] **Definition 1.6**.: _[_31_]_ _A Steiner system \(S(t,k,n)\), of order \(n\) and block size \(k\) with \(n\geq k\geq 2\), is a collection of \(k\)-sets of an \(n\)-set such that every \(t\)-subset belongs to exactly one block. In other words, \(S(t,k,n)\) is a \(k\)-uniform hypergraph on \(n\) vertices, such that every \(t\)-element vertex subset is contained in precisely one edge._ In this paper, we find upper bounds and lower bounds (interlacing inequalities) for the largest H-eigenvalue of a principal subtensor of a symmetric zero diagonal tensor that is of even order or nonnegative, from which we derive new lower bounds for the largest H-eigenvalue (spectral radius) of a uniform hypergraph in which some vertices or edges are removed. On the other hand, we present upper bounds and lower bounds for the least H-eigenvalue of a principal subtensor of a symmetric zero diagonal tensor of even order, from which we derive new upper and lower bounds for the least H-eigenvalue of a uniform hypergraph in which some vertices or edges are removed. We also present some bounds on the components of the least eigenvectors of hypergraphs. Some results from [13, 18, 23, 30, 32, 33] are generalized or improved. 
Particularly, for any connected linear \(k\)-uniform hypergraph \(G\) on \(n\) vertices with \(n\geq k\geq 2\), we give a sharp lower bound on \(\rho(G-v)\) with \(v\in V(G)\) in terms of \(\rho(G)\) and \(d_{G}(v)\) and characterize the hypergraphs for which this bound is attained, and show that \(\max\{\rho(G-v):v\in V(G)\}\geq\rho(G)-1\) with equality if and only if \(G\) is a Steiner system \(S(2,k,n)\). To the best of our knowledge, no lower (respectively, upper) bounds of this type for the largest (respectively, least) H-eigenvalues of symmetric tensors and uniform hypergraphs appear in the literature. We also give examples to verify the sharpness of the bounds, and in some cases for hypergraphs we characterize when equality holds.

## 2 Preliminaries

We now give some tools that will be used later. **Lemma 2.1**.: _[_25_, Theorem 5]_ _Let \(\mathcal{T}\) be a symmetric tensor of even order \(k\) and dimension \(n\), where \(n,k\geq 2\). Then \(\lambda_{\max}(\mathcal{T})=\max\{\mathcal{T}\mathbf{x}^{k}:\|\mathbf{x}\|_{k}=1,\mathbf{x}\in\mathbb{R}^{n}\}\) and \(\lambda_{\min}(\mathcal{T})=\min\{\mathcal{T}\mathbf{x}^{k}:\|\mathbf{x}\|_{k}=1,\mathbf{x}\in\mathbb{R}^{n}\}\)._ It is not difficult to see that the previous lemma is not true if \(k\) is odd. Denote by \(\mathbb{R}^{n}_{+}\) the set of all nonnegative vectors in \(\mathbb{R}^{n}\). **Lemma 2.2**.: _[_26_, Theorem 2]_ _Let \(\mathcal{T}\) be a symmetric nonnegative tensor of order \(k\) and dimension \(n\), where \(n,k\geq 2\). Then \(\lambda_{\max}(\mathcal{T})=\max\{\mathcal{T}\mathbf{x}^{k}:\|\mathbf{x}\|_{k}=1,\mathbf{x}\in\mathbb{R}^{n}_{+}\}\). If \(\lambda_{\max}(\mathcal{T})=\mathcal{T}\mathbf{x}^{k}\) for some \(\mathbf{x}\in\mathbb{R}^{n}_{+}\) with \(\|\mathbf{x}\|_{k}=1\), then \(\mathbf{x}\) is an \(H\)-eigenvector of \(\mathcal{T}\) associated with \(\lambda_{\max}(\mathcal{T})\)._ If \(\mathcal{T}\) is a symmetric tensor of even order \(k\) and dimension \(n\), or a symmetric, essentially nonnegative tensor of order \(k\) and dimension \(n\), where \(n,k\geq 2\), then \(\lambda_{\max}(\mathcal{T})=\max\{\mathcal{T}\mathbf{x}^{k}:\|\mathbf{x}\|_{k}=1,\mathbf{x}\in\mathbb{R}^{n}\}\). Let \(G\) be a \(k\)-uniform hypergraph with \(V(G)=[n]\), and let \(\mathbf{x}\in\mathbb{R}^{n}\). For \(U\subseteq V(G)\), let \(\mathbf{x}^{U}=\Pi_{w\in U}x_{w}\). Then \[\mathcal{A}(G)\mathbf{x}^{k}=k\sum_{e\in E(G)}\mathbf{x}^{e}\] and for \(u\in V(G)\), \[(\mathcal{A}(G)\mathbf{x}^{k-1})_{u}=\sum_{e\in E_{u}(G)}\mathbf{x}^{e\setminus\{u\}}.\] As \(\mathcal{A}(G)\) is symmetric and nonnegative, we have from Lemma 2.2 that, for a unit \(\mathbf{x}\), \[\lambda(G)\leq\mathcal{A}(G)\mathbf{x}^{k}\leq\rho(G).\] For a hypergraph \(G\) with \(V_{1}\subset V(G)\), \(G-V_{1}\) denotes the hypergraph with vertex set \(V(G)\setminus V_{1}\) and edge set \(\{e\in E(G):e\cap V_{1}=\emptyset\}\). If \(V_{1}=\{v\}\), then we write \(G-v\) for \(G-\{v\}\). For a hypergraph \(G\) with \(E_{1}\subseteq E(G)\), \(G-E_{1}\) denotes the hypergraph with vertex set \(V(G)\) and edge set \(E(G)\setminus E_{1}\). If \(E_{1}=\{e\}\), then we write \(G-e\) for \(G-\{e\}\). 
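The two displayed formulas show that \(\mathcal{A}(G)\mathbf{x}^{k}\) and \(\mathcal{A}(G)\mathbf{x}^{k-1}\) can be evaluated directly from the edge list of \(G\), which is convenient for checking the numerical examples given later. The following is a minimal Python sketch (the function names and the 0-based vertex labels are our own, not taken from any tensor package); it verifies the eigenvalue equation (1.1) for the 3-uniform hypergraph consisting of a single edge, whose spectral radius is \(1\) with the constant unit eigenvector \((3^{-1/3},3^{-1/3},3^{-1/3})^{\top}\).

```python
from functools import reduce
from operator import mul

def A_xk(edges, x, k):
    """A(G) x^k = k * sum over edges e of prod_{w in e} x_w."""
    return k * sum(reduce(mul, (x[w] for w in e), 1.0) for e in edges)

def A_xk_minus1(edges, x, n):
    """The vector A(G) x^{k-1}: entry u is the sum, over edges e containing u,
    of prod_{w in e, w != u} x_w."""
    y = [0.0] * n
    for e in edges:
        for u in e:
            y[u] += reduce(mul, (x[w] for w in e if w != u), 1.0)
    return y

# Single-edge 3-uniform hypergraph on vertices {0, 1, 2}.
k, n = 3, 3
edges = [(0, 1, 2)]
x = [3 ** (-1.0 / k)] * n          # unit vector: the x_i^k sum to 1

print(A_xk(edges, x, k))           # ~1.0, which equals rho(G)
print([y / xi ** (k - 1) for y, xi in zip(A_xk_minus1(edges, x, n), x)])
# ~[1.0, 1.0, 1.0]: lambda * x_u^{k-1} = (A(G) x^{k-1})_u holds with lambda = 1
```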
For a symmetric tensor \(\mathcal{T}\) of order \(k\) and dimension \(n\) and \(\emptyset\neq I\subset[n]\), let \(\mathcal{T}_{I}\) be the tensor of order \(k\) and dimension \(n\) such that \[(\mathcal{T}_{I})_{i_{1}\ldots i_{k}}=\begin{cases}t_{i_{1}\ldots i_{k}}& \text{if}\ \ \{i_{1},\ldots,i_{k}\}\subseteq I,\\ 0&\text{otherwise}.\end{cases}\] **Lemma 2.3**.: _Let \(\mathcal{T}\) be a zero diagonal symmetric tensor of order \(k\) and dimension \(n\), where \(n,k\geq 2\). If \(k\) is even or \(\mathcal{T}\) is nonnegative, then \(\lambda_{\max}(\mathcal{T})\geq 0\) and \(\lambda_{\min}(\mathcal{T})\leq 0\)._ Proof.: Suppose first that \(k\) is even. By Lemma 2.1, for any \(\mathbf{x}\in\mathbb{R}^{n}\) with \(\|\mathbf{x}\|_{k}=1\), we have \[\lambda_{\max}(\mathcal{T})\geq\mathcal{T}\mathbf{x}^{k}\geq\lambda_{\min}( \mathcal{T}).\] As \(t_{1\ldots 1}=0\), the coefficient of \(x_{1}^{k}\) in \(\mathcal{T}\mathbf{x}^{k}\) is \(0\). Setting \(\mathbf{y}=(1,0,\ldots,0)^{\top}\in\mathbb{R}^{n}\), we have \(\|\mathbf{y}\|_{k}=1\) and \(\mathcal{T}\mathbf{y}^{k}=0\). So \(\lambda_{\max}(\mathcal{T})\geq 0\geq\lambda_{\min}(\mathcal{T})\). Suppose next that \(\mathcal{T}\) is nonnegative. By Lemma 2.2, \(\lambda_{\max}(\mathcal{T})\geq 0\). If \(k\) is even, then by the above argument, \(\lambda_{\min}(\mathcal{T})\leq 0\). If \(k\) is odd, then set \(\mathbf{y}\) as above and we have \(\mathcal{T}\mathbf{y}^{k-1}=0\mathbf{y}^{k-1}\) since \(t_{1\ldots 1}=0\), so \(0\) is an H-eigenvalue of \(\mathcal{T}\), implying that \(\lambda_{\min}(\mathcal{T})\leq 0\). **Lemma 2.4**.: _Let \(\mathcal{T}\) be a zero diagonal symmetric tensor of order \(k\) and dimension \(n\), where \(n,k\geq 2\). Let \(\emptyset\neq I\subset[n]\). Suppose that \(k\) is even or \(\mathcal{T}\) is nonnegative. Then \(\lambda_{\max}(\mathcal{T}[I])=\lambda_{\max}(\mathcal{T}_{I})\)._ Proof.: Let \(\mathbf{x}\in\mathbb{R}^{|I|}\) be a unit eigenvector corresponding to \(\lambda_{\max}(\mathcal{T}[I])\). Then \(\lambda_{\max}(\mathcal{T}[I])=\mathcal{T}[I]\mathbf{x}^{k}\). Set \(\widehat{\mathbf{x}}\in\mathbb{R}^{n}\) as a vector such that \(\widehat{x}_{i}=x_{i}\) if \(i\in I\), and \(\widehat{x}_{i}=0\) if \(i\in[n]\setminus I\). Then it is easy to see that \(\mathcal{T}[I]\mathbf{x}^{k}=\mathcal{T}_{I}\widehat{\mathbf{x}}^{k}\). By Lemma 2.1 or 2.2, we have \(\mathcal{T}_{I}\widehat{\mathbf{x}}^{k}\leq\lambda_{\max}(\mathcal{T}_{I})\). It follows that \[\lambda_{\max}(\mathcal{T}[I])=\mathcal{T}[I]\mathbf{x}^{k}=\mathcal{T}_{I} \widehat{\mathbf{x}}^{k}\leq\lambda_{\max}(\mathcal{T}_{I}). \tag{2.1}\] By Lemma 2.3, \(\lambda_{\max}(\mathcal{T}[I])\geq 0\). If \(\lambda_{\max}(\mathcal{T}_{I})=0\), then we have from (2.1) that \(\lambda_{\max}(\mathcal{T}[I])=0=\lambda_{\max}(\mathcal{T}_{I})\). Suppose that \(\lambda_{\max}(\mathcal{T}_{I})>0\). From (2.1), we have \(\lambda_{\max}(\mathcal{T}[I])\leq\lambda_{\max}(\mathcal{T}_{I})\). Next, we prove the converse inequality. Let \(\mathbf{y}\in\mathbb{R}^{n}\) be a unit eigenvector corresponding to \(\lambda_{\max}(\mathcal{T}_{I})\). Note that \(\mathcal{T}\) is a zero diagonal tensor. For \(i\in[n]\setminus I\) and \(i_{2},\ldots,i_{k}\in[n]\), as entries of \(\mathcal{T}_{I}\), we have \(t_{i\ldots i}=0\) and \(t_{ii_{2}\ldots i_{k}}=0\). 
So \[\lambda_{\max}(\mathcal{T}_{I})y_{i}^{k-1}=\sum_{i_{2},\ldots,i_{k}\in[n]}t_{ ii_{2}\ldots i_{k}}y_{i_{2}}\cdots y_{i_{k}}=0\] for each \(i\in[n]\setminus I\), implying that \(y_{i}=0\) for each \(i\in[n]\setminus I\) as \(\lambda_{\max}(\mathcal{T}_{I})>0\). Let \(\widehat{\mathbf{y}}\in\mathbb{R}^{|I|}\) such that \(\widehat{y}_{i}=y_{i}\) for \(i\in I\). Note that \(\widehat{\mathbf{y}}\) is unit. For each \(i\in I\), we have \[(\mathcal{T}[I]\widehat{\mathbf{y}}^{k-1})_{i} =\sum_{i_{2},\ldots,i_{k}\in I}t_{ii_{2}\ldots i_{k}}\widehat{y}_ {i_{2}}\cdots\widehat{y}_{i_{k}}\] \[=\sum_{i_{2},\ldots,i_{k}\in I}t_{ii_{2}\ldots i_{k}}y_{i_{2}} \cdots y_{i_{k}}\] \[=\sum_{i_{2},\ldots,i_{k}\in[n]}t_{ii_{2}\ldots i_{k}}y_{i_{2}} \cdots y_{i_{k}}\] \[=\lambda_{\max}(\mathcal{T}_{I})y_{i}^{k-1}\] \[=\lambda_{\max}(\mathcal{T}_{I})\widehat{y}_{i}^{k-1}.\] This means that \(\lambda_{\max}(\mathcal{T}_{I})\) is an H-eigenvalue of \(\mathcal{T}[I]\), so \(\lambda_{\max}(\mathcal{T}[I])\geq\lambda_{\max}(\mathcal{T}_{I})\). Now it follows that \(\lambda_{\max}(\mathcal{T}[I])=\lambda_{\max}(\mathcal{T}_{I})\). **Lemma 2.5**.: _Let \(\mathcal{T}\) be a zero diagonal symmetric tensor of order even \(k\) and dimension \(n\), where \(n,k\geq 2\). Let \(\emptyset\neq I\subset[n]\). Then \(\lambda_{\min}(\mathcal{T}[I])=\lambda_{\min}(\mathcal{T}_{I})\)._ Proof.: Let \(\mathbf{x}\in\mathbb{R}^{|I|}\) be a unit eigenvector corresponding to \(\lambda_{\min}(\mathcal{T}[I])\). Then \(\lambda_{\min}(\mathcal{T}[I])=\mathcal{T}[I]\mathbf{x}^{k}\). Set \(\widetilde{\mathbf{x}}\in\mathbb{R}^{n}\) as a vector such that \(\widetilde{x}_{i}=x_{i}\) if \(i\in I\), and \(\widetilde{x}_{i}=0\) if \(i\in[n]\setminus I\). Then it is easy to see that \(\mathcal{T}[I]\mathbf{x}^{k}=\mathcal{T}_{I}\widetilde{\mathbf{x}}^{k}\). By Lemma 2.1, we have \(\lambda_{\min}(\mathcal{T}_{I})\leq\mathcal{T}_{I}\widetilde{\mathbf{x}}^{k}\). Therefore \[\lambda_{\min}(\mathcal{T}[I])=\mathcal{T}[I]\mathbf{x}^{k}=\mathcal{T}_{I} \widetilde{\mathbf{x}}^{k}\geq\lambda_{\min}(\mathcal{T}_{I}).\] By Lemma 2.3, \(\lambda_{\min}(\mathcal{T}[I])\leq 0\). If \(\lambda_{\min}(\mathcal{T}_{I})=0\), then it follows from the above inequalities that \(\lambda_{\min}(\mathcal{T}_{I})=0=\lambda_{\min}(\mathcal{T}[I])\). Suppose that \(\lambda_{\min}(\mathcal{T}_{I})<0\). Let \(\mathbf{z}\in\mathbb{R}^{n}\) be a unit eigenvector corresponding to \(\lambda_{\min}(\mathcal{T}_{I})\). As \(t_{i\ldots i}=0\) and \(t_{ii_{2}\ldots i_{k}}=0\) for each \(i\in[n]\setminus I\) and \(i_{2},\ldots,i_{k}\in[n]\), we have \(\lambda_{\min}(\mathcal{T}_{I})z_{i}^{k-1}=0\) for each \(i\in[n]\setminus I\). Then \(z_{i}=0\) for each \(i\in[n]\setminus I\). Let \(\widehat{\mathbf{z}}\) be a vector in \(\mathbb{R}^{|I|}\) such that \(\widehat{z}_{i}=z_{i}\) for \(i\in I\). Obviously, \(\widehat{\mathbf{z}}\) is unit. Then for each \(i\in I\), \[(\mathcal{T}[I]\widehat{\mathbf{z}}^{k-1})_{i}=\sum_{i_{2},\ldots,i_{k}\in I}t_ {ii_{2}\ldots i_{k}}z_{i_{2}}\cdots z_{i_{k}}\] \[=\sum_{i_{1},\ldots,i_{k}\in[n]}t_{ii_{2}\ldots i_{k}}z_{i_{2}}\cdots z _{i_{k}}\] \[=\lambda_{\min}(\mathcal{T}_{I})\widehat{z}_{i}^{k-1}.\] This shows that \(\lambda_{\min}(\mathcal{T}_{I})\) is an H-eigenvalue of \(\mathcal{T}[I]\). Thus \(\lambda_{\min}(\mathcal{T}[I])\leq\lambda_{\min}(\mathcal{T}_{I})\). It follows that \(\lambda_{\min}(\mathcal{T}[I])=\lambda_{\min}(\mathcal{T}_{I})\). We also need the well known combinatorial identity in the following lemma. 
**Lemma 2.6**.: _Let \(n\) and \(k\) be positive integers with \(1\leq k\leq n\). Then_ \[\sum_{i=k}^{n}\binom{i}{k}=\binom{n+1}{k+1}.\] ## 3 A formula for homogeneous polynomials of the form of inclusion-exclusion In this section we establish a formula for homogeneous polynomials of the type of Principle of Inclusion-Exclusion that will be used in the proofs. **Lemma 3.1**.: _Let \(\mathcal{T}\) be a zero diagonal nonzero symmetric tensor of order \(k\) and dimension \(n\), where \(n,k\geq 2\). Let \(\mathbf{x}\) be a unit \(n\)-dimensional eigenvector and \(\emptyset\neq I\subset[n]\). For positive integers \(s\) and \(m\) with \(s+m\leq k\),_ \[\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s}\in[n]\setminus I\\ i_{s+1}\cdots,i_{s}+m\in I\\ i_{s+m}+1\cdots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{k}=\sum_{\ell=0}^{m}(-1)^{\ell}\binom{m}{\ell}\sum_{\begin{subarray}{c}i_{ 1},\ldots,i_{s}+\ell\in[n]\setminus I\\ i_{s+\ell+1},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}. \tag{3.1}\] Proof.: We prove identity (3.1) by induction on \(m\). It is obvious that for any \(s\geq 1\) with \(s+1\leq k\), \[\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s}\in[n]\setminus I\\ i_{s+1}\in I\\ i_{s+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{k} =\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s}\in[n]\setminus I\\ i_{s+1},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}-\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s+1}\in[n]\setminus I\\ i_{s+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}\] \[=\sum_{\ell=0}^{1}(-1)^{\ell}\binom{1}{\ell}\sum_{\begin{subarray} {c}i_{1},\ldots,i_{s}+\ell\in[n]\setminus I\\ i_{s+\ell+1},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}},\] proving (3.1) when \(m=1\). Suppose that \(1\leq j<k-s\) and identity (3.1) follows for \(m=j\). Suppose in the following that \(m=j+1\). Then \[\begin{split}&\sum_{\begin{subarray}{c}i_{1},\ldots,i_{k}\in[n] \setminus I\\ i_{s+1}+\ldots,i_{s+m}\in I\\ i_{s+m+1}\cdots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{k}\\ &=\sum_{\begin{subarray}{c}i_{1},\ldots,i_{k}\in[n]\setminus I\\ i_{s+1}+\ldots,i_{s+j+1}\in I\\ i_{s+j+2}\cdots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{k}\\ &=\sum_{\begin{subarray}{c}i_{1},\ldots,i_{k}\in[n]\setminus I\\ i_{s+1}+\ldots,i_{s+j}\in I\\ i_{s+j+1}\cdots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{k}-\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s}\in[n]\setminus I\\ i_{s+1}+\ldots,i_{s+j}\in I\\ i_{s+j+1}\in[n]\setminus I\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{k}\\ &=\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s}\in[n]\setminus I\\ i_{s+1}+\ldots,i_{s+j}\in I\\ i_{s+j+1}\cdots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{k}-\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s}\in[n]\setminus I\\ i_{s+1}+\ldots,i_{s+j}\in I\\ i_{s+j+2}\cdots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{k},\\ \end{split} \tag{3.2}\] where the last equality in (3.2) follows because \(\mathcal{T}\) is symmetric. 
Now applying inductive hypothesis to the two summations of the last equation in (3.2), using combinatorial identity \(\binom{j}{\ell}=\binom{j+1}{\ell}-\binom{j}{\ell-1}\) for \(\ell\geq 1\), and by algebraic calculation to get \[\begin{split}&\sum_{\begin{subarray}{c}i_{1},\ldots,i_{k}\in[n] \setminus I\\ i_{s+1}+\ldots,i_{s+m}\in I\\ i_{s+m+1}\cdots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{k}\\ &=\sum_{\ell=0}^{j}(-1)^{\ell}\binom{j}{\ell}\sum_{\begin{subarray} {c}i_{1},\ldots,i_{s+\ell}\in[n]\setminus I\\ i_{s+\ell+1},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}-\sum_{\ell=0}^{j}(-1)^{\ell}\binom{j}{\ell}\sum_{ \begin{subarray}{c}i_{1},\ldots,i_{s+\ell+1}\in[n]\setminus I\\ i_{s+\ell+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}\\ &=\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s}\in[n]\setminus I\\ i_{s+1}+\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}+\sum_{\ell=1}^{j}(-1)^{\ell}\left(\binom{j+1}{\ell}-\binom{j }{\ell-1}\right)\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s+\ell}\in[n] \setminus I\\ i_{s+\ell+1},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}\\ &-\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s+1}\in[n]\setminus I\\ i_{s+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}-\sum_{\ell=2}^{j+1}(-1)^{\ell-1}\binom{j}{\ell-1}\sum_{ \begin{subarray}{c}i_{1},\ldots,i_{s+\ell}\in[n]\setminus I\\ i_{s+\ell+1},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}\\ &=\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s}\in[n]\setminus I\\ i_{s+1},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}+\sum_{\ell=1}^{j}(-1)^{\ell}\binom{j+1}{\ell}\sum_{ \begin{subarray}{c}i_{1},\ldots,i_{s+\ell}\in[n]\setminus I\\ i_{s+\ell+1},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}\\ &+(-1)^{j+1}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s+j+1}\in[n]\setminus I \\ i_{s+j+2}\cdots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}.\end{split}\] Now we have \[\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s}\in[n]\setminus I\\ i_{s+1},\ldots,i_{s+m}\in I\\ i_{s+m+1},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{k}=\sum_{\ell=0}^{j+1}(-1)^{\ell}\binom{j+1}{\ell}\sum_{\begin{subarray}{c}i_{ 1},\ldots,i_{s+\ell}\in[n]\setminus I\\ i_{s+\ell+1},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}.\] This proves (3.1) when \(m=j+1\). By induction, (3.1) follows. ## 4 Largest H-eigenvalues We are now ready to give our results on the largest H-eigenvalues of symmetric tensors and uniform hypergraphs. First, we give the interlacing inequalities for the largest H-eigenvalues. **Theorem 4.1**.: _Let \(\mathcal{T}\) be a zero diagonal nonzero symmetric tensor of order \(k\) and dimension \(n\), where \(n,k\geq 2\). Suppose that \(k\) is even or \(\mathcal{T}\) is nonnegative. Let \(\mathbf{x}\) be a unit eigenvector corresponding to \(\lambda_{\max}(\mathcal{T})\). Let \(\emptyset\neq I\subset[n]\). 
Then_ \[\lambda_{\max}(\mathcal{T})\left(1-k\sum_{i\in[n]\setminus I}x_{i }^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{\begin{subarray}{c}i _{1},\ldots,i_{j+1}\in[n]\setminus I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}} \leq\lambda_{\max}(\mathcal{T}[I])\] \[\leq\lambda_{\max}(\mathcal{T}).\] Proof.: As \(\mathcal{T}\) is symmetric, we have by Lemma 3.1 that, for \(1\leq j\leq k-1\), one has \[\sum_{\begin{subarray}{c}i_{1},\ldots,i_{j}\in I\\ i_{j+1}\in[n]\setminus I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}} =\sum_{\begin{subarray}{c}i_{1}\in[n]\setminus I\\ i_{2},\ldots,i_{j+1}\in I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}\] \[=\sum_{\ell=0}^{j}(-1)^{\ell}\binom{j}{\ell}\sum_{\begin{subarray} {c}i_{1},\ldots,i_{\ell+1}\in[n]\setminus I\\ i_{\ell+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}.\] For any \(i_{1}\in[n]\setminus I\), as \(\lambda_{\max}(\mathcal{T})\) is an H-eigenvalue of \(\mathcal{T}\) associated to eigenvector \(\mathbf{x}\), we have \[\sum_{i_{2},\ldots,i_{k}\in[n]}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x_{i_{k}}= \lambda_{\max}(\mathcal{T})x_{i_{1}}^{k}\] and hence \[\sum_{\begin{subarray}{c}i_{1}\in[n]\setminus I\\ i_{2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}=\lambda_{\max}(\mathcal{T})\sum_{i_{1}\in[n]\setminus I}x_{i_{1}}^{k}.\] Therefore, \[(\mathcal{T}-\mathcal{T}_{I})\mathbf{x}^{k}\] \[=\sum_{i_{1},\ldots,i_{k}\in[n]\setminus I}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}-\sum_{i_{1},\ldots,i_{k}\in I}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}\] \[=\sum_{\begin{subarray}{c}i_{1}\in[n]\setminus I\\ i_{2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}+\sum_{j=1}^{k-1}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{j}\in I\\ i_{j+1}\in[n]\setminus I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}\] \[=\sum_{\begin{subarray}{c}i_{1}\in[n]\setminus I\\ i_{2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}+\] \[+\sum_{j=1}^{k-1}\left(\sum_{\begin{subarray}{c}i_{1}\in[n] \setminus I\\ i_{2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}+\sum_{\ell=1}^{j}(-1)^{\ell}\binom{j}{\ell}\sum_{\begin{subarray}{c }i_{1},\ldots,i_{\ell+1}\in[n]\setminus I\\ i_{\ell+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}\right)\] \[=k\sum_{\begin{subarray}{c}i_{1}\in[n]\setminus I\\ i_{2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}+\sum_{j=1}^{k-1}\sum_{\ell=1}^{j}(-1)^{\ell}\binom{j}{\ell}\sum_{ \begin{subarray}{c}i_{1},\ldots,i_{\ell+1}\in[n]\setminus I\\ i_{\ell+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}.\] By interchanging the order of summation in \[\sum_{j=1}^{k-1}\sum_{\ell=1}^{j}(-1)^{\ell}\binom{j}{\ell}\sum_{ \begin{subarray}{c}i_{1},\ldots,i_{\ell+1}\in[n]\setminus I\\ i_{\ell+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}},\] one has \[(\mathcal{T}-\mathcal{T}_{I})\mathbf{x}^{k}=k\lambda_{\max}(\mathcal{T})\sum_ {i\in[n]\setminus I}x_{i}^{k}+\sum_{j=1}^{k-1}(-1)^{j}\sum_{\ell=j}^{k-1} \binom{\ell}{j}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{j+1}\in[n]\setminus I \\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}.\] 
Now, by Lemma 2.6, one has \[(\mathcal{T}-\mathcal{T}_{I})\mathbf{x}^{k}=k\lambda_{\max}(\mathcal{T})\sum_ {i\in[n]\setminus I}x_{i}^{k}+\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{ \begin{subarray}{c}i_{1},\ldots,i_{j+1}\in[n]\setminus I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}. \tag{4.1}\] Thus, one has by Lemma 2.1 or 2.2 that \[\lambda_{\max}(\mathcal{T}_{I}) \geq\mathcal{T}_{I}\mathbf{x}^{k}=\mathcal{T}\mathbf{x}^{k}-( \mathcal{T}-\mathcal{T}_{I})\mathbf{x}^{k}\] \[=\lambda_{\max}(\mathcal{T})-\left(k\lambda_{\max}(\mathcal{T}) \sum_{i\in[n]\setminus I}x_{i}^{k}+\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{ \begin{subarray}{c}i_{1},\ldots,i_{j+1}\in[n]\setminus I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}} \cdots x_{i_{k}}\right)\] \[=\lambda_{\max}(\mathcal{T})\left(1-k\sum_{i\in[n]\setminus I}x_ {i}^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{\begin{subarray}{c}i_ {1},\ldots,i_{j+1}\in[n]\setminus I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}.\] Now, the first inequality follows from the above equation and Lemma 2.4. Let \(\mathbf{y}\in\mathbb{R}^{|I|}\) be a unit eigenvector corresponding to \(\lambda_{\max}(\mathcal{T}[I])\). Construct a unit vector \(\mathbf{z}\in\mathbb{R}^{n}\) such that \(z_{i}=y_{i}\) if \(i\in I\) and \(z_{i}=0\) if \(i\in[n]\setminus I\). Note that \(\mathcal{T}_{I}\mathbf{z}^{k}=\mathcal{T}[I]\mathbf{y}^{k}\). As \(\mathcal{T}\) is zero diagonal, and \((\mathcal{T}-\mathcal{T}_{I})_{i_{1}\ldots i_{k}}=t_{i_{1}\ldots i_{k}}\) if \(\{i_{1},\ldots,i_{k}\}\cap([n]\setminus I)\neq\emptyset\) and \(0\) otherwise, we have \((\mathcal{T}-\mathcal{T}_{I})\mathbf{z}^{k}=0\). Thus by Lemma 2.1 or 2.2, \[\lambda_{\max}(\mathcal{T})\geq\mathcal{T}\mathbf{z}^{k}=\mathcal{T}_{I} \mathbf{z}^{k}+(\mathcal{T}-\mathcal{T}_{I})\mathbf{z}^{k}=\mathcal{T}[I]\mathbf{y}^{k} +0=\lambda_{\max}(\mathcal{T}[I]).\] This proves the second inequality. **Example 4.1**.: _Let \(\mathcal{T}\) be a tensor of order \(4\) and dimension \(3\), where \(t_{1122}=t_{1212}=t_{1221}=t_{2112}=t_{2121}=t_{2211}=-1\), \(t_{1222}=t_{2122}=t_{2212}=t_{2221}=\frac{1}{2}\), \(t_{3222}=t_{2322}=t_{2232}=t_{2223}=1\), and otherwise, \(t_{ijst}=0\). Let \(I=\{2,3\}\). By MATLAB, we have \(\lambda_{\max}(\mathcal{T})=2.4043\) with eigenvector \(\mathbf{x}_{0}=(0.1632,1,0.7465)^{\top}\) and \(\lambda_{\max}(\mathcal{T}[I])=2.2795\). Note that \(\mathbf{x}_{0}\) is not unit. Let \(\mathbf{x}=\frac{\mathbf{x}_{0}}{\|\mathbf{x}_{0}\|_{4}}\). 
The lower bound for \(\lambda_{\max}(\mathcal{T}[I])\) in Theorem 4.1 is equal to_ \[\lambda_{\max}(\mathcal{T})\left(1-k\sum_{i\in[n]\setminus I}x_{i }^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{\genfrac{}{}{0.0pt}{}{ i_{1},\ldots,i_{j+1}\in[n]\setminus I}{i_{j+2},\ldots,i_{k}\in[n]}}t_{i_{1} \ldots i_{k}}x_{i_{1}}\cdots x_{i_{k}}\] \[=\lambda_{\max}(\mathcal{T})\left(1-4x_{1}^{4}\right)-(-1)^{1} \binom{4}{2}t_{1122}x_{1}^{2}x_{2}^{2}\] \[=2.4043\times\left(1-4\times\frac{0.1632^{4}}{0.1632^{4}+1+0.746 5^{4}}\right)+\binom{4}{2}\times\frac{-1\times 0.1632^{2}\times 1^{2}}{0.1632^{ 4}+1+0.7465^{4}}\] \[=2.2772.\] **Example 4.2**.: _Let \(\mathcal{T}\) be a tensor of order \(3\) and dimension \(5\), where \(t_{112}=t_{121}=t_{211}=\frac{1}{3}\), \(t_{122}=t_{221}=t_{212}=\frac{1}{12}\), \(t_{113}=t_{131}=t_{311}=\frac{1}{6}\), \(t_{223}=t_{232}=t_{322}=\frac{1}{12}\), \(t_{233}=t_{323}=t_{332}=\frac{1}{18}\), \(t_{123}=t_{132}=t_{213}=t_{231}=t_{312}=t_{321}=-\frac{1}{12}\), \(t_{445}=t_{454}=t_{544}=\frac{1}{6}\), and otherwise, \(t_{ijst}=0\). Let \(I=\{1,2,4\}\). By MATLAB, we have \(\lambda_{\max}(\mathcal{T})=0.6894\) with eigenvector \(\mathbf{x}_{0}=(1,0.8241,0.4256,0,0)^{\top}\) and \(\lambda_{\max}(\mathcal{T}[I])=0.6387\). Let \(\mathbf{x}=\frac{\mathbf{x}_{0}}{\|\mathbf{x}_{0}\|_{3}}\). The lower bound for \(\lambda_{\max}(\mathcal{T}[I])\) in Theorem 4.1 is equal to_ \[\lambda_{\max}(\mathcal{T})\left(1-3(x_{3}^{3}+x_{5}^{3})\right)-( -1)^{1}\binom{3}{2}t_{332}x_{3}^{2}x_{2}\] \[=0.6894\times\left(1-\frac{3\times 0.4256^{3}}{1+0.8241^{3}+0.4256 ^{3}}\right)+\binom{3}{2}\times\frac{\frac{1}{18}\times 0.4256^{2}\times 0.8241}{1^{3}+0.82 41^{3}+0.4256^{3}}\] \[=0.6072.\] We remark that the second inequality has been known when \(\mathcal{T}\) is nonnegative [12, 14]. **Theorem 4.2**.: _Let \(\mathcal{T}\) be a symmetric, nonnegative zero diagonal tensor of order \(k\) and dimension \(n\), where \(k,n\geq 2\), and let \(\mathbf{x}\) be a unit nonnegative eigenvector corresponding to \(\lambda_{\max}(\mathcal{T})\). Let \(\emptyset\neq I\subset[n]\). If \(\sum_{i\in I}x_{i}^{k}\neq 0\), then_ \[\frac{\lambda_{\max}(\mathcal{T})\left(1-k\sum_{i\in[n]\setminus I }x_{i}^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}{k\choose j+1}\sum_{i_{1},\ldots,i_{ j+1}\in[n]\setminus I\atop i_{j+2},\ldots,i_{k}\in[n]}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x_{i_{k}}}{\sum_{i \in I}x_{i}^{k}} \leq\rho(\mathcal{T}[I])\] \[\leq\rho(\mathcal{T}).\] Proof.: As \(\mathcal{T}\) is nonnegative, we have by Proposition 1.1 that \(\lambda_{\max}(\mathcal{T})=\rho(\mathcal{T})\) and \(\lambda_{\max}(\mathcal{T}[I])=\rho(\mathcal{T}[I])\). It is evident that the upper bound of \(\rho(\mathcal{T}[I])\) follows from Theorem 4.1, see also [14]. 
By the proof of Theorem 4.1, we have \[\mathcal{T}_{I}\mathbf{x}^{k}=\rho(\mathcal{T})\left(1-k\sum_{i\in[n] \setminus I}x_{i}^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}{k\choose j+1}\sum_{i_{1 },\ldots,i_{j+1}\in[n]\setminus I\atop i_{j+2},\ldots,i_{k}\in[n]}t_{i_{1} \ldots i_{k}}x_{i_{1}}\cdots x_{i_{k}}.\] Applying Lemma 2.2 by setting \(\mathbf{y}\in\mathbb{R}^{|I|}\) with \(y_{i}=x_{i}\) for \(i\in I\), we have \[\rho(\mathcal{T}[I]) \geq\frac{\mathcal{T}[I]\mathbf{y}^{k}}{\|\mathbf{y}\|_{k}^{k}}= \frac{\mathcal{T}_{I}\mathbf{x}^{k}}{\sum_{i\in I}x_{i}^{k}}\] \[=\frac{\rho(\mathcal{T})\left(1-k\sum_{i\in[n]\setminus I}x_{i}^ {k}\right)-\sum_{j=1}^{k-1}(-1)^{j}{k\choose j+1}\sum_{i_{1},\ldots,i_{j+1} \in[n]\setminus I\atop i_{j+2},\ldots,i_{k}\in[n]}t_{i_{1}\ldots i_{k}}x_{i_{1 }}\cdots x_{i_{k}}}{\sum_{i\in I}x_{i}^{k}},\] proving the lower bound part. **Example 4.3**.: _Let \(\mathcal{T}\) be a tensor of order \(3\) and dimension \(3\), where \(t_{112}=t_{121}=t_{211}=\frac{1}{3}\), \(t_{122}=t_{221}=t_{212}=\frac{1}{12}\), \(t_{113}=t_{131}=t_{311}=\frac{1}{6}\), \(t_{223}=t_{232}=t_{322}=\frac{1}{12}\), \(t_{233}=t_{323}=t_{332}=\frac{1}{18}\) and otherwise, \(t_{ijst}=0\). Let \(I=\{1,2\}\). By MATLAB, we have \(\lambda_{\max}(\mathcal{T})=0.8143\) with eigenvector \(\mathbf{x}_{0}=(1,0.84,0.5866)^{\top}\) and \(\lambda_{\max}(\mathcal{T}[I])=0.6387\). Let \(\mathbf{x}=\frac{\mathbf{x}_{0}}{\|\mathbf{x}_{0}\|_{3}}\). The lower bound for \(\lambda_{\max}(\mathcal{T}[I])\) in Theorem 4.2 is equal to_ \[\frac{\lambda_{\max}(\mathcal{T})\left(1-3x_{3}^{3}\right)-(-1)^{ 1}{3\choose 2}t_{332}x_{3}^{2}x_{2}}{x_{1}^{3}+x_{2}^{3}}\] \[=\frac{0.8143\times\left(1-3\times\frac{0.5866^{3}}{1+0.84^{3}+0. 5866^{3}}\right)+{3\choose 2}\times\frac{1}{18}\times\frac{0.5866^{2}\times 0.84}{1+0.84^{3}+0. 5866^{3}}}{\frac{1+0.84^{3}}{1+0.84^{3}+0.5866^{3}}}\] \[=0.6381.\] When the tensor \(\mathcal{T}\) is weakly irreducible and nonnegative, by Proposition 1.1, there is a unique unit positive eigenvector corresponding to \(\lambda_{\max}(\mathcal{T})=\rho(\mathcal{T})\). By the proof of Theorem 4.1, \(\rho(\mathcal{T}[I])<\rho(\mathcal{T})\). This together with Theorem 4.2 implies the following corollary. **Corollary 4.1**.: _Let \(\mathcal{T}\) be a weakly irreducible, symmetric, nonnegative zero diagonal tensor of order \(k\) and dimension \(n\), where \(k,n\geq 2\), and let \(\mathbf{x}\) be a unit positive eigenvector corresponding to \(\rho(\mathcal{T})\). For \(\emptyset\neq I\subset[n]\),_ \[\frac{\rho(\mathcal{T})\left(1-k\sum_{i\in[n]\setminus I}x_{i}^{ k}\right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{\begin{subarray}{c}i_{1}, \ldots,i_{j+1}\in[n]\setminus I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}}{\sum_{i\in I}x_{i}^{k}} \leq\rho(\mathcal{T}[I])\] \[<\rho(\mathcal{T}).\] For a tensor \(\mathcal{T}\) of order \(k\) and dimension \(n\) with \(k,n\geq 2\), we write \(R_{i}(\mathcal{T})=\sum_{i_{2},\ldots,i_{k}\in[n]}t_{ii_{2}\ldots i_{k}}\) for \(i\in[n]\), which is called the \(i\)th row sum of \(\mathcal{T}\). **Example 4.4**.: _Let \(\mathcal{T}\) be a weakly irreducible, symmetric, nonnegative zero diagonal tensor of order \(k\) and dimension \(n\), where \(k,n\geq 2\). Suppose that \(R_{1}(\mathcal{T})=\cdots=R_{n}(\mathcal{T})=r\). By Proposition 1.1, \(\mathbf{x}=n^{-\frac{1}{k}}(1,\ldots,1)^{\top}\) is the unit positive eigenvector corresponding to \(\rho(\mathcal{T})\). Let \(\emptyset\neq I\subset[n]\). 
Then_ \[\sum_{i\in[n]\setminus I}x_{i}^{k}=\frac{n-|I|}{n}\] _and_ \[\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{\begin{subarray}{c}i_{1},\ldots, i_{j+1}\in[n]\setminus I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x _{i_{k}}=\frac{1}{n}\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{ \begin{subarray}{c}i_{1},\ldots,i_{j+1}\in[n]\setminus I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}.\] _By Corollary 4.1,_ \[\rho(\mathcal{T}[I])\geq\frac{\rho(\mathcal{T})\left(n-k(n-|I|) \right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{\begin{subarray}{c}i_{1}, \ldots,i_{j+1}\in[n]\setminus I\\ i_{j+2},\ldots,i_{k}\in[n]\end{subarray}}t_{i_{1}\ldots i_{k}}}{|I|}. \tag{4.2}\] _Suppose further that \(\mathcal{T}\) is a tensor of order \(3\) and dimension \(3\), where \(t_{121}=t_{112}=t_{211}=t_{233}=t_{323}=t_{332}=\frac{1}{3}\), \(t_{131}=t_{113}=t_{311}=t_{232}=t_{223}=t_{322}=\frac{1}{6}\), and otherwise, \(t_{ij\ell}=0\). It is easily checked that \(\mathcal{T}\) is weakly irreducible with \(R_{1}(\mathcal{T})=R_{2}(\mathcal{T})=R_{3}(\mathcal{T})=1\). Then \(\rho(\mathcal{T})=1\) with eigenvector \((\frac{1}{\sqrt[3]{3}},\frac{1}{\sqrt[3]{3}},\frac{1}{\sqrt[3]{3}})^{\top}\). Let \(I=\{1,2\}\). Let \(\mathbf{y}=(y_{1},y_{2})^{\top}\) be a unit positive eigenvector corresponding to \(\rho(\mathcal{T}[I])\). Then \(\rho(\mathcal{T}[I])y_{1}^{2}=\frac{2}{3}y_{1}y_{2}\) and \(\rho(\mathcal{T}[I])y_{2}^{2}=\frac{1}{3}y_{1}^{2}\), from which it follows that \(\rho(\mathcal{T}[I])^{3}=\frac{4}{27}\). So \(\rho(\mathcal{T}[I])=\frac{\sqrt[3]{4}}{3}\approx 0.5291\). This compares with the lower bound for \(\rho(\mathcal{T}[I])\) given by (4.2), which is equal to \(0.5\)._ Note that the spectral radius of a uniform hypergraph is at least \(0\). For hypergraphs, we have the following result, which generalizes and improves the result of [18, Theorem 1]. **Theorem 4.3**.: _Let \(G\) be a \(k\)-uniform hypergraph with vertex set \([n]\), and \(\mathbf{x}\) be a unit nonnegative eigenvector corresponding to \(\rho(G)\), where \(n\geq k\geq 2\). Let \(\emptyset\neq I\subset[n]\). Then_ \[\rho(G)\left(1-k\sum_{i\in I}x_{i}^{k}\right)+k\sum_{j=2}^{k} \sum_{e:|e\cap I|=j}(j-1)\mathbf{x}^{e}\leq\rho(G-I)\leq\rho(G).\] _Moreover, if \(\sum_{i\in I}x_{i}^{k}\neq 1\) (for example, if \(G\) is connected), then_ \[\rho(G)\frac{\big{(}1-k\sum_{i\in I}x_{i}^{k}\big{)}+k\sum_{j=2}^{k}\sum_{e:|e \cap I|=j}(j-1)\mathbf{x}^{e}}{1-\sum_{i\in I}x_{i}^{k}}\leq\rho(G-I)\leq\rho(G).\] Proof.: Since \(\mathcal{A}(G)\) and \(\mathcal{A}(G-I)\) are nonnegative, we have \(\rho(G)=\lambda_{\max}(\mathcal{A}(G))\) and \(\rho(G-I)=\lambda_{\max}(\mathcal{A}(G-I))\). By Theorem 4.1, we have \[\rho(G)\left(1-k\sum_{i\in I}x_{i}^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}{k\choose j +1}\sum_{i_{1},\ldots,i_{j+1}\in I\atop i_{j+2},\ldots,i_{k}\in[n]}a_{i_{1} \ldots i_{k}}x_{i_{1}}\cdots x_{i_{k}}\leq\rho(G-I)\leq\rho(G). \tag{4.3}\] Let \(e=\{i_{1},\ldots,i_{k}\}\in E(G)\) with \(e\cap I=\{i_{1},\ldots,i_{\ell}\}\), where \(0\leq\ell\leq k\). The coefficient of \(\mathbf{x}^{e}\) in the lower bound given by (4.3) is \(0\) if \(\ell=0,1\). 
If \(\ell\geq 2\), it is \[-\sum_{j=1}^{\ell-1}(-1)^{j}{k\choose j+1}\left({\ell\choose j+1 }(j+1)!(k-j-1)!\right)\frac{1}{(k-1)!}\] \[=-k\sum_{j=1}^{\ell-1}(-1)^{j}{\ell\choose j+1}\] \[=k\sum_{j=2}^{\ell}(-1)^{j}{\ell\choose j}\] \[=k(\ell-1).\] Thus \[-\sum_{j=1}^{k-1}(-1)^{j}{k\choose j+1}\sum_{i_{1},\ldots,i_{j+1}\in I\atop i_ {j+2},\ldots,i_{k}\in[n]}a_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x_{i_{k}}=k\sum _{\ell=2}^{k}\sum_{e:|e\cap I|=\ell}(\ell-1)\mathbf{x}^{e}. \tag{4.4}\] Now, by (4.3) and (4.4), the first part follows. Suppose that \(\sum_{i\in I}x_{i}^{k}\neq 1\). By Theorem 4.1 and the above argument, the second part follows. **Example 4.5**.: _Let \(G\) be a connected \(k\)-uniform regular hypergraph with vertex set \([n]\). Then \(\mathbf{x}=n^{-\frac{1}{k}}(1,\ldots,1)^{\top}\) is the unit nonnegative eigenvector corresponding to \(\rho(G)\). Let \(\emptyset\neq I\subset[n]\). Then_ \[\rho(G-I)\geq\rho(G)\frac{(n-k|I|)+k\sum_{j=2}^{k}\sum_{e:|e\cap I|=j}(j-1)}{n -|I|}.\] **Corollary 4.2**.: _Let \(G\) be a connected \(k\)-uniform hypergraph with vertex set \([n]\), and \(\mathbf{x}\) be a unit positive eigenvector corresponding to \(\rho(G)\), where \(n\geq k\geq 2\). Let \(\emptyset\neq I\subseteq[n]\). Then_ \[\sum_{i\in I}x_{i}^{k}\leq\frac{1}{k}+\frac{1}{\rho(G)}\sum_{j=2}^{k}\sum_{e:|e \cap I|=j}(j-1)\mathbf{x}^{e}.\] _In particular, for any \(v\in V(G)\),_ \[x_{v}\leq\sqrt[k]{\frac{1}{k}}. \tag{4.5}\] Proof.: Let \(\mathcal{T}=\mathcal{A}(G)\). By (4.1) and (4.4), we have \[(\mathcal{T}-\mathcal{T}_{[n]\setminus I})\mathbf{x}^{k}=k\rho(G)\sum_{i\in I }x_{i}^{k}-k\sum_{j=2}^{k}\sum_{e:|e\cap I|=j}(j-1)\mathbf{x}^{e}.\] Then \[0 \leq\mathcal{T}_{[n]\setminus I}\mathbf{x}^{k}\] \[=(\mathcal{T}-(\mathcal{T}-\mathcal{T}_{[n]\setminus I})) \mathbf{x}^{k}\] \[=\rho(G)-\left(k\rho(G)\sum_{i\in I}x_{i}^{k}-k\sum_{j=2}^{k}\sum_ {e:|e\cap I|=j}(j-1)\mathbf{x}^{e}\right)\] \[=\left(1-k\sum_{i\in I}x_{i}^{k}\right)\rho(G)+k\sum_{j=2}^{k} \sum_{e:|e\cap I|=j}(j-1)\mathbf{x}^{e}.\] The result follows. We remark that (4.5) has been observed in [22, Proposition 7.21]. **Theorem 4.4**.: _Let \(G\) be a \(k\)-uniform hypergraph with vertex set \([n]\), and \(\mathbf{x}\) be a unit nonnegative eigenvector corresponding to \(\rho(G)\), where \(n\geq k\geq 2\). Let \(v\in V(G)\) and \(\mathbf{y}\) be the restriction of \(\mathbf{x}\) on \([n]\setminus\{v\}\). Then_ \[\left(1-kx_{v}^{k}\right)\rho(G)\leq\rho(G-v)\leq\rho(G).\] _Moreover, if \(x_{v}^{k}\neq 1\), then_ \[\rho(G-v)\geq\frac{1-kx_{v}^{k}}{1-x_{v}^{k}}\rho(G), \tag{4.6}\] _and if \(G\) is connected, then equality holds in (4.6) if and only if \(\mathbf{y}\) is an eigenvector of \(G-v\) associated with \(\rho(G-v)\)._ Proof.: Taking \(I=\{v\}\subseteq V(G)\) in Theorem 4.3, we immediately have the inequalities. Suppose that \(G\) is connected. By Proposition 1.1, \(\mathbf{x}\) is positive, and then \(1-x_{v}^{k}>0\). Note that \[\rho(G)=k\sum_{e\in E(G)\setminus E_{v}(G)}\mathbf{x}^{e}+k\sum_{e\in E_{v}( G)}\mathbf{x}^{e}=\mathcal{A}(G-v)\mathbf{y}^{k}+k\rho(G)x_{v}^{k}.\] From this, it is easy to see that \((1-kx_{v}^{k})\rho(G)=\mathcal{A}(G-v)\mathbf{y}^{k}\), and hence equality holds in (4.6) if and only if \(\rho(G-v)=\frac{\mathcal{A}(G-v)\mathbf{y}^{k}}{1-x_{v}^{k}}\). So, by Lemma 2.2, equality holds in (4.6) if and only if \(\mathbf{y}\) is an eigenvector of \(G-v\) associated with \(\rho(G-v)\). If \(G\) is a connected graph with at least two vertices, then for any \(v\in V(G)\), \(\rho(G-v)\geq\frac{1-2x_{v}^{2}}{1-x_{v}^{2}}\rho(G)\), which is known [30]. 
If \(v\) is the vertex such that \(x_{v}\) is minimum, then the inequality \(\rho(G-v)\geq\frac{1-kx_{v}^{k}}{1-x_{v}^{k}}\rho(G)\) was known [13] (for \(k=2\), [23]). **Theorem 4.5**.: _Let \(G\) be a connected \(k\)-uniform linear hypergraph with vertex set \([n]\), where \(n\geq k\geq 2\). For any \(v\in[n]\),_ \[\rho(G-v)\geq\rho(G)-\sqrt[k-1]{\frac{d_{G}(v)}{\rho(G)}}\] _with equality if and only if \(v\) is adjacent to every other vertex, and \(G-v\) is regular._ Proof.: Let \(\mathbf{x}\) be the unit positive eigenvector corresponding to \(\rho(G)\) and \(\mathbf{y}\) be the restriction of \(\mathbf{x}\) on \(V(G)\setminus\{v\}\). By Theorem 4.4, \[\rho(G-v)\geq\frac{1-kx_{v}^{k}}{1-x_{v}^{k}}\rho(G)\] with equality if and only if \(\mathbf{y}\) is an eigenvector of \(G-v\) associated with \(\rho(G-v)\). Let \(\rho=\rho(G)\) and \(d=d_{G}(v)\). As \(G\) is linear, we have by [20, Theorem 3.1] and its proof that \[x_{v}\leq\frac{1}{\sqrt[k-1]{1+(k-1)\left(\frac{\rho^{k}}{d}\right)^{\frac{1} {k-1}}}}\] with equality only if the entries of \(\mathbf{y}\) are all equal. So \[\rho(G-v)\geq\frac{\rho^{\frac{k}{k-1}}-d^{\frac{1}{k-1}}}{\rho^{\frac{1}{k-1 }}},\] from which we have \[\rho(G)-\rho(G-v)\leq\rho-\frac{\rho^{\frac{k}{k-1}}-d^{\frac{1}{k-1}}}{\rho^{ \frac{1}{k-1}}}.\] Thus we have the desired upper bound for \(\rho(G)-\rho(G-v)\). Suppose that the upper bound for \(\rho(G)-\rho(G-v)\) is achieved. By the above argument, \(\mathbf{y}\) is an eigenvector of \(G-v\) associated with \(\rho(G-v)\), and the entries of \(\mathbf{y}\) are all equal. Thus \(G-v\) is regular. Note that \(\big{(}\mathcal{A}(G-v)\mathbf{x}^{k-1}\big{)}_{w}=\rho(G-v)x_{w}^{k-1}\) for \(w\in[n]\setminus\{v\}\). Then for any \(w\in[n]\setminus\{v\}\), we have \[\rho(G)x_{w}^{k-1} =\big{(}\mathcal{A}(G)\mathbf{x}^{k-1}\big{)}_{w}\] \[=\big{(}\mathcal{A}(G-v)\mathbf{x}^{k-1}\big{)}_{w}+\sum_{e\in E_ {w}(G)\cap E_{v}(G)}\mathbf{x}^{e\setminus\{w\}}\] \[=\rho(G-v)x_{w}^{k-1}+\sum_{e\in E_{w}(G)\cap E_{v}(G)}\mathbf{x} ^{e\setminus\{w\}}.\] Thus \(\sum_{e\in E_{w}(G)\cap E_{v}(G)}\mathbf{x}^{e\setminus\{w\}}=\left(\rho(G)- \rho(G-v)\right)x_{w}^{k-1}>0\). This implies that \(v\) is adjacent to every other vertex of \(G\). Conversely, suppose that \(v\) is adjacent to every other vertex, and \(G-v\) is regular. As \(G\) is linear, we have \(d_{G}(v)=\frac{n-1}{k-1}\). Assume that the degree of every vertex of \(G-v\) is \(r\). Let \(t\) be the largest positive number satisfying \(t(t-r)^{k-1}=d_{G}(v)\). Evidently, \(t>r\). Let \[a=\sqrt[k]{(t-r)^{k}+n-1}.\] We construct a vector \(\mathbf{\hat{x}}\) on \(V(G)\) with \(\hat{x}_{v}=\frac{t-r}{a}\) and \(\hat{x}_{w}=y:=\frac{1}{a}\) for every vertex \(w\neq v\). Then \(\hat{x}_{v}^{k}+(n-1)y^{k}=1\), \(t\hat{x}_{v}^{k-1}=d_{G}(v)y^{k-1}\) and \((t-r)y=\hat{x}_{v}\). Thus, \(\mathbf{\hat{x}}\) is a positive eigenvector of \(\mathcal{A}(G)\) associated with an H-eigenvalue \(t\). Note that \(\mathcal{A}(G)\) is weakly irreducible as \(G\) is connected. By Proposition 1.1, \(t=\rho(G)\). Now by Theorem 4.4, \[\rho(G)-\rho(G-v)=\frac{\rho(G)(k-1)\hat{x}_{v}^{k}}{1-\hat{x}_{v}^{k}}=\frac{ \hat{x}_{v}}{y}=\sqrt[k-1]{\frac{d_{G}(v)}{\rho(G)}}.\] That is, the upper bound for \(\rho(G)-\rho(G-v)\) is attained. For a Steiner system \(S(2,k,n)\), the degree (known as replication number) of any vertex is \(\frac{n-1}{k-1}\) and it possesses \(\frac{n(n-1)}{k(k-1)}\) edges. 
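To make these Steiner system facts concrete, the following small Python check (our own illustration; the edge list and helper names are not from the paper) uses the Fano plane \(S(2,3,7)\). It is \(\frac{n-1}{k-1}=3\)-regular and connected, so \(\rho(G)=3\) by Proposition 1.1 applied to the constant positive eigenvector, and deleting any vertex leaves a connected \(2\)-regular linear hypergraph, so \(\rho(G-v)=2=\rho(G)-1\); since \(d_{G}(v)=\rho(G)\), this attains the bound of Theorem 4.5.

```python
from itertools import combinations
from collections import Counter

# Fano plane S(2,3,7): 7 points, 7 blocks, every pair of points in exactly one block.
fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
V = set().union(*fano)

def degrees(edges, vertices):
    d = Counter()
    for e in edges:
        d.update(e)
    return {v: d[v] for v in vertices}

# Every pair of vertices lies in exactly one edge, so the hypergraph is linear.
pair_count = Counter(frozenset(p) for e in fano for p in combinations(e, 2))
assert all(c == 1 for c in pair_count.values()) and len(pair_count) == 21

# G is 3-regular and connected, hence rho(G) = 3 (constant positive eigenvector).
print(degrees(fano, V))                    # every vertex has degree (n-1)/(k-1) = 3

# Delete one vertex: the 4 surviving edges form a connected 2-regular hypergraph,
# so rho(G - v) = 2 = rho(G) - 1, attaining the bound of Theorem 4.5.
v = 1
rest = [e for e in fano if v not in e]
print(len(rest), degrees(rest, V - {v}))   # 4 edges, every remaining degree is 2
```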
**Theorem 4.6**.: _Let \(G\) be a connected \(k\)-uniform linear hypergraph with vertex set \([n]\), where \(n\geq k\geq 2\). Let \(\gamma(G)=\max\{\rho(G-v):v\in[n]\}\). Then_ \[\gamma(G)\geq\rho(G)-\sqrt[k-1]{\frac{\delta(G)}{\rho(G)}}\geq\rho(G)-1\] _and either lower bound is attained if and only if \(G\) is a Steiner system \(S(2,k,n)\)._ Proof.: Let \(v^{*}\) be a vertex of \(G\) with minimum degree \(\delta(G)\). Then \(\gamma(G)\geq\rho(G-v^{*})\). By Theorem 4.5, we have \[\rho(G)-\gamma(G)\leq\rho(G)-\rho(G-v^{*})\leq\sqrt[k-1]{\frac{d_{G}(v^{*})}{ \rho(G)}}=\sqrt[k-1]{\frac{\delta(G)}{\rho(G)}}\leq 1. \tag{4.7}\] Suppose that \(\rho(G)-\gamma(G)=\sqrt[k-1]{\frac{\delta(G)}{\rho(G)}}\). Then the first and the second inequalities in (4.7) are equalities. By Theorem 4.5, \(v^{*}\) is adjacent to every other vertex and \(G-v^{*}\) is regular. Since \(G\) is a linear hypergraph, the degree of \(v^{*}\) in \(G\) is \(\frac{n-1}{k-1}\), so \(G\) is \(\frac{n-1}{k-1}\)-regular. That is, every 2-element vertex subset is contained in precisely one edge. Hence, \(G\) is a Steiner system \(S(2,k,n)\). Suppose that \(\rho(G)-\gamma(G)=1\). Then all inequalities in (4.7) are equalities. So, as above, \(G\) is \(\frac{n-1}{k-1}\)-regular. As \(G\) is a linear hypergraph again, every 2-element vertex subset is contained in precisely one edge. Hence, \(G\) is a Steiner system \(S(2,k,n)\). Conversely, suppose that \(G\) is a Steiner system \(S(2,k,n)\). Then the degree of any vertex of \(G\) is \(\frac{n-1}{k-1}\). For any vertex \(v\), the degree of any vertex of \(G-v\) is \(\frac{n-1}{k-1}-1=\frac{n-k}{k-1}\). So \(\rho(G)-\gamma(G)=\sqrt[k-1]{\frac{\delta(G)}{\rho(G)}}=1\). The following theorem generalizes the result in [32] from graphs to hypergraphs. **Theorem 4.7**.: _Let \(G\) be a \(k\)-uniform hypergraph with vertex set \([n]\). Let \(E\subseteq E(G)\). Let \(\mathbf{x}\) and \(\mathbf{y}\) be unit nonnegative eigenvectors corresponding to \(\rho(G)\) and \(\rho(G-E)\), respectively. Then_ \[\rho(G)-k\sum_{e\in E}\mathbf{x}^{e}\leq\rho(G-E)\leq\rho(G)-k\sum_{e\in E} \mathbf{y}^{e}.\] Proof.: We only need to show \[k\sum_{e\in E}\mathbf{y}^{e}\leq\rho(G)-\rho(G-E)\leq k\sum_{e\in E}\mathbf{x} ^{e}.\] Since \(\rho(G)=\mathcal{A}(G)\mathbf{x}^{k}\), we have \[\rho(G-E)\geq \mathcal{A}(G-E)\mathbf{x}^{k}\] \[=k\sum_{e\in E(G)-E}\mathbf{x}^{e}\] \[=k\sum_{e\in E(G)}\mathbf{x}^{e}-k\sum_{e\in E}\mathbf{x}^{e}\] \[=\rho(G)-k\sum_{e\in E}\mathbf{x}^{e},\] from which the upper bound for \(\rho(G)-\rho(G-E)\) follows. Since \(\rho(G-E)=\mathcal{A}(G-E)\mathbf{y}^{k}=k\sum_{e\in E(G)-E}\mathbf{y}^{e}\), we have \[\rho(G) \geq\mathcal{A}(G)\mathbf{y}^{k}\] \[=k\sum_{e\in E(G)}\mathbf{y}^{e}\] \[=k\sum_{e\in E(G)-E}\mathbf{y}^{e}+k\sum_{e\in E}\mathbf{y}^{e}\] \[=\rho(G-E)+k\sum_{e\in E}\mathbf{y}^{e},\] from which the lower bound for \(\rho(G)-\rho(G-E)\) follows. **Example 4.6**.: _Let \(G\) be a connected \(k\)-uniform regular hypergraph with vertex set \([n]\). Let \(e\in E(G)\). Note that the unit positive eigenvector corresponding to \(\rho(G)\) is \(\left(\frac{1}{\sqrt[k]{n}},\ldots,\frac{1}{\sqrt[k]{n}}\right)^{\top}\). By Theorem 4.7, we have_ \[\rho(G-e)\geq\rho(G)-\frac{k}{n}.\] **Example 4.7**.: _Let \(n\geq 3\) be a positive integer. Let \(C_{2n}^{3}\) be the \(3\)-uniform hypercycle with vertex set \(\{v_{i}:1\leq i\leq 2n\}\) and edge set \(\{\{v_{2i-1},v_{2i},v_{2i+1}\}:1\leq i\leq n\}\), where \(v_{2n+1}=v_{1}\). 
It is easily checked that \(\rho(C_{2n}^{3})=2^{\frac{2}{3}}\) with unit eigenvector \(\mathbf{x}\) such that \(x_{v}=\sqrt[3]{\frac{2}{3n}}\) if \(v=v_{2i-1}\) for each \(1\leq i\leq n\) and \(x_{v}=\sqrt[3]{\frac{1}{3n}}\) if \(v=v_{2i}\) for each \(1\leq i\leq n\). Let \(P_{2n-1}^{3}\) be the \(3\)-uniform hyperpath with vertex set \(\{v_{i}:1\leq i\leq 2n-1\}\) and edge set \(\{\{v_{2i-1},v_{2i},v_{2i+1}\}:1\leq i\leq n-1\}\). Then \(\rho(P_{2n-1}^{3})=2^{\frac{2}{3}}\cos^{\frac{2}{3}}\frac{\pi}{n+1}\). Obviously, \(P_{2n-1}^{3}\cong C_{2n}^{3}-v_{2n}\). In Theorem 4.4, the lower bound on \(\rho(P_{2n-1}^{3})\) is \(\frac{3n-3}{3n-1}\cdot 2^{\frac{2}{3}}\). Obviously, the ratio of \(\rho(P_{2n-1}^{3})\) to the lower bound given above tends to \(1\) as \(n\to\infty\)._

## 5 Least H-eigenvalues

In this section we study least H-eigenvalues of symmetric tensors and uniform hypergraphs. For a symmetric tensor \(\mathcal{T}\) with at least one H-eigenvalue, we call a unit eigenvector of \(\mathcal{T}\) associated with \(\lambda_{\min}(\mathcal{T})\) a least eigenvector of \(\mathcal{T}\). In particular, a least eigenvector of a \(k\)-uniform hypergraph \(G\) is a least eigenvector of \(\mathcal{A}(G)\). The results in [33] are generalized to tensors and uniform hypergraphs. We note that \(k\) is always even in this section. **Theorem 5.1**.: _Let \(\mathcal{T}\) be a zero diagonal symmetric tensor of order \(k\) and dimension \(n\), where \(k\) is even and \(n,k\geq 2\). Let \(\mathbf{x}\) be a least eigenvector of \(\mathcal{T}\). Let \(\emptyset\neq I\subset[n]\). Then_ \[\lambda_{\min}(\mathcal{T}) \leq\lambda_{\min}(\mathcal{T}[I])\] \[\leq\lambda_{\min}(\mathcal{T})\left(1-k\sum_{i\in[n]\setminus I} x_{i}^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{\genfrac{}{}{0.0pt}{}{ i_{1},\ldots,i_{j+1}\in[n]\setminus I}{i_{j+2},\ldots,i_{k}\in[n]}}t_{i_{1} \ldots i_{k}}x_{i_{1}}\cdots x_{i_{k}}.\] _Moreover, if \(\sum_{i\in I}x_{i}^{k}\neq 0\), then_ \[\lambda_{\min}(\mathcal{T}[I])\leq\frac{\lambda_{\min}(\mathcal{T})\left(1-k \sum_{i\in[n]\setminus I}x_{i}^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j +1}\sum_{\genfrac{}{}{0.0pt}{}{i_{1},\ldots,i_{j+1}\in[n]\setminus I}{i_{j+2}, \ldots,i_{k}\in[n]}}t_{i_{1}\ldots i_{k}}x_{i_{1}}\cdots x_{i_{k}}}{\sum_{i\in I }x_{i}^{k}}.\] Proof.: Since \(\mathcal{T}\) is symmetric, we have by the same argument as in Theorem 4.1 that \[(\mathcal{T}-\mathcal{T}_{I})\mathbf{x}^{k}=k\lambda_{\min}(\mathcal{T})\sum_{i \in[n]\setminus I}x_{i}^{k}+\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{ \genfrac{}{}{0.0pt}{}{i_{1},\dots,i_{j+1}\in[n]\setminus I}{i_{j+2},\dots,i_{k }\in[n]}}t_{i_{1}\dots i_{k}}x_{i_{1}}\cdots x_{i_{k}}.\] Note that \(\lambda_{\min}(\mathcal{T})=\mathcal{T}\mathbf{x}^{k}\). 
Thus by Lemmas 2.5 and 2.1, \[\lambda_{\min}(\mathcal{T}[I]) =\lambda_{\min}(\mathcal{T}_{I})\] \[\leq\mathcal{T}_{I}\mathbf{x}^{k}\] \[=\mathcal{T}\mathbf{x}^{k}-(\mathcal{T}-\mathcal{T}_{I})\mathbf{ x}^{k}\] \[=\lambda_{\min}(\mathcal{T})-\left(k\lambda_{\min}(\mathcal{T}) \sum_{i\in[n]\setminus I}x_{i}^{k}+\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_ {\genfrac{}{}{0.0pt}{}{i_{1},\dots,i_{j+1}\in[n]\setminus I}{i_{j+2},\dots,i_{ k}\in[n]}}t_{i_{1}\dots i_{k}}x_{i_{1}}\cdots x_{i_{k}}\right)\] \[=\lambda_{\min}(\mathcal{T})\left(1-k\sum_{i\in[n]\setminus I}x_{ i}^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{\genfrac{}{}{0.0pt}{ }{i_{1},\dots,i_{j+1}\in[n]\setminus I}{i_{j+2},\dots,i_{k}\in[n]}}t_{i_{1} \dots i_{k}}x_{i_{1}}\cdots x_{i_{k}}.\] Let \(\mathbf{y}\in\mathbb{R}^{|I|}\) be a unit eigenvector corresponding to \(\lambda_{\min}(\mathcal{T}[I])\). Construct a new unit vector \(\mathbf{z}\in\mathbb{R}^{n}\) such that \(z_{i}=y_{i}\) if \(i\in I\) and \(z_{i}=0\) otherwise. Then by Lemma 2.1, \[\lambda_{\min}(\mathcal{T})\leq\mathcal{T}\mathbf{z}^{k}=\mathcal{T}_{I} \mathbf{z}^{k}+(\mathcal{T}-\mathcal{T}_{I})\mathbf{z}^{k}=\mathcal{T}[I] \mathbf{y}^{k}+0=\lambda_{\min}(\mathcal{T}[I])\] as desired. This proves the first part. Suppose that \(\sum_{i\in I}x_{i}^{k}\neq 0\). By the above argument, we have \[\mathcal{T}_{I}\mathbf{x}^{k}=\lambda_{\min}(\mathcal{T})\left(1-k\sum_{i\in[ n]\setminus I}x_{i}^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{ \genfrac{}{}{0.0pt}{}{i_{1},\dots,i_{j+1}\in[n]\setminus I}{i_{j+2},\dots,i_{k }\in[n]}}t_{i_{1}\dots i_{k}}x_{i_{1}}\cdots x_{i_{k}}.\] Let \(\mathbf{w}\) be a vector in \(\mathbb{R}^{|I|}\) such that \(w_{i}=x_{i}\) if \(i\in I\). Thus by Lemma 2.1, \[\lambda_{\min}(\mathcal{T}[I]) \leq\frac{\mathcal{T}[I]\mathbf{w}^{k}}{\|\mathbf{w}\|_{k}^{k}}= \frac{\mathcal{T}_{I}\mathbf{x}^{k}}{\sum_{i\in I}x_{i}^{k}}\] \[=\frac{\lambda_{\min}(\mathcal{T})\left(1-k\sum_{i\in[n]\setminus I }x_{i}^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{\genfrac{}{}{0.0 pt}{}{i_{1},\dots,i_{j+1}\in[n]\setminus I}{i_{j+2},\dots,i_{k}\in[n]}}t_{i_{1} \dots i_{k}}x_{i_{1}}\cdots x_{i_{k}}}{\sum_{i\in I}x_{i}^{k}},\] proving the second part. **Example 5.1**.: _Let \(\mathcal{T}\) be a tensor of order \(4\) and dimension \(3\), where \(t_{1122}=t_{1212}=t_{1221}=t_{2112}=t_{2121}=t_{2211}=1\), \(t_{1222}=t_{2122}=t_{2212}=t_{2221}=3\), \(t_{3222}=t_{2322}=t_{2232}=t_{2223}=3\) _and otherwise, \(t_{ijst}=0\). Let \(I=\{2,3\}\). By MATLAB, we have \(\lambda_{\min}(\mathcal{T})=-9.9307\) with eigenvector \(\mathbf{x}_{0}=(-0.5239,1,-0.671)^{\top}\) and \(\lambda_{\min}(\mathcal{T}[I])=-6.8385\). Let \(\mathbf{x}=\frac{\mathbf{x}_{0}}{\|\mathbf{x}_{0}\|_{4}}\). 
The first upper bound for \(\lambda_{\min}(\mathcal{T}[I])\) in Theorem 5.1 is equal to_ \[\lambda_{\min}(\mathcal{T})\left(1-k\sum_{i\in[n]\setminus I}x_{i }^{k}\right)-\sum_{j=1}^{k-1}(-1)^{j}\binom{k}{j+1}\sum_{\genfrac{}{}{0.0pt}{}{ i_{1},\ldots,i_{j+1}\in[n]\setminus I}{i_{j+2},\ldots,i_{k}\in[n]}}t_{i_{1} \ldots i_{k}}x_{i_{1}}\cdots x_{i_{k}}\] \[=\lambda_{\min}(\mathcal{T})\left(1-4x_{1}^{4}\right)-(-1)^{1} \binom{4}{2}t_{1122}x_{1}^{2}x_{2}^{2}\] \[=-9.9307\times\left(1-\frac{4\times(-0.5239)^{4}}{(-0.5239)^{4}+1 +(-0.671)^{4}}\right)-(-1)^{1}\binom{4}{2}\times\frac{1\times(-0.5239)^{2} \times 1^{2}}{(-0.5239)^{4}+1+(-0.671)^{4}}\] \[=-6.3007.\] _and the second upper bound in Theorem 5.1 is equal to_ \[\frac{\lambda_{\min}(\mathcal{T})\left(1-4x_{1}^{4}\right)-(-1)^{1}\binom{4}{ 2}t_{1122}x_{1}^{2}x_{2}^{2}}{x_{2}^{k}+x_{3}^{k}}=\frac{-6.3007}{\frac{1+(-0. 671)^{4}}{(-0.5239)^{4}+1+(-0.671)^{4}}}=-6.6954.\] By a similar argument as in the proof of Theorem 4.3, we have **Corollary 5.1**.: _Let \(G\) be a \(k\)-uniform hypergraph with vertex set \([n]\), where \(k\) is even and \(n,k\geq 2\). Let \(\mathbf{x}\) be a least eigenvector of \(\mathcal{A}(G)\). If \(\emptyset\neq I\subset[n]\), then_ \[\lambda(G)\leq\lambda(G-I)\leq\lambda(G)\left(1-k\sum_{i\in I}x_{i}^{k}\right)+ k\sum_{j=2}^{k}\sum_{e:|e\cap I|=j}(j-1)\mathbf{x}^{e}.\] _Moreover, if \(\sum_{i\in I}x_{i}^{k}\neq 0\), then_ \[\lambda(G-I)\leq\frac{\lambda(G)\left(1-k\sum_{i\in I}x_{i}^{k}\right)+k\sum_ {j=2}^{k}\sum_{e:|e\cap I|=j}(j-1)\mathbf{x}^{e}}{\sum_{i\in I}x_{i}^{k}}.\] **Example 5.2**.: _Let \(G\) be a \(4\)-uniform hypergraph with vertex set \([6]\), where \(E(G)=\{\{1,2,3,4\},\{3,4,5,6\},\{1,3,4,5\}\}\). Let \(I=\{5,6\}\). Obviously, \(\lambda(G-I)=-1\). By MATLAB, \(\lambda(G)=-2.1908\) with eigenvector \(\mathbf{x}_{0}=(-0.9112,0.7465,1,1,0.9112,-0.7465)^{\top}\). Let \(\mathbf{x}=\frac{\mathbf{x}_{0}}{\|\mathbf{x}_{0}\|_{4}}\). The first upper bound for \(\lambda(G-I)\) in the above corollary is_ \[\lambda(G)\left(1-4\sum_{i\in I}x_{i}^{4}\right)+4\sum_{j=2}^{4} \sum_{e:|e\cap I|=j}(j-1)\mathbf{x}^{e}\] \[=\lambda(G)\left(1-4(x_{5}^{4}+x_{6}^{4})\right)+4(2-1)x_{3}x_{4} x_{5}x_{6}\] \[=-2.1908\times\left(1-\frac{4\times(0.9112^{4}+(-0.7465)^{4})}{(-0.9112)^{4}+0.7465^{4}+1+1+0.9112^{4}+(-0.7465)^{4}}\right)\] \[+4\times\frac{1\times 1\times 0.9112\times(-0.7465)}{(-0.9112)^{4}+0.746 5^{4}+1+1+0.9112^{4}+(-0.7465)^{4}}\] \[=-0.6803,\] _and the second one is equal to_ \[\frac{\lambda(G)\left(1-4(x_{5}^{4}+x_{6}^{4})\right)+4(2-1)x_{3}x_{4}x_{5}x_{ 6}}{x_{5}^{4}+x_{6}^{4}}=-0.9071.\] **Theorem 5.2**.: _Let \(G\) be a \(k\)-uniform hypergraph with vertex set \([n]\), where \(k\) is even and \(n,k\geq 2\). Let \(E\subseteq E(G)\). Let \(\mathbf{x}\) and \(\mathbf{y}\) be the least eigenvectors of \(G\) and \(G-E\), respectively. Then_ \[\lambda(G)-k\sum_{e\in E}\mathbf{y}^{e}\leq\lambda(G-E)\leq\lambda(G)-k\sum_ {e\in E}\mathbf{x}^{e}.\] Proof.: It suffices to show that \[k\sum_{e\in E}\mathbf{x}^{e}\leq\lambda(G)-\lambda(G-E)\leq k\sum_{e\in E} \mathbf{y}^{e}.\] Since \(\lambda(G)=\mathcal{A}(G)\mathbf{x}^{k}=k\sum_{e\in E(G)}\mathbf{x}^{e}\), we have \[\lambda(G-E)\leq\mathcal{A}(G-E)\mathbf{x}^{k}=k\sum_{e\in E(G)-E}\mathbf{x} ^{e}=k\sum_{e\in E(G)}\mathbf{x}^{e}-k\sum_{e\in E}\mathbf{x}^{e}=\lambda (G)-k\sum_{e\in E}\mathbf{x}^{e},\] and so \(\lambda(G)-\lambda(G-E)\geq k\sum_{e\in E}\mathbf{x}^{e}\). 
On the other hand, since \(\lambda(G-E)=\mathcal{A}(G-E)\mathbf{y}^{k}=k\sum_{e\in E(G)-E}\mathbf{y}^{e}\), we have \[\lambda(G)\leq\mathcal{A}(G)\mathbf{y}^{k}=k\sum_{e\in E(G)}\mathbf{y}^{e}=k \sum_{e\in E(G)-E}\mathbf{y}^{e}+k\sum_{e\in E}\mathbf{y}^{e}=\lambda(G-E)+k \sum_{e\in E}\mathbf{y}^{e},\] and so \(\lambda(G)-\lambda(G-E)\leq k\sum_{e\in E}\mathbf{y}^{e}\). **Example 5.3**.: _Let \(G\) be a \(4\)-uniform hypergraph with vertex set \([6]\), where \(E(G)\)= \(\{\{1,2,3,4\}\), \(\{3,4,5,6\}\),\(\{1,3,4,5\}\), \(\{1,2,4,5\}\}\). Let \(E=\{\{1,2,3,4\}\}\). By MATLAB, \(\lambda(G)=-2.8786\) with eigenvector \(\mathbf{x}_{0}=(-0.9457,0.848,0.928,1,0.928,-0.6688)^{\top}\) and \(\lambda(G-E)=-2.1908\) with eigenvector \(\mathbf{y}_{0}=(-0.9112,0.7465,0.9112,1,1,-0.7465)^{\top}\). Let \(\mathbf{x}=\frac{\mathbf{x}_{0}}{\|\mathbf{x}_{0}\|_{4}}\) and \(\mathbf{y}=\frac{\mathbf{y}_{0}}{\|\mathbf{y}_{0}\|_{4}}\). The lower bound for \(\lambda(G-E)\) in Theorem 5.2 is_ \[\lambda(G)-k\sum_{e\in E}\mathbf{y}^{e}\] \[=\lambda(G)-4y_{1}y_{2}y_{3}y_{4}\] \[=-2.8786-4\times\frac{-0.9112\times 0.7465\times 0.9112\times 1}{(-0.9 112)^{4}+0.7465^{4}+0.9112^{4}+1+1+(-0.7465)^{4}}\] \[=-2.2587,\] _and the upper bound for \(\lambda(G-E)\) in Theorem 5.2 is_ \[\lambda(G)-k\sum_{e\in E}\mathbf{x}^{e}\] \[=\lambda(G)-4x_{1}x_{2}x_{3}x_{4}\] \[=-2.8786-4\times\frac{(-0.9457)\times 0.848\times 0.928\times 1}{(-0.94 57)^{4}+0.848^{4}+0.928^{4}+1+0.928^{4}+(-0.6688)^{4}}\] \[=-1.4467.\] **Theorem 5.3**.: _Let \(G\) be a linear \(k\)-uniform hypergraph with at least one edge, where \(k\) is even and \(n,k\geq 2\). Let \(\mathbf{x}\) be a least eigenvector of \(G\). Then for \(i\in V(G)\),_ \[x_{i}^{k}\leq\frac{d_{i}}{d_{i}+(k-1)\lambda(G)^{\frac{k}{k-1}}} \tag{5.1}\] _with equality if and only if for \(j\in V(G)\setminus\{i\}\), \(x_{j}^{k}=\frac{\lambda(G)^{\frac{k}{k-1}}x_{i}^{k}}{d_{i}^{2}}\) if \(j\sim i\) and \(x_{j}=0\) otherwise, and the sign of \(\mathbf{x}^{e\setminus\{i\}}\) for each \(e\in E_{i}(G)\) is the same._ Proof.: Let \(\lambda=\lambda(G)\). From the eigenequation of \(G\) at \(i\), \[\lambda^{\frac{k}{k-1}}x_{i}^{k} =\left(\sum_{e\in E_{i}(G)}\mathbf{x}^{e\setminus\{i\}}\right)^ {\frac{k}{k-1}}\] \[\leq\left(\sum_{e\in E_{i}(G)}\left|\mathbf{x}^{e\setminus\{i\}} \right|\right)^{\frac{k}{k-1}}\] \[\leq\left(d_{i}\left(\frac{\sum_{\{i,i_{2},\ldots,i_{k}\}\in E_{ i}(G)}x_{i_{2}}^{k}\cdots x_{i_{k}}^{k}}{d_{i}}\right)^{\frac{1}{k}}\right)^{ \frac{k}{k-1}}\] \[=d_{i}\left(\sum_{\{i,i_{2},\ldots,i_{k}\}\in E_{i}(G)}x_{i_{2}}^{ k}\cdots x_{i_{k}}^{k}\right)^{\frac{1}{k-1}} \tag{5.2}\] \[\leq d_{i}\left(\sum_{\{i,i_{2},\ldots,i_{k}\}\in E_{i}(G)}\left( \frac{x_{i_{2}}^{k}+\cdots+x_{i_{k}}^{k}}{k-1}\right)^{k-1}\right)^{\frac{1}{k -1}}\] \[\leq\frac{d_{i}}{k-1}\sum_{\{i,i_{2},\ldots,i_{k}\}\in E_{i}(G)} \left(x_{i_{2}}^{k}+\cdots+x_{i_{k}}^{k}\right)\] \[=\frac{d_{i}}{k-1}\sum_{j:j\sim i}x_{j}^{k}\] \[\leq\frac{d_{i}}{k-1}\left(1-x_{i}^{k}\right),\] where the first and the last two inequalities follows trivially, the second and the third inequalities follow respectively from the power mean inequality and the arithmetic-geometric mean inequality. So (5.1) follows. Suppose that equality holds in (5.1). Then all inequalities in (5.2) are equalities. From the first inequality, we see that the sign of \(\mathbf{x}^{e\setminus\{i\}}\) for any \(e\in E_{i}(G)\) is the same. From the third inequality, we know that for each \(\{i,i_{2},\ldots,i_{k}\}\in E_{i}(G)\), \(x_{i_{2}}^{k}=\cdots=x_{i_{k}}^{k}\). 
From the last inequality, we find that either \(j\sim i\) for all \(j\in V(G)\setminus\{i\}\) or \(x_{j}^{k}=0\) for each \(j\nsim i\). So \(x_{j}^{k}=\frac{\lambda^{\frac{k}{k-1}}x_{i}^{k}}{d_{i}^{2}}\) if \(j\sim i\) and \(0\) otherwise. Conversely, suppose that for \(j\in V(G)\setminus\{i\}\), \(x_{j}^{k}=\frac{\lambda(G)^{\frac{k}{k-1}}x_{i}^{k}}{d_{i}^{2}}\) if \(j\sim i\) and \(x_{j}=0\) otherwise, and the sign of \(\mathbf{x}^{e\setminus\{i\}}\) for each \(e\in E_{i}(G)\) is the same. Then all inequalities in (5.2) are equalities, so (5.1) is an equality. Suppose that \(G\) is a \(k\)-uniform hypergraph with at least one edge, where \(k\) is even and \(k\geq 2\). Let \(c_{\max}\) be the largest component among all least eigenvectors of \(\mathcal{A}(G)\). **Theorem 5.4**.: _Let \(G\) be a linear \(k\)-uniform hypergraph with vertex set \([n]\) and at least one edge, where \(k\) is even and \(n,k\geq 2\). Then_ \[c_{\max}\leq\sqrt[k]{\frac{n-1}{n-1+(k-1)^{2}}}\] _with equality if and only if \(x_{j}^{k}=\frac{x_{i}^{k}}{\left(\frac{n-1}{k-1}\right)^{\frac{k}{k-1}}}\) if \(j\sim i\) and \(x_{j}=0\) otherwise, \(\lambda(G)=-1\), the maximum degree is \(\frac{n-1}{k-1}\), and the sign of \(\mathbf{x}^{e\setminus\{i\}}\) for each \(e\in E_{i}(G)\) is the same._ Proof.: Let \(\mathbf{x}\) be a least eigenvector of \(G\) containing \(c_{\max}\). Suppose without loss of generality that \(c_{\max}=x_{1}\). As \(G\) is linear, we have \(d_{i}\leq\frac{n-1}{k-1}\) for \(i\in V(G)\). As \(k\) is even and \(|E(G)|\geq 1\), we have \(\lambda(G)\leq-1\) by Corollary 5.1 and the fact that the least H-eigenvalue of the \(k\)-uniform hypergraph consisting of exactly one edge is \(-1\). So, by Theorem 5.3, \[x_{i}^{k}\leq\frac{d_{i}}{d_{i}+(k-1)\lambda(G)^{\frac{k}{k-1}}}\leq\frac{\frac{n-1}{k-1}}{\frac{n-1}{k-1}+(k-1)(-1)^{\frac{k}{k-1}}}=\frac{n-1}{n-1+(k-1)^{2}}\] with equalities if and only if \(x_{j}^{k}=\frac{x_{i}^{k}}{\left(\frac{n-1}{k-1}\right)^{\frac{k}{k-1}}}\) if \(j\sim i\) and \(x_{j}=0\) otherwise, \(\lambda(G)=-1\), \(d_{1}=\frac{n-1}{k-1}\), and the sign of \(\mathbf{x}^{e\setminus\{i\}}\) for each \(e\in E_{i}(G)\) is the same. Thus the result follows. For an even integer \(k\), we say a \(k\)-uniform hypergraph \(G\) is odd-bipartite if \(V(G)\) can be partitioned into two disjoint vertex sets \(V_{1}\) and \(V_{2}\) such that each edge intersects each of \(\{V_{1},V_{2}\}\) in an odd number of vertices. **Theorem 5.5**.: _Let \(G\) be an odd-bipartite, connected \(k\)-uniform hypergraph with \(m\) edges, where \(k\) is even, and \(m\geq 1\). Then \(c_{\max}\geq\sqrt[k]{-\frac{\lambda(G)}{km}}\) with equality if and only if \(G\) is regular._ Proof.: Let \(\mathbf{x}\) be a least eigenvector of \(G\) containing \(c_{\max}\). Since \(G\) is odd-bipartite, \(\lambda(G)=-\rho(G)\) [29]. Let \(\widetilde{\mathbf{x}}\) be the vector such that \(\widetilde{x}_{i}=|x_{i}|\) for each \(i\in V(G)\). Obviously, \(\widetilde{\mathbf{x}}\) is a unit vector. 
For any \(u\in V(G)\), \[\rho(G)\widetilde{x}_{u}^{k-1}=\rho(G)|x_{u}|^{k-1}=\left|\sum_{e\in E_{u}(G)} \mathbf{x}^{e\setminus\{u\}}\right|\leq\sum_{e\in E_{u}(G)}\left|\mathbf{x}^{ e\setminus\{u\}}\right|=\sum_{e\in E_{u}(G)}\widetilde{\mathbf{x}}^{e\setminus\{u\}},\] i.e., \[\rho(G)\widetilde{x}_{u}^{k-1}\leq\sum_{e\in E_{u}(G)}\widetilde{\mathbf{x}}^ {e\setminus\{u\}}.\] As \(\widetilde{\mathbf{x}}\) is nonnegative, \[\rho(G)\widetilde{x}_{u}^{k}\leq\sum_{e\in E_{u}(G)}\widetilde{\mathbf{x}}^{ e}.\] By Lemma 2.2, we have \[\rho(G)=\sum_{u\in V(G)}\rho(G)\widetilde{x}_{u}^{k}\leq\sum_{u\in V(G)}\sum_{e \in E_{u}(G)}\widetilde{\mathbf{x}}^{e}=\sum_{e\in E(G)}k\widetilde{\mathbf{x} }^{e}\leq\rho(G),\] so \(\rho(G)=\sum_{e\in E(G)}k\widetilde{\mathbf{x}}^{e}\) and \(\rho(G)\widetilde{x}_{u}^{k-1}=\sum_{e\in E_{u}(G)}\widetilde{\mathbf{x}}^{e \setminus\{u\}}\). That is, \(\widetilde{\mathbf{x}}\) is a unit nonnegative eigenvector corresponding to \(\rho(G)\). As \(G\) is connected, \(\widetilde{\mathbf{x}}\) is positive. It follows that \[-\lambda(G)=\rho(G)=k\sum_{e\in E(G)}\widetilde{\mathbf{x}}^{e}\leq kmc_{\max }^{k}\] with equality if and only if \(|x_{i}|=c_{\max}\), i.e., \(G\) is regular. **Acknowledgements.** We thank Prof. Tan Zhang for kind discussions. This work was supported by the National Natural Science Foundation of China (Nos. 12071158 and 11801410).
2307.03411
Learning from Heterogeneity: A Dynamic Learning Framework for Hypergraphs
Graph neural network (GNN) has gained increasing popularity in recent years owing to its capability and flexibility in modeling complex graph structure data. Among all graph learning methods, hypergraph learning is a technique for exploring the implicit higher-order correlations when training the embedding space of the graph. In this paper, we propose a hypergraph learning framework named LFH that is capable of dynamic hyperedge construction and attentive embedding update utilizing the heterogeneity attributes of the graph. Specifically, in our framework, the high-quality features are first generated by the pairwise fusion strategy that utilizes explicit graph structure information when generating initial node embedding. Afterwards, a hypergraph is constructed through the dynamic grouping of implicit hyperedges, followed by the type-specific hypergraph learning process. To evaluate the effectiveness of our proposed framework, we conduct comprehensive experiments on several popular datasets with eleven state-of-the-art models on both node classification and link prediction tasks, which fall into categories of homogeneous pairwise graph learning, heterogeneous pairwise graph learning, and hypergraph learning. The experiment results demonstrate a significant performance gain (average 12.5% in node classification and 13.3% in link prediction) compared with recent state-of-the-art methods.
Tiehua Zhang, Yuze Liu, Zhishu Shen, Xingjun Ma, Peng Qi, Zhijun Ding, Jiong Jin
2023-07-07T06:26:44Z
http://arxiv.org/abs/2307.03411v2
# Learning from Heterogeneity: A Dynamic Learning Framework for Hypergraphs ###### Abstract Graph neural network (GNN) has gained increasing popularity in recent years owing to its capability and flexibility in modeling complex graph structure data. Among all graph learning methods, hypergraph learning is a technique for exploring the implicit higher-order correlations when training the embedding space of the graph. In this paper, we propose a hypergraph learning framework named _Learning from Heterogeneity (LFH)_ that is capable of dynamic hyperedge construction and attentive embedding update utilizing the heterogeneity attributes of the graph. Specifically, in our framework, the high-quality features are first generated by the pairwise fusion strategy that utilizes explicit graph structure information when generating initial node embedding. Afterwards, a hypergraph is constructed through the dynamic grouping of implicit hyperedges, followed by the type-specific hypergraph learning process. To evaluate the effectiveness of our proposed framework, we conduct comprehensive experiments on several popular datasets with eleven state-of-the-art models on both node classification and link prediction tasks, which fall into categories of homogeneous pairwise graph learning, heterogeneous pairwise graph learning, and hypergraph learning. The experiment results demonstrate a significant performance gain (average 12.5% in node classification and 13.3% in link prediction) compared with recent state-of-the-art methods. Heterogeneous Hypergraph, Representation Learning, Hypergraph Generation, Classification. ## I Introduction Graph learning has attracted tremendous attention in recent years owing to its prominent capability when modelling structure-based data. In particular, there has been witnessed an increasing use of graph models (i.e., GNN: Graph Neural Network) in many fields of applications, such as social network recommendation [1], medical diagnosis [2, 3], and text analysis [4]. The goal of graph learning is to encode the graph structure of different input data into an embedding space, where the representations can be used for downstream node/edge/graph tasks. There are both standard graphs and hypergraphs. Fig. 1 illustrates (a) a standard graph where one edge connects two vertices regardless of the edge type and (b) a hypergraph with three hyperedges. Standard graphs can be modeled using many existing GNN models such as graph convolution network (GCN) [5], graph attention network (GAT) [6], and heterogeneous graph learning (PC-HGN) [7]. Most of these models aim to model the pairwise relationships in well-structured graph data. However, this may not work for hypergraphs, as a hypergraph can encompass any number of vertices (nodes) at each edge type, presenting higher-order correlations with rich underlying information. Pairwise graph learning models, in this case, become intractable. Specifically, hypergraph presents more complex non-pairwise relationships, where these implicit relationships are not only dyadic (pairwise) but rather triadic or even higher [8]. In general, hypergraph learning can be regarded as a generalized form of traditional graph learning. 
Like pairwise graph learning models like GCN, hypergraph learning can also be seen as a process of uncovering and passing the structure information among different hyperedges, during which low-dimensional node/graph level embedding representations can be learned and used for downstream tasks, such as node classification [9], link prediction [10], and clustering [11]. Owing to the great potential of exploring high-order correlations among the data, hypergraph learning has drawn increasing attention from the academic community and has been employed in applications such as image classification [12], video segmentation [13], and hyperspectral image analysis [14]. How to construct hyperedges lies at the core of hypergraph learning. Existing works generally take two approaches, i.e., attribute-based construction and neighborhood-based construction. The early work focused on constructing the hypergraph based on the features of each vertex [15]. This strategy instructs the model to use only the feature information which inevitably sacrifices certain generality from the topological perspective. The neighborhood-based strategy addresses this weakness by Fig. 1: Comparison between (a) a standard graph with 5 nodes and 4 edges, and (b) a hypergraph with 3 hyperedges (the three colored node groups). enabling the locality concept. Specifically, a hyperedge is constructed through clustering a master vertex and its neighbor vertices [16]. However, the clustering method (e.g., \(k\)-nearest neighbors (\(k\)-NN)) separates hypergraph construction from the graph learning process, and thus is sensitive to noisy data, especially in visual classification tasks. Moreover, most neighborhood-based models fail to incorporate topological heterogeneity into the learning process, which limits the learning to a single type of hyperedges. To tackle these challenges, we propose a dynamic learning framework, _Learning from Heterogeneity (LFH)_, to improve the quality of representation learning for hypergraphs. Our LFH framework is composed of three key modules: 1) initial embedding generation, 2) dynamic hypergraph construction, and 3) attention-based heterogeneous hypergraph learning. Concretely, initial embedding generation aims to exploit the pairwise connectivities in the graph, helping to fuse the explicit topological information into the initial embedding space. Following that, as an integral part of representation learning, the heterogeneous hypergraph is constructed and then learned dynamically. In summary, the main contributions of this work are: * We propose a novel hypergraph learning framework LFH with three modules: initial embedding generation, dynamic hypergraph construction, and attention-based heterogeneous hypergraph learning. A pairwise fusion strategy is also considered to fully exploit the explicit pairwise graph information to generate high-quality initial embedding. * We design a dynamic learning process to generate different types of implicit hyperedges to construct the hypergraph and dynamically adapt the hypergraph construction during the learning process. The embedding is updated iteratively through a type-specific multi-head attention mechanism, so as to encode the heterogeneous attributes into the embedding space. 
* We conduct extensive experiments on two different downstream tasks, including node classification and linked prediction, and show that our LFH outperforms state-of-the-art homogeneous pairwise graph learning models, heterogeneous pairwise graph learning models, and hypergraph learning models consistently across all datasets by a large margin. The remainder of this paper is organized as follows. Section 2 reviews the related works of graph learning on homogenous, heterogenous, and hypergraphs. Section 3 provides the preliminary knowledge of hypergraphs, while Section 4 introduces the detail of our proposed LFH framework, along with analyses and discussions. The experimental results are reported and analyzed in Section 5. Finally, Section 6 concludes the paper. ## II Related Work ### _Homogeneous Graph Learning_ As a basic graph structure, a homogeneous graph consists of a single node type with a single relation. Typical GNN models for homogeneous graphs include GCN [5] and GAT [6]. GCN extends the traditional convolutional neural networks (CNN) to handle graph-structured data by performing convolutions in the spectral/spatial domain of the graph nodes to capture the structural information of the graph. On the other hand, GAT applies the self-attention mechanism to assign dynamic weights for the neighbors of a node and then take a weighted sum of their embeddings to obtain the node's representation. This enables GAT to learn adaptive neighborhood representations and capture complex relationships between nodes. The primary difference between various GNN models lies in the way how messages are passed between the nodes to learn the representation. Instead of training individual embeddings for each node, GraphSAGE [17] leverages the node features in the learning algorithm to train a set of aggregator functions to generate embeddings for entirely unseen nodes. Aiming for visual question answering (VQA) services, Li _et al._ proposed a GAT-based platform that encodes each image into a graph and models multi-type inter-object relations to learn relation representation from the graphs [18]. Jiang _et al._ presented a GCN-based framework (GLCN) for graph data representation learning. GLCN generates similarity-based graph structure by simultaneous graph learning and graph convolution in a unified network architecture [19]. Compared with the embedding frameworks that can only generate embeddings for a single fixed graph like transductive learning in GCN, Zeng _et al._ proposed GraphSAINT, a graph sampling based inductive representation learning method, to generalize across different graphs. GraphSAINT generates low-dimensional vector representations for the nodes and is beneficial for large graphs with rich node attribute information [20]. ### _Heterogeneous Graph Learning_ Different from homogeneous graphs, the nodes in a heterogeneous graph are usually connected with various types of neighbors via different types of relations. As such, representation learning on heterogeneous graphs is much more challenging. It needs to not only incorporate heterogeneous structure (graph) information but also consider the heterogeneous attributes associated with each node [21]. Zhang _et al._ proposed a heterogeneous GNN model (HetGNN) for capturing both structure and content heterogeneity. HetGNN first captures the strongly correlated heterogeneous neighbors of each node and then aggregates feature information of these sampled neighbors [22]. 
Zhao _et al._ proposed a GNN-based framework named HGSL that jointly performs heterogeneous graph structure learning and GNN parameter learning for classification. In HGSL, the feature similarity graphs, the feature propagation graphs, and the semantic graphs are generated separately so as to comprehensively learn an optimal heterogeneous graph [23]. Zhang _et al._ designed a relation-centered pooling and convolution (PC-HGN) operation that enables relation-specific sampling and cross-relation convolutions on heterogeneous graphs, from which the structural heterogeneity of the graph can be better encoded into the embedding space through the adaptive training process [7]. HAN (Heterogeneous graph Attention Network) [24] is a heterogeneous graph learning framework based on node-level and semantic-level attention mechanisms. Specifically, node-level attention learns the importance between a node and its meta-path based neighbors, while semantic-level attention learns the importance of different meta-paths. Hu _et al._ presented a heterogeneous graph transformer (HGT) for modeling heterogeneous graphs [25]. HGT introduces the node-type and edge-type dependent attention mechanism while different trainable parameters are assigned to each node and edge type. HGT can incorporate high-order heterogeneous neighbor information, which automatically learns the importance of implicit meta path. Considering relation-aware characteristics, Yu _et al._ proposed a relation-aware representation learning model for heterogeneous graphs (R-HGNN). This model derives a fine-grained representation from a group of relation-specific node representations reflecting the characteristics of the node associated with a specified relation [26]. ### _Hypergraph Learning_ Hypergraph learning explores the high-order correlations in the data, which extends the traditional graph learning models to a high dimensional and more complete nonlinear space. It offers a promising solution for analyzing complex structured data with satisfactory performance in practice [16]. The construction of the hypergraph from the given data is a key step for hypergraph learning, which significantly affects the final learning performance. Zhang _et al._ proposed dynamic hypergraph structure learning (DHSL) to update hypergraph structure iteratively during the learning process [27]. It is essential to make dynamic modifications to the initial hypergraph structures from adjusted feature embedding. Feng _et al._ presented a framework named hypergraph neural network (HGNN) for handling complex and high-order correlations. In HGNN, the complex data correlation is formulated in a hypergraph structure, and a hyperedge convolution operation is used to exploit the high-order data correlation for representation learning [9]. To exploit high-order relations among the features, Jiang _et al._ proposed a dynamic hypergraph neural networks (DHGNN) framework that is composed of dynamic hypergraph construction (DHG) and hypergraph convolution (HGC). DHG utilizes the \(k\)-NN method to generate the basic hyperedge and extends the adjacent hyperedge set by \(k\)-means clustering, with which the local and global relations can be extracted. HGC is designed to encode high-order data relations in the hypergraph structure [28]. Cai _et al._ introduced a hypergraph structure learning (HSL) framework to optimize the hypergraph structure and HGNNs simultaneously in an end-to-end way. 
To efficiently learn the hypergraph structure, HSL adopts a hyperedge sampling strategy to prune the redundant hyperedges, which is followed by an incident node sampling for pruning irrelevant incident nodes and discovering potential implicit connections. The consistency between the optimized structure and the original structure is maintained by the intra-hyperedge contrastive learning module [29]. Gao _et al._ proposed a tensor-based dynamic hypergraph learning (t-DHL) model to efficiently learn dynamic hypergraphs. t-DHL utilizes a tensor representation to characterize the dynamic hypergraph structure more flexibly. During the optimization of the tensor representation, not only the weights but also the number and order of hyperedges can be adjusted. As an extended version of HGNN, a general hypergraph neural network framework named HGNN+ was proposed for modeling high-order representation among the data, which is achieved by bridging multi-modal/multi-type data and hyperedge with hyperedge groups [30]. Recent works mostly focus on neighborhood-based construction strategies when constructing hyperedges in the hypergraph. Specifically, hyperedge is often constructed through clustering a master vertex and its neighbor vertices. However, for the commonly adopted clustering methods like \(k\)-NN [9, 12, 28], they treat hypergraph construction and graph learning separately. As a result, these models are susceptible to noisy data which limits their applications. Apart from that, most neighborhood-based models fail to incorporate the heterogeneity of the topology, leading to a single type hyperedge [16]. In this work, we propose a heterogeneous hypergraph learning framework LFH to solve the above mentioned issues. LFH constructs the hyperedge based on different edge types, which can then be integrated into the learning process to improve the learned representations. ## III Preliminaries and problem formulation In this section, we first introduce the relevant definitions and then formulate the key problems associated with representation learning on heterogeneous hypergraphs. The notations used in this paper are listed in Table I. **Definition 1**.: _(**Heterogeneous Pairwise Graph**). A heterogeneous graph \(\hat{\mathcal{G}}=\{\hat{\mathcal{V}},\hat{\mathcal{E}},\hat{\mathcal{T}}_{e },\hat{\mathcal{T}}_{v}\}\) is defined as a graph with multiple node types \(\hat{\mathcal{T}}_{v}\) or edge types \(\hat{\mathcal{T}}_{e}\), where \(\hat{\mathcal{V}}=\{\mathrm{v}_{1},\mathrm{v}_{2},...,\mathrm{v}_{N}\}\) denotes the set of nodes and \(\hat{\mathcal{E}}=\left\{\left(\mathrm{v}_{i},\mathrm{v}_{j}\right)|\mathrm{v }_{i},\mathrm{v}_{j}\in\hat{\mathcal{V}}\right\}\) denotes the set of edges. We define the node type mapping function \(\hat{\phi}:\hat{\mathcal{V}}\rightarrow\hat{\mathcal{T}}_{v}\) and the edge type mapping function \(\psi:\hat{\mathcal{E}}\rightarrow\hat{\mathcal{T}}_{e}\), respectively._ **Definition 2**.: _(**Heterogeneous Hypergraph**). A heterogeneous hypergraph is defined as \(\mathcal{G}=\{\mathcal{V},\mathcal{E},\mathcal{T}_{e},\mathcal{T}_{e},\mathcal{ W}\}\), where \(\mathcal{V}=\{\mathrm{v}_{1},...,\mathrm{v}_{N}\}\) is the set of nodes and \(\mathcal{E}=\{\mathrm{e}_{1},...,\mathrm{e}_{M}\}\) is the set of hyperedges. \(\mathrm{N}\) and \(\mathrm{M}\) are the number of nodes and hyperedges, respectively. 
For any hyperedge \(\mathrm{e}\in\mathcal{E}\), it is composed of a subset of nodes \(\mathcal{V}\), which can be defined as \(\mathrm{e}=\{\mathrm{v}_{i},...,\mathrm{v}_{j}\}\subseteq\mathcal{V}\). We define the node type mapping function \(\phi:\mathcal{V}\rightarrow\mathcal{T}_{v}\) and the hyperedge type mapping function \(\psi:\mathcal{E}\rightarrow\mathcal{T}_{e}\), respectively. Each heterogeneous hypergraph presents multiple hyperedge types, i.e., \(|\mathcal{T}_{e}|>1\). A positive diagonal matrix \(\mathcal{W}\in\mathcal{R}^{M\times M}\) is used to denote the weight of each hyperedge, that is, \(diag\left(\mathcal{W}\right)=\left\{w\left(\mathrm{e}_{1}\right),...,w\left( \mathrm{e}_{M}\right)\right\}\). The relationship between nodes and hyperedges is usually represented by incidence matrix \(\mathrm{H}\in\mathcal{R}^{N\times M}\), with entries are continuous numbers within the range of \(\left[0,1\right]\). If the hyperedge \(e_{j}\) is incidenced with node \(v_{i}\), the value of \(\mathrm{H}\left(v_{i},\mathrm{e}_{j}\right)\) is positive ; otherwise, \(\mathrm{H}\left(\mathrm{v}_{i},\mathrm{e}_{j}\right)=0\). The element \(\mathrm{H}\left(\mathrm{v}_{i},\mathrm{e}_{j}\right)\) can be regarded as the importance of node \(\mathrm{v}_{i}\) for hyperedge \(\mathrm{e}_{j}\). We can define the degree of each node \(\mathrm{v}_{i}\in\mathcal{V}\) and the edge degree of hyperedge \(\mathrm{e}_{j}\in\mathcal{E}\) as:_ \[d\left(\mathrm{v}_{i}\right)=\sum_{\mathrm{e}_{p}\in\mathcal{E}}w\left(\mathrm{e }_{p}\right)\cdot\mathrm{H}\left(\mathrm{v}_{i},\mathrm{e}_{p}\right) \tag{1}\] \[\delta\left(\mathrm{e}_{j}\right)=\sum_{\mathrm{v}_{p}\in\mathcal{V}}\mathrm{H }\left(\mathrm{v}_{p},\mathrm{e}_{j}\right) \tag{2}\] Fig. 2 provides an illustrative comparison between a heterogeneous pairwise graph and a hypergraph. Fig. 2 (a) sketches the ACM dataset composed of three node types (_Author_, _Paper_, _Subject_) and four edge types (_Write_, _Writen by_, _Belong to_, _Contain_). By contrast, a heterogeneous hypergraph is capable of modeling a more complex graph structure by constructing different types of hyperedges, in which every hyperedge can connect any number of nodes to model a specific type of data relationship. As shown in Fig. 2 (b), the purple hyperedge models an author's preference for subjects, while the yellow one models the relation among different subjects. **Problem 1**.: _(**Hyperedge Generation in Heterogeneous Hypergraph**). While node set \(\mathcal{V}\) explicitly exists in graph data, the hyperedge set \(\mathcal{E}\) is normally implicit. Following Definition 2, any hyperedge \(\mathrm{e}\in\mathcal{E}\) contains a subset of nodes \(\mathcal{V}\). We aim to learn a hyperedge construction function \(f_{con}:\mathcal{V}\rightarrow\mathcal{E}\), where hyperedges are constructed based on the given nodes. Moreover, this function can be used for constructing incidence matrix \(\mathrm{H}\)._ **Problem 2**.: _(**Representation Learning on Heterogeneous Hypergraph**). Given a heterogeneous hypergraph \(\mathcal{G}=\{\mathcal{V},\mathcal{E},\mathcal{T}_{e},\mathcal{T}_{v}, \mathcal{W}\}\), we aim to learn a mapping function \(f_{emb}\left(\mathcal{G}\right)\rightarrow\mathcal{I}^{\mathcal{G}}\), where \(\mathcal{I}^{\mathcal{V}}\in\mathcal{R}^{d\times N}\) indicates representation embedding of all nodes in \(\mathcal{G}\). 
This representation embedding can be used for downstream predictive tasks such as node classification and link prediction._ ## IV Heterogeneous Hypergraph In this section, we introduce our proposed _Learning from Heterogeneity_ (LFH) framework for hypergraphs. Fig. 3 provides an overview of LFH. We first demonstrate the pairwise fusion process from the heterogeneous pairwise graph to obtain a high-quality initial node embedding. Afterward, we dynamically generate the heterogeneous hyperedges among the nodes with different node types through feature reconstruction. After the heterogeneous hypergraph is constructed, our proposed dynamic hypergraph learning can then be performed to train the node embedding space, in which node representations can be derived for downstream node classification and link prediction tasks. ### _Initial Node Embedding_ As demonstrated in Fig. 3, the input of this step is the heterogeneous pairwise graph. It is of great importance to generate high-quality initial node embedding to better exploit the implicit data relations in the hypergraph. Intuitively, fusing pairwise information from a heterogeneous pairwise graph into the initial node embeddings can help capture high-order correlations among the nodes and thus affects the construction of the hypergraph. Given a heterogeneous Fig. 2: An illustrative example of the ACM dataset: (a) Heterogeneous pairwise graph including four node types (_Author_, _Paper_, _Subject_) and four edge types (_Write_, _Writen by_, _Belong to_, _Contain_). (b) A heterogeneous hypergraph that models two implicit data relations: _preference for subjects_ and _the relation among the subjects._ pairwise graph \(\hat{\mathcal{G}}=\{\hat{\mathcal{V}},\hat{\mathcal{E}},\hat{\mathcal{T}}_{e},\hat{ \mathcal{T}}_{e}\}\) and its raw node features \(\hat{X}=[\hat{x}_{1},\hat{x}_{2},...,\hat{x}_{N}]\in\mathcal{R}^{D\times N}\), where the raw feature of node \(v_{i}\in\mathcal{V}\) is \(\hat{x}_{i}\in\mathcal{R}^{D}\). The pairwise fusion can be implemented by any GNN model that operates on heterogeneous pairwise graphs. Formally, the pairwise fusion process can be defined as: \[X=\textit{pairwise\_fusion}\left(\hat{\mathcal{G}},\hat{X};\Theta_{p}\right) \tag{3}\] where \(\Theta_{p}\) is the trainable parameters of any GNN model, such as GAT [6] and PC-HGN [7]. The output \(X=[x_{1},x_{2},...,x_{N}]\in\mathcal{R}^{d\times N}\) is the generated initial node embeddings that contain pairwise information. ### _Heterogeneous Hyperedge Generation_ The special structure of hyperedges enables hypergraphs to encode high-order data relations. By encircling a specific set of nodes with common attributes or close relationships within one hyperedge, we can effectively represent local group information among the nodes. Therefore, it is essential to discover the related nodes for different hyperedges. To facilitate hyperedge construction, we define two types of nodes: master node and slave node. A master node \(v_{i}^{m}\in\mathcal{V}\) serves as an anchor when constructing a hyperedge, which dynamically encircles a set of slave nodes \(\{v_{i}^{s}\}\) to jointly form the hyperedge. The collection of participant nodes is denoted as \(S\left(v_{i}^{m}\right)\), which is united with master node \(v_{i}^{m}\) to represent a complete data relation within a hyperedge. Previous works have adopted the \(k\)-NN method to select the \(k\)-th nearest nodes to form a hyperedge [27, 31]. 
Alternative approaches solve an independent optimization problem to generate the hyperedges [12, 32], however, this solution fails to incorporate the heterogeneity of the hyperedges, and thus cannot be integrated into the representation learning process. To address the above challenges, here we propose a dynamic learning process that generates heterogeneous hyperedges and adaptively reconstructs the hyperedge in different types. Specifically, for each master node \(v_{i}^{m}\in\mathcal{V}\), our framework encloses the related slave nodes of various types to encode heterogeneous topological structures with implicit relationships, based on different attributes among the nodes. Suppose the master node \(v_{i}^{m}\) has a set of \(n_{i}\) candidate slave nodes \(\hat{S}^{k}\left(v_{i}^{m}\right)\), where \(k=[1,...,n_{i}]\) with different types. The hyperedge can be reconstructed based on the node types of the master node and candidate slave node sets. The type of the candidate slave node set can be defined as \(\mathcal{T}\left(\hat{S}^{k}\left(v_{i}^{m}\right)\right)\) and clearly is a subset of node type \(\mathcal{T}_{v}\). Each candidate slave node set \(\tilde{S}^{k}\left(v_{i}^{m}\right)\) contains all nodes with one or several node types, which is denoted as \(\tilde{S}^{k}\left(v_{i}^{m}\right)=\left\{v|\phi\left(v\right)\in\mathcal{T} \left(\tilde{S}^{k}\left(v_{i}^{m}\right)\right),v\neq v_{i}^{m}\right\}\). Since each hyperedge is generated from the corresponding candidate slave node set of a certain type, the type number of the hyperedge is equal to the type number of the candidate slave node set. An exemplary hyperedge construction is demonstrated in the middle part of Fig. 3, where the master node encompasses \(|\mathcal{T}_{v}|\) candidate slave node sets with distinct types. For each type, a hyperedge (represented as the coloured triangle) is dynamically constructed. Mathematically, this process of generating a hyperedge from a master node \(v_{i}^{m}\) and one of its candidate slave node set \(\tilde{S}^{k}\left(v_{i}^{m}\right)\) can be denoted as: \[\hat{X}_{k}\left(v_{i}^{m}\right)=X\left(\tilde{S}^{k}\left(v_{i}^{m}\right) \right)\cdot p_{v_{i}^{m}}^{k} \tag{4}\] Fig. 3: An overview of our proposed LFH framework. It can be interpreted as the selection of slave nodes for this hypergraph through a linear combination of candidate slave node embedding \(X\left(\tilde{S}^{k}\left(v_{i}^{m}\right)\right)\in\mathcal{R}^{d\times|S^{k} \left(v_{i}^{m}\right)|}\) and trainable reconstruction coefficient vector \(p_{v_{i}^{m}}^{k}\in\mathcal{R}^{|\tilde{S}^{k}\left(v_{i}^{m}\right)|}\). Each element \(p_{v_{i}^{m}}^{k}\left(v\right)\) is the learned reconstruction coefficient associated with the node \(v\in\tilde{S}^{k}\left(v_{i}^{m}\right)\). Based on \(p_{v_{i}^{m}}^{k}\), the nodes in the candidate slave node set \(\tilde{S}^{k}\left(v_{i}^{m}\right)\) with reconstruction coefficient larger than zero are grouped to generate a hyperedge with the master node, which is denoted as \(S^{k}\left(v_{i}^{m}\right)=\left\{v|v\in\tilde{S}^{k}\left(v_{i}^{m}\right),p_ {v_{i}^{m}}^{k}\left(v\right)>0\right\}\). 
The reconstruction error \(c^{k}\left(v_{i}^{m}\right)\), which measures the difference between the master node and the reconstructed master node, is calculated as follows: \[c^{k}\left(v_{i}^{m}\right)=\|\hat{\mathcal{X}}_{k}\left(v_{i}^{m}\right)- \theta_{k}\cdot\mathcal{X}\left(v_{i}^{m}\right)\|_{2} \tag{5}\] \(\theta_{k}\in\mathcal{R}^{d\times d}\) is a type-specific trainable projection matrix and \(\|\cdot\|_{2}\) is the \(l_{2}\)-\(norm\) regularization of a vector. Note that the purpose of hyperedge construction is to dynamically group the most relevant nodes, forming a hyperedge in which the nodes have strong dependencies. Practically, we add \(l_{1}\)-\(norm\) regularization and \(l_{2}\)-\(norm\) regularization of \(p\) simultaneously. The \(l_{1}\)-\(norm\) regularization constrains the reconstruction coefficient to zero, which tends to group fewer slave nodes to reconstruct the master node. However, the \(l_{1}\)-\(norm\) regularization is sensitive to noise and outliers, which leads to non-smooth results. For this reason, we consider the \(l_{1}\)-\(norm\) regularization and \(l_{2}\)-\(norm\) regularization simultaneously and use a weight hyperparameter \(\gamma\) to trade off the effect of both. Overall, the loss of this process is defined as: \[\mathcal{L}_{recon}=\sum_{i=\left[1,...,N\right]}\sum_{k=\left[1,...,n_{i} \right]}\lambda c^{k}\left(v_{i}^{m}\right)+\|p_{v_{i}^{m}}^{k}\|_{1}+\gamma \|p_{v_{i}^{m}}^{k}\|_{2} \tag{6}\] where \(\|\cdot\|_{1}\) denotes the \(l_{1}\)-\(norm\) regularization of a vector, \(\lambda\) is the weight hyperparameter of the reconstruction error, and \(\gamma\) is the norm hyperparameter to trade off the \(l_{1}\)-\(norm\) regularization and \(l_{2}\)-\(norm\) regularization of the reconstruction coefficient vector \(p_{v_{i}^{m}}^{k}\). From the \(n_{i}\) candidate slave node sets of the master node \(v_{i}^{m}\), we thus can generate the \(n_{i}\) hyperedges of different types. Iteratively, each node \(v_{j}\) could act as the master node to generate several hyperedges of different types to lead to the final hypergraph. The corresponding incidence matrix \(H\) of the hypergraph is denoted as: \[H\left(v_{j},e^{k}\left(v_{i}^{m}\right)\right)= \begin{cases}1&v_{j}=v_{i}^{m}\\ p_{v_{i}^{m}}^{k}\left(v_{j}\right)&v_{j}\in e^{k}\left(v_{i}^{m}\right),v_{j }\neq v_{i}^{m}\\ 0&\text{otherwise}\end{cases} \tag{7}\] ### _Dynamic Hypergraph Learning_ As illustrated in Fig 3, dynamic hypergraph learning consists of two key components: hyperedge embedding updating and multi-head attention node embedding updating. This section elaborates on these components and loss function. #### Iii-C1 Hyperedge Embedding Updating We aggregate node embeddings to the hyperedge containing these nodes. The hyperedges embedding is denoted as \(E\in\mathcal{R}^{d\times M}\). The embedding of each hyperedge \(e^{k}\left(v_{i}^{m}\right)\) is obtained as follows: \[E\left(e^{k}\left(v_{i}^{m}\right)\right)=\frac{X\cdot H\left(e^{k}\left(v_{i} ^{m}\right)\right)}{\delta\left(e^{k}\left(v_{i}^{m}\right)\right)} \tag{8}\] \(H\left(e^{k}\left(v_{i}^{m}\right)\right)\in\mathcal{R}^{N\times 1}\) is a column of incidence matrix \(H\) (See Eq. 7), where each element is a coefficient that represents the importance of the node to the hyperedge \(e^{k}\left(v_{i}^{m}\right)\). The degree of edge \(\delta\left(e^{k}\left(v_{i}^{m}\right)\right)\) is the normalization factor in the process of hyperedge embedding updating. 
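To make the construction concrete, the sketch below instantiates Eqs. 4-8 for a single master node and one candidate slave node set: a learnable coefficient vector reconstructs the master embedding, the elastic-net-style loss of Eq. 6 is minimized, and the resulting coefficients fill one column of \(H\) and produce the hyperedge embedding of Eq. 8. This is a minimal illustration under assumptions, not the reference implementation; in particular, the ReLU used to keep the coefficients non-negative, the toy sizes, and all variable names are ours.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_cand = 16, 5                       # embedding size and number of candidate slave nodes (toy sizes)
x_master = torch.randn(d)               # X(v_i^m): embedding of the master node
X_cand = torch.randn(d, n_cand)         # X(S~^k(v_i^m)): candidate slave node embeddings (columns)

p = torch.zeros(n_cand, requires_grad=True)      # reconstruction coefficient vector p^k_{v_i^m}
theta_k = torch.eye(d, requires_grad=True)       # type-specific projection matrix of Eq. 5
lam, gamma = 0.2, 0.2                            # weight/norm hyperparameters of Eq. 6 (assumed values)

opt = torch.optim.Adam([p, theta_k], lr=1e-2)
for _ in range(200):
    p_pos = F.relu(p)                            # keep coefficients non-negative (assumption)
    x_hat = X_cand @ p_pos                       # Eq. 4: reconstructed master node
    recon = torch.norm(x_hat - theta_k @ x_master, p=2)              # Eq. 5: reconstruction error
    loss = lam * recon + p_pos.norm(p=1) + gamma * p_pos.norm(p=2)   # Eq. 6 (single master node)
    opt.zero_grad(); loss.backward(); opt.step()

coeffs = F.relu(p).detach()
members = coeffs > 0                             # Eq. 7: nodes with positive coefficients join the hyperedge

# Eq. 8: hyperedge embedding = H-weighted node aggregation, normalized by the edge degree (Eq. 2).
h_col = torch.cat([torch.tensor([1.0]), coeffs]) # the master node itself contributes 1 to the column of H
nodes = torch.cat([x_master.unsqueeze(1), X_cand], dim=1)
edge_emb = (nodes @ h_col) / h_col.sum()
```

In the full framework this reconstruction is repeated for every master node and every candidate slave node set type, and the coefficients are trained jointly with the downstream objective rather than in a separate loop.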
#### Iii-C2 Multi-head Attention Node Embedding Updating Previous works calculate the similarities among hyperedges and then use them as weighting coefficients when updating respective node embeddings [32, 33]. However, the heterogeneity in the hypergraph is omitted, especially with multiple types of hyperedge and nodes. To cope with that, we design a heterogeneous multi-head attention mechanism that can train the importance of heterogeneous hyperedges with respect to nodes as shown in Fig. 4. We use all hyperedges associated with master node \(v_{i}^{m}\) to update the node embedding. The hyperedges associated with node \(v_{i}^{m}\) is represented as \(\mathcal{E}\left(v_{i}^{m}\right)=\left\{e^{1}\left(v_{i}^{m}\right),e^{2}\left( v_{i}^{m}\right),...,e^{M_{i}}\left(v_{i}^{m}\right)\right\}\). Fig. 4 illustrates the calculation process of a master node \(v_{i}^{m}\) and two related hyperedges \(e^{1}\left(v_{i}^{m}\right)\) and \(e^{2}\left(v_{i}^{m}\right)\): we first calculate the multi-head attention between \(v_{i}^{m}\) and \(e^{1}\left(v_{i}^{m}\right)\), and then use the normalized attention as the weight of \(e^{1}\left(v_{i}^{m}\right)\). The calculation involves Eq.9 to Eq.13. \[\mathcal{Q}^{h}\left(v_{i}^{m}\right)=Q\text{-}Proj_{\phi\left(v_{i}^{m}\right) }^{h}\cdot X\left(v_{i}^{m}\right) \tag{9}\] where the dimension of \(X\left(v_{i}^{m}\right)\) is \(\mathcal{R}^{d\times 1}\), and \(Q\text{-}Proj_{\phi\left(v_{i}^{m}\right)}^{h}\in\mathcal{R}^{\frac{d}{K} \times d}\) represents the projection matrix on the \(h\)-th attention head, \(h\in[1,K]\). \(K\) is the number of attention heads. Note the projection matrices are distinguished on hyperedge types. Similarly, we also project the hyperedge \(e^{1}\left(v_{i}^{m}\right)\) through \(K\text{-}Proj_{\psi\left(e^{1}\left(v_{i}^{m}\right)\right)}^{h}\) into \(\mathcal{R}^{h}\left(e^{1}\left(v_{i}^{m}\right)\right)\) as: \[K^{h}\left(e^{1}\left(v_{i}^{m}\right)\right)=K\text{-}Proj_{\psi\left(e^{1} \left(v_{i}^{m}\right)\right)}^{h}\cdot\mathcal{E}\left(e^{1}\left(v_{i}^{m} \right)\right) \tag{10}\] where \(E\left(e^{1}\left(v_{i}^{m}\right)\right)\in\mathcal{R}^{d\times 1}\) is the embedding of hyperedge \(e^{1}\left(v_{i}^{m}\right)\). The dimension of \(K\text{-}Proj_{\psi\left(e^{1}\left(v_{i}^{m}\right)\right)}^{h}\) is \(\mathcal{R}^{\frac{d}{K}\times d}\) while the dimension of the output \(K^{h}\left(e^{1}\left(v_{i}^{m}\right)\right)\) is \(\mathcal{R}^{\frac{d}{K}}\). \[\begin{split} att^{h}(v_{i}^{m},e^{1}(v_{i}^{m}))=\left(\mathcal{ Q}^{h}(v_{i}^{m})\cdot\Theta_{\psi\left(e^{1}\left(v_{i}^{m}\right)\right)}^{att}\right)\\ \cdot\mathcal{K}^{h}\left(e^{1}\left(v_{i}^{m}\right)\right) \right)\cdot\frac{\mu_{\psi\left(e^{1}\left(v_{i}^{m}\right)\right)}}{\sqrt{d}} \end{split} \tag{11}\] Eq. 11 derives the attention value of the \(h\)-th head between \(v_{i}^{m}\) and \(e^{1}\left(v_{i}^{m}\right)\). The type-specific learnable matrix \(\Theta_{\psi\left(e^{1}\left(v_{i}^{m}\right)\right)}^{att}\in\mathcal{R}^{ \frac{d}{K}\times\frac{d}{K}}\) for edge type \(\psi\left(e^{1}\left(v_{i}^{m}\right)\right)\) represents the learnable semantic information for each edge type. \(\mu\) is a scaling factor for different hyperedge types. Moreover, since the magnitude of \(K\) and \(Q\) can increase the attention value significantly and eventually lead to the gradient explosion problem, we divide the obtained value by \(\sqrt{d}\) to stabilize the training process. 
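A compact sketch of the type-specific attention of Eqs. 9-11 follows, for one master node and a handful of associated hyperedges. The head-wise projections, the type-specific matrix \(\Theta^{att}\), and the scaling by \(\mu/\sqrt{d}\) mirror the equations above; averaging the heads and normalizing with a softmax over the node's hyperedges is only one plausible reading of the weighting step that follows, and all shapes and names are assumptions.

```python
import torch

torch.manual_seed(0)
d, K = 16, 4                           # embedding size and number of attention heads
d_h = d // K
n_edges = 3                            # hyperedges in E(v_i^m)

x_node = torch.randn(d)                # X(v_i^m)
E_edges = torch.randn(n_edges, d)      # hyperedge embeddings from Eq. 8
edge_type = torch.tensor([0, 1, 1])    # psi(e): hyperedge types
node_type = 0                          # phi(v_i^m): node type
n_node_types, n_edge_types = 2, 2

Q_proj = torch.randn(n_node_types, K, d_h, d) * 0.1       # Q-Proj^h per node type (Eq. 9)
K_proj = torch.randn(n_edge_types, K, d_h, d) * 0.1       # K-Proj^h per hyperedge type (Eq. 10)
Theta_att = torch.randn(n_edge_types, K, d_h, d_h) * 0.1  # Theta^att per hyperedge type (Eq. 11)
mu = torch.ones(n_edge_types)                             # per-type scaling factor

att = torch.zeros(n_edges, K)
for j in range(n_edges):
    t = edge_type[j]
    for h in range(K):
        q = Q_proj[node_type, h] @ x_node                          # Eq. 9
        k = K_proj[t, h] @ E_edges[j]                              # Eq. 10
        att[j, h] = (q @ Theta_att[t, h] @ k) * mu[t] / d ** 0.5   # Eq. 11

# One plausible weighting step (assumption): average the heads, softmax over the node's hyperedges,
# then aggregate the hyperedge embeddings into the updated node embedding.
w = torch.softmax(att.mean(dim=1), dim=0)
x_updated = w @ E_edges
```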
Following that, the calculated attention value \(att^{h}\left(v_{i}^{m},e^{1}\left(v_{i}^{m}\right)\right)\) is able to quantify the importance of \(e^{1}\left(v_{i}^{m}\right)\) to \(v_{i}^{m}\) on the \(h\)-th head adaptively. The weight of hyperedge \(e^{1}\left(v_{i}^{m}\right)\) is then obtained by gathering the attention values of all \(K\) heads and normalizing them over the hyperedges \(\mathcal{E}\left(v_{i}^{m}\right)\) associated with \(v_{i}^{m}\) (Eqs. 12 and 13), and the embedding of \(v_{i}^{m}\) is updated as the weighted aggregation of the embeddings of these hyperedges. The reconstruction loss of Eq. 6 encourages each generated hyperedge to capture the implicit relationships between the master node and others. However, as pointed out above, this process is sensitive to noisy nodes. Over-reliance on reconstruction loss compromises the performance of our model. To enhance the generalization ability, we need to use a weight hyperparameter \(\alpha\) to balance the effects of the supervised loss of the downstream task and the reconstruction loss. ### _Model Analysis and Discussion_ We analyze the computational cost and summarize the advantages of LFH as follows: #### Iv-E1 Computational Cost In the process of heterogeneous hypergraph generation, the computational cost mainly lies in the reconstruction of the master node from its candidate slave node set (Eq. 4). As it is essential to generate hyperedges of different types for each node, the computational cost of heterogeneous hypergraph generation is \(\textit{O}\left(d|\mathcal{T}_{v}|N\right)\), where \(N\) is the number of nodes, \(d\) is the dimensionality of node embedding, and \(|\mathcal{T}_{v}|\) is the number of node types. Additionally, for the process of dynamic hypergraph learning, we use the incidence matrix \(H\) and the node embeddings \(X\) to update the hyperedge embeddings, with a computational cost of \(\textit{O}\left(dMN\right)\). Note that the number of hyperedges \(M\) is proportional to the product of \(N\) and the number of edge types \(|\mathcal{T}_{e}|\), thus the computational cost of hyperedge embedding updating is \(\textit{O}\left(d|\mathcal{T}_{e}|N^{2}\right)\). Moreover, the computational cost of multi-head attention node embedding updating is \(\textit{O}\left(d^{2}|\mathcal{T}_{e}|N\right)\), where \(|\mathcal{T}_{e}|\) is the number of hyperedge types. Hence, the total computational cost of our proposal is \(\textit{O}\left(d|\mathcal{T}_{e}|N^{2}+\left(d|\mathcal{T}_{v}|+d^{2}|\mathcal{T}_{e}|\right)N\right)\). The scales of \(d\), \(|\mathcal{T}_{e}|\) and \(|\mathcal{T}_{v}|\) are usually far smaller than \(N\), so this computational cost is on par with many existing hypergraph-based models, such as [12] and [34]. #### Iv-E2 Discussion Compared to pairwise graph learning models such as GAT [6] and PC-HGN [7], LFH is capable of modeling implicit high-order data relations. Moreover, LFH has two advantages. First, the process of hyperedge generation is conducted jointly with hypergraph learning within a united training process while taking into account the heterogeneity of the graph. This enables more adaptive hypergraph modeling than HGNN [9], whose hyperedges are generated by the \(k\)-NN clustering method. Other works that use static clustering methods for hyperedge generation adopt different strategies of \(k\)-means or a combination of \(k\)-NN and \(k\)-means to generate different hyperedges [27, 35]. The main limitation of this line of work is that the static clustering methods are sensitive to noise and outliers. Another issue is that the hyperparameters of the clustering method may affect the hyperedge generation, and adaptively calibrating these hyperparameters during training is non-trivial.
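For contrast, a minimal sketch of the static neighborhood-based construction discussed above (the \(k\)-NN strategy used by HGNN-style models) is given below: each node acts as a master vertex and is grouped with its \(k\) nearest neighbours in feature space, and the resulting incidence matrix is fixed before training. Function and variable names are illustrative only.

```python
import torch

def knn_hyperedges(X: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Static k-NN hyperedge construction: one hyperedge per master vertex.

    X: (N, d) node feature matrix. Returns a binary incidence matrix H of shape
    (N, N) whose column j encircles node j and its k nearest neighbours.
    """
    dist = torch.cdist(X, X)                        # pairwise Euclidean distances
    knn = dist.topk(k + 1, largest=False).indices   # the node itself plus its k nearest neighbours
    H = torch.zeros(X.size(0), X.size(0))
    for j in range(X.size(0)):
        H[knn[j], j] = 1.0
    return H

# The hyperedges are fixed once and never adapt during training, which is exactly
# the limitation that the dynamic, jointly trained construction of LFH addresses.
H = knn_hyperedges(torch.randn(8, 16), k=3)
```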
Second, LFH can incorporate hyperedge information of different types in different heads, using a heterogeneous multi-head attention mechanism. The mechanism dynamically quantifies the weights of hyperedges based on the correlation between the node representation and the related hyperedge representations, making the weights more descriptive of the importance of the related hyperedges to the node. ## V Experiment In this section, we empirically evaluate the effectiveness of our LFH framework and analyze the impact of its key components on the final performance. We conduct our experiments on three different graph datasets, including DBLP, IMDB, and ACM. Additionally, we compare the performance of LFH with eleven baseline models, which include homogeneous graph learning, heterogeneous graph learning, and hypergraph learning models. ### _Experimental Setup_ #### V-A1 Dataset The experiments are conducted on three datasets, which are popularly used in classification tasks. The details of these datasets are summarized in Table II. * **DBLP** is an academic network from four research areas, including database, machine learning, data mining and information retrieval. It uses Paper (P), Author (A), and Conference (C) as different node types, while edges are presented as P-A, A-P, P-C and C-P with different edge types. The four research areas are used as labels for this dataset. The initial node features are calculated using bag-of-words. * **ACM** shares similar data characteristics with DBLP. It contains Paper (P), Author (A) and Subject (S) as node types, along with four types of edges (P-A, A-P, P-S and S-P). The papers are labelled into three classes (Database, Wireless Communication, Data Mining). It also uses bag-of-words to construct the initial node features. * **IMDB** contains movies (M), actors (A) and directors (D). Each movie is labelled according to its genre (Action, Comedy, Drama). Node features are also initialized using bag-of-words. #### V-A2 Baseline Models and Configurations We compare LFH with eleven state-of-the-art baselines, each of which reports promising results in the classification task. Although these baselines share the same goal of learning the representation of the graph, they are originally designed to fulfil graph learning on different graph data, including homogeneous pairwise graph learning, pairwise heterogeneous graph learning, and hypergraph learning. * **GCN**[5] is a graph convolutional network designed specifically for homogeneous pairwise graph learning. The depth of the layer in GCN is set to 2. 
* **GAT**[6] is the first work that introduces the attention mechanism in homogeneous graph learning. It enables weighted message aggregation from neighbour nodes. The number of attention heads is set to 3. * **GraphSAGE**[17] designs a sampling approach when aggregating messages from neighbour nodes. It also supports different aggregation functions. The sample window of GraphSAGE is set to 10. * **GraphSAINT**[20] splits nodes and edges from a bigger graph into a number of subgraphs, on which the GCN is applied for node representation learning. We adopt the node sampling strategy for GraphSAINT and use 8000 as the node budget and 25 as the number of subgraphs. * **HAN**[24] uses attention techniques on heterogeneous graph learning, in which the node embeddings are updated through manually designed meta-paths. The number of attention heads is set to 8. * **HGT**[25] designs type-specific attention layers, assigning different trainable parameters to each node and edge type. For the best performance of HGT, the number of attention heads is set to 8. * **R-HGNN**[26] proposes a relation-aware model to learn the semantic representation of edges while discerning the node representations with respect to different relation types. It is designed to learn heterogeneous graph representations. The depth of the layer in R-HGNN is set to 2. * **PC-HGN**[7] employs a sampling-based convolution. It designs an efficient cross-relation convolution that allows message aggregations of a node from different connected relation types simultaneously. We set the number of kernels to 64 and the pooling size to 2, following the best reported performance. * **HL**[36] uses hypergraphs to represent complex relationships among objects, where attribute values are regarded as hyperedges. The value of the regularization factor is set to 0.1. * **HGNN**[9] performs convolution with a hypergraph Laplacian, which is further approximated by truncated Chebyshev polynomials to handle the data correlation during representation learning. The number of neighbors for generating hyperedges is set to 10. * **HGNN+**[30] utilizes the multi-modal/multi-type correlation from each modality/type, and conducts hypergraph convolution in the spatial domain to learn a general data representation for various tasks. The value of \(k\) for selecting \(k\)-hop neighbors to generate hyperedges is set to 1. The shared parameter setup of the framework can be found in Table III. We randomly split all datasets into train/validate/test with the ratio of 0.2/0.1/0.7, respectively. It is worth mentioning that the hyperparameters of all baseline models are set to the optimal values reported in their original papers. All models are trained for a fixed 100 epochs, using an early stopping strategy when the performance on the validation set does not improve for 30 consecutive epochs. All trainable parameters of the neural network are initialized through Xavier [37] and optimized using Adam [38] with a learning rate of 2e-3. The dropout rate is set to 0.3 [39]. 
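The shared training setup described above can be summarized in a short sketch. The optimizer, learning rate, dropout rate, epoch budget, and early-stopping patience follow the stated configuration, while the model and the train/evaluate steps are placeholders rather than the actual LFH implementation.

```python
import torch
import torch.nn as nn

def init_weights(module: nn.Module) -> None:
    # Xavier initialization for all trainable weight matrices [37].
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Placeholder model standing in for the LFH framework and its task head.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Dropout(p=0.3), nn.Linear(256, 3))
model.apply(init_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)   # Adam with learning rate 2e-3 [38]

best_val_f1, patience, bad_epochs = 0.0, 30, 0
for epoch in range(100):                 # fixed budget of 100 epochs
    # train_one_epoch(model, optimizer)  # placeholder for the actual training step
    val_f1 = float(torch.rand(()))       # placeholder for evaluation on the validation split
    if val_f1 > best_val_f1:
        best_val_f1, bad_epochs = val_f1, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:           # early stopping after 30 epochs without improvement
        break
```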
The optimal selection of hyperparameters of our model is further analyzed in the following section. ### _Performance Analysis_ We evaluate the performance of our proposed framework on node classification and link prediction tasks. #### Iv-B1 Node Classification In the node classification task, we split the original data into training data and test data. The training data are then split at ratios of [10%, 30%, 50%, 70%]. The strategy of generating the training dataset is the same as [41], where each randomly selected node has at least one pairwise relation with others. The best F1 results among all datasets are demonstrated in Table IV, in which the highest scores are marked in bold. We compare three different categories of graph learning models: homogeneous pairwise graph learning models, heterogeneous pairwise graph learning models, and hypergraph learning models. Our proposed LFH achieves a performance gain of 2%-35.8% over the other baseline models in all datasets. Our model outperforms all baseline models in all datasets by an average improvement of 12.5%. Compared to the four homogeneous pairwise GNN baselines GCN, GAT, GraphSAGE and GraphSAINT, the average F1 score improvement of LFH for all datasets is 14.4%. Regarding the selected heterogeneous pairwise graph learning models and hypergraph learning models, our model achieves an average improvement of 9.7% and 13.7%, respectively, compared to the best-recorded results in all datasets. It is worth noting that HGNN+ [30] is the only hypergraph learning baseline that utilizes pairwise information for hyperedge generation, resulting in better performance compared to other hypergraph learning models, except for our model. Furthermore, the training process of our model, which incorporates type-specific hyperedge generation and attentive embedding updates that exploit the heterogeneity of the graph, enables LFH to improve over HGNN+ by an average of 5.7% in all datasets. Additionally, compared to the latest heterogeneous learning model PC-HGN [7], the best results of our model gain an average increase of 5.2%. For different split ratios of training data, we observe that LFH still outperforms all baselines. When the split ratios are 70%, 50%, 30% and 10%, respectively, our model outperforms other models by 8.6%, 8.7%, 10.9% and 17.7% on average. The obtained results demonstrate the robustness and generalization capability of our model even when the training data are limited. #### Iv-B2 Link Prediction To investigate the effectiveness of our proposed framework, we also apply our model to the link prediction task. In this task, a graph with a certain fraction of edges removed is given, and the objective is to predict these missing edges. Following the experiment setting adopted in [40], 50% of the existing edges are randomly hidden as positive samples while ensuring that the residual graph after edge removal remains connected. To generate negative examples, we randomly sample an equal number of node pairs from the graph which have no edge connecting them. Similar to the related works [10, 42], we use a logistic regression classifier as the predictor for link prediction. To get the representation of each edge, we use the Hadamard operator to compute the element-wise product of the embeddings of the linked node pairs. We compare the performance of LFH with four baselines. Node2vec [40] is a classic model for the link prediction task. 
The other three baselines, GAT, PC-HGN, and HGNN+, are the top-performing baselines in homogenous pairwise graph learning models, heterogeneous pairwise graph learning models, and hypergraphs learning models in node classification tasks. Experimental results are summarized in Table V. LFH obtains performance gains of 16.8%, 18.8%, 2.9% and 14.6%, compared with Node2vec, GAT, PC-HGN, and HGNN+ and outperforms four baseline models in three datasets by an average improvement of 13.3%. It shows the promising capability of our model to capture implicit data relations. It is worth noting that HGNN+ that performs well in node classification does not have comparable performance in link prediction. This is because the HGNN+ applied for link prediction task heavily relies on pairwise edges to generate hyperedges, and thus a sharp decrease in the number of existing edges leads to a deterioration in performance. ### _Impact of Hyperedge Construction_ In our proposed LFH, \(\lambda\) and \(\gamma\) are used as hyperparameters. Specifically, \(\lambda\) is the weight hyperparameter of the reconstruction error and \(\gamma\) is the norm hyperparameter to trade off \(l_{1}\)-\(norm\) regularization norm and \(l_{2}\)-\(norm\) regularization of the reconstruction coefficient vector (See Eq. 6). Fig. 5 and 6 illustrate the overall performance of LFH influenced by \(\lambda\) and \(\gamma\). Fig. 5 illustrates the influence of \(\lambda\) for different datasets on F1 score. In this figure, we fix \(\gamma\) as 0.02, 0.2 and 0.2 while \(\lambda\) ranges between 0.0001 and 1. When \(\lambda\) is small, the reconstruction loss of \(\lambda c^{k}\left(v_{i}^{m}\right)\) becomes trivial, and the reconstruction coefficient vector \(p\) will not experience large adjustments throughout the training phase, thus affecting the generation of hyperedges and deteriorating the performance of LFH. In contrast, when \(\lambda\) increases to around 0.2, reconstruction loss of \(\lambda c^{k}\left(v_{i}^{m}\right)\) accounts for the appropriate proportion of the total loss, thus achieving the best performance under this setup. When \(\lambda\) continually increases to 20, F1 score drops rapidly, which clearly demonstrates that the dominance of reconstruction loss compromises the generalization ability of LFH. On the other hand, Fig. 6 describes the influence of \(\gamma\). In this figure, we fix \(\lambda\) as 0.001, 0.01 and 0.1, while \(\gamma\) ranges between 0.002 and 20. When \(\gamma\) is very small, The decisive role of \(l_{1}\)-\(norm\) regularization enables our model to encircle fewer slave nodes to generate hyperedges, which makes our model more sensitive to noise and outliers. When \(\gamma\) is large, the dominance of \(l_{2}\)-\(norm\) regularization leads to the over-smoothing problem. With \(\gamma\) ranging in \([0.0001,1]\), the F1 score reaches the highest when \(\gamma=0.2\) for all fixed value of \(\lambda\) in all datasets, and decreases afterwards. The result demonstrates that considering \(l_{1}\)-\(norm\) regularization norm and \(l_{2}\)-\(norm\) regularization simultaneously can improve the generalization ability of our model. ### _Sensitivity Analysis_ We study the impact of key parameters in LFH including the node embedding size and the number of attention head as illustrated in Table VI and VII. #### Vi-A1 Impact of the node embedding size We test the impact of different node embedding sizes for all datasets with data split ratio of 10%. 
As shown in Table VI, the performance of the model increases at first and then starts to drop, with the best result reported at an embedding size of 256. Intuitively, larger node embedding dimensions introduce extra redundancy into the training process, causing unexpected performance drops. #### Vi-A2 Impact of the number of attention heads We analyze the effect of the number of attention heads \(K\). As observed in Table VII, increasing the number of attention heads typically improves the performance of our model. However, beyond a certain point the performance of LFH improves only marginally while incurring large computational costs. \(K=4\) is therefore an optimal point for trading off performance against computational cost. Fig. 5: Performance of LFH as a function of \(\lambda\) for several values of \(\gamma\) for different datasets. Fig. 6: Performance of LFH as a function of \(\gamma\) for several values of \(\lambda\) for different datasets. ### _Ablation Study_ We study the impact of pairwise fusion in this section. We consider different candidates for the pairwise fusion process, aiming to find the one best able to derive high-quality features from the heterogeneous pairwise graph as the initial node embedding. We analyze the importance of pairwise fusion by implementing the different models introduced in Section V-A. Note that we set the hyperparameters of all these models according to the best performance reported in the respective papers and source code. As shown in Fig. 7, using PC-HGN in pairwise fusion enlarges the performance gap over the other models in all datasets, with a performance gain of 5.4%-14.4% compared to using GAT, a gain of 6.6%-18.0% compared to using HGT, and an overall 11.5% average gain compared to using the other models in pairwise fusion. This result demonstrates that the initial node embedding generated by PC-HGN helps our framework achieve the best results in all three datasets. ### _Impact of Hyperparameter \(\alpha\)_ As defined in Eq. 6, the proposed united loss is a linear combination of the reconstruction loss in dynamic heterogeneous hyperedge generation and the supervised loss for the downstream task. We further study the impact of \(\alpha\) on the performance of LFH. In the experiment, we explore the sensitivity of \(\alpha\), which controls the level of impact of the reconstruction of each master node when training the model. When the value of \(\alpha\) is close to 1, the model is optimized towards reconstruction correctness. Fig. 8 reveals the change of the F1 score along with \(\alpha\). With \(\alpha\) ranging in \([0.001,0.5]\), the performance first peaks and then drops significantly, indicating that over-reliance on the reconstruction loss degrades the generalization ability of our model. We observe that the F1 score reaches its highest value when \(\alpha=0.1\) in all three datasets. The result demonstrates that incorporating the reconstruction loss into the training process, to an appropriate extent, helps improve model performance. ## VI Conclusion In this paper, we propose a heterogeneous hypergraph learning framework for representation learning. This framework first generates high-quality initial node embeddings using the designated pairwise fusion function, aiming to exploit the pairwise graph information as much as possible. Afterwards, the multi-type hyperedges are constructed dynamically, together forming the hypergraph.
The embedding is then updated iteratively through type-specific attention, aiming to encode the heterogeneous attributes into the embedding space. We conduct comprehensive experiments on three widely-used public datasets and comparisons with eleven baseline methods to demonstrate the effectiveness of our proposed framework. The results and analysis reveal that the proposed framework achieves new state-of-the-art performance on both node classification and link prediction tasks.
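For reference, one plausible reading of the united objective analysed above is sketched below; since Eq. 6 itself is not reproduced here, the exact combination of terms, the tensor names, and the elastic-net form of the coefficient regularization are all assumptions rather than the authors' implementation.

```python
# Illustrative sketch only (assumed form): the supervised task loss is combined
# with a lambda-weighted master-node reconstruction error whose coefficient
# vector p carries an l1/l2 trade-off controlled by gamma; alpha balances the
# two parts of the united loss.
import torch

def united_loss(task_loss, x_master, D_slaves, p, alpha=0.1, lam=0.2, gamma=0.2):
    recon_err = lam * torch.sum((D_slaves @ p - x_master) ** 2)  # reconstruction error
    reg = torch.sum(torch.abs(p)) + gamma * torch.sum(p ** 2)    # l1 + gamma * l2 on p
    return alpha * (recon_err + reg) + (1.0 - alpha) * task_loss

# toy usage: one master node reconstructed from 8 candidate slave nodes
p = torch.randn(8, requires_grad=True)               # reconstruction coefficients
loss = united_loss(torch.tensor(0.5), torch.randn(16), torch.randn(16, 8), p)
loss.backward()
```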
2305.03268
Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework
As large language models (LLMs) have become the norm in NLP, demonstrating good performance in generation and reasoning tasks, one of their most fatal disadvantages is the lack of factual correctness. Generating unfactual texts not only leads to lower performances but also degrades the trust and validity of their applications. Chain-of-Thought (CoT) prompting improves trust and model performance on complex reasoning tasks by generating interpretable reasoning chains, but still suffers from factuality concerns in knowledge-intensive tasks. In this paper, we propose the Verify-and-Edit framework for CoT prompting, which seeks to increase prediction factuality by post-editing reasoning chains according to external knowledge. Building on top of GPT-3, our framework leads to accuracy improvements in multiple open-domain question-answering tasks.
Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, Lidong Bing
2023-05-05T03:49:14Z
http://arxiv.org/abs/2305.03268v1
# Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework ###### Abstract As large language models (LLMs) have become the norm in NLP, demonstrating good performance in generation and reasoning tasks, one of their most fatal disadvantages is the lack of factual correctness. Generating unfactual texts not only leads to lower performances but also degrades the trust and validity of their applications. Chain-of-Thought (CoT) prompting improves trust and model performance on complex reasoning tasks by generating interpretable reasoning chains, but still suffers from factuality concerns in knowledge-intensive tasks. In this paper, we propose the Verify-and-Edit framework for CoT prompting, which seeks to increase prediction factuality by post-editing reasoning chains according to external knowledge. Building on top of GPT-3, our framework leads to accuracy improvements in multiple open-domain question-answering tasks. For reproducing our results and extending the framework further, we make our codebase available at [https://github.com/RuochenZhao/Verify-and-Edit](https://github.com/RuochenZhao/Verify-and-Edit) ## 1 Introduction Large Language Models (LLMs) have become the new norm in many downstream NLP tasks. In utilizing these LLMs, Chain-of-Thought (CoT) prompting (Wei et al., 2022) is found to improve performances for tasks that require complex reasoning, such as math word problems, commonsense reasoning, and symbolic manipulation. At the same time, it is able to generate interpretable reasoning chains. Recent work has further explored how to use these reasoning chains to select better predictions. However, the primary focus of these methods has been to improve end-task performance by utilizing generated CoTs as-is. For example, Ye and Durrett (2022) train a calibrator that tunes prediction probabilities based on rationale scores; Wang et al. (2022) sample multiple reasoning paths to find the most common (consistent) prediction. Only a few, such as Creswell et al. (2022) and Zhou et al. (2022), have explored ways to improve the quality of CoTs themselves. In fact, improving the CoT quality could be beneficial in enhancing both interpretability and end-task performance. Ye and Durrett (2022) point out that explanations judged as good by humans often indicate more accurate predictions. Intuitively, a better set of CoT prompts could provide better grounding and logically consistent thought processes, thus leading to more accurate predictions. Figure 1: The Verify-and-Edit framework consists of five steps: (1) pass predictions with lower-than-average consistency to the next stages while leaving highly consistent predictions as-is; (2) produce verifying questions; (3) retrieve external knowledge; (4) edit rationales with informed answers; and (5) produce new predictions. To improve generation quality, one important aspect is _factual correctness_, which is currently one of the most fatal drawbacks of LLMs (OpenAI-Blog, 2022; Zhao et al., 2023). In answering user queries, LLMs such as GPT-3 (Brown et al., 2020) tend to make up facts and details, which is now flagged as a primary warning in their API usage. As a major use case of LLMs is the prospect of replacing traditional search engines with more direct information access through question answering, factuality concerns could largely undermine their validity and degrade users' level of trust (Marcus, 2022).
Fixing this issue is challenging and the concerns still persist even after the models are instruction-tuned with human feedback (Ouyang et al., 2022). This is because the source of truth can be unavailable during the finetuning process (OpenAI-Blog, 2022). Thus, it is of urgent concern to better control the generation and increase the factual correctness of predictions. As LLMs could fail to recall accurate details when functioning as a knowledge base (Ye and Durrett, 2022; Creswell et al., 2022), if possible, knowledge from external sources could be introduced as assistance. Assisted thought process is also common in human reasoning: when humans answer questions, they often search (or revisit) external knowledge sources for supporting facts in order to refresh their (internal) memory. Inspired by this, in this work we propose a **Verify-and-Edit** (VE) framework to post-edit the reasoning chains for more factually aligned predictions. As shown in Fig. 1, we first select uncertain instances to edit, which have a less-than-majority-agree consistency. These instances, as implied by Wang et al. (2022), often consist of plausible-sounding statements, such as the sentence "John Nyskohus played for the Norwegian football team Odd Greenland" in Fig. 1. When editing, we first generate a question to verify this detail, such as "What team did John Nyskohus play for?" Then, to answer this query, we introduce external knowledge through open-domain retrieval systems. For example, the fact "John Nyskohus... played for Adelaide City.." is retrieved in this instance. Then, the rationales are edited by providing the retrieved facts in the prompts as memory refreshments. Thus, the edited rationales could be updated corresponding to the retrieved facts (Fig. 1). Given the edited rationales, the new prediction is generated, which considers more factually aligned reasoning traces. To our knowledge, our work is the first to post-edit CoT-style reasoning chains to enhance prediction performance. We perform experiments on two open-domain Question Answering (QA) tasks that require reasoning: Adversarial HotpotQA (Yang et al., 2018) and 2WikiMultihop (Ho et al., 2020). We also test its performance on the Fact Verification task using Fever (Thorne et al., 2018). We find that the model is able to benefit from more factual reasoning chains, thus generating more accurate predictions. For example, for open-domain QA, our model demonstrates 3.8x accuracy improvement compared to similar retrieval-augmented models on AdvHotpot. On 2WikiMultihop, Verify-and-Edit reaches 33.6% accuracy with open-domain search, while CoT Self-Consistency stands at 27.7%. ## 2 Related Work Chain-of-Thought or CoT (Wei et al., 2022) is a prompting method for improving the reasoning abilities of LLMs, which enables LLMs to decompose complex problems into multiple intermediate steps. CoT provides interpretability and has been proven to be more capable of solving complex problems than standard prompting methods. However, hallucination is a long-standing problem in NLP, especially for LLMs, which has drawn significant attention from the research communities. The decoding process of LLMs is auto-regressive, which unavoidably makes it output nonfactual content without controlled generation (Ye and Durrett, 2022; Wiegreffe et al., 2022). As such, the lack of supporting facts during the generation process of CoT could largely undermine the validity of the final answer (Golovneva et al., 2022). 
Ye and Durrett (2022) demonstrate that the accuracy of the final answers largely correlates with the factuality and consistency of the reasoning explanations. The commonly proposed methods to improve the factuality of CoT reasoning process can be grouped into two categories: prompt engineering and result calibration. Prompt engineering methods are usually applied to guide LLMs to generate better intermediate reasoning explanations. _ReAct_(Yao et al., 2022), which is the most comparable to our work, synergizes reasoning and acting in LLMs, where reasoning steps help the model induce and update actions, while action steps allow the model to consult additional information from Wikipedia for a factuality check. Compared to _ReAct_, we generate more natural and conversational CoTs for better interpretability and easier learning. As such, our framework requires a much shorter prompt to learn. Press et al. (2022) propose _self-ask_ by instructing the LLM to explicitly ask itself (and then answer) follow-up questions before answering the initial question. One natural way of solving a complex problem is to decompose the problem into sub-problems and solve them sequentially. Zhou et al. (2022) adopt the idea and propose _least-to-most_ prompting. However, both _self-ask_ and _least-to-most_ prompting still rely on repetitively retrieving internal knowledge learned by the LLM instead of connecting to external knowledge. Thus, their ability to improve factuality is limited. Result calibration functions on the output of the LLMs. Ye and Durrett (2022) train a calibrator to calibrate the weights of the final answers based on the factuality and consistency of the generated explanations, which efficiently improves the results. The decoding method in CoT is naive greedy, which simply outputs the next token with the highest probability. Wang et al. (2022) propose a _self-consistency_ decoding method, which samples a diverse set of reasoning paths and then selects the most consistent answer by marginalizing out the sampled reasoning paths. _Selection-Inference (SI)_(Creswell et al., 2022) framework is another state-of-the-art method that exploits LLMs as general processing modules. Out of all the methods, it is also the first to systematically improve the factual correctness of CoTs in order to predict more accurately. It alternates between selection and inference to generate a series of interpretable, causal reasoning steps leading to the final answer, which is proven to be efficient. However, it is not designed for open-domain or commonsense question answering. Moreover, another comparable line of work has been exploring retrieval-augmented language model pretraining (REALM) (Guu et al., 2020), which first retrieves documents from an external knowledge source and then utilizes retrieved documents to process question-answering tasks. Lazaridou et al. (2022) propose to include Google search results of the question in the prompt to improve the factuality of the generated answer. However, such methods may fail in complex questions as it does not utilize the reasoning capability of LLMs. Thus, we consider retrieval-augmented reasoning paths as a natural way to increase factual alignment. ## 3 Verify-and-Edit Framework Our goal is to make LLMs generate more factual reasoning chains with CoT prompting assisted with external knowledge, thereby also improving prediction accuracy of the final answer. 
We hypothesize that this can enhance LLMs' capability to solve complex knowledge-intensive tasks that require multiple reasoning steps to arrive at an answer. Generally, we hope to follow the human reasoning process: when a person answers a question, if he/she is unsure, he/she would search for a supporting fact and consider it before giving the final answer. Thus, we could separate the Verify-and-Edit (VE) framework into 3 different stages: finding uncertain predictions, editing their rationales by searching for supporting facts, and using the edited rationales to generate final answers (Fig. 1). In designing the stages, we hope to maximally preserve the LLMs' biggest advantage: their open-generation and reasoning ability. And we aim to design tasks and setups as natural and conversational as possible, thus making it easy to understand for humans and LLMs which are trained with natural texts. ### Deciding when to edit How can we identify when a model is unsure of its prediction? The self-consistency method (Wang et al., 2022) provides a solution. In sampling diverse reasoning paths and answers, self-consistency is found to be highly correlated with accuracy, suggesting that it could provide an uncertainty estimate and confer abilities for the model to "know when it doesn't know". Thus, we begin the VE framework by using the consistency method to sample \(n\) diverse reasoning paths for a prediction task. The highly consistent predictions are left as-is. When consistency is lower than \(\lceil n/2\rceil\), _i.e_. the majority cannot agree on the same answer, we label it as "uncertain". ### How to edit a specific rationale The rationale, _i.e_. the thought process (CoT), could be viewed in two parts: facts and reasoning which combines facts to derive a new claim. Thus, we consider improving the CoT from both aspects. \(\bullet\)**Facts** To make the thought process more factually correct, we search for supporting facts in external knowledge sources (_e.g_. Wikipedia, Google). First, to mimic a human's query when searching for validating facts, a natural question is generated to verify the rationale. For this, we use the in-context learning capability of the same LLM. The original question and the rationale are both provided in the prompt for verifying question generation to ensure that it asks for the most relevant information required to answer the original question, instead of other entities in the rationale. For example, if the rationale (wrong) is "the US president born on 4 August 1961 is John Kennedy." and the original question is "who is the spouse of the US president born on 4 August 1961", we expect the generated verifying question to be: "Who is the US president born on 4 August 1961?" instead of "When is John Kennedy's birthday?" By generating a relevant question instead of directly querying with the generated rationale, we eliminate potential noise brought by incorrect fact generation. In the example above, if one retrieves using the wrong claim "the US president born on 4 August 1961 is John Kennedy", the incorrect entity "John Kennedy" may obfuscate the search process. In this paper, we use relevant contexts retrieved from 3 systems: (_i_) DrQA (Chen et al., 2017), an open-domain question-answering system; (_ii_) Wikipedia search of relevant pages; and (_iii_) Google search, which demonstrates possibilities of combining LLMs and search engines. 
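Stepping back to the selection stage of §3.1, a minimal sketch of the "when to edit" rule is given below; the function and variable names are illustrative assumptions, not the released implementation.

```python
# Illustrative only: sample n (rationale, answer) pairs with self-consistency,
# and route an instance to verification/editing when the majority answer
# occurs fewer than ceil(n/2) times.
import math
from collections import Counter

def needs_editing(sampled_answers):
    n = len(sampled_answers)
    top_answer, top_count = Counter(sampled_answers).most_common(1)[0]
    return top_count < math.ceil(n / 2), top_answer

# toy usage with n = 5 sampled reasoning paths
uncertain, answer = needs_editing(
    ["Odd Grenland", "Adelaide City", "Adelaide City", "Norway", "Odd Grenland"])
print(uncertain, answer)  # True -> this instance is passed through Verify-and-Edit
```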
As the retrieved contexts from a retrieval system could be longer than desired, we use a pre-trained LM to rank and select the top-\(k\) sentences most similar to the verifying question query. \(\bullet\)**Reasoning** While methods such as Selection-Inference (Creswell et al., 2022) directly use retrieved facts as rationales, they are usually too verbose, longer than desired, or contain irrelevant details. Ye and Durrett (2022) have made similar observations: directly using supporting sentences is usually too verbose and not sufficient. To obtain more relevant and logical rationales, we again utilize a natural and generative approach, as reasoning abilities are believed to be already built into LLMs (Wei et al., 2022). In particular, by feeding in prompts in the format of "question, rationale, answer", the LLM learns to reason for a few steps before answer generation. Upon investigating the original rationales, we observe that, even when they contain incorrect facts, the logical reasoning component seems to be generally intact. Thus, we use the verifying questions (as logic) and retrieved facts (as information) to generate informed answers. The informed answers are then composed into a new rationale, providing potentially a more factual CoT. ### Answering again Finally, with the post-edited CoT, new answers are generated by prompting the LLM. A pseudocode of the overall procedure is given in Alg. 1, and illustrated with an example in Fig. 1. We can see that, by allowing the LLM to incorporate external knowledge, our method could result in more factually-grounded rationales. When prompted into the LLM as a CoT, it could bring in the information necessary to make a new prediction, which was originally not remembered correctly by the model. Compared to specifically designed prompts such as ReAct Yao et al. (2022), the Verify-and-Edit framework is simple and arguably more natural. Its conversational nature could allow humans to better understand the model's thought processes and have the potential for users to naturally interfere and revise at any stage of inference. In the experiments presented next, we also observe that such a setup is effective in mitigating factuality concerns and boosting end-task performances. ## 4 Experiment Setup ### Reasoning tasks As the Verify-and-Edit framework offers more knowledge-grounded reasoning steps, it should benefit tasks that fulfill the following two properties: (_i_) reliant on multi-hop reasoning to arrive at a later prediction, thus depending on rationale generation, and (_ii_) open-domain, thus needing to interact with an external knowledge source. Therefore, we validate the approach on three datasets: (_i_) **Adversarial HotpotQA**Yang et al. (2018), a multi-hop question answering dataset. We use the challenging subset proposed by Ye and Durrett (2022), where the correct and incorrect predictions are balanced using their model. (_ii_) **2Wiki-Multihop**Ho et al. (2020) a multi-hop question-answering dataset exploiting the structured format in Wikidata and use logical rules.1 (_iii_) **Fever**Thorne et al. (2018), a fact verification dataset that labels claims as "SUPPORTS", "REFUTES", or "NOT ENOUGH INFO" based on evidence paragraphs from Wikipedia. Similar to the HotpotQA setup, we sample a challenging set by balancing the samples where GPT3 CoT makes correct and incorrect predictions. Details on the processing and use of the datasets can be found in Appendix A. 
Footnote 1: We randomly sample 1,000 samples out of 12,576 dev samples for cost considerations. ### Compared methods To provide the most state-of-art performance estimates, we utilize the GPT-3 instruct series API text-davinci-003 Ouyang et al. (2022), the strongest and most up-to-date model at the time of experiments, as a backbone. The cost of experiments is stated in Appendix B. Adversarial HotpotQA and 2WikiMultihop experiments used 6-shot and Fever used 3-shot in-context learning, as Fever questions are shorter and easier to learn. We use the manual annotations provided for HotpotQA by Ye and Durrett (2022) and manually annotate few-shot examples for 2WikiMultihop and Fever in a similar format. Full prompts for baseline and our methods are provided in Appendix C. BaselinesTo provide a more comprehensive overview of where our framework stands, we use the following baselines: 1. **Standard Prediction** (Standard): Directly predicting the label based on input, given the same number of in-context learning examples. 2. **Original CoT**Wei et al. (2022): Predicting the label after generating the explanation. 3. **CoT with Self-Consistency**CoT-SC) Wang et al. (2022): Sampling 5 CoT trajectories with a decoding temperature of 0.7, which is recommended by the paper. 4. **Calibrator**Calib.) Ye and Durrett (2022): A calibrator that tunes the probabilities of a prediction based on the score of its prediction. 5. **ReAct**Yao et al. (2022): A reason-and-act framework that utilizes an external Wikipedia API. For this baseline, we use the reported results in the original paper, which uses the PaLM model Chowdhery et al. (2022), whose performance is similar to GPT-3.2 To add a more justified perspective, we report its performance improvement gained on top of the CoT-SC baseline. 3 Footnote 2: We could not use PaLM as it is not open-sourced. Footnote 3: it is worth noting that ReAct conducted experiments on the entire dataset, where we used a sampled version (see §4.1). Verify-and-Edit (VE)In implementing the VE framework, the same consistency baseline is employed to estimate when the model is uncertain. As stated in §3.1, we edit all instances with a self-consistency score below \(\lceil n/2\rceil\), where \(n\) is the number of sampled paths. Then, the verifying questions are produced using a 2-shot4 setup with in-context learning. The verifying answers are produced using the same number of examples in original answer generation and greedy decoding. To study the effect of knowledge retrieval systems on the results, we use four systems: 1. **Wikipedia-API** (wiki): Searching for the query entities and selecting top sentences from their Wikipedia pages. 2. **DrQA**[10]: A pre-trained open-domain QA model that combines bigram hashing, TF-IDF matching, and a multi-layer recurrent neural network model. We only utilize the contexts retrieved from it.5 Footnote 5: We selected DrQA by first conducting small-scale experiments with different open-domain QA models, including DPR [11]. DrQA is found to yield better performance. Thus, we consistently use it. 3. **Google**: Using top-\(k\) search results produced by Google as assistive contexts. This result is interesting in providing possibilities in combining search engines and LLMs. 4. **Dataset**: Selecting from the set of paragraphs provided in Adversarial HotpotQA and 2Wiki-MultihopQA, which includes ground-truth supporting contexts and distractor paragraphs. 
This is similar to an oracle setup, which provides an upper bound of the performance boost, assuming we have a good retrieval system. For 1, 2, and 4, after retrieving, we select the top 3 sentences most similar to the query ranked by the pre-trained Sentence BERT model [12] as context. ## 5 Results and Analysis ### Using Self-Consistency: know when it doesn't know For the first step in the Verify-and-Edit framework, consistency is used to measure the model's confidence in a prediction. Aligned with the findings from Wang et al. (2022), we hypothesize that when the consistency is low, the model is more uncertain and thus more likely to generate inaccurate predictions. To test whether this hypothesis holds, we plot the kernal density estimation plots for consistency distribution on the Adversarial HotpotQA dataset. As shown in Fig. 2, the incorrect samples show a left-skewed consistency distribution, where most incorrect predictions have low consistencies. On the other hand, the distribution of correct predictions shows a right-skewed tendency, where there are very few incorrect samples with higher consistencies. This effectively validates our hypothesis. In the main experiments, we use \(\lceil n/2\rceil\) as a majority threshold and edit all samples below it, which is at \(3\). To show the effects of different thresholds on the framework's performance, we also provide an ablation study later. ### Results on HotpotQA Reported in Table 1, we observe that CoT improves on top of the Standard few-shot setting. CoT-SC, on the other hand, does not demonstrate a good improvement on the baseline. Using the calibrator from Ye and Durrett (2022), AUC is improved as it learns to calibrate the answer weights based on ground-truth contexts provided in the dataset. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & **knowledge** & **EM** & \(\Delta\)**EM** & **AUC** \\ \hline CoT-SC \(\rightarrow\) ReAct & Wiki. & 34.2\% & +0.8\% & - \\ ReAct \(\rightarrow\) CoT-SC & Wiki. & 35.1\% & +1.7\% & - \\ \hline Standard & - & 23.1\% & - & 43.24 \\ CoT & - & 31.8\% & - & 38.30 \\ CoT-SC & - & 31.2\% & - & 34.97 \\ CoT-SC + Calib. & Dataset & - & - & 49.00 \\ CoT-SC + VE & Wiki. & 35.7\% & +4.5\% & 45.62 \\ CoT-SC + VE & DRQA & 36.0\% & +4.8\% & 46.06 \\ CoT-SC + VE & Google & 37.7\% & +6.5\% & 47.98 \\ CoT-SC + VE & Dataset & **56.8\%** & **+25.6\%** & **60.94** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on the Adversarial **HotpotQA** dataset. The best result for each model is underlined and the best result overall is bolded. \(\Delta\)EM represents the improvement on Exact Match from the CoT-SC baseline. The top two rows uses the PaLM model and the rest uses the GPT-3 davinci-003 model. Figure 2: Kernal density estimation plots for consistency on the Adversarial **HotpotQA** dataset. With kernal estimation, the curve extends its true distribution’s range, which is from 0 to 5 (as we sampled 5 paths). Thus, it should be compared with the last setup of VE, where we use dataset knowledge. In comparison, the calibrator results in a lower AUC and cannot improve the accuracy as it does not generate alternative answers in open-domain settings. Using the Verify-and-Edit framework, the retrieval systems Wikipedia and DrQA could generate an improvement of 4.5% and 4.8% respectively on top of the baseline, which is 2x the highest EM improvement for ReAct (1.7%). 
When we combine the search engine results from Google into the framework, the EM is increased by 6.5%, which is 3.8x the ReAct result. This shows a promising method for combining search engines and LLMs, which is a popular direction now. Search engines return factual results, but are less powerful in queries that require reasoning. On the other hand, LLMs are powerful in reasoning and abstraction but tend to generate plausible-sounding but incorrect statements [1, 13]. To combine the best of both worlds, we could utilize the long memory of LLMs, as many users have reported that GPT is able to remember inputs mentioned earlier in the dialogue. By providing factual results from the search engines as a memory refreshment, GPT is able to generate better and more factual predictions. Then, when we use the adversarially augmented paragraphs provided in the dataset, the model is able to demonstrate very high EM (56.8%) and AUC (60.94) at the same time. This setup shows that, if we have a highly compressed set of contexts and a nearly-ideal retrieval system, the Verify-and-Edit framework could potentially result in very strong performances. ### Results on 2WikiMultiHop As shown in Table 2, our method demonstrates even stronger performances on 2WikiMultiHop compared to HotpotQA. The Verify-and-Edit framework with open-domain retrieval is able to generate a high accuracy improvement, ranging from 3.4% to 5.9%. Selecting from paragraphs provided in the dataset, which includes supporting evidences and irrelevant paragraphs, the accuracy improvement is further increased to 9.5%. The calibrator, on the other hand, uses the dataset provided paragraphs but still lags behind all variations of our Verify-and-Edit framework. ### Results on fact verification Results on the Fever dataset are shown in Table 3. As the reasoning required by the Fever dataset is less multi-hop compared to HotpotQA and 2WikiMultiHop, we anticipate that it should demonstrate lower improvements compared to the other two. In the Fever dataset, the calibrator method completely fails, decreasing to 33.7%: it calibrates the prediction scores based on factuality estimates, which is produced by examining the overlap between the reasoning path and the provided context. However, in such Fact Verification datasets, there is no provided contexts. Thus, we calibrate using the original claim, which results in bad performances. It shows here that one limitation of the calibrator method is that it only applies to cases with provided relevant contexts. Even though this task does not require much reasoning, employing the Verify-and-Edit framework, we are able to observe consistent improvements over the baseline method. Similar to before, the Wikipedia retrieval is able to result in a larger improvement over DrQA, and Google search improves further at 1.9%. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & **knowledge** & **EM** & \(\Delta\)**EM** & **AUC** \\ \hline Standard & - & 16.9\% & - & 35.89 \\ CoT & - & 28.4\% & - & 16.64 \\ CoT-SC & - & 27.7\% & - & 17.16 \\ CoT-SC + Calib. & Dataset & - & - & 24.13 \\ CoT-SC + VE & Wiki. & 33.1\% & +5.4\% & 28.32 \\ CoT-SC + VE & DRQA & 31.1\% & +3.4\% & 27.75 \\ CoT-SC + VE & Google & 33.6\% & +5.9\% & 30.06 \\ CoT-SC + VE & Dataset & **37.2\%** & **+9.5\%** & **32.28** \\ \hline \hline \end{tabular} \end{table} Table 2: Results on **2WikiMultiHopQA** dataset. \(\Delta\)EM represents the improvement on Exact Match from the CoT-SC baseline. 
All experiment uses the GPT-3 davinci-003 model. \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **knowledge** & **Accuracy** & \(\Delta\)**Accuracy** \\ \hline CoT-SC \(\rightarrow\) ReAct & Wiki. & - & +4.2\% \\ ReAct \(\rightarrow\) CoT-SC & Wiki. & - & +1.6\% \\ \hline Standard & - & 46.8\% & - \\ CoT & - & 50.0\% & - \\ CoT-SC & - & 52.0\% & - \\ CoT-SC + Calib. & - & 33.7\% & \\ CoT-SC + VE & Wiki. & 53.6\% & +1.6\% \\ CoT-SC + VE & DRQA & 53.3\% & +1.3\% \\ CoT-SC + VE & Google & 53.9\% & +1.9\% \\ \hline \hline \end{tabular} \end{table} Table 3: Results on **Fever** dataset. \(\Delta\)Accuracy represents the improvement on Accuracy from the CoT-SC baseline. The top two rows uses the PaLM model and the rest uses the GPT-3 davinci-003 model. Compared to our method, ReAct is able to demonstrate a larger improvement on Fever. First of all, it has been mentioned before that Fever is less suited for the Verify-and-Edit framework as it requires less reasoning to solve the task. Secondly, ReAct prompts are much longer than our prompts, requiring more computational costs. ### Cost considerations As cost reduction is a main concern when interacting with LLMs, our method takes it into consideration and attempts to reduce computational costs from two aspects: Firstly, Verify-and-Edit only makes edits for selected instances, whereas others edit every time. Specifically, we only revise when the model is uncertain (judged by consistency), which occurs 40% of the time. As a comparison, other methods, such as ReAct, retrieve relevant information and edit for every single instance, resulting in higher costs. Secondly, Verify-and-Edit designs tasks that are natural and conversational, requiring only a few demonstrations and short prompts to learn. For example, other methods usually learn non-natural calls, such as [thought] and [action] tags in ReAct and API calls in Toolformer (Schick et al., 2023). Therefore, the LLM requires longer prompts, more demonstrations, or even fine-tuning to learn the format. On the other hand, we design Verify-and-Edit tasks to be as natural as possible, requiring minimal effort to learn. Our tasks only consist of asking and answering questions, with no synthetic tags or tasks to be learned. As a comparison, with the GPT-3 API, for editing one Fever instance, Verify-and-Edit costs $0.014, whereas ReAct costs $0.017. ### Evaluating the reasoning chains with human study To closely examine the faithfulness of the generated reasoning chains, we also conduct a small-scale human study experiment. During the experiment, two human volunteers are shown 50 randomly selected questions with generated reasoning chains from CoT-SC and Verify-and-Edit on the HotpotQA dataset. They are then asked to select the more factually consistent one. Volunteers are encouraged to use search engines as assistance. A detailed description on the setup is described in Appendix D. Shown in Table 4, humans select the reasoning chains produced by Verify-and-Edit as more factually consistent 53% of the time, compared to 17% for the CoT-SC baseline. The Cohen \(\kappa\) is at 0.25, showing fair agreement between the two annotators (McHugh, 2012). The annotators used Google search as an assistive tool 100% of the time, which shows the necessity of introducing external knowledge. Moreover, human annotations in this case require a lot of efforts. Annotators report 1.5 minutes on average to validate one data point. 
Thus, automating the Verify-and-Edit process is beneficial as an assistive tool to reduce human labor. To observe the qualitative effects of the Verify-and-Edit framework in detail, we also include several interesting examples in Appendix E, which show the effectiveness of our framework in correcting the original claims. ### Ablation study: editing at different consistency thresholds In the Verify-and-Edit framework, the only hyperparameter to select is the consistency threshold. Similar thresholds also exist in ReAct (Yao et al., 2022), where the CoT \(\rightarrow\) ReAct method is to employ ReAct-style prompting when "the majority answer among n CoT-SC samples occurs less than n/2 times". Using majority counts, however, is less fine-grained compared to using the original consistency formulated with log probabilities. Thus, we employ the original score proposed by Wang et al. (2022), which is the unnormalized answer probabilities marginalized over the rationales' log probabilities. \begin{table} \begin{tabular}{c c c c c} \hline \hline **\# Examples** & **Cohen \(\kappa\)** & **CoT-SC** & **Ours** & **Tie** \\ 50 & 0.25 & 17\% & **53\%** & 30\% \\ \hline \hline \end{tabular} \end{table} Table 4: Human study for factuality of CoTs on the HotpotQA dataset. “Ours” refers to the Verify-and-Edit model with Google retrieval. Figure 3: Ablation study on the effect of various consistency thresholds on task performances on Adversarial HotpotQA. To mimic a majority-vote threshold, we select \(\lceil n/2\rceil\), where \(n\) is the number of sampled paths. To study the effect of adjusting the consistency threshold on our framework, we show the ablation results on Adversarial HotpotQA in Fig. 3. As the threshold increases, accuracy first increases, reaching a peak close to \(\lceil n/2\rceil\), which is 3, before decreasing. The AUC scores demonstrate a similar trend. As shown in Fig. 2, when consistency is larger than the majority threshold (\(\lceil n/2\rceil\)), there are usually more correct predictions than incorrect predictions, and vice versa. Thus, as we increase the consistency threshold from 0 to \(\lceil n/2\rceil\), more uncertain and possibly incorrect samples are edited by introducing external knowledge. As we go beyond the ideal threshold \(\lceil n/2\rceil\), we are mostly re-editing correct samples, and the introduced noise may disrupt the original reasoning chains. Thus, we recommend a consistency threshold of \(\lceil n/2\rceil\) as the ideal level. ## 6 Conclusions In this paper, we introduce the Verify-and-Edit framework for open-domain question-answering. It is a first attempt to post-edit CoT-style reasoning chains for better end-task performance. By combining knowledge retrieval with reasoning, the framework edits CoTs in a natural and conversational way, which enhances prediction factuality. Combined with Google search, the framework also points to a promising direction that combines the open-generation ability of state-of-the-art LLMs with the updated facts provided by search engines. ## Limitations There are a few limitations to the current framework. Firstly, Verify-and-Edit works best for open-domain question-answering tasks that require complex reasoning. Less complex datasets or commonsense datasets that do not require knowledge retrieval may not see large improvements. Secondly, it is most ideal to edit a group of mostly incorrect samples, which we try to select by using consistency.
Thus, our method is reliant on the consistency method's performance and its ability to separate correct and incorrect predictions. Most often, it demonstrates a larger improvement on a more challenging set of examples. To address these limitations, we plan to work on reducing the noise introduced in the rationale-editing stage and to utilize more knowledge resources, such as knowledge bases, as a follow-up. ## Ethics Statement The Verify-and-Edit framework can mitigate potential ethical concerns of LLM generation surrounding hallucinations and unfactual details. Some persisting concerns include: (1) As the framework uses Google as one of the retrieval methods, it could retrieve potentially toxic information that exists in Google search results. (2) As the framework uses GPT-3 as a backbone, it could suffer from existing ethical concerns of GPT-3, such as responding to toxic queries or exhibiting biased behavior. For knowledge retrieval, we used the Wikipedia corpus and Google search results. Permission is granted to copy, distribute and/or modify Wikipedia's text under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported License. For Google search results, scraping publicly accessible data has been considered legal by the U.S. appeals court. ## 7 Acknowledgement This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-PhD/2021-01-001[T]).
2307.15273
Recovering high-quality FODs from a reduced number of diffusion-weighted images using a model-driven deep learning architecture
Fibre orientation distribution (FOD) reconstruction using deep learning has the potential to produce accurate FODs from a reduced number of diffusion-weighted images (DWIs), decreasing total imaging time. Diffusion acquisition invariant representations of the DWI signals are typically used as input to these methods to ensure that they can be applied flexibly to data with different b-vectors and b-values; however, this means the network cannot condition its output directly on the DWI signal. In this work, we propose a spherical deconvolution network, a model-driven deep learning FOD reconstruction architecture, that ensures intermediate and output FODs produced by the network are consistent with the input DWI signals. Furthermore, we implement a fixel classification penalty within our loss function, encouraging the network to produce FODs that can subsequently be segmented into the correct number of fixels and improve downstream fixel-based analysis. Our results show that the model-based deep learning architecture achieves competitive performance compared to a state-of-the-art FOD super-resolution network, FOD-Net. Moreover, we show that the fixel classification penalty can be tuned to offer improved performance with respect to metrics that rely on accurate segmentation of FODs. Our code is publicly available at https://github.com/Jbartlett6/SDNet.
J Bartlett, C E Davey, L A Johnston, J Duan
2023-07-28T02:47:34Z
http://arxiv.org/abs/2307.15273v1
Recovering high-quality FODs from a reduced number of diffusion-weighted images using a model-driven deep learning architecture ###### Abstract Fibre orientation distribution (FOD) reconstruction using deep learning has the potential to produce accurate FODs from a reduced number of diffusion-weighted images (DWIs), decreasing total imaging time. Diffusion acquisition invariant representations of the DWI signals are typically used as input to these methods to ensure that they can be applied flexibly to data with different b-vectors and b-values; however, this means the network cannot condition its output directly on the DWI signal. In this work, we propose a spherical deconvolution network, a model-driven deep learning FOD reconstruction architecture, that ensures intermediate and output FODs produced by the network are consistent with the input DWI signals. Furthermore, we implement a fixel classification penalty within our loss function, encouraging the network to produce FODs that can subsequently be segmented into the correct number of fixels and improve downstream fixel-based analysis. Our results show that the model-based deep learning architecture achieves competitive performance compared to a state-of-the-art FOD super-resolution network, FOD-Net. Moreover, we show that the fixel classification penalty can be tuned to offer improved performance with respect to metrics that rely on accurate segmentation of FODs. Our code is publicly available at [https://github.com/Jbartlett6/SDNet](https://github.com/Jbartlett6/SDNet). Diffusion MRI, model-based deep learning, FOD reconstruction ## I Introduction Fibre orientation distributions (FODs) relate signal attenuation in diffusion-weighted magnetic resonance images to the volume fractions and orientations of fibre populations in the brain [1, 2, 3]. Their flexibility and capacity to discern intra-voxel fibre populations facilitate a range of subsequent quantitative analyses; tractography algorithms can be used to obtain tractograms, and FOD segmentation can provide discrete fibre bundle elements (fixels) [4, 5]. Multi-shell, high angular resolution diffusion imaging datasets are required to fit FODs with sufficient angular detail and to separate the contributions of different tissue types [6, 3]. The approximately linear relationship between the time a subject spends in the scanner and the number of diffusion-weighted images (DWIs) collected means acquiring such datasets is time consuming. Deep learning can help to alleviate this issue by performing FOD reconstruction, the task of fitting high-fidelity FODs to a reduced number of DWI signals. To ensure their flexibility, such deep learning methods should be invariant to changes in diffusion MRI acquisition arising due to inter-facility variability or DWI volume corruption. Resampling techniques such as spherical harmonics (SH) [7, 8, 9] and nearest neighbour [10] interpolation have been explored to resample arbitrary DWI acquisitions onto a pre-defined spherical grid. Alternatively, an SH representation of the signal can be used as input to the network [11, 12, 13, 14]. FOD super-resolution methods [15, 16, 17] perform constrained spherical deconvolution (CSD) as a pre-processing step and take the SH representation of the FOD as input. Results in the literature vary due to the range of acquisitions and CSD algorithms used to fit the FODs, such as single-shell-single-tissue [15], two-tissue [16] and single-shell-three-tissue [17] FODs.
High computational costs and the risk of overfitting mean it is not feasible to process all signals in the spatial and diffusion-acquisition dimensions concurrently. By predicting the central FOD from a limited spatial neighbourhood of the input [11, 17, 13], a compromise can be found between reducing the computational burden and exploiting the abundance of spatial correlations present in the data. Such methods commonly utilise a 3D convolutional neural network (CNN) for feature extraction, followed by fully connected or transformer layers for FOD prediction [9]. It is common practice for FODs to be fit using CSD with a maximum SH order of eight [17, 11] in order to capture the angular frequency content of the DWI signal at a maximum b-value of \(3000\) s/mm\({}^{2}\)[6]. Some tractography algorithms require only the orientations of fibre populations in each voxel as input, so a number of FOD reconstruction algorithms predict only these quantities [7, 10]. Alternatively, an unsupervised loss function with sparsity-inducing regularisation can be used to reconstruct FODs with an increased maximum order of 20 [8]. Whilst improving the angular separation, these methods change the FOD model, meaning it is likely that fixel-derived scalars, such as apparent fibre density and peak amplitude, also deviate. Therefore, it would be infeasible to apply such methods within a fixel-based analysis pipeline. Model-based deep learning exploits domain knowledge of a process to inspire neural network architectures. Many approaches alternate between CNN-based denoising and data consistency blocks [18, 19, 20, 21]. Data consistency blocks use prior knowledge of an appropriate forward model to ensure a network produces solutions consistent with the input signal. When calculating acquisition invariant representations of the DWI signal, fitting errors are incurred. We conjecture that such errors lead to the degradation of FOD reconstruction performance, since the subsequently applied neural networks cannot directly condition their output on the true DWI signal; model-based deep learning has the potential to lessen the impact of these errors by ensuring intermediate and output FODs are consistent with the DWI signal. In the context of FOD reconstruction, data consistency blocks minimise a linear combination of the CSD data consistency term and an additional, deep learning based, regularisation term. Current implementations use a pre-trained autoencoder-based regularisation term [15]; however, this means the network is not optimised for FOD reconstruction performance. Model-based deep learning has to this point not been combined with techniques proven successful in end-to-end FOD reconstruction architectures. In this paper, the **S**pherical **D**econvolution **N**etwork (SDNet) is introduced, a model-based deep learning architecture that utilises spatial information from surrounding voxels and is optimised to perform FOD reconstruction of multi-shell data. Additionally, we propose a fixel classification penalty within our loss function to improve angular separation without distorting the shape of the reconstructed FODs, which can be tuned to suit the requirements of the reconstructed FODs. The efficacy is evaluated by extensive comparisons with a state-of-the-art FOD super-resolution method, FOD-Net, as well as an ablation study. Our results show that including model-based deep learning improves the performance of the network.
## II Method ### _Network Architecture_ Constrained spherical deconvolution is used to fit FODs to DWI signal by optimising the following objective function: \[\min_{\mathbf{c}}\frac{1}{2m}\|\mathcal{A}\mathcal{Q}\mathbf{c}-\mathbf{b}\|_ {2}^{2}+\mathcal{R}\left(\mathbf{c}\right) \tag{1}\] where \(\mathbf{c}\in\mathbb{R}^{n}\) are the SH coefficients of the FOD, \(\mathbf{b}\in\mathbb{R}^{m}\) are the DWI signals, and \(\mathcal{A}\mathcal{Q}\in\mathbb{R}^{m\times n}\) spherically convolves the FOD with the response functions of the tissue types being modelled. To facilitate a data-driven regularisation term, optimised for FOD reconstruction, we consider an arbitrary regularisation term, \(\mathcal{R}(\cdot)\), in place of the ubiquitous non-negativity constraint. In the following we outline how the variable splitting methods used in Jia et al. [20], Duan et al. [21] can be adapted to solve (1). First, we introduce an auxiliary splitting variable \(\mathbf{w}\in\mathbb{R}^{n}\), converting (1) into the following equivalent form: \[\min_{\mathbf{c},\mathbf{w}}\frac{1}{2m}\|\mathcal{A}\mathcal{Q}\mathbf{c}- \mathbf{b}\|_{2}^{2}+\mathcal{R}\left(\mathbf{w}\right)\,s.t.\,\,\mathbf{c}= \mathbf{w} \tag{2}\] Using the penalty function method, we add these constraints back into the model and minimise the joint objective: \[\min_{\mathbf{c},\mathbf{w}}\frac{1}{2m}\|\mathcal{A}\mathcal{Q}\mathbf{c}- \mathbf{b}\|_{2}^{2}+\mathcal{R}\left(\mathbf{w}\right)+\frac{\lambda}{2}\| \mathbf{c}-\mathbf{w}\|_{2}^{2} \tag{3}\] Eq. (3) can be solved for \(\mathbf{c}\) and \(\mathbf{w}\) using an alternating optimisation scheme: \[\left\{\begin{array}{l}\mathbf{c}^{k+1}=\arg\min_{\mathbf{c}}\frac{1}{2m}\| \mathcal{A}\mathcal{Q}\mathbf{c}-\mathbf{b}\|_{2}^{2}+\frac{\lambda}{2}\| \mathbf{c}-\mathbf{w}^{k}\|_{2}^{2}\\ \mathbf{w}^{k+1}=\arg\min_{\mathbf{w}}\frac{\lambda}{2}\|\mathbf{c}^{k+1}- \mathbf{w}\|_{2}^{2}+\mathcal{R}\left(\mathbf{c}^{k+1}\right)\end{array} \right.. \tag{4}\] The first convex optimisation can be solved using matrix inversion. The second equation is a denoising problem with arbitrary regularisation, the optimal form of which is unknown. In order to learn the regularisation to improve FOD reconstruction performance, the iterative process can be unrolled and the denoising step solved using a neural network, \(\mathcal{NN}(\cdot)\): \[\left\{\begin{array}{l}\mathbf{c}^{k+1}=\left(\frac{1}{m}\mathcal{Q}^{ \mathcal{T}}\mathcal{A}^{\mathcal{T}}\mathcal{A}\mathcal{Q}+\lambda\mathcal{ I}\right)^{-1}\left(\frac{1}{m}\mathcal{Q}^{\mathcal{T}}\mathcal{A}^{ \mathcal{T}}\mathbf{b}+\lambda\mathbf{w}^{k}\right)\\ \mathbf{w}^{k+1}=\mathcal{NN}\left(\mathbf{c}^{k+1}\right)\end{array}\right.. \tag{5}\] The network architecture (Fig. 1) takes nine voxels in each spatial dimension for 30 different diffusion gradients, resulting in a \(9\times 9\times 30\) volume of DWI signals as input, and passes them through alternating DWI consistency and deep regularisation blocks. The network outputs a vector \(\hat{\mathbf{c}}\in\mathbb{R}^{n}\), a high-fidelity prediction of the FOD from the central voxel of the \(9\times 9\times 9\) input patch. #### Ii-A1 DWI Consistency Each DWI consistency block solves the matrix inversion in (5) independently for each voxel, maintaining spatial resolution. The initial DWI consistency block optimises only for the first three even orders of spherical harmonic coefficients \((l_{max}=4)\) to ensure robustness to aggressive DWI undersampling. 
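As a concrete illustration of the per-voxel data consistency update in Eq. 5, a minimal NumPy sketch is given below; the shapes, names, and random inputs are assumptions for illustration rather than the released implementation.

```python
# Illustrative only: one DWI consistency step solving the regularised
# least-squares problem c = (AQ^T AQ / m + lam*I)^{-1} (AQ^T b / m + lam*w)
# independently for a single voxel.
import numpy as np

def dwi_consistency_step(AQ, b, w, lam):
    m, n = AQ.shape                        # m DWI measurements, n SH coefficients
    lhs = AQ.T @ AQ / m + lam * np.eye(n)
    rhs = AQ.T @ b / m + lam * w
    return np.linalg.solve(lhs, rhs)

# toy usage: 30 DWI signals in, 47 SH coefficients out, for one voxel
rng = np.random.default_rng(0)
AQ = rng.normal(size=(30, 47))             # stand-in for the spherical convolution matrix
b = rng.normal(size=30)                    # the voxel's DWI signal
w = np.zeros(47)                           # output of the preceding regularisation block
c = dwi_consistency_step(AQ, b, w, lam=0.1)
print(c.shape)                             # (47,)
```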
#### Ii-A2 Deep Regularisation Each deep regularisation block is applied to a concatenation of the outputs of the previous two DWI consistency blocks, meaning the block is conditioned on both earlier representations. Validation tests showed these connections improve network performance (data not shown). The initial \(3\times 3\times 3\) convolution kernels are applied with one layer of zero padding in each dimension, so as to maintain spatial resolution, and are followed by 3D batch normalisation layers and parametric rectified linear unit (PReLU) activation functions. The number of channels is increased in this manner until it has reached 448 (Fig. 1). No padding is applied in the final \(3\times 3\times 3\) convolution kernel, which is followed by a PReLU function, reducing the resolution in each spatial dimension by two. Finally, a \(1\times 1\times 1\) convolution kernel is applied to the 512-channel feature maps to obtain a 94-channel input to a gated linear unit (GLU) activation function, which is the output of the block. Residual connections, referencing the output of the previous DWI consistency block, are used to improve gradient flow through the network. The deep regularisation block reduces each spatial dimension of its input by two. ### _Loss Functions_ In addition to the customary MSE loss, a fixel classification penalty is proposed to give greater control over the angular separation of the reconstructed FODs. The mechanics of this method can be considered similar to the microstructure-sensitive loss proposed for DWI signal reconstruction in [22]. To overcome the inherently non-differentiable nature of the fast marching level set FOD segmentation algorithm [23], a fixel classification network is applied to predict the number of fixels each voxel contains. The output is passed into a cross-entropy component of the loss function. Since we are concerned with the white matter components of the FODs, the loss function and performance metrics are not functions of the grey matter and cerebrospinal fluid components of the FOD. For notational simplicity, from this point onwards \(\mathbf{c}\) refers only to the white matter component of the FOD. The overall loss function is as follows: \[\mathcal{L}(\hat{\mathbf{c}})=\frac{1}{N_{batch}}\sum_{i=1}^{N_{batch}}\left(\|\hat{\mathbf{c}}_{i}-\mathbf{c}_{i}\|_{2}^{2}+\kappa\mathcal{E}(\hat{\mathbf{f}}(\hat{\mathbf{c}}_{i}),\mathbf{f}_{i})\right) \tag{6}\] where \(N_{batch}\) is the number of data points in the mini-batch, \(\hat{\mathbf{c}}_{i}\), \(\mathbf{c}_{i}\in\mathbb{R}^{n-2}\) are the reconstructed and fully sampled white matter FODs, \(\mathcal{E}(\cdot,\cdot)\) is the cross-entropy, \(\hat{\mathbf{f}}(\hat{\mathbf{c}}_{i}),\ \mathbf{f}_{i}\in\mathbb{R}^{5}\) are the predicted logits and the one-hot encoding of the number of fixels, respectively, and \(\kappa\) is a hyperparameter to balance the two components of the loss function. When training the fixel classification network, the number of fixels in each voxel was thresholded to four (Tab. I), reducing the inclusion of spurious peaks and the class imbalance. A simple, fully-connected architecture was used, with layers containing {45, 1000, 800, 600, 400, 200, 100, 5} neurons. Between each layer there are ReLU activation and 1D batch normalisation functions, other than between the penultimate and final layer, where the batch normalisation is omitted. A softmax activation function, followed by a cross-entropy loss, was then applied to the output of the network.
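A minimal PyTorch-style sketch of the loss in Eq. 6 is given below; the tensor names and the stand-in linear classifier are illustrative assumptions (in practice the pre-trained fixel classification network described above would be used, with its weights frozen).

```python
# Illustrative only: MSE between reconstructed and fully sampled white-matter
# FOD coefficients, plus kappa times the cross-entropy of a fixel classifier
# applied to the reconstruction, averaged over the mini-batch.
import torch
import torch.nn.functional as F

def sdnet_loss(c_hat, c, fixel_net, fixel_labels, kappa=1.6e-4):
    mse = torch.sum((c_hat - c) ** 2, dim=1)              # per-voxel SH error
    logits = fixel_net(c_hat)                             # predicted number of fixels
    ce = F.cross_entropy(logits, fixel_labels, reduction="none")
    return (mse + kappa * ce).mean()

# toy usage: batch of 4 voxels, 45 white-matter SH coefficients, 5 fixel classes
fixel_net = torch.nn.Linear(45, 5)                        # stand-in for the trained classifier
c_hat = torch.randn(4, 45, requires_grad=True)
c = torch.randn(4, 45)
labels = torch.randint(0, 5, (4,))
loss = sdnet_loss(c_hat, c, fixel_net, labels)
loss.backward()
```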
The classification network was trained using the same training set as SDNet. Fully sampled FODs were used as the input, and the ground-truth targets were calculated using the fast marching level set algorithm [23]. ### _Implementation Details_ To demonstrate the impact of the fixel classification penalty, experiments were carried out with \(\kappa=0\) and \(\kappa=1.6\times 10^{-4}\). The ADAM optimiser [24], with learning rate warm-up, was used for parameter optimisation, with an initial learning rate of \(10^{-6}\), increasing to \(10^{-4}\) after \(10,000\) iterations. To minimise hyperparameter tuning, \(\lambda\) was optimised simultaneously with the network weights. From validation experiments (data not included), we found that the most effective way to utilise the classification loss to train SDNet was to initially train the model with \(\kappa=0\) and then to increase \(\kappa\) to its final value after this initial training stage. To do so, we trained SDNet with only the MSE loss until convergence, then trained the network until convergence with \(\kappa=1.6\times 10^{-4}\). ## III Experiments ### _Dataset_ A subset of the WU-Minn Human Connectome Project (HCP) dataset [25], consisting of 30 subjects, was split \(20/3/7\) and used for training, validation, and testing, respectively. The HCP images have \(1.25\)mm isotropic resolution with 90 gradient directions for \(b=1000,2000\) and \(3000\) s/mm\({}^{2}\) and 18 \(b_{0}\) images. The HCP dataset was minimally pre-processed in accordance with [26]. Additionally, prior to applying SDNet, each subject's data was normalised using MRtrix3's _dwinormalise_ function. The fully sampled FODs were fit to all 288 DWIs; first, the response functions were calculated using the method proposed in [27], then the FODs were calculated using MSMT-CSD [3]. White matter response functions and FODs were modelled with \(l_{max}=8\), and the grey matter and cerebrospinal fluid component response functions and FODs were modelled with \(l_{max}=0\), resulting in a total of 47 SH coefficients. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Number & Count & \begin{tabular}{c} Percentage \\ before thresholding \\ \end{tabular} & \begin{tabular}{c} Percentage \\ after thresholding \\ \end{tabular} \\ \hline 1 & 310994 & 49\% & 49\% \\ 2 & 200673 & 32\% & 32\% \\ 3 & 76672 & 12\% & 12\% \\ 4 & 24095 & 4\% & 6.7\% \\ 5 & 10975 & 2\% & - \\ 6 & 4979 & 0.8\% & - \\ 7 & 1800 & 0.3\% & - \\ \hline \end{tabular} \end{table} TABLE I: Count and percentage values of fixels in white matter voxels of an individual from the HCP dataset before and after thresholding at 4 fixels. Before thresholding there is a severe class imbalance. Fig. 1: SDNet architecture, made up of alternating deep regularisation blocks and DWI consistency blocks. Each deep regularisation block is made up of 3D convolution blocks; the values above each set of layers represent the number of channels, which increase as follows: \(\{94,128,192,256,320,384,448\}\). The DWI consistency block shows the matrix inversion that is solved for each voxel independently. The sampling pattern of Caruyer et al. [28] utilised in the HCP is such that for any \(k\), selection of the first \(k\) DWI volumes results in evenly spread b-vectors. To prepare the input data, the first 9 DWIs from each non-zero shell were selected with an additional 3 \(b_{0}\) images, resulting in a total of 30 DWI signals. Only patches in which the central voxel is classified as grey matter or white matter are used for training.
The grey matter voxels were included to improve performance near the boundary of the two tissue types, as highlighted in [17]. The grey and white matter masks were calculated using the method outlined in [29], which is implemented using the FSL software package [30]. From this point onwards, for notational convenience, SDNet (\(\kappa=0\)) will be referred to as SDNet and SDNet (\(\kappa=1.6\times 10^{-4}\)) will be referred to as SDNet\({}_{\kappa}\). To evaluate the performance of the introduced methods, SDNet, SDNet\({}_{\kappa}\), FOD-Net [17], and super-resolved MSMT CSD, referred to as MSMT CSD for notational simplicity, were all compared. In the original implementation, FOD-Net maps FODs fitted to 32 DWIs (4 \(b_{0}\) and 28 \(b=1000/2000/3000\) s/mm\({}^{2}\)) using the single shell three tissue CSD algorithm [27] to the desired MSMT CSD FODs. To allow a fair comparison between FOD-Net and the proposed networks, FOD-Net was trained using the same training set as SDNet. Since the final block in the SDNet architecture is a DWI consistency block, it cannot map to normalised FODs; therefore, the target training data is not normalised. It should be noted that the normalisation can still be performed as a post-processing step. Otherwise, the same configuration settings found in the GitHub repository released by the FOD-Net authors were used. ### _Performance Metrics_ To evaluate the performance of the FOD reconstruction algorithms, performance metrics were calculated voxel-wise then averaged over regions of interest. The regions considered were the white matter and intersections of individual tracts within the white matter. The tracts considered were: the corpus callosum (CC), the middle cerebellar peduncle (MCP), the corticospinal tract (CST), and the superior longitudinal fascicle (SLF). To understand how the algorithm performs in voxels containing different numbers of fibres, we considered the intersections of these tracts as in [17]. For voxels containing a single fibre, we considered voxels in the CC containing a single fixel, which we refer to as ROI-1-CC. For two crossing fibres, we considered voxels in the intersection of the MCP and CST containing two fixels, which we refer to as ROI-2-MCP. For three crossing fibres, we considered voxels in the intersection of the SLF, CST and CC containing three fixels, which we refer to as ROI-3-SLF. The white matter mask was calculated using the FSL five tissue type segmentation algorithm in MRtrix3. The segmentation masks for the white matter fibre tracts were obtained using TractSeg [31]. The SSE between the reconstructed FODs, \(\hat{\mathbf{c}}\), and the fully sampled FODs, \(\mathbf{c}\), was computed as follows: \[\text{SSE}\left(\mathbf{c},\hat{\mathbf{c}}\right)=\left\|\mathbf{c}-\hat{ \mathbf{c}}\right\|_{2}^{2} \tag{7}\] The angular correlation coefficient (ACC) [32] was computed as follows: \[\text{ACC}(\mathbf{c},\hat{\mathbf{c}})=\frac{\sum\limits_{i=1}^{4}\sum \limits_{j=-2i}^{2i}\mathbf{c}_{2i,j}\hat{\mathbf{c}}_{2i,j}}{\sqrt{\left( \sum\limits_{i=1}^{4}\sum\limits_{j=-2i}^{2i}\mathbf{c}_{2i,j}^{2}\right)\left( \sum\limits_{i=1}^{4}\sum\limits_{j=-2i}^{2i}\hat{\mathbf{c}}_{2i,j}^{2} \right)}} \tag{8}\] Fig. 2: Qualitative results showing reconstructed FODs for HCP subject 130821 centred at voxel [38,98,70]. The top row consists of the **a.** Fully Sampled, **b.** SDNet, **c.** SDNet\({}_{\kappa}\), **d.** FOD-Net, and **e.** MSMT CSD FODs. 
The bottom row shows a zoomed-in area of FODs, corresponding to the region highlighted by the white square, consisting of the **f.** Fully Sampled, **g.** SDNet, **h.** SDNet\({}_{\kappa}\), **i.** FOD-Net, and **j.** MSMT CSD FODs. Here \(\kappa\) is the hyperparameter which balances the SH error and fixel classification penalty terms in the loss function as per Eq. 6. We refer to SSE and ACC as _FOD-based performance_ metrics, since they compare the SH representation of the FODs prior to any further processing. Fixel-based analysis requires each FOD to be segmented into fixels, each of which has an associated apparent fibre density and peak amplitude [23]. To calculate the associated error metrics, peak amplitude and apparent fibre density vectors must be assembled. Each vector consists of the respective scalar for each fixel, ordered according to the peak amplitude, and is padded to a fixed length. The remaining metrics are referred to as _fixel-based performance_ metrics since they require the FOD to be segmented into fixels prior to evaluation. Fixel accuracy was defined for a region of interest as the proportion of voxels in which the FOD is segmented into the correct number of fixels. The peak amplitude error (PAE) was calculated between the reconstructed, \(\hat{\mathbf{f}}^{P}\), and fully sampled FOD's, \(\mathbf{f}^{P}\), peak amplitude vectors: \[\text{PAE}\left(\mathbf{f}^{P},\hat{\mathbf{f}}^{P}\right)=\sum_{i}\left|f_{i}^{P}- \hat{f}_{i}^{P}\right| \tag{9}\] The apparent fibre density error (AFDE) was calculated between the reconstructed, \(\hat{\mathbf{f}}^{A}\), and fully sampled FOD's, \(\mathbf{f}^{A}\), apparent fibre density vectors: \[\text{AFDE}\left(\mathbf{f}^{A},\hat{\mathbf{f}}^{A}\right)=\sum_{i}\left|f_{i}^{A}- \hat{f}_{i}^{A}\right| \tag{10}\] ### _Ablation Study_ To investigate the impact of the DWI consistency block on the performance of the network, an ablation study was conducted. The network was trained without the DWI consistency blocks, and all other aspects of the architecture and network training remained the same. We compared this model to SDNet with the DWI consistency blocks included. ### _Statistical Analysis_ Shapiro-Wilk tests for normality (\(\alpha=0.05\)) were applied for each performance metric and method; unless otherwise stated there is insufficient evidence to reject the null hypothesis that the groups are normally distributed. Since the data was normally distributed, and each method was applied to the same set of test subjects, a repeated measures one-way ANOVA (\(\alpha=0.05\)) was applied to each performance metric to determine whether there was a main effect between the conditions. Finally, to determine which methods contributed to the main effect, post-hoc t-tests with Bonferroni correction (adjusted for \(\alpha=0.05\)) were used to identify effects between the FOD reconstruction algorithms. ## IV Results ### _Qualitative Results_ The qualitative results comparing all methods (Fig. 2) show that the deep learning methods reconstructed FODs that more closely resembled the ground truth when compared to MSMT CSD. The primary difference is the presence of spurious peaks produced by MSMT CSD, whereas the deep learning based algorithms coherently captured the major tracts in this region due to their denoising effect. The highlighted region in Fig. 2 (panels **f-j.**) shows an area where FOD-Net produced distorted FODs compared to SDNet and SDNet\({}_{\kappa}\). 
MSMT CSD reconstructed particularly noisy FODs in this area, to which the results obtained by FOD-Net bore some resemblance. The FODs produced by SDNet underestimated the amplitude in this region but more accurately distinguished between fibre populations and captured their directions. In this region, which contains dominant fibre populations with large angular separation, the impact of increasing \(\kappa\) on the reconstructed FODs is minimal; only a small change in the direction of the fibres is observed. In the larger tracts in panels Fig. 2 **a-e.**, such as the green fibre population going upwards in the bottom left corner, all deep learning methods performed similarly. Fig. 3: FODs taken from HCP subject 130821, centred at voxel [84,110,70]. The top row shows the **a.** Fully Sampled, **b.** SDNet, and **c.** SDNet\({}_{\kappa}\) FODs. The second row consists of a zoomed-in region of FODs, corresponding to the region highlighted by the white square, containing the **d.** Fully Sampled, **e.** SDNet, and **f.** SDNet\({}_{\kappa}\) FODs. Here \(\kappa\) balances the SH error and fixel classification penalty terms in the loss function as per Eq. 6. The qualitative results comparing SDNet with SDNet\({}_{\kappa}\) (Fig. 3) illustrate that SDNet\({}_{\kappa}\) better separated fibre populations. The fibre populations going from the lower left to upper right of panels Fig. 3**d.-f.** are separated from the larger fibre population by SDNet\({}_{\kappa}\) but not by SDNet, which lacks the fixel classification penalty. The FODs reconstructed in the broader region, captured in panels Fig. 3**a.-c.**, show that larger fibre populations are reconstructed similarly for both SDNet and SDNet\({}_{\kappa}\). ### _FOD-based Results_ The SSE error maps (Fig. 4) show that lower SSE is achieved throughout the brain by all deep learning methods compared to MSMT CSD. SDNet generally achieved smaller errors than the other deep learning methods. This is particularly evident in, but not restricted to, the areas highlighted by the red arrows. The error maps produced by SDNet\({}_{\kappa}\) and FOD-Net are similar. The average FOD-based performance results (Fig. 5 and Tab. II) show that SDNet reconstructed FODs with significantly lower SSE and higher ACC than the compared methods in all regions of interest considered. The training curves (Fig. 6) show that increasing \(\kappa\) caused the validation ACC to decrease over the validation set. In the white matter voxels, SDNet achieved the lowest SSE by a statistically significant margin over all compared methods, followed by SDNet\({}_{\kappa}\) and FOD-Net, between which there was no statistically significant difference in SSE. SDNet also achieved the strongest ACC performance in the white matter, where it improved over all other methods by a statistically significant margin. There was no statistically significant difference between SDNet\({}_{\kappa}\) and FOD-Net with respect to ACC in the white matter. In all of ROI-1-CC, ROI-2-MCP, and ROI-3-SLF, SDNet achieved the strongest SSE and ACC results (Fig. 5 and Tab. II) by a statistically significant margin. FOD-Net and SDNet\({}_{\kappa}\) showed no statistically significant differences with respect to SSE and ACC in ROI-1-CC and ROI-2-MCP, but in ROI-3-SLF SDNet\({}_{\kappa}\) achieved a statistically significant improvement over FOD-Net with respect to both SSE and ACC. 
In all regions, all deep learning based FOD reconstruction methods outperformed MSMT CSD with respect to SSE and ACC by a statistically significant margin. ### _Fixel-based Results_ The fixel-based performance results (Fig. 7 and Tab. II) show greater variation between regions and an increased dependence on \(\kappa\). The training curves (Fig. 6) show that increasing \(\kappa\) caused the validation fixel accuracy to increase over the validation set. In the white matter, SDNet\({}_{\kappa}\) achieved the strongest fixel accuracy by a significant margin, followed by SDNet and FOD-Net, between which there was no statistically significant difference. In ROI-1-CC, ROI-2-MCP, and ROI-3-SLF, we see that the fixel accuracy of the deep learning FOD reconstruction methods decreased as the number of fixels increased. In ROI-1-CC, SDNet achieved the strongest performance by a statistically significant margin, followed by FOD-Net and SDNet\({}_{\kappa}\), between which there is no statistically significant difference in fixel accuracy in the same region. Fig. 4: SSE and fixel difference error maps for slice 72 from HCP subject 130821. **Top row:** SSE error maps between the fully sampled FODs and the FODs reconstructed by **a.** SDNet, **b.** SDNet\({}_{\kappa}\), **c.** FOD-Net and **d.** MSMT CSD. **Bottom row:** Number of fixels calculated for the fully sampled FOD minus the number of fixels calculated from the FODs reconstructed by **e.** SDNet, **f.** SDNet\({}_{\kappa}\), **g.** FOD-Net, and **h.** MSMT CSD. Here \(\kappa\) balances the SH error and fixel classification penalty terms in the loss function as per Eq. 6. Blue voxels indicate underestimates, and red areas overestimates, of the number of fixels. Large SSE and fixel differences are highlighted by the black and red arrows respectively. As the number of fixels in the ROIs increased, the fixel accuracy of \(\text{SDNet}_{\kappa}\) increased relative to other methods. In ROI-2-MCP, \(\text{SDNet}_{\kappa}\) achieved the highest fixel accuracy but not by a statistically significant margin over FOD-Net. Both methods outperformed SDNet by a statistically significant margin. In ROI-3-SLF this pattern continued as \(\text{SDNet}_{\kappa}\)'s performance further improved, and it achieved a statistically significant fixel accuracy increase over the other deep learning methods. There was no statistically significant difference in fixel accuracy between FOD-Net and SDNet in ROI-3-SLF. In all regions other than ROI-3-SLF, MSMT performed worse than all other methods by a statistically significant margin. For AFDE in the white matter, \(\text{SDNet}_{\kappa}\) achieved the lowest error by a statistically significant margin, followed by FOD-Net and SDNet, between which there is no statistically significant difference in AFDE in the white matter. For PAE in the white matter, \(\text{SDNet}_{\kappa}\) achieved the lowest error, which was a statistically significant improvement over SDNet but not FOD-Net. For both AFDE and PAE in the white matter, MSMT CSD achieved a higher error than all compared methods by a statistically significant margin. In ROI-1-CC, ROI-2-MCP and ROI-3-SLF, both AFDE and PAE generally increased as the number of fixels increased. In ROI-1-CC, SDNet achieved the strongest results with respect to both AFDE and PAE, and in ROI-2-MCP all three deep learning methods performed similarly with respect to both AFDE and PAE. 
In ROI-3-SLF, SDNet and \(\text{SDNet}_{\kappa}\) achieved similar AFDE and PAE, with no statistically significant difference between them, but both achieved a statistically significant improvement compared to FOD-Net. ### _Ablation Study_ The results of the ablation study (Tab. III) clearly demonstrate that removing the DWI consistency blocks from the SDNet architecture caused the performance of the network to degrade significantly with respect to all metrics. The greatest relative degradation of performance occurred with respect to SSE; however, consistent reductions in the performance of all other metrics were also observed. ## V Discussion SDNet is a model-based deep learning architecture that employs DWI consistency blocks to ensure intermediate FODs are consistent with the DWI signal, whilst making use of spatial information and multi-shell DWI data to reconstruct FODs. We compared our network to FOD-Net [17], a FOD super-resolution network, which fits FODs to the DWI signal prior to the network's forward pass. Our results show that SDNet improved over FOD-Net in terms of FOD-based performance, and performed similarly with respect to most fixel-based metrics. We conjecture that FOD-Net loses some details of the DWI signal in the FOD fitting stage. Our qualitative results (Fig. 2) support this since the FODs reconstructed by FOD-Net more closely resembled the unstable input MSMT-CSD FODs, whereas by ensuring consistency with the DWI signal, SDNet more robustly reconstructed FODs which closely resembled the ground truth. The quantitative results collected from our comparison and ablation studies highlighted the improvement in FOD-based performance enabled by including DWI consistency blocks. Fig. 5: Mean test-time FOD-based performance (**Left:** ACC, **Right:** SSE) in the white matter (WM), ROI-1-CC: corpus callosum containing a single fixel, ROI-2-MCP: intersection between the middle cerebellar peduncle and superior longitudinal fascicle containing 2 fixels, and ROI-3-SLF: intersection between the superior longitudinal fascicle, corticospinal tract and the corpus callosum containing 3 fixels. \(\kappa\) balances the SH error and fixel classification penalty terms in the loss function as per Eq. 6. Error bars indicate the standard error of the metrics, which have been averaged over the 7 test subjects. Fig. 6: Validation training curves for SDNet; the red cross marks the point when \(\kappa\) is increased from \(0\) to \(1.6\times 10^{-4}\). The ultimate goal of deep learning based FOD reconstruction is to produce FODs that are useful for quantitative analysis. FOD registration [33], a key component of longitudinal and group FOD analyses, relies on the \(L_{2}\) distance between SH coefficients to capture FOD similarity. By achieving a low SSE, the SH representations will bear increased similarity to the ground truth FODs. We therefore anticipate that SDNet will help ensure that FOD registration is minimally impacted by DWI undersampling, and so too the subsequent analysis. Another factor that may impact such analyses is data containing abnormalities, such as pathologies. Such data will likely not be abundant in the datasets used for training deep learning based FOD reconstruction networks, and as a consequence, reduced performance caused by overfitting becomes probable. 
Since the DWI consistency blocks ensure that solutions will be consistent with the measured DWI data, we expect that SDNet will be less likely to overfit, and therefore to perform comparatively well against networks without DWI consistency blocks. However, further investigation is beyond the scope of the current work. The outcome of such quantitative analysis is also dependent on the post-registration steps in the pipeline, which, in the case of a fixel-based analysis [4], will be predominantly impacted by the fixel-based performance. Comparing multiple FOD reconstruction algorithms revealed that strong FOD-based performance does not directly translate to strong fixel-based performance. The disconnect between FOD-based and fixel-based performance is evident in the statistically significant difference in SSE over the white matter between SDNet and FOD-Net, but the absence of a statistically significant effect in fixel accuracy over the same set of voxels. This effect can be attributed to FOD segmentation's dependence on the angular separation of the FOD lobes, which is dependent on the higher order SH coefficients, which only contribute a small amount to the SSE. This highlights that SSE loss alone may not be optimal for reconstructing FODs that are to be used in a fixel-based analysis pipeline. By introducing an additional loss component, which penalises reconstructed FODs judged to be made up of the incorrect number of fixels, we have demonstrated that fixel-based performance can be improved. The impact of the proposed loss function is illustrated by the statistically significant increase in fixel accuracy in the white matter achieved by SDNet\({}_{\kappa}\) compared to SDNet and FOD-Net. The qualitative results (Fig. 3) highlighted the improved angular separation of fibres with low angular separation. It is also evident that the overall shape of the FOD is captured, as opposed to discrete, or Dirac-like FODs [8; 7; 10]. Furthermore, statistically significant improvements were recorded in fixel accuracy, PAE and AFDE by SDNet\({}_{\kappa}\) across the white matter. However, the introduction of the fixel classification penalty in ROI-1-CC led to a reduction in fixel-based performance. This highlighted a potential bias of SDNet\({}_{\kappa}\) towards over-estimating the number of fixels in each voxel. The inputs of FOD reconstruction networks are necessarily derived from a DWI acquisition with low angular resolution, so they do not have sufficient information to reconstruct FODs that contain all fixels, as observed in Fig. 4. Therefore, the effect of the fixel classification penalty will generally be to correct these underestimations by encouraging the network to increase the number of fixels. Since ROI-1-CC contains only single fixel voxels, the fixel classification penalty may have increased the number of over-estimations in this region, which, when combined with the already strong performance of SDNet and FOD-Net, led to the observed decrease in performance. On the other hand, in ROI-3-SLF, a region containing 3 crossing fibres, the use of the fixel classification penalty improved performance compared to the other two deep learning methods, and despite worse performance in ROI-1-CC, SDNet\({}_{\kappa}\) resulted in an improvement in performance over the white matter voxels for all fixel-based performance metrics. In the current work, the fixel classification network is trained on the ground truth data alone, which, depending on the efficacy of the FOD reconstruction algorithm, will have a different distribution from the reconstructed FODs. 
One possible approach to further improving performance is to devise an algorithm to jointly train the FOD reconstruction network and the fixel classification network, similar to the method used to train generative adversarial networks [34]. The fixel classification penalty component of the loss function appears to share some characteristics with regularisation terms that are ubiquitous in model-based methods for solving ill-posed inverse problems. In particular, to minimise a combination of the SSE loss and the fixel classification penalty, an increase in SSE was incurred, and we have identified in our validation experiments that the extent of such a sacrifice can be controlled by the adjustment of \(\kappa\) (data not included). This suggests that the solution that obtains the lowest SSE may fail to capture certain desirable features of the FOD. In this work, we have highlighted this impact on the separation of fibre populations with similar orientations, but it is possible other features such as the continuity of fibre populations through space could also be improved using similar methods. ## VI Conclusion In this work we have proposed SDNet, a model-based deep learning architecture optimised for FOD reconstruction. In addition to the learned regularisation blocks, which are trained directly in an end-to-end fashion and therefore optimised for the task of FOD reconstruction, the network also takes a neighbourhood of multi-shell DWI signals as input to an architecture containing multiple cascades. We further show that there is a trade-off between FOD-based and fixel-based performance, and propose a fixel classification penalty term in our loss function, as implemented in SDNet\({}_{\kappa}\), as a method of controlling the trade-off between these performance metrics. We show that, when compared to a state-of-the-art FOD super-resolution network, FOD-Net, gains in FOD-based and fixel-based performance were achieved by SDNet and SDNet\({}_{\kappa}\), respectively. ## Acknowledgment We would like to thank Xi Jia from University of Birmingham for the fruitful discussion on network architecture and parameter tuning in this research. The computations described in this research were performed using the Baskerville Tier 2 HPC service ([https://www.baskerville.ac.uk/](https://www.baskerville.ac.uk/)). Baskerville was funded by the EPSRC and UKRI through the World Class Labs scheme (EP/T022221/1) and the Digital Research Infrastructure programme (EP/W032244/1) and is operated by Advanced Research Computing at the University of Birmingham.
2308.14930
Application of Quantum Pre-Processing Filter for Binary Image Classification with Small Samples
Over the past few years, there has been significant interest in Quantum Machine Learning (QML) among researchers, as it has the potential to transform the field of machine learning. Several models that exploit the properties of quantum mechanics have been developed for practical applications. In this study, we investigated the application of our previously proposed quantum pre-processing filter (QPF) to binary image classification. We evaluated the QPF on four datasets: MNIST (handwritten digits), EMNIST (handwritten digits and alphabets), CIFAR-10 (photographic images) and GTSRB (real-life traffic sign images). Similar to our previous multi-class classification results, the application of QPF improved the binary image classification accuracy using neural network against MNIST, EMNIST, and CIFAR-10 from 98.9% to 99.2%, 97.8% to 98.3%, and 71.2% to 76.1%, respectively, but degraded it against GTSRB from 93.5% to 92.0%. We then applied QPF in cases using a smaller number of training and testing samples, i.e. 80 and 20 samples per class, respectively. In order to derive statistically stable results, we conducted the experiment with 100 trials choosing randomly different training and testing samples and averaging the results. The result showed that the application of QPF did not improve the image classification accuracy against MNIST and EMNIST but improved it against CIFAR-10 and GTSRB from 65.8% to 67.2% and 90.5% to 91.8%, respectively. Further research will be conducted as part of future work to investigate the potential of QPF to assess the scalability of the proposed approach to larger and complex datasets.
Farina Riaz, Shahab Abdulla, Hajime Suzuki, Srinjoy Ganguly, Ravinesh C. Deo, Susan Hopkins
2023-08-28T23:08:32Z
http://arxiv.org/abs/2308.14930v1
# Application of Quantum Pre-Processing Filter for Binary Image Classification with Small Samples ###### Abstract Over the past few years, there has been significant interest in Quantum Machine Learning (QML) among researchers, as it has the potential to transform the field of machine learning. Several models that exploit the properties of quantum mechanics have been developed for practical applications. In this study, we investigated the application of our previously proposed quantum pre-processing filter (QPF) to binary image classification. We evaluated the QPF on four datasets: MNIST (handwritten digits), EMNIST (handwritten digits and alphabets), CIFAR-10 (photographic images) and GTSRB (real-life traffic sign images). Similar to our previous multi-class classification results, the application of QPF improved the binary image classification accuracy using neural network against MNIST, EMNIST, and CIFAR-10 from 98.9% to 99.2%, 97.8% to 98.3%, and 71.2% to 76.1%, respectively, but degraded it against GTSRB from 93.5% to 92.0%. We then applied QPF in cases using a smaller number of training and testing samples, i.e. 80 and 20 samples per class, respectively. In order to derive statistically stable results, we conducted the experiment with 100 trials choosing randomly different training and testing samples and averaging the results. The result showed that the application of QPF did not improve the image classification accuracy against MNIST and EMNIST but improved it against CIFAR-10 and GTSRB from 65.8% to 67.2% and 90.5% to 91.8%, respectively. Further research will be conducted as part of future work to investigate the potential of QPF to assess the scalability of the proposed approach to larger and complex datasets. Farina Riaz, Shahab Abdulla, Hajime Suzuki, Srinjoy Ganguly, Ravinesh, C. Deo, and Susan Hopkins ## 1 Introduction Over the past few years, there has been significant interest in Quantum Machine Learning (QML), with various algorithms proposed for image processing [1]. Quantum machine learning has been a hot topic recently [2], especially since quantum hardware development has gradually accelerated [3]. The application of quantum technology in image processing is crucial for efficiently extracting valuable information from real-world scenarios. Numerous approaches have been developed for quantum image classification, such as quantum neural networks [4], quantum convolutional neural network [5], hybrid quantum classical convolutional neural network [6], quantum generative adversarial network [7; 8] and quantum support vector machines [9]. The goal of using QML in images is to extract essential features from the image. To achieve this, a classical kernel approach can first be used to estimate unsolvable quantum kernels on a quantum device. Secondly, different models can be created that process the feature vectors using quantum models based on variational circuits. These models gain their strengths by outsourcing nonlinearity into the process of encoding inputs into a quantum state or the quantum feature map. This combination of quantum computing with kernel theory will help in developing QML algorithms that offer potential quantum speedup on near-term quantum devices [10]. Of the various suggested ways to merge classical machine learning techniques with quantum computing, the method introduced by Henderson et al. in [11] offers several advantages. 
It can be implemented on quantum circuits with fewer qubits and shallow gate depths, yet it can be applied to more practical use cases. This method employs quantum circuits as transformation layers to extract features for image classification using convolutional neural networks (CNNs). The transformation layers are referred to as quanvolutional layers, and the method is referred as a quanvolutional neural network (QuanvNN) in this research article. A crucial query arose regarding whether the features generated by quanvolutional layers could enhance the classification accuracy of machine learning models. To investigate this, Henderson et al. have conducted a study where randomly generated quantum circuits were used to compare the classification accuracy of QuanvNN with a standard CNN. However, the findings did not demonstrate a clear advantage in classification accuracy over the classical model [11]. In a subsequent study [12], QuanvNN was updated, implemented on quantum hardware (Rigetti's Aspen-7-25Q-B quantum processing unit), and evaluated on a satellite imagery classification task. Nevertheless, the image classification accuracy of QuanvNN was not improved in comparison to that of a traditional CNN algorithm. The work of Mari [13] provided an implementation of QuanvNN on a software quantum computing simulator called PennyLane [14]. Their approach differs from that of Henderson et al. in that the output of the quantum circuit, which is a set of expectation values, is directly fed into the subsequent neural network (NN) layer, whereas Henderson et al. [11] transformed it into a single scalar value using a classical method. The proposed method was tested on the MNIST dataset [15], which consists of handwritten digits, using 50 training and 30 test images. However, no clear improvement in classification accuracy by QuanvNN over NN was shown in Mari's study. In our previous research [16], we extended Mari's QuanvNN by utilising a randomly generated quantum circuit with four qubits, 20 single axis rotations, and 10 controlled NOTs (CNOTs) to enhance image classification ac curacy when compared to a classical fully connected NN. Specifically, the extended QuanvNN approach improved the accuracy of MNIST and CIFAR-10 datasets (photographic 10 class image dataset [17]) from 92.0% to 93.0% and from 30.5% to 34.9%, respectively [16]. We also proposed a new model, neural network with quantum entanglement (NNQE), that incorporates a strongly entangled quantum circuit with four qubits, 20 three axis rotations, 20 CNOTs, and Hadamard gates. This model further increased image classification accuracy against MNIST and CIFAR-10 to 93.8% and 36.0%, respectively [16]. However, using QuanvNN or NNQE was found to degrade the image classification accuracy when applied to a more complicated German Traffic Sign Recognition Benchmark (GTSRB) dataset (43 class real-life traffic sign images [18]) in comparison with the classical NN accuracy from 82.2% to 71.9% (QuanvNN) and to 73.4% (NNQE) [16]. The concept of using a quantum circuit as a pre-processing filter for image classification tasks has been extended by the introduction of quantum pre-processing filter (QPF) by the authors in [19]. In [19], a much simplified quantum circuit, i.e. a four qubit quantum circuit with Y rotations for encoding and two CNOTs, was introduced. 
By applying the QPF approach, the results showed that the image classification accuracy based on MNIST and EMNIST (handwritten 47 class digits and letters [20]) datasets were improved against classical NN from 92.0% to 95.0% and from 68.9% to 75.8%, respectively. However, tests using the proposed QPF approach against GTSRB showed again a degradation in the classification accuracy from 81.4% to 77.1% [19]. In this study, we first extend the application of QPF using two CNOTs from multi-class classification to binary classification against all possible different pairs of image classes. For 10 classes, e.g. MNIST, the total number of pairs is \(10\times 9=90\). For 43 classes, e.g. GTSRB, the total number of pairs is \(43\times 42=1,806\). The proposed method achieves a higher image classification accuracy of 98.9% compared to 92.5% against MNIST using NN. The image classification accuracy was further improved to 99.2% by the application of QPF. While the image classification against GTSRB was improved from 81.4% to 93.5% using the proposed binary image classification method, the application of QPF degraded the image classification accuracy from 93.5% to 92.0%, similar to our previous results. We note that practical application of the proposed binary classification approach requires an additional categorisation method to extract training and testing images corresponding to the chosen classes from larger samples. This additional categorisation method is outside of the scope of the current study and is left for further study. In addition, we have applied the two CNOTs QPF to CIFAR-10 and EMNIST datasets and observed the binary image classification accuracy improvements from 71.2% to 76.1% and from 97.8% to 98.3%, respectively. Secondly, we apply QPF to cases using a smaller number of training and testing samples, i.e. 80 training samples and 20 testing samples per class. The use of a smaller number of samples is considered in application where faster training and testing is required. In order to derive statistically stable results, we conducted the experiment with 100 trials choosing randomly differ ent training and testing samples and averaged the results. The result showed that the application of QPF did not improve the image classification accuracy against MNIST and EMNIST but improved it against CIFAR-10 and GTSRB from 65.8% to 67.2% and from 90.5% to 91.8%, respectively. While the exact cause of this phenomenon is currently under investigation, this result is significant in understanding the effects of QPF in machine learning methods. In order to support our claims, we have made our source codes available at [https://github.com/hajimesuzuki999/qpf-bic](https://github.com/hajimesuzuki999/qpf-bic). The structure of this research paper is as follows: Section 2 outlines the methodology of our proposed model. Section 3 provides a detailed account of our experimental setup. Section 4 contains the results and discussion. Finally, in Section 5, we present our conclusions. ## 2 Methodology The architecture of QPF was first proposed in [19]. For the sake of completeness, we reproduce the description of QPF in this section. Figure 1 shows the architecture of the proposed QPF. The method assumes that the input image is a two-dimensional matrix with size \(m\)-by-\(m\), and the pixel value \(x\), follows \(0\leq x\leq 1\). An extension to a multi-channel pixel image is considered as straightforward. A section of size \(n\)-by-\(n\) is extracted from the input image. The proposed QPF uses \(n=2\). 
This \(2\times 2\) section of the input image is referred to as the QPF window. The outputs from the Y rotation gates are fed to the quantum circuit referred to as \(U\) in Figure 1. Measurements, referred to as \(M\) in Figure 1, are performed on the output of the quantum circuit \(U\). The structure of the quantum circuit \(U\) is further detailed in Figure 2. In [19], we conducted experiments with different CNOT arrangements (exploiting the quantum entanglement property of quantum mechanics) and found that the arrangement given in Figure 2 showed superior improvements in image classification accuracy. Figure 1: The architecture of the QPF model. The outputs from the measurement operations are given as expectation values between \(-1\) and \(1\), and form output features. We note that the total number of parameters in the input image \((m\times m)\) is the same as the total number of parameters in the output features \((4\times(m/2)\times(m/2))\). The output features are made into a one-dimensional vector by the flatten layer. The number of nodes of the output of the flattening layer is \(m\times m\). The nodes are fully connected by fully connected layer 1. The output of fully connected layer 2 has a number of nodes equal to the number of classes. ## 3 Experiment The proposed method has been implemented using MATLAB and Python. The Adam optimiser and a batch size of 128 have been used for training the network. Four datasets were utilised: MNIST, EMNIST, CIFAR-10 and GTSRB. The MNIST dataset comprises 60,000 training and 10,000 testing images of handwritten digits ranging from 0 to 9 [15]. Each image is of size 28 by 28 pixels. The original images are represented in grayscale with pixel values between 0 and 255, which are normalised by dividing them by 255. Figure 3 shows some examples of images from the MNIST dataset. The EMNIST dataset comprises 112,800 training and 18,800 test images of handwritten digits and letters making up 47 classes [20]. The image size and scaling are the same as for the MNIST dataset. The CIFAR-10 dataset comprises 50,000 training and 10,000 test images of 10 class photographic images [17]. The original images are in RGB colour, which were converted into grayscale between 0 and 255 and then scaled by dividing them by 255. The GTSRB dataset [18] comprises 34,799 training and 12,630 test images of 43 different classes of traffic signs. Figure 2: QPF with two CNOTs. These images are actual pictures of traffic signs captured under different conditions. The size of the original images varies between \(15\times 15\) and \(222\times 193\) pixels. However, in this experiment, all images have been scaled to a size of \(32\times 32\) pixels. The images in the dataset are initially in RGB colour format, but they were converted into grayscale, with pixel values ranging between 0 and 255. Then, the pixel values were scaled down by dividing them by 255 to normalise the data. Figure 4 provides some examples of images from the GTSRB dataset. Table 1 summarises the parameters of the four image datasets used in the experiment. \begin{table} \begin{tabular}{c|c c c c} & **MNIST** & **EMNIST** & **CIFAR-10** & **GTSRB** \\ \hline **Image size** & \(28\times 28\) & \(28\times 28\) & \(32\times 32\) & \(32\times 32\) \\ **Number of colour channels** & 1 & 1 & 3 & 3 \\ **Number of classes** & 10 & 47 & 10 & 43 \\ **Number of class pairs** & 90 & 2,162 & 90 & 1,806 \\ \hline \end{tabular} \end{table} Table 1: Parameters of image datasets used in the experiment. 
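Before turning to the results, the following PennyLane sketch illustrates how a single \(2\times 2\) QPF window could be encoded with Y rotations, entangled with two CNOTs, and measured to produce the four expectation-value features described in Section 2. The exact CNOT wiring of Figure 2 and the rotation-angle scaling are not reproduced in the text above, so the \(0\to 1\), \(2\to 3\) pairing and the \(\pi x\) encoding below are illustrative assumptions only.

```python
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def qpf_window(patch):
    # patch: four pixel values in [0, 1] taken from one 2x2 window.
    for w, x in enumerate(patch):
        qml.RY(np.pi * x, wires=w)        # Y-rotation encoding; angle scaling is illustrative
    qml.CNOT(wires=[0, 1])                # two CNOTs; wiring assumed, see lead-in
    qml.CNOT(wires=[2, 3])
    return [qml.expval(qml.PauliZ(w)) for w in range(4)]

def apply_qpf(image):
    # image: (m, m) array with m even; returns 4 x (m/2) x (m/2) output features.
    m = image.shape[0]
    feats = np.zeros((4, m // 2, m // 2))
    for i in range(0, m, 2):
        for j in range(0, m, 2):
            patch = image[i:i + 2, j:j + 2].reshape(-1)
            feats[:, i // 2, j // 2] = np.array(qpf_window(patch), dtype=float)
    return feats
```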
Figure 3: Example MNIST dataset images. Figure 4: Example GTSRB dataset images. ## 4 Results and Discussion First, we use all available training and testing samples to perform binary image classification against all different pairs of classes using NN. The results are shown in Figure 5 (a) for the MNIST dataset. In this graph, the testing accuracy for the given pair is shown by a different colour. For example, classifying the number 0 against 1 achieves close to 100% accuracy, as shown in light yellow. In comparison, we can observe that the testing accuracy for number 5 against 8 is poor, about 96% accuracy. This is due to the similarity in the shapes of the handwritten numbers 5 and 8. Additionally, other class pairs such as 3 and 5, 3 and 8, 4 and 9, and 7 and 9 also have similar shapes, leading to lower testing accuracy for those pairs. On average, the binary image classification using classical NN against MNIST achieved 98.9% testing accuracy using all data. Figure 5 (b) shows the results for QPF-NN against MNIST using all data. A similar result is obtained with an improved average image classification accuracy of 99.2%. When applied to EMNIST and CIFAR-10, QPF-NN improved the testing accuracy over NN from 97.8% to 98.3% and from 71.2% to 76.1%, respectively. Figure 6 (a) shows the NN results against GTSRB using all data. We can identify pairs of classes that produce high testing accuracy and those that do not. For example, class 7 shows lower testing accuracy against many of the classes between 20 and 43, compared to class 6 or 8, as indicated by the red oval. Referring to [18], Fig. 1, class 7 corresponds to the 80 km/h sign with a diagonal stripe. This class also has a smaller number of samples (approximately 1/3) compared to class 6 or 8. Detailed examination of this graph may provide further insights; however, we focus on the effects of the application of QPF, and hence this is left for a future study. The average testing accuracy over all different pairs is 93.5%. In comparison, Figure 6 (b) shows the QPF-NN results against GTSRB using all data. Similar results were obtained with a reduced average testing accuracy of 92.0%. Secondly, we performed 100 trials, with each trial extracting 80 training samples and 20 testing samples per class randomly to perform training and testing. Figure 7 shows the variation of testing accuracy as a function of the trial index when NN and QPF-NN are used against MNIST. We observe that the variation is relatively large (approximately 3%), which shows the importance of performing multiple trials and averaging the results to obtain statistically stable results. On average, testing accuracies of 94.7% and 94.5% were obtained for NN and QPF-NN, respectively. In this case, the application of QPF shows minimal effects. Similarly, we observed minimal effects of using QPF-NN over NN against EMNIST, with both achieving the same testing accuracy of 94.0%. Figure 8 shows the results against GTSRB. We observe that the variation is relatively small (approximately 1%), which may be due to the larger number of class pairs (1,806 for GTSRB compared to 90 for MNIST) over which the testing accuracy is averaged for each trial. Importantly, the application of QPF shows improvement over NN against GTSRB, which was not observed in any of our previous experiments. It is also notable that QPF-NN improved the testing accuracy over NN in every one of the 100 trials. Figure 5: Testing accuracy against MNIST using all data. Figure 6: Testing accuracy against GTSRB using all data. 
We note that the same set of training and testing samples was used for NN and QPF-NN for each trial. We have observed a similar result with CIFAR-10, with an improved test accuracy from 65.8% to 67.2%. A summary of the testing accuracy results is shown in Table 2. ## 5 Conclusion This study aimed to evaluate the performance of a proposed binary image classification method using a QPF model with 4 qubits and 2 CNOTs. In our previous research, we have shown that QPF enables efficient image feature extraction, while existing quantum circuits demand high computation and multiple layers to extract image features. Similar to the previously reported multi-class classification case, the proposed QPF model improved binary image classification accuracy against MNIST, EMNIST, and CIFAR-10, but we observed a slight decrease in the performance against GTSRB using all training and testing samples. \begin{table} \begin{tabular}{c|c c c c} & **MNIST** & **EMNIST** & **CIFAR-10** & **GTSRB** \\ \hline **All data, NN** & 98.9\% & 97.8\% & 71.2\% & 93.5\% \\ **All data, QPF-NN** & 99.2\% & 98.3\% & 76.1\% & 92.0\% \\ **100 samples, NN** & 94.7\% & 94.0\% & 65.8\% & 90.5\% \\ **100 samples, QPF-NN** & 94.5\% & 94.0\% & 67.2\% & 91.8\% \\ \hline \end{tabular} \end{table} Table 2: A summary of testing accuracy results. Figure 7: Testing accuracy against MNIST using 100 samples and 100 trials. However, when applied to the cases with a smaller number of training and testing samples, QPF improved image classification performance against CIFAR-10 and GTSRB, which shows better generalisation of our QPF model for a smaller number of samples compared to previous classical NN models, which mostly require a larger number of samples to generalise [21]. The results presented in this article provide further insights into the effects of QPF on machine learning algorithms. Further research will be conducted as part of future work to investigate the potential of QPF and to assess the scalability of the proposed approach to larger and more complex datasets. ## 6 Acknowledgment This research has been supported by the Australian Government Research Training Program and the Commonwealth Scientific and Industrial Research Organisation.
2306.02426
Resilient Constrained Learning
When deploying machine learning solutions, they must satisfy multiple requirements beyond accuracy, such as fairness, robustness, or safety. These requirements are imposed during training either implicitly, using penalties, or explicitly, using constrained optimization methods based on Lagrangian duality. Either way, specifying requirements is hindered by the presence of compromises and limited prior knowledge about the data. Furthermore, their impact on performance can often only be evaluated by actually solving the learning problem. This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task. To do so, it relaxes the learning constraints in a way that contemplates how much they affect the task at hand by balancing the performance gains obtained from the relaxation against a user-defined cost of that relaxation. We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation. We show conditions under which this balance can be achieved and introduce a practical algorithm to compute it, for which we derive approximation and generalization guarantees. We showcase the advantages of this resilient learning method in image classification tasks involving multiple potential invariances and in heterogeneous federated learning.
Ignacio Hounie, Alejandro Ribeiro, Luiz F. O. Chamon
2023-06-04T18:14:18Z
http://arxiv.org/abs/2306.02426v4
# Resilient Constrained Learning ###### Abstract When deploying machine learning solutions, they must satisfy multiple requirements beyond accuracy, such as fairness, robustness, or safety. These requirements are imposed during training either implicitly, using penalties, or explicitly, using constrained optimization methods based on Lagrangian duality. Either way, specifying requirements is hindered by the presence of compromises and limited prior knowledge about the data. Furthermore, their impact on performance can often only be evaluated by actually solving the learning problem. This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task. To do so, it relaxes the learning constraints in a way that contemplates how much they affect the task at hand by balancing the performance gains obtained from the relaxation against a user-defined cost of that relaxation. We call this approach _resilient constrained learning_ after the term used to describe ecological systems that adapt to disruptions by modifying their operation. We show conditions under which this balance can be achieved and introduce a practical algorithm to compute it, for which we derive approximation and generalization guarantees. We showcase the advantages of this resilient learning method in image classification tasks involving multiple potential invariances and in heterogeneous federated learning. ## 1 Introduction Requirements are integral to engineering and of growing interest in machine learning (ML) [1]. This growing interest is evident in, e.g., the advancement towards designing ML systems that are fair [2], robust [3], and safe [4], as well as numerous applications in which we want to attain good performance with respect to more than one metric [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. Two concrete applications that we will use as examples (Section 5) are _heterogeneous federated learning_, where each agent realizes a different loss due to distribution shifts (as in, e.g., [18; 19; 20]) and _invariant learning_, where we seek to achieve good performance even after the data has undergone a variety of transformations (as in, e.g., [21; 22; 23; 15]). The goal in these settings is to strike a compromise between some top-line objective metric and the requirements. To this end, an established approach is to combine the top-line and requirement violation metrics in a single training loss. This leads to _penalty methods_ that are ubiquitous in ML, as attested by, e.g., fairness [25; 26; 27; 28] and robustness [29; 30] applications. Another approach to balance objective and requirements is formulating and solving constrained learning problems. Though less typical in ML practice, they are not uncommon [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 21; 24]. In particular, they have also been used in fair [31; 32; 33; 34] and robust [12; 35] learning, to proceed with a common set of applications. It is worth noting that penalty and constrained methods are not unrelated. They are in fact equivalent in convex optimization, in the sense that every constrained problem has an equivalent penalty-based formulation. A similar result holds in non-convex ML settings for sufficiently expressive parametrizations [7; 8]. In either case, the compromise between objective and different requirements are specified by (hyper)parameters, be they penalty coefficients or constraint levels (Section 2). 
Finding penalties or constraints specifications that yield reasonable trade-offs is particularly difficult in ML, which often involves statistical requirements, such as fairness and robustness, that have intricate dependencies with the model and unknown data distributions. Case in point, consider invariant learning for image classification [23]. While we know that invariance to rotations and translations is desirable, we do not know _how much_ invariance to these transformations is beneficial. This depends on the level of invariance of the data distribution, its prevalence in the dataset, and the capability of the model to represent invariant functions. The standard solution to this problem involves time consuming and computationally expensive hyperparameter searches. This paper addresses this issue by automating the specification of constraint levels during training. To do so, it begins by interpreting constraints as nominal specifications that can be relaxed to find a better compromise between objective and requirements (Section 3). We call this approach _resilient constrained learning_ after the term used to describe ecological systems that adapt to disruptions by modifying their operation [36, 37]. Our first contribution is the following insight: 1. We relax constraints according to their relative difficulty, which we define as the sensitivity of the objective to perturbations of the constraint (Section 3.1). That difficult constraints should be relaxed more is a natural choice. The value of (C1) is in defining what is a difficult constraint. We then seek constraint levels such that the objective loss is relatively insensitive to changes in those levels. This relative insensitivity incorporates a user-defined cost that establishes a price for relaxing nominal specifications. The learning problem implied by (C1) seems challenging. Our next contribution is to show that it is not: 1. We use duality and perturbation theory to present reformulations of the resilient learning problem from (C1) (Section 3.2) that lead to a practical resilient learning algorithm (Section 4) for which we derive statistical approximation bounds (Thm. 1). Our final contribution is the experimental evaluation of the resilient learning algorithm: 1. We evaluate resilient formulations of federated learning and invariant learning (Section 5). Our experiments show that (C1)-(C2) effectively relaxes constraints according to their _difficulty_, leading to solutions that are _less sensitive_ to the requirement specifications. It illustrates how resilient learning constitutes an interpretable and flexible approach to designing requirements while contemplating performance trade-offs. ## 2 Learning with Constraints Let \(\mathcal{D}_{0}\) be a distribution over data pairs \((\mathbf{x},y)\) composed of the feature vector \(\mathbf{x}\in\mathcal{X}\subset\mathbb{R}^{d}\) and the corresponding output \(y\in\mathcal{Y}\subset\mathbb{R}\). Let \(f_{\theta}:\mathcal{X}\rightarrow\mathbb{R}^{k}\) be the function associated with parameters \(\theta\in\Theta\subset\mathbb{R}^{p}\) and \(\ell_{0}:\mathbb{R}^{k}\times\mathcal{Y}\rightarrow[-B,B]\) be the loss that evaluates the fitness of the estimate \(f_{\theta}(\mathbf{x})\) relative to \(y\). Let \(\mathcal{F}_{\theta}=\{f_{\theta}\mid\theta\in\Theta\}\) be the hypothesis class induced by these functions. 
Different from traditional (unconstrained) learning, we do not seek \(f_{\theta}\) that simply minimizes \(\mathbb{E}[\ell_{0}(\phi(\mathbf{x}),y)]\), but also account for its expected value with respect to additional losses \(\ell_{i}:\mathbb{R}^{k}\times\mathcal{Y}\rightarrow[-B,B]\) and distributions \(\mathcal{D}_{i}\), \(i=1,\ldots,m\). These losses/distributions typically encode statistical requirements, such as robustness (where \(\mathcal{D}_{i}\) denote distribution shifts or adversarial perturbations) and fairness (where \(\mathcal{D}_{i}\) are conditional distributions of protected subgroups). Explicitly, the constrained statistical learning (CSL) problem is defined as \[\begin{split}\mathsf{P}^{\star}=\min_{\theta\in\Theta}& \quad\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}_{0}}\Big{[}\ell_{0}\big{(}f_{ \theta}(\mathbf{x}),y\big{)}\Big{]}\\ \mathrm{subject\ to}&\quad\mathbb{E}_{(\mathbf{x},y) \sim\mathcal{D}_{i}}\Big{[}\ell_{i}\big{(}f_{\theta}(\mathbf{x}),y\big{)} \Big{]}\leq 0,\quad i=1,\ldots,m.\end{split}\] (P) Without loss of generality, we stipulate the nominal constraint specification to be zero. Other values can be achieved by offsetting \(\ell_{i}\), i.e., \(\mathbb{E}\big{[}\tilde{\ell}(f_{\theta}(\mathbf{x}),y)\big{]}\leq c\) is obtained using \(\ell_{i}(\cdot)=\tilde{\ell}(\cdot)-c\) in (P). We also let \(\mathsf{P}^{\star}\) take values on the extended real line \(\mathbb{R}\cup\{\infty\}\) by defining \(\mathsf{P}^{\star}=\infty\) whenever (P) is infeasible, i.e., whenever for all \(\theta\in\Theta\) there exists \(i\) such that \(\mathbb{E}_{\mathcal{D}_{i}}\big{[}\ell_{i}(f_{\theta}(\mathbf{x}),y)\big{]}>0\). A challenge in formulating meaningful CSL problems lies in specifying the constraints, i.e., the \(\ell_{i}\). Indeed, while a solution \(f_{\theta^{\star}}\) always exists for unconstrained learning [\(m=0\) in (P)], there may be no \(\theta\) that satisfies the constraints in (P). This issue is exacerbated when solving the problem using data: even arbitrarily good approximations of the expectations in (P) may introduce errors that hinder the estimation of \(\mathsf{P}^{\star}\) (see Appendix A for a concrete example). In practice, landing on feasible requirements may require some constraints to be relaxed relative to their initial specification. Then, in lieu of (P), we would use the relaxed problem \[\mathsf{P}^{\star}(\mathbf{u})=\min_{\theta\in\Theta} \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}_{0}}\Big{[}\ell_{0} \big{(}f_{\theta}(\mathbf{x}),y\big{)}\Big{]}\] ( \[\text{P}_{\mathbf{u}}\] ) \[\text{subject to} \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}_{i}}\Big{[}\ell_{i} \big{(}f_{\theta}(\mathbf{x}),y\big{)}\Big{]}\leq u_{i},\quad i=1,\dots,m,\] where \(\mathbf{u}\in\mathbb{R}^{m}_{+}\) collects the relaxations \(u_{i}\geq 0\). The value \(\mathsf{P}^{\star}(\mathbf{u})\) is known as the _perturbation function_ of (P) since it describes the effect of the relaxation \(\mathbf{u}\) on the optimal value. Given \(\mathsf{P}^{\star}(\mathbf{0})=\mathsf{P}^{\star}\) and (\(\mathbf{P}_{\mathbf{0}}\)) is equivalent to (P), this abuse of notation should not lead to confusion. It is ready that \(\mathsf{P}^{\star}(\mathbf{u})\) is a componentwise non-increasing function, i.e., that for comparable arguments \(v_{i}\leq w_{i}\) for all \(i\) (denoted \(\mathbf{v}\preceq\mathbf{w}\)), it holds that \(\mathsf{P}^{\star}(\mathbf{v})\geq\mathsf{P}^{\star}(\mathbf{w})\). 
However, large relaxations \(\mathbf{u}\) drive (\(\mathbf{P}_{\mathbf{u}}\)) away from (P), the learning problem of interest. Thus, relaxing (P) too much can be as detrimental as relaxing it too little (see example in Appendix A). The goal of this paper is to exploit properties of \(\mathsf{P}^{\star}(\mathbf{u})\) together with a relaxation cost to strike a balance between these conflicting objectives. We call this balance a _resilient_ version of (P). ## 3 Resilient Constrained Learning We formulate resilient constrained learning using a functional form of (\(\mathsf{P}_{\mathbf{u}}\)). Explicitly, consider a _convex_ function class \(\mathcal{F}\supseteq\mathcal{F}_{\theta}\) and define the relaxed functional problem \[\tilde{\mathsf{P}}^{\star}(\mathbf{u})=\min_{\phi\in\mathcal{F}} \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}_{0}}\Big{[}\ell_{0} \big{(}\phi(\mathbf{x}),y\big{)}\Big{]}\] ( \[\text{subject to} \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}_{i}}\Big{[}\ell_{i} \big{(}\phi(\mathbf{x}),y\big{)}\Big{]}\leq u_{i},\quad i=1,\dots,m.\] The difference between (\(\mathsf{P}_{\mathbf{u}}\)) and (\(\tilde{\mathsf{P}}_{\mathbf{u}}\)) is that the latter does not rely on a parametric model. Instead, its solutions take values on the convex space of functions \(\mathcal{F}\). Still, if \(\mathcal{F}_{\theta}\) is a sufficiently rich parameterization of \(\mathcal{F}\) (see Section 4 for details), then \(\mathsf{P}^{\star}(\mathbf{u})\) and \(\tilde{\mathsf{P}}^{\star}(\mathbf{u})\) are close (38, Sec. 3). Throughout the rest of the paper, we use \(\sim\) to identify these functional problems. The advantage of working with (\(\tilde{\mathsf{P}}_{\mathbf{u}}\)) is that under mild conditions the perturbation function \(\tilde{\mathsf{P}}^{\star}\) is convex. This holds readily if the losses \(\ell_{i}\) are convex (e.g., quadratic or cross-entropy) (39, chap. 5). However, the perturbation function is also convex for a variety of non-convex programs, e.g., (\(\tilde{\mathsf{P}}_{\mathbf{u}}\)) with non-convex losses, non-atomic \(\mathcal{D}_{i}\), and decomposable \(\mathcal{F}\) (see Appendix B). For conciseness, we encapsulate this hypothesis in an assumption. **Assumption 1**.: The perturbation function \(\tilde{\mathsf{P}}^{\star}(\mathbf{u})\) is a convex function of the relaxation \(\mathbf{u}\in\mathbb{R}^{m}_{+}\). A consequence of Assumption 1 is that the perturbation function \(\tilde{\mathsf{P}}^{\star}\) has a non-empty subdifferential at every point. Explicitly, let its subdifferential \(\partial\tilde{\mathsf{P}}^{\star}(\mathbf{u}^{o})\) at \(\mathbf{u}^{o}\in\mathbb{R}^{m}_{+}\) be defined as the set of vectors describing supporting hyperplanes at \(\mathbf{u}^{o}\) of the epigraph of \(\tilde{\mathsf{P}}^{\star}\), i.e., \[\partial\tilde{\mathsf{P}}^{\star}(\mathbf{u}^{o})=\Big{\{}\,\mathbf{p}\in \mathbb{R}^{m}_{+}\;\big{|}\;\tilde{\mathsf{P}}^{\star}(\mathbf{v})\geq\tilde{ \mathsf{P}}^{\star}(\mathbf{u})+\mathbf{p}^{T}(\mathbf{v}-\mathbf{u}),\,\text{ for all }\mathbf{v}\in\mathbb{R}^{m}_{+}\,\Big{\}}. \tag{1}\] The elements of \(\partial\tilde{\mathsf{P}}^{\star}(\mathbf{u}^{o})\) are called _subgradients_ of \(\tilde{\mathsf{P}}^{\star}\) at \(\mathbf{u}^{o}\). If \(\tilde{\mathsf{P}}^{\star}\) is differentiable at \(\mathbf{u}^{o}\), then it has a single subgradient that is equal to its gradient, i.e., \(\partial\tilde{\mathsf{P}}^{\star}(\mathbf{u}^{o})=\{\nabla\tilde{\mathsf{P}}^{ \star}(\mathbf{u}^{o})\}\). 
In general, however, the subdifferential is a non-singleton set (40). Further notice that since \(\tilde{\mathsf{P}}^{\star}(\mathbf{u})\) is componentwise nonpositive the subgradients are componentwise negative, \(p_{\mathbf{u}}\preceq\mathbf{0}\). Next, we use this property to formalize resilient constrained learning as a compromise between reducing \(\tilde{\mathsf{P}}^{\star}(\mathbf{u})\) by increasing \(\mathbf{u}\) and staying close to the original problem (\(\tilde{\mathsf{P}}_{\mathbf{0}}\)). ### Resilient Equilibrium Consider the effect of increasing the value of a specific relaxation \(u_{i}\) in (\(\mathsf{\tilde{P}_{u}}\)) while keeping the rest unchanged. The solution of (\(\mathsf{\tilde{P}_{u^{\prime}}}\)) is allowed to suffer higher losses on the constraints (\(\ell_{i}\) for \(i\geq 1\)), which may lead to a smaller objective loss (\(\ell_{0}\)). Hence, while larger relaxations are detrimental because they violate more the requirements, they are also beneficial because they reduce the objective loss. To balance these conflicting outcomes of constraint relaxations, we introduce a function \(h\) to capture their costs. Then, since relaxing both increases the costs and decreases the objective value, we conceptualize resilience as an equilibrium between these variations. **Definition 1** (Resilient Equilibrium).: Let \(h:\mathbb{R}^{m}_{+}\to\mathbb{R}_{+}\) be a convex, differentiable, normalized (i.e., \(h(\mathbf{0})=0\)), and componentwise increasing (i.e., \(h(\mathbf{v})<h(\mathbf{w})\) for \(\mathbf{v}\prec\mathbf{w}\)) function. A resilient equilibrium of (\(\mathsf{\tilde{P}_{u}}\)) is a relaxation \(\mathbf{u}^{\star}\) satisfying \[\nabla h(\mathbf{u}^{\star})\in-\partial\mathsf{\tilde{P}^{\star}}(\mathbf{ u}^{\star}). \tag{2}\] The resilient constrained learning problem amounts to solving (\(\mathsf{\tilde{P}_{u^{\star}}}\)), i.e., solving (\(\mathsf{\tilde{P}_{u}}\)) for a relaxation \(\mathbf{u}^{\star}\) that satisfies (2). We call this equilibrium _resilient_ because it describes how far (\(\mathsf{\tilde{P}_{0}}\)) can be relaxed before we start seeing diminishing returns. Indeed, \(\mathbf{u}^{\star}\) from (2) is such that relaxing by an additional \(\boldsymbol{\epsilon}\succ\mathbf{0}\) would incur in a relaxation cost at least \(\nabla h(\mathbf{u}^{\star})^{T}\boldsymbol{\epsilon}\) larger, whereas tightening to \(\mathbf{u}^{\star}-\boldsymbol{\epsilon}\) would incur in an optimal value increase of at least the same \(\nabla h(\mathbf{u}^{\star})^{T}\boldsymbol{\epsilon}\). Notice that resilient constrained learning is defined in terms of _sensitivity_. Indeed, the resilient equilibrium in Def. (1) specifies a learning task that is as sensitive to changes in its requirements as it is sensitive to changes in the relaxation cost. This has the marked advantage of being invariant to constant translations of \(\ell_{0}\), as is also the case for solutions of (\(\mathsf{\tilde{P}_{u}}\)). Sensitivity also measures the difficulty of satisfying a constraint, since \(\partial\mathsf{\tilde{P}^{\star}}(\mathbf{u})\) quantifies the impact of each constraint specification on the objective loss. Hence, the equilibrium in (2) has the desirable characteristic of affecting stringent requirements more. Two important properties of the equilibrium in Def. 1 are summarized next (proofs are provided in appendices D.1 and D.2). **Proposition 1**.: _Under Ass. 1, the resilient equilibrium (2) exists. 
If \(h\) is strictly convex, it is unique._ **Proposition 2**.: _Let \(\mathbf{v},\mathbf{w}\in\mathbb{R}^{m}_{+}\) be such that \([\mathbf{v}]_{i}=[\mathbf{w}]_{i}\), for \(i\neq j\), and \([\mathbf{v}]_{j}<[\mathbf{w}]_{j}\). Under Ass. 1, (i) \([\nabla h(\mathbf{v})]_{j}\leq[\nabla h(\mathbf{w})]_{j}\) and (ii) \([-\mathbf{p}_{v}]_{j}\geq[-\mathbf{p}_{w}]_{j}\) for all \(\mathbf{p}_{v}\in\partial\tilde{\mathsf{P}}^{\star}(\mathbf{v})\) and \(\mathbf{p}_{w}\in\partial\tilde{\mathsf{P}}^{\star}(\mathbf{w})\)._ Prop. 1 shows that the equilibrium in (2) is well-posed. Prop. 2 states that, all things being equal, relaxing the \(j\)-th constraint increases the sensitivity of the cost \(h\) to it, while simultaneously decreasing its effect on the objective value \(\tilde{\mathsf{P}}^{\star}\). To illustrate these points better, Fig. 1 considers prototypical learning problems with a single constraint [\(m=1\) in (\(\tilde{\mathsf{P}}_{\mathbf{u}}\))], differentiable \(\tilde{\mathsf{P}}^{\star}(u)\), and relaxation cost \(h(u)=u^{2}/2\). According to (2), the resilient relaxation is obtained at \(h^{\prime}(u^{\star})=u^{\star}=-\tilde{\mathsf{P}}^{\star\prime}(u^{\star})\), where we let \(g^{\prime}(u)=dg(u)/du\) denote the derivative of the function \(g\). As per Prop. 2, \(h^{\prime}\) is increasing and \(-\tilde{\mathsf{P}}^{\star\prime}\) is decreasing. Further observe that the sensitivity \(-\tilde{\mathsf{P}}^{\star\prime}(u)\) diverges as \(u\) approaches the value that makes the problem infeasible and vanishes as the constraint is relaxed. These two curves must therefore intersect, as claimed by Prop. 1. Figure 1: Resilient equilibrium from Def. 1 for \(h(u)=u^{2}/2\). The shaded area indicates infeasible specifications: (a) nominal specification (\(u=0\)) is feasible and easy to satisfy; (b) nominal specification is feasible but difficult to satisfy (close to infeasible); (c) nominal specification is infeasible. The illustrations in Fig. 1 represent progressively more sensitive/difficult problems. In Fig. 1(a), the nominal problem (\(u=0\)) is easy to solve (small \(-\tilde{\mathsf{P}}^{\star\prime}(0)\)), making the resilient equilibrium \(u_{a}^{*}\approx 0\). The original problem and the relaxed problem are essentially equivalent. In Fig. 1(b), the nominal problem is difficult to solve (large \(-\tilde{\mathsf{P}}^{\star\prime}(0)\)), inducing a significant change in the constraint and objective losses. In Fig. 1(c), the nominal problem is unsolvable, but the resilient relaxation recovers a feasible problem. Having motivated the resilient compromise in Def. 1 and proven that it is well-posed, we proceed to obtain equivalent formulations that show it is also computationally tractable. These formulations are used to show traditional learning tasks that can be seen as resilient learning problems (Sec. 3.3), before deriving a practical algorithm to tackle the resilient learning problem (\(\tilde{\mathsf{P}}_{\mathbf{u}^{\star}}\)) (Sec. 4). ### Equivalent Formulations While we have shown that the resilient relaxation from Def. 1 exists (Prop. 1), the equilibrium in (2) does not provide a straightforward way to compute it. In this section, we show two more computationally amenable reformulations of Def. 1 by relating \(\mathbf{u}^{\star}\) to the Lagrange multipliers of (\(\tilde{\mathsf{P}}_{\mathbf{u}}\)) and to the solution of a related optimization problem.
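Before turning to these reformulations, the following minimal sketch makes the equilibrium of Def. 1 concrete in the single-constraint setting of Fig. 1 with \(h(u)=u^{2}/2\). The perturbation function used here is a synthetic stand-in chosen only to be convex and non-increasing; it does not arise from an actual learning problem.

```python
from scipy.optimize import brentq

# Synthetic stand-in for the perturbation function: P(u) = (1 - u)^2 on [0, 1]
# and P(u) = 0 afterwards, which is convex and non-increasing.
def dP(u):                      # derivative of the stand-in perturbation function
    return -2.0 * max(0.0, 1.0 - u)

def dh(u):                      # derivative of the relaxation cost h(u) = u^2 / 2
    return u

# Resilient equilibrium (Def. 1): h'(u*) = -P'(u*), i.e., dh(u) + dP(u) = 0.
u_star = brentq(lambda u: dh(u) + dP(u), 0.0, 10.0)
print(u_star)                   # 2/3 for this toy choice, since u = 2(1 - u)
```

For this toy choice the equilibrium can also be read off analytically, \(u^{\star}=2/3\), which is the value the root finder returns.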
Let \(\boldsymbol{\lambda}\in\mathbb{R}_{+}^{m}\) collect multipliers \(\lambda_{i}\) associated to the \(i\)-th constraint of (\(\tilde{\mathsf{P}}_{\mathbf{u}}\)) and define the Lagrangian \[\mathcal{L}(\phi,\boldsymbol{\lambda};\mathbf{u})=\mathbb{E}_{\mathcal{D}_{0}} \big{[}\ell_{0}\big{(}\phi(\mathbf{x}),y\big{)}\big{]}+\sum_{i=1}^{m}\lambda_{i }\Big{[}\mathbb{E}_{\mathcal{D}_{i}}\big{[}\ell_{i}\big{(}\phi(\mathbf{x}),y \big{)}\big{]}-u_{i}\Big{]}. \tag{3}\] Based on (3), define dual functions \(g(\boldsymbol{\lambda};\mathbf{u})\) and dual problems \(\tilde{\mathsf{D}}^{\star}(\mathbf{u})\) for given constraint level \(\mathbf{u}\), \[\tilde{\mathsf{D}}^{\star}(\mathbf{u})\ =\ \max_{\boldsymbol{\lambda}\succeq \mathbf{0}}\ g(\boldsymbol{\lambda};\mathbf{u})\ =\ \max_{\boldsymbol{\lambda}\succeq \mathbf{0}}\ \min_{\phi\in\mathcal{F}}\ \mathcal{L}(\phi,\boldsymbol{\lambda};\mathbf{u}).\] ( \[\tilde{\mathsf{D}}_{\mathbf{u}}\] ) While \(\tilde{\mathsf{D}}^{\star}\leq\tilde{\mathsf{P}}^{\star}\) (weak duality) in general, there are cases in which \(\tilde{\mathsf{D}}^{\star}=\tilde{\mathsf{P}}^{\star}\) (strong duality), e.g., in convex optimization. The constrained (\(\tilde{\mathsf{P}}_{\mathbf{u}}\)) is then essentially equivalent to (\(\tilde{\mathsf{D}}_{\mathbf{u}}\)) that can be tackled by solving an unconstrained, penalized problem (minimizing (3) with respect to \(\phi\) and \(\mathbf{u}\)), while adapting the weights \(\lambda_{i}\) of the penalties (maximizing (3) with respect to \(\boldsymbol{\lambda}\)). This is, in fact, the basis of operation of primal-dual constrained optimization algorithms (Stein **Proposition 4**.: _A relaxation \(\mathbf{u}^{\star}\) satisfies the resilient equilibrium (2) if and only if it is a solution of_ \[\begin{split}\tilde{\mathsf{P}}^{\star}_{\mathsf{R}}& =\min_{\phi\in\mathcal{F},\,\mathbf{u}\in\mathbb{R}^{m}_{+}}& \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}_{0}}\Big{[}\ell_{0} \big{(}\phi(\mathbf{x}),y\big{)}\Big{]}+h(\mathbf{u})\\ &\mathrm{subject\ to}&\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}_{i}}\Big{[}\ell_{i}\big{(}\phi(\mathbf{x}),y\big{)}\Big{]}\leq u_{ i},\quad i=1,\ldots,m.\end{split}\] ( \[\tilde{\mathsf{P}}\] -RES ) _The corresponding minimizer \(\phi^{\star}\) is a resilient solution of the functional learning problem (\(\tilde{\mathsf{P}}_{\mathbf{u}}\))._ Prop. 4 shows that it is possible to simultaneously find a resilient relaxation \(\mathbf{u}^{\star}\) and solve the corresponding resilient learning problem. Indeed, a resilient solution of a constrained learning problem can be obtained by incorporating the relaxation cost in its objective. This is reminiscent of first-phase solvers found in interior-point methods used to tackle convex optimization problems [39, Chap. 11]. Note, once again, that this is not the same as directly adding the constraints in the objective as penalties or regularizations. Indeed, recall from Def. 1 that the resilient relaxation balances the marginal effects of \(h\) on \(\tilde{\mathsf{P}}^{\star}\) and not \(\ell_{0}\). Before using Prop. 4 to introduce a practical resilient learning algorithm and its approximation and generalization properties (Sec. 4), we use these equivalent formulations to relate resilient learning to classical learning tasks. ### Relation to Classical Learning Tasks **(Un)constrained learning**: Both traditional unconstrained and constrained learning can be seen as limiting cases of resilient learning. 
Indeed, if \(h\equiv 0\), \(\mathbf{u}\) has no effect on the objective of (\(\tilde{\mathsf{P}}\)-RES). We can then take \(u_{i}=B\) for \(i=1,\ldots,m\), which reduces (\(\tilde{\mathsf{P}}\)-RES) to an unconstrained learning problem (recall that all losses are \([-B,B]\)-valued). On the other hand, if \(h\) is the indicator function of the non-negative orthant (i.e., \(h(\mathbf{u})=\mathbf{0}\) for \(\mathbf{u}\preceq\mathbf{0}\) and \(h(\mathbf{u})=\infty\), otherwise), then it must be that \(\mathbf{u}=\mathbf{0}\) in (\(\tilde{\mathsf{P}}\)-RES) as long as this specification is feasible. Neither of these relaxation costs satisfy the conditions from Def. 1, since they are not componentwise increasing or differentiable, respectively. Still, there exists valid relaxation costs that approximate these problems arbitrarily well (see Appendix D.5). **Penalty-based methods**: Rather than the constrained formulations in (\(\mathsf{P}\)) or (\(\tilde{\mathsf{P}}_{\mathbf{u}}\)), requirements are often incorporated directly into the objective of learning tasks using fixed penalties as in \[\begin{split}\operatorname*{minimize}_{\phi\in\mathcal{F}}& \ \mathbb{E}_{\mathcal{D}_{0}}\big{[}\ell_{0}\big{(}\phi(\mathbf{x}),y\big{)} \big{]}+\sum_{i=1}^{m}\gamma_{i}\mathbb{E}_{\mathcal{D}_{i}}\big{[}\ell_{i} \big{(}\phi(\mathbf{x}),y\big{)}\big{]},\end{split} \tag{4}\] where the fixed \(\gamma_{i}>0\) represent the relative importance of the requirements. It is immediate from Prop. 3 that resilient learning with a linear relaxation cost \(h(\mathbf{u})=\sum_{i}\gamma_{i}u_{i}\) is equivalent to (4) as long as \(\mathbb{E}_{\mathcal{D}_{i}}\big{[}\ell_{i}\big{(}\phi^{\star}(\mathbf{x}),y \big{)}\big{]}\geq 0\). From Def. 1, this is the same as fixing the marginal effect of relaxations on the perturbation function. **Soft-margin SVM**: Linear relaxation costs are also found in soft-margin SVM formulations, namely \[\begin{split}\operatorname*{minimize}_{\theta\in\Theta,\, \mathbf{u}\in\mathbb{R}^{m}_{+}}&\frac{1}{2}\left\|\theta\right\| ^{2}+\gamma\sum_{i=1}^{m}u_{i}\\ \mathrm{subject\ to}& 1-y_{i}\theta^{T}\mathbf{x}_{i} \leq u_{i},\quad i=1,\ldots,m.\end{split}\] (PI) Though written here in its parametrized form, the hypothesis class underlying (PI) (namely, linear classifiers) is convex. It can therefore be seen as an instance of (\(\tilde{\mathsf{P}}\)-RES) where each loss \(\ell_{i}\) represent a classification requirement on an individual sample. Soft-margin SVM is therefore a resilient learning problem as opposed to its hard-margin version, where \(u_{i}=0\) for \(i=1,\ldots,m\). ## 4 Resilient Constrained Learning Algorithm We defined the resilient equilibrium \(\mathbf{u}^{\star}\) in (2) and the equivalent resilient learning problems (\(\tilde{\mathsf{P}}_{\mathbf{u}^{\star}}\)) and (\(\tilde{\mathsf{P}}\)-RES) in the context of the convex functional space \(\mathcal{F}\). Contrary to (\(\mathsf{P}_{\mathbf{u}}\)), that are defined on the finite dimensional \(\mathcal{F}_{\theta}\), these problems are not amenable to numerical solutions. Nevertheless, we have argued that as long as \(\mathcal{F}_{\theta}\) is a good approximation of \(\mathcal{F}\), the values of (\(\mathsf{P}_{\mathbf{u}}\)) and (\(\tilde{\mathsf{P}}_{\mathbf{u}}\)) are close. We use this idea to obtain a practical primal-dual algorithm (Alg. 1) to approximate solutions of (\(\hat{\text{P}}\)-RES) (and thus (\(\hat{\text{P}}_{\mathbf{u}^{*}}\))). The main result of this section (Thm. 
1) establishes how good this approximation can be when using only samples from the \(\mathcal{D}_{i}\). Explicitly, consider a set of \(N\) i.i.d. sample pairs \((\mathbf{x}_{n,i},y_{n,i})\) drawn from \(\mathcal{D}_{i}\) and define the parametrized, empirical Lagrangian of the resilient learning problem (\(\hat{\text{P}}\)-RES) as \[\hat{L}_{\theta}(\theta,\boldsymbol{\lambda};\mathbf{u})=h(\mathbf{u})+\frac{ 1}{N}\sum_{n=1}^{N}\ell_{0}\big{(}f_{\theta}(\mathbf{x}_{n,0}),y_{n,0}\big{)}+ \sum_{i=1}^{m}\lambda_{i}\bigg{(}\frac{1}{N}\sum_{n=1}^{N}\ell_{i}\big{(}f_{ \theta}(\mathbf{x}_{n,i}),y_{n,i}\big{)}-u_{i}\bigg{)}. \tag{5}\] The parametrized, empirical dual problem of (\(\hat{\text{P}}\)-RES) is then given by \[\hat{\text{D}}_{\mathbf{R}}^{\star}=\max_{\boldsymbol{\lambda}\in\mathbb{R}_{ +}^{m}}\;\min_{\theta\in\Theta,\mathbf{u}\in\mathbb{R}_{+}^{m}}\;\hat{L}_{ \theta}(\theta,\boldsymbol{\lambda};\mathbf{u}).\] ( \[\hat{\text{D}}\] -RES ) The gap between \(\hat{\text{D}}_{\mathbf{R}}^{\star}\) and the optimal value \(\tilde{\text{P}}_{\mathbf{R}}^{\star}\) of the original problem (\(\text{P}_{\mathbf{u}}\)) can be bounded under the following assumptions. **Assumption 3**.: The loss functions \(\ell_{i}\), \(i=0\ldots m\), are \(M\)-Lipschitz continuous. **Assumption 4**.: For every \(\phi\in\mathcal{F}\), there exists \(\theta^{\dagger}\in\Theta\) such that \(\mathbb{E}_{\mathcal{D}_{i}}\big{[}\lvert\phi(\mathbf{x})-f_{\theta^{\dagger} }(\mathbf{x})\rvert\big{]}\leq\nu\), for all \(i=0,\ldots,m\). **Assumption 5**.: There exists \(\xi(N,\delta)\geq 0\) such that for all \(i=0,\ldots,m\) and all \(\theta\in\Theta\), \[\bigg{\lvert}\mathbb{E}_{\mathcal{D}_{i}}\big{[}\ell_{i}(\phi(\mathbf{x}),y) \big{]}-\frac{1}{N}\sum_{n=1}^{N}\ell_{i}\big{(}\phi(\mathbf{x}_{n,i}),y_{n,i} \big{)}\bigg{\rvert}\leq\xi(N,\delta)\] with probability \(1-\delta\) over draws of \(\{(\mathbf{x}_{n,i},y_{n,i})\}\). Although the parameterized functional space \(\mathcal{F}_{\theta}\) can be non-convex, as is the case for neural networks, Ass. 4 requires that it is rich in the sense that the distance to its convex hull is bounded. When \(\xi\) exists for all \(\delta>0\) and is a decreasing function of \(N\), Ass. (5) describes the familiar uniform convergence from learning theory. Such generalization bounds can be derived based on bounded VC dimension, Rademacher complexity, or algorithmic stability, to name a few [41]. We can now state the main result of this section, whose proof is provided in Appendix C. **Theorem 1**.: _Consider \(\tilde{\text{P}}_{\mathbf{R}}^{\star}\) and \(\mathbf{u}^{\star}\) from (\(\hat{\text{P}}\)-RES) and \(\hat{\text{D}}_{\mathbf{R}}^{\star}\) from (\(\hat{\text{D}}\)-RES). Under Ass. 1-5, it holds with probability of \(1-(3m+2)\delta\) that_ \[\big{\lvert}\tilde{\text{P}}_{\mathbf{R}}^{\star}-\hat{\text{D}}_{\mathbf{R}} \big{\rvert}\leq h(\mathbf{u}^{\star}+\mathds{1}\cdot M\nu)-h(\mathbf{u}^{ \star})+M\nu+(1+\Delta)\xi(N,\delta). \tag{6}\] Thm. 1 suggests that resilient learning problems can be tackled using (\(\hat{\text{D}}\)-RES), which can be solved using saddle point methods, as in Alg. 1. Even if the inner minimization problem is non-convex, dual ascent methods can be shown to converge as long as its solution can be well approximated using stochastic gradient descent [38, Thm. 2], as is often the case for overparametrized NNs [42, 43, 44]. A more detailed discussion on the algorithm can be found in Appendix E. 
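For intuition, the following is a minimal sketch of the kind of primal-dual update suggested by (\(\hat{\text{D}}\)-RES); it is not Alg. 1 itself. It assumes a single constraint, a linear model with squared loss, a quadratic relaxation cost \(h(u)=\alpha u^{2}\), and plain projected (sub)gradient steps; the data and hyperparameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder samples standing in for D_0 (objective) and D_1 (constraint).
X0, y0 = rng.normal(size=(200, 5)), rng.normal(size=200)
X1, y1 = rng.normal(size=(200, 5)) + 1.0, rng.normal(size=200)

def loss_and_grad(theta, X, y):
    r = X @ theta - y
    return np.mean(r ** 2), 2.0 * X.T @ r / len(y)

alpha, eps = 1.0, 0.1          # relaxation-cost weight and nominal constraint level
eta_p, eta_d = 1e-2, 1e-2      # primal and dual step sizes

theta, u, lam = np.zeros(5), 0.0, 0.0
for _ in range(2000):
    f0, g0 = loss_and_grad(theta, X0, y0)
    f1, g1 = loss_and_grad(theta, X1, y1)
    # Primal descent on the empirical Lagrangian (5): model parameters and relaxation u.
    theta -= eta_p * (g0 + lam * g1)
    u = max(0.0, u - eta_p * (2.0 * alpha * u - lam))   # gradient of h(u) - lam * u
    # Dual ascent on the multiplier of the relaxed constraint  f1 - eps <= u.
    lam = max(0.0, lam + eta_d * (f1 - eps - u))
```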
Next, we showcase its performance and our theoretical results in two learning tasks. ## 5 Numerical Experiments We investigate the numerical properties of resilient constrained learning. As illustrative case studies, we consider federated learning under class imbalance in this section and invariance constrained learning in Appendix G. ### Heterogeneous Federated Learning Federated learning [45] entails learning a common model in a distributed manner by leveraging data samples from different _clients_. Usually, average performance across all clients is optimized under the assumption that data from different clients is identically distributed. In practice, heterogeneity in local data distributions can lead to uneven performance across clients [46, 47]. Since this may be undesirable, a sensible requirement in this setting is that the loss of the model is _similar_ for all clients. Let \(\mathfrak{D}_{i}\) be the distribution of data pairs for client \(i\), and \(R_{i}(f_{\mathbf{\theta}})=\mathbb{E}_{(\mathbf{x},y)\sim\mathfrak{D}_{i}}\left[\ell(f_{\mathbf{\theta}}(\mathbf{x}),y)\right]\) its statistical risk. We denote the average performance as \(\overline{R}(f_{\mathbf{\theta}}):=(1/C)\sum_{i=1}^{C}R_{i}(f_{\mathbf{\theta}})\), where \(C\) is the number of clients. As proposed in [18], heterogeneity issues can be tackled by imposing a proximity constraint between the performance of each client \(R_{i}\) and the loss averaged over all clients \(\overline{R}\). This leads to the constrained learning problem: \[\begin{split}\min_{\mathbf{\theta}\in\Theta}&\quad\overline{R}(f_{\mathbf{\theta}})\\ \mathrm{s.\ to}&\quad R_{i}(f_{\mathbf{\theta}})-\overline{R}(f_{\mathbf{\theta}})-\epsilon\leq 0,\qquad i=1,\dots,C,\end{split}\] (\(P\)-FL) where \(\epsilon\) is a small (fixed) positive scalar. It is easy to see that this problem is of the form of (P); see Appendix F. As shown in [18], this problem can be solved via a primal-dual approach in a privacy-preserving manner and with a negligible communication overhead, which amounts to communicating dual variables. However, because the heterogeneity of the data distribution across clients is unknown a priori, it can be challenging to specify a single constraint level for all clients that results in a reasonable trade-off between overall performance and differences among clients. In addition, the performance of all clients may be significantly reduced by a few clients for which the constraint is hard to satisfy. The heuristic adopted in [18] is to clip dual variables that exceed a fixed value. We thus propose to solve a resilient version of (\(P\)-FL) by adding a relaxation \(\mathbf{u}_{i},i=1,\dots,C\), for each constraint and a quadratic relaxation cost \(h(\mathbf{u})=\alpha\|\mathbf{u}\|_{2}^{2}\). The resilient version of problem (\(P\)-FL) can also be solved in a privacy-preserving manner as long as \(h(\mathbf{u})\) is separable in the constraint perturbations \(\mathbf{u}_{i}\); in that case, \(\mathbf{u}_{i}\) can be updated locally by client \(i\). Following the setup of [18], heterogeneity across clients is generated through class imbalance. More specifically, samples from different classes are distributed among clients using a Dirichlet distribution. Experimental and algorithmic details, along with privacy considerations, are presented in Appendix F.
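Since the quadratic cost is separable, each client's relaxation can be updated without sharing data. The following sketch (hypothetical variable names, one gradient step) illustrates the local computation; it is not the exact update rule used in the experiments.

```python
def local_resilience_step(u_i, lam_i, alpha=1.0, eta=1e-2):
    """One local step for client i on its relaxation u_i.

    With h(u) = alpha * ||u||_2^2 separable across clients, the only term of the
    Lagrangian involving u_i is alpha * u_i**2 - lam_i * u_i, so the update needs
    nothing but the client's own multiplier lam_i.
    """
    grad = 2.0 * alpha * u_i - lam_i
    return max(0.0, u_i - eta * grad)
```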
### Results **Constraint Relaxation and Relative difficulty:** Our approach relaxes constraints according to their relative difficulty. In the context of class imbalance, the majority classes exert a stronger influence on the overall objective (average performance). Therefore, superior overall performance can be attained by minimizing losses in the majority classes more than in the minority classes. As a result, meeting the proximity constraint for clients with a higher fraction of training samples in the minority classes can be more costly in terms of overall performance. In Figure 2(left), we demonstrate that the constraint is effectively relaxed more for these clients. **Controlling the performance vs. relaxation trade-off:** Through the choice of the relaxation cost function, we can control the trade-off between relaxing requirements and performance. To illustrate this, we perform an ablation on the coefficient \(\alpha\) in the quadratic relaxation cost \(h(\mathbf{u})=\alpha\|\mathbf{u}\|_{2}^{2}\). As shown in Figure 2(right), smaller values of \(\alpha\) enable better performance at the cost of larger relaxations. As \(\alpha\) increases, the perturbation goes to zero and the problem approaches the original constrained problem. In this manner, the resilient approach enables navigating this trade-off by changing a single hyperparameter. Still, the optimal relaxation for each client is determined by its local data distribution, i.e., the relative difficulty of the constraint. **Sensitivity to Problem Specification:** The resilient approach is less sensitive to the specification of the constraints. Dual variables indicate the sensitivity of the objective with respect to constraint perturbations. As shown in Figure 3(left), the resilient approach yields smaller dual variables, irrespective of the tolerance \(\epsilon\) in the constraint specification. The same holds across different levels of heterogeneity among clients, as shown in Appendix F.4. **Constraint violation and Generalization:** Relaxing stringent requirements not only makes the empirical problem easier to solve, but it can also lead to a better empirical approximation of the underlying statistical problem. As shown in Figure 3(right), the resilient approach has a smaller fraction of clients that are infeasible at the end of training. In addition, when constraints are evaluated on the test set, larger constraint violations are observed for some clients in the constrained approach. In Appendix F.4 we show that this holds across different problem settings, and that it can be attributed partly to the fact that larger generalization gaps for the constraints were observed for the constrained approach. That is, overly stringent requirements can result not only in large dual variables, but can also harm generalization. ## 6 Conclusion This paper introduced a method to specify learning constraints by balancing the marginal decrease in the objective value obtained from relaxation with the marginal increase in a relaxation cost. This resilient equilibrium has the effect of prioritizing the relaxation of constraints that are harder to satisfy. The paper also determined conditions under which this equilibrium exists and provided an algorithm to automatically find it during training, for which approximation and statistical guarantees were derived. Experimental validations showcased the advantages of resilient constrained learning for classification with invariance requirements and federated learning.
Future work includes exploring different relaxation costs and applications to robustness against disturbances and outliers. Figure 3: (Left) Dual variables after training with respect to constraint specification \(\epsilon\). (Right) Constraint violations on train and test sets for \(\epsilon=0.002\). Both plots correspond to 200 clients. Figure 2: (Left) Constraint relaxation and relative difficulty for federated learning under heterogeneous class imbalance across clients (crosses). We plot the perturbation \(\mathbf{u}_{i}\) against the fraction of data from minority classes for each client, which is associated with how difficult it is to satisfy the constraint, since minority classes typically have higher loss. (Right) Relaxation cost parameter (\(h(\mathbf{u})=\alpha\|\mathbf{u}\|^{2}\)) vs. final training loss and perturbation norm.
2310.01811
The Laplacian spectral moments of power hypergraphs
The $d$-th order Laplacian spectral moment of a $k$-uniform hypergraph is the sum of the $d$-th powers of all eigenvalues of its Laplacian tensor. In this paper, we obtain some expressions of the Laplacian spectral moments for $k$-uniform power hypergraphs, and these expressions can be represented by some parameters of graphs. And we show that some graphs can be determined by their high-order Laplacian spectrum by using the Laplacian spectral moments of power hypergraphs.
Jueru Liu, Lixiang Chen, Changjiang Bu
2023-10-03T05:54:53Z
http://arxiv.org/abs/2310.01811v1
# The Laplacian spectral moments of power hypergraphs ###### Abstract The \(d\)-th order Laplacian spectral moment of a \(k\)-uniform hypergraph is the sum of the \(d\)-th powers of all eigenvalues of its Laplacian tensor. In this paper, we obtain some expressions of the Laplacian spectral moments for \(k\)-uniform power hypergraphs, and these expressions can be represented by some parameters of graphs. And we show that some graphs can be determined by their high-order Laplacian spectrum by using the Laplacian spectral moments of power hypergraphs. keywords: hypergraph, spectral moment, Laplacian tensor, trace. _AMS classification (2020):_ 05C50, 05C65, 15A69. ## 1 Introduction For a \(k\)-uniform hypergraph \(\mathcal{H}\), the \(d\)-th order (Laplacian) spectral moment of \(\mathcal{H}\) is equal to the sum of the \(d\)-th powers of all eigenvalues of its adjacency (Laplacian) tensor. Since the \(d\)-th order trace of a tensor is equal to the sum of the \(d\)-th powers of its all eigenvalues [1], the \(d\)-th order (Laplacian) spectral moment of \(\mathcal{H}\) is equal to the \(d\)-th order trace of its adjacency (Laplacian) tensor. In 2013, Shao et al. [2] gave a formula for the trace of tensors in terms of some graph parameters. The coefficients of characteristic polynomial and topological index of hypergraphs can be studied by spectral moments of hypergraphs [3; 4; 5; 6]. In 2021, Clark and Cooper [4] expressed the spectral moments of hypergraph by the number of Veblen multi-hypergraphs and obtained the Harary-Sachs coefficient theorem for hypergraph. A formula for the spectral moment of a hypertree was given in terms of the number of some subhypertrees [5], and some high-order cospectral invariants of trees were given by the spectral moments of hypertrees [7]. In [6], the Estrada index and subgraph centrality of uniform hypergraphs were studied, which are closely related to the traces of the adjacency tensor. For Laplacian spectral moments of hypergraphs, the expressions of the first \(k\) orders traces of the Laplacian tensors were given by the degree sequence of \(k\)-uniform hypergraphs [3]. And an expression of the \(k+1\)-st order trace of Laplacian tensor of \(k\)-uniform hypergraphs was given in [8]. In this paper, we study Laplacian spectral moments of power hypergraphs. The expressions of the first \(2k\) orders Laplacian spectral moments of \(k\)-uniform power hypergraphs are given, which can be represented by some parameters of graphs. And we show that some graphs, which are not determined by (signless) Laplacian spectrum, can be determined by their high-order (signless) Laplacian spectrum by considering the (signless) Laplacian spectral moments of power hypergraphs. ## 2 Preliminaries Next, we introduce some notations and concepts for tensors and hypergraphs. For a positive integer \(n\), let \([n]=\{1,2,\ldots,n\}\) and \([n]^{k}=\{i_{1}i_{2}\cdots i_{k}|i_{j}\in[n],j=1,\ldots,k\}\). A \(k\)-order \(n\)-dimension complex _tensor_\(\mathcal{T}=(t_{i\alpha})\) is a multi-dimensional array with \(n^{k}\) entries on complex number field \(\mathbb{C}\), where \(i\in[n]\) and \(\alpha\in[n]^{k-1}\). A _hypergraph_\(\mathcal{H}=(V,E)\) consists of vertex set \(V=\{1,2,\ldots,n\}\) and edge set \(E=\{e_{1},e_{2},\ldots,e_{m}\}\), where \(e_{j}\subseteq V(\mathcal{H})\) for \(j\in[m]\). If \(|e_{j}|=k\) for each \(j\in[m]\) and \(k\geq 2\), then \(\mathcal{H}\) is called a \(k\)-_uniform_ hypergraph. 
For a \(k\)-uniform hypergraph \(\mathcal{H}\) with \(n\) vertices, its _adjacency tensor_ \(\mathcal{A}_{\mathcal{H}}=(a_{i\alpha})\) is a \(k\)-order \(n\)-dimension tensor with entries \[a_{i\alpha}=\left\{\begin{array}{ll}\frac{1}{(k-1)!},&\mbox{if $\{i,i_{2},\ldots,i_{k}\}\in E(\mathcal{H})$ for $\alpha=i_{2}\cdots i_{k}$},\\ 0,&\mbox{otherwise}.\end{array}\right.\] The spectrum and the eigenvalues of \(\mathcal{A}_{\mathcal{H}}\) are called the spectrum and the eigenvalues of \(\mathcal{H}\), respectively [9]. For a vertex \(i\in V(\mathcal{H})\), the _degree_ of \(i\) is the number of edges of \(\mathcal{H}\) containing the vertex \(i\), denoted by \(d_{i}\). The _degree tensor_ \(\mathcal{D}_{\mathcal{H}}=\mbox{diag}(d_{1},\ldots,d_{n})\) of \(\mathcal{H}\) is a \(k\)-order \(n\)-dimension diagonal tensor. And the tensor \(\mathcal{L}_{\mathcal{H}}=\mathcal{D}_{\mathcal{H}}-\mathcal{A}_{\mathcal{H}}\) is the _Laplacian tensor_ of \(\mathcal{H}\) [10]. In 2005, Lim [11] and Qi [12] introduced the eigenvalues of tensors independently. Denote the set of \(n\)-dimension complex vectors and the set of \(k\)-order \(n\)-dimension complex tensors by \(\mathbb{C}^{n}\) and \(\mathbb{C}^{[k,n]}\), respectively. For a tensor \(\mathcal{T}=(t_{i\alpha})\in\mathbb{C}^{[k,n]}\) and \(x=(x_{1},\ldots,x_{n})^{\sf T}\in\mathbb{C}^{n}\), \(\mathcal{T}x^{k-1}\) is a vector in \(\mathbb{C}^{n}\) whose \(i\)-th component is \[(\mathcal{T}x^{k-1})_{i}=\sum_{\alpha\in[n]^{k-1}}t_{i\alpha}x^{\alpha},\] where \(x^{\alpha}=x_{i_{2}}\cdots x_{i_{k}}\) if \(\alpha=i_{2}\cdots i_{k}\). For a complex number \(\lambda\in\mathbb{C}\), if there is a vector \(x\in\mathbb{C}^{n}\setminus\{0\}\) such that \[\mathcal{T}x^{k-1}=\lambda x^{[k-1]},\] then \(\lambda\) is called an _eigenvalue_ of \(\mathcal{T}\) and \(x\) is an _eigenvector_ of \(\mathcal{T}\) associated with \(\lambda\), where \(x^{[k-1]}=(x_{1}^{k-1},\ldots,x_{n}^{k-1})^{\sf T}\). The multi-set of all eigenvalues of the tensor \(\mathcal{T}\) is the _spectrum_ of \(\mathcal{T}\), denoted by \(\sigma(\mathcal{T})\). In [13], an expression of the \(d\)-th order trace of tensors is given. And Hu et al. [1] proved that the \(d\)-th order trace of a \(k\)-order \(n\)-dimension tensor \(\mathcal{T}\) is equal to the sum of the \(d\)-th powers of all its eigenvalues, that is, \(\mathrm{Tr}_{d}(\mathcal{T})=\sum_{\lambda\in\sigma(\mathcal{T})}\lambda^{d}\). In 2013, Shao et al. [2] gave a formula for \(\mathrm{Tr}_{d}(\mathcal{T})\). Next, we introduce some related notations. For a positive integer \(d\), let \[\mathcal{F}_{d}=\{(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})|\ 1\leq i_{1}\leq\cdots\leq i_{d}\leq n;\alpha_{1},\ldots,\alpha_{d}\in[n]^{k-1}\}.\] For \(f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}\) and a \(k\)-order \(n\)-dimension tensor \(\mathcal{T}=(t_{i\alpha})\), let \(\pi_{f}(\mathcal{T})=\prod_{j=1}^{d}t_{i_{j}\alpha_{j}}\). Suppose \(i_{j}\alpha_{j}=i_{j}v_{1}^{(j)}\cdots v_{k-1}^{(j)}\), let \(E_{j}(f)=\{(i_{j},v_{1}^{(j)}),\ldots,(i_{j},v_{k-1}^{(j)})\}\) be the set of arcs from \(i_{j}\) to \(v_{1}^{(j)},\ldots,v_{k-1}^{(j)}\) and \(E(f)=\bigcup_{j=1}^{d}E_{j}(f)\) be an arc multi-set. Let \(V_{j}(f)=\{i_{j},v_{1}^{(j)},\ldots,v_{k-1}^{(j)}\}\) and \(V(f)=\bigcup_{j=1}^{d}V_{j}(f)\) be a vertex set. Let the multi-digraph \(D(f)=(V(f),E(f))\). Let \(b(f)\) be the product of the factorials of the multiplicities of all the arcs in \(D(f)\). Let \(c(f)\) be the product of the factorials of the outdegrees of all the vertices in \(D(f)\).
Let \(W(f)\) be the set of all closed walks with the arc multi-set \(E(f)\). In this paper, if a multi-set A contains \(m\) distinct elements \(a_{1},\ldots,a_{m}\) with multiplicities \(r_{1},\ldots,r_{m}\) respectively, then we write \({\rm A}=\{a_{1}^{r_{1}},\ldots,a_{m}^{r_{m}}\}\). The formula for the \(d\)-th order trace of tensors given by Shao et al. is shown as follows. **Lemma 2.1**.: [2] Let \({\cal T}=(t_{i\alpha})\) be a \(k\)-order \(n\)-dimension tensor. Then \[{\rm Tr}_{d}({\cal T})=(k-1)^{n-1}\sum_{f\in{\cal F}_{d}}\frac{b(f)}{c(f)}\pi_ {f}({\cal T})|W(f)|. \tag{2.1}\] Since the \(d\)-th order Laplacian spectral moment of \(\mathcal{H}\) is equal to the \(d\)-th order trace of its Laplacian tensor, we study the Laplacian spectral moment of uniform hypergraphs by considering the trace formula of tensor given by Shao et al. For a \(k\)-uniform hypergraph \(\mathcal{H}\) with \(n\) vertices, let \(\mathcal{L}_{\mathcal{H}}\) be the Laplacian tensor of \(\mathcal{H}\). When \(\mathcal{T}=\mathcal{L}_{\mathcal{H}}\) in Eq.(2.1), the \(d\)-th order Laplacian spectral moment of \(\mathcal{H}\) is \[\mathrm{Tr}_{d}(\mathcal{L}_{\mathcal{H}})=(k-1)^{n-1}\sum_{f\in\mathcal{F}_{ d}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{L}_{\mathcal{H}})|W(f)|. \tag{2.2}\] Next, we simplify Eq.(2.2) by classifying \(f\) and introduce some related concepts. For \(i_{j}\alpha_{j}\in[n]^{k}\) and a \(k\)-order \(n\)-dimension tensor \(\mathcal{T}=(t_{i\alpha})\), the entry \(t_{i_{j}\alpha_{j}}\) in tensor \(\mathcal{T}\) is called the corresponding entry of \(i_{j}\alpha_{j}\). Suppose \(\alpha_{j}=v_{1}^{(j)}\cdots v_{k-1}^{(j)}\), for a \(k\)-uniform hypergraph \(\mathcal{H}\), \(e=\{i_{j},v_{1}^{(j)},\ldots,v_{k-1}^{(j)}\}\) is called the corresponding edge of tuple \(i_{j}\alpha_{j}\) if the corresponding entry of \(i_{j}\alpha_{j}\) in its adjacency tensor is not equal to zero. Let \(\pi_{f}(\mathcal{L}_{\mathcal{H}})|W(F)|\neq 0\) for \(f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}\). Since \(\pi_{f}(\mathcal{L}_{\mathcal{H}})=\prod_{j=1}^{d}l_{i_{j}\alpha_{j}}\neq 0\), we know \(l_{i_{j}\alpha_{j}}\neq 0\) for all \(j\in[d]\). Then the tuple \(i_{j}\alpha_{j}(j=1,\ldots,d)\) in \(f\) corresponds either to a diagonal entry of \(\mathcal{L}_{\mathcal{H}}\) or to an edge of \(\mathcal{H}\). According to the number of the tuples which correspond to the diagonal entries of \(\mathcal{L}_{\mathcal{H}}\), the set \(\{f\in\mathcal{F}_{d}|\ \pi_{f}(\mathcal{L}_{\mathcal{H}})\neq 0\}\) can be represented as the union of three disjoint sets, that is, \[\{f\in\mathcal{F}_{d}|\ \pi_{f}(\mathcal{L}_{\mathcal{H}})\neq 0\}=\mathcal{F}_{d }^{(1)}\cup\mathcal{F}_{d}^{(2)}\cup\mathcal{F}_{d}^{(3)}, \tag{2.3}\] where \(\mathcal{F}_{d}^{(1)}=\{f\in\mathcal{F}_{d}|\) all tuples in \(f\) correspond to diagonal entry of \(\mathcal{L}_{\mathcal{H}}\}\), \(\mathcal{F}_{d}^{(2)}=\{f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{ F}_{d}|\ \alpha_{j}=v_{1}^{(j)}\cdots v_{k-1}^{(j)}\ \text{and}\ \{i_{j},v_{1}^{(j)},\ldots,v_{k-1}^{(j)}\}\in E( \mathcal{H})\ \text{for}\ j=1,\ldots,d\}\), \(\mathcal{F}_{d}^{(3)}=\{f\in\mathcal{F}_{d}|\ \pi_{f}(\mathcal{L}_{\mathcal{H}})\neq 0\} \setminus(\mathcal{F}_{d}^{(1)}\cup\mathcal{F}_{d}^{(2)})\). **Lemma 2.2**.: Let \(\mathcal{H}\) be a \(k\)-uniform hypergraph with \(n\) vertices. And the degree sequence of \(\mathcal{H}\) is \(d_{1},d_{2},\ldots,d_{n}\). 
Then \[(k-1)^{n-1}\sum_{f\in\mathcal{F}_{d}^{(1)}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{L }_{\mathcal{H}})|W(f)|=(k-1)^{n-1}\sum_{i=1}^{n}d_{i}^{d}, \tag{2.4}\] \[(k-1)^{n-1}\sum_{f\in\mathcal{F}_{d}^{(2)}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{ L}_{\mathcal{H}})|W(f)|=(-1)^{d}\mathrm{Tr}_{d}(\mathcal{A}_{\mathcal{H}}). \tag{2.5}\] Proof.: For \(f\in\mathcal{F}_{d}^{(1)}\), if \(f=(i_{1}i_{1}\cdots i_{1},\ldots,i_{d}i_{d}\cdots i_{d})\), since the arc multi-set \(E(f)\) only includes loops \((i_{j},i_{j})\)\((j=1,\ldots,d)\), we know that \(|W(f)|\neq 0\) if and only if \(i_{1}=\cdots=i_{d}\). Let \(f_{i}=(ii\cdots i,\ldots,ii\cdots i)\in{\cal F}_{d}(i=1,\ldots,n)\), then \({\cal F}_{d}^{(1)}=\{f_{1},\ldots,f_{n}\}\). For \(f_{i}\in{\cal F}_{d}^{(1)}\), since \(b(f_{i})=c(f_{i})=(d(k-1))!\), \(|W(f_{i})|=1\) and \(\pi_{f_{i}}({\cal L}_{\cal H})=l_{ii\cdots i}^{d}=d_{i}^{d}\), Eq.(2.4) can be obtained directly. For \(f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in{\cal F}_{d}^{(2)}\), where \(\alpha_{j}=v_{1}^{(j)}\cdots v_{k-1}^{(j)}\) for \(j=1,\ldots,d\). Since \(\{i_{j},v_{1}^{(j)},\ldots,v_{k-1}^{(j)}\}\in E({\cal H})\) for \(j=1,\ldots,d\), we have \(\pi_{f}({\cal L}_{\cal H})=\prod_{j=1}^{d}l_{i_{j}\alpha_{j}}=(-\frac{1}{(k-1 )!})^{d}=(-1)^{d}\pi_{f}({\cal A}_{\cal H}).\) And \(\pi_{f}({\cal A}_{\cal H})\neq 0\) if and only if \(\{i_{j},v_{1}^{(j)},\ldots,v_{k-1}^{(j)}\}\in E({\cal H})\) for \(j=1,\ldots,d\), that is, \(f\in{\cal F}_{d}^{(2)}\), then Eq.(2.5) can be obtained. According to Lemma 2.2, in order to obtain the expressions of the first \(2k\) orders Laplacian spectral moments for \(k\)-uniform power hypergraphs, we should give some expressions of the spectral moments for \(k\)-uniform power hypergraphs. For a graph \(G\) and a positive integer \(k\geq 3\), the \(k\)_-power hypergraph_ of \(G\), denoted by \(G^{(k)}\), is a \(k\)-uniform hypergraph obtained by adding \(k-2\) new vertices whose degrees are \(1\) to each edge of \(G\)[14]. The spectrum of a hypergraph is said to be \(k\)_-symmetric_, if its spectrum is invariant under a rotation of an angle \(2\pi/k\) in the complex plane. Shao et al. [2] gave a characterization (in terms of the traces of the adjacency tensors) of the \(k\)-uniform hypergraphs whose spectrum are \(k\)-symmetric, that is, the spectrum of a \(k\)-uniform hypergraph \({\cal H}\) is \(k\)-symmetric if and only if \({\rm Tr}_{d}({\cal A}_{\cal H})=0\) for \(k\nmid d\). It is obvious that the spectrum of a \(k\)-uniform power hypergraph is \(k\)-symmetric. Then, the \(d\)-th spectral moments of \(G^{(k)}\) are equal to \(0\) for \(d=k+1,\ldots,2k-1\), that is, \[{\rm Tr}_{d}({\cal A}_{G^{(k)}})=0\ {\rm for}\ d=k+1,\ldots,2k-1. \tag{2.6}\] And the expression of the \(2k\)-th order spectral moment of \(G^{(k)}\) is given as follows. **Lemma 2.3**.: Let \(G\) be a graph with \(n\) vertices and \(m\) edges. Let \(d_{i}\) denote the degree of vertex \(i\) in \(G\)\((i=1,\ldots,n)\). Then the \(2k\)-th order spectral moment of \(G^{(k)}\) is \[{\rm Tr}_{2k}({\cal A}_{G^{(k)}})=k^{k-1}(k-1)^{N-k}\big{(}1-2k^{k-3}(k-1)^{1-k }\big{)}m+k^{2k-3}(k-1)^{N-2k+1}\sum_{i=1}^{n}d_{i}^{2}, \tag{2.7}\] where \(N=n+m(k-2)\). Proof.: Let \({\cal G}=G^{(k)}\). Then \(|V({\cal G})|=n+m(k-2)=N\) and \(|E({\cal G})|=m\). Let \(N_{G}(P_{2})\) and \(N_{\cal G}(P_{2}^{(k)})\) denote the number of paths with length \(2\) in \(G\) and \({\cal G}\), respectively. Then \(N_{\mathcal{G}}(P_{2}^{(k)})=N_{G}(P_{2})=\sum\limits_{i=1}^{n}\binom{d_{i}}{2}\). 
From Lemma 2.1, we get \[\operatorname{Tr}_{2k}(\mathcal{A}_{\mathcal{G}})=(k-1)^{N-1}\sum\limits_{f\in \mathcal{F}_{2k}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{A}_{\mathcal{G}})|W(f)|.\] For \(f=(i_{1}\alpha_{1},\ldots,i_{2k}\alpha_{2k})\in\mathcal{F}_{2k}\), if \(\pi_{f}(\mathcal{A}_{\mathcal{G}})=\prod_{j=1}^{2k}a_{i_{j}\alpha_{j}}\neq 0\), then \(a_{i_{j}\alpha_{j}}\neq 0\) for all \(j\in[2k]\). For \(|W(f)|\neq 0\), there are the following two cases. Case 1: \(f=(i_{1}\alpha_{1},i_{1}\beta_{1},\ldots,i_{k}\alpha_{k},i_{k}\beta_{k})=f_{e} \in\mathcal{F}_{2k}\), where \(\{i_{1},\ldots,i_{k}\}=e\in E(\mathcal{G})\) and \(\alpha_{j},\beta_{j}\in\big{(}\{i_{1},\ldots,i_{k}\}\setminus\{i_{j}\}\big{)}^ {k-1}\) for \(j=1,\ldots,k\). Then \[\sum\limits_{e\in E(\mathcal{G})}\frac{b(f_{e})}{c(f_{e})}\pi_{f_ {e}}(\mathcal{A}_{\mathcal{G}})|W(f_{e})|\] \[= k^{k-1}(k-1)^{1-k}|E(\mathcal{G})|.\] Case 2: \(f=(i_{1}\alpha_{1},j_{1}\beta_{1},i_{2}\alpha_{2},\ldots,i_{k}\alpha_{k},j_{2} \beta_{2},\ldots,j_{k}\beta_{k})=f_{e_{1}e_{2}}\in\mathcal{F}_{2k}\), where \(i_{1}=j_{1}\), \(\{i_{1},i_{2},\ldots,i_{k}\}=e_{1}\in E(\mathcal{G}),\ \{i_{1},j_{2},\ldots,j_{k}\}=e_{2}\in E( \mathcal{G})\) and \(\alpha_{l}\in\big{(}\{i_{1},\ldots,i_{k}\}\setminus\{i_{l}\}\big{)}^{k-1},\ \beta_{l}\in\big{(}\{j_{1},\ldots,j_{k}\}\setminus\{j_{l}\}\big{)}^{k-1}\) for \(l=1,\ldots,k\). Then \[\sum\limits_{e_{1}e_{2}\subset\mathcal{G}}\frac{b(f_{e_{1}e_{2}}) }{c(f_{e_{1}e_{2}})}\pi_{f_{e_{1}e_{2}}}(\mathcal{A}_{\mathcal{G}})|W(f_{e_{1} e_{2}})|\] \[= \sum\limits_{e_{1}e_{2}\subset\mathcal{G}}\frac{2k(k-1)(k^{k-2}) ^{2}(2k-3)!\big{(}(k-2)!\big{)}^{2k-2}}{\big{(}2(k-1)\big{)}!\big{(}(k-1)! \big{)}^{2k-2}}\Big{(}\frac{1}{(k-1)!}\Big{)}^{2k}2\big{(}(k-1)!\big{)}^{2k}\] \[= 2k^{2k-3}(k-1)^{2-2k}N_{\mathcal{G}}(P_{2}^{(k)}).\] Then \[\operatorname{Tr}_{2k}(\mathcal{A}_{\mathcal{G}}) =(k-1)^{N-1}\Big{(}k^{k-1}(k-1)^{1-k}|E(\mathcal{G})|+2k^{2k-3}(k- 1)^{2-2k}N_{\mathcal{G}}(P_{2}^{(k)})\Big{)}\] \[=k^{k-1}(k-1)^{N-k}\big{(}1-2k^{k-3}(k-1)^{1-k}\big{)}m+k^{2k-3}( k-1)^{N-2k+1}\sum\limits_{i=1}^{n}d_{i}^{2},\] where \(N=n+m(k-2)\). ## 3 Main results In this section, we give an expression of the \(d\)-th order Laplacian spectral moments for \(k\)-uniform hypergraphs. And the explicit expressions of the first \(2k\) orders Laplacian spectral moments for \(k\)-uniform power hypergraphs are given. Given two hypergraphs \(\mathcal{H}=(V(\mathcal{H}),E(\mathcal{H}))\) and \(H=(V(H),E(H))\), if \(V(H)\subseteq V(\mathcal{H})\) and \(E(H)\subseteq E(\mathcal{H})\), then \(H\) is said to be a _subhypergraph_ of \(\mathcal{H}\). A \(k\)-uniform _multi-hypergraph_\(\mathcal{H}\) is a pair \((V(\mathcal{H}),E(\mathcal{H}))\), where \(E(\mathcal{H})\) is a multi-set of subsets of \(V(\mathcal{H})\) with cardinality \(k\). A _Veblen hypergraph_ is a \(k\)-uniform, \(k\)-valent (i.e., the degree of every vertex is a multiple of \(k\)) multi-hypergraph [4]. For a multi-hypergraph \(H\), let \(\underline{H}\) be the simple \(k\)-uniform hypergraph formed by removing duplicate edges of \(H\). And \(H\) is called a _multi-subgraph_ of \(\mathcal{H}\) if \(\underline{H}\) is a subhypergraph of \(\mathcal{H}\). Let \(\mathcal{V}_{d}(\mathcal{H})\) denote the set of connected Veblen multi-subgraph of \(\mathcal{H}\) with \(d\) edges. 
For \(f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}\) and a \(k\)-uniform hypergraph \(\mathcal{H}\) (where \(\alpha_{j}=v_{1}^{(j)}\cdots v_{k-1}^{(j)}\) for \(j=1,\ldots,d\)), the _multi-subgraph induced by \(f\)_, denoted by \(H(f)\), is the multi-hypergraph with the vertex set \(V(f)\subseteq V(\mathcal{H})\) and the edge multi-set \(E(H(f))=\{\{i_{j},v_{1}^{(j)},\ldots,v_{k-1}^{(j)}\}|\ (\mathcal{A}_{ \mathcal{H}})_{i_{j}\alpha_{j}}\neq 0,\ 1\leq j\leq d\}\), and \(\underline{H}(f)\) is a subhypergraph of \(\mathcal{H}\). A _walk_ ia a digraph \(D\) is a non-empty alternating sequence \(v_{0}e_{0}v_{1}e_{1}\cdots v_{k}e_{k}\) of vertices and arcs in \(D\) such that \(e_{i}=(v_{i},v_{i+1})\) for all \(i<k\). A walk is _closed_ if \(v_{0}=v_{k}\). A closed walk in a digraph is an _Eulerian closed walk_ if it traverses each arc of this digraph exactly once. A digraph \(D\) is called _Eulerian_ if \(D\) has an Eulerian closed walk. Let \(d^{+}(v)\) and \(d^{-}(v)\) be the outdegree and indegree of the vertex \(v\in V(D)\), respectively. The digraph \(D\) is Eulerian if and only if \(d^{+}(v)=d^{-}(v)\) for all \(v\in V(D)\) and \(D\) is weakly connected. Since \(W(f)\) is the set of all closed walks with the arc multi-set \(E(f)\), we know that \(|W(f)|\) is equal to the number of Eulerian closed walks in the multi-digraph \(D(f)\). For \(f\in\mathcal{F}_{d}^{(3)}\), we give the following conclusion. **Lemma 3.1**.: Let \(\mathcal{H}\) be a \(k\)-uniform hypergraph with \(n\) vertices. If \(f\in\mathcal{F}_{d}^{(3)}\) and \(|W(f)|\neq 0\), then the multi-subgraph \(H(f)\) induced by \(f\) is a connected Veblen multi-subgraph of \(\mathcal{H}\) with at most \(d-1\) edges. Proof.: For \(f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}\) (where \(\alpha_{j}=v_{1}^{(j)}\cdots v_{k-1}^{(j)}\) for \(j=1,\ldots,d\)), if \(|W(f)|\neq 0\), then the multi-digraph \(D(f)=(V(f),E(f))\) is Eulerian. For all \(v\in V(f)\), we have \(d^{+}_{D(f)}(v)=d^{-}_{D(f)}(v)\) and \[d^{+}_{D(f)}(v) =(k-1)|\{i_{j}\alpha_{j}|\ i_{j}=v\}|\] \[=(k-1)\big{(}|\{i_{j}\alpha_{j}|\ i_{j}=v\text{ and }\{i_{j},v^{(j)}_{1} \cdots v^{(j)}_{k-1}\}\in E(\mathcal{H})\}|+|\{i_{j}\alpha_{j}|\ i_{j}\alpha_{j }=vv\cdots v\}|\big{)},\] \[d^{-}_{D(f)}(v) =|\{i_{j}\alpha_{j}|\ i_{j}\neq v\text{ and }v\in V_{j}(f)\}|+(k-1)|\{i_{j} \alpha_{j}|\ i_{i}\alpha_{j}=vv\cdots v\}|.\] Then \[(k-1)|\{i_{j}\alpha_{j}|\ i_{j}=v\text{ and }\{i_{j},v^{(j)}_{1}\cdots v^{(j)}_{ k-1}\}\in E(\mathcal{H})\}|=|\{i_{j}\alpha_{j}|\ i_{j}\neq v\text{ and }v\in V_{j}(f)\}|.\] Fix a vertex \(v\in V(H(f))\). We have \[d_{H(f)}(v) =|\{i_{j}\alpha_{j}|\ i_{j}=v\text{ and }\{i_{j},v^{(j)}_{1} \cdots v^{(j)}_{k-1}\}\in E(\mathcal{H})\}|+|\{i_{j}\alpha_{j}|\ i_{j}\neq v \text{ and }v\in V_{j}(f)\}|\] \[=k|\{i_{j}\alpha_{j}|\ i_{j}=v\text{ and }\{i_{j},v^{(j)}_{1} \cdots v^{(j)}_{k-1}\}\in E(\mathcal{H})\}|.\] So \(k|d_{H(f)}(v)\), it follows that \(H(f)\) is a Veblen hypergraph by definition. And \(f\in\mathcal{F}^{(3)}_{d}\), then \(H(f)\) has at most \(d-1\) edges. 
For a connected Veblen multi-subgraph \(H\) of \(\mathcal{H}\) and \(f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}\), \(f\) is called corresponding to \(H\) if \(f\) satisfy the following conditions: (a) there is a integer \(l(1\leq l\leq d-1)\), such that \(i_{j_{1}}\alpha_{j_{1}},\ldots,i_{j_{l}}\alpha_{j_{l}}\) are corresponding to some edges of \(H\); (b) for every edge \(e\in E(H)\), there exists \(j\in[d]\) such that \(i_{j}\alpha_{j}\) is corresponding to \(e\); (c) and others in \(f\) are \(v\beta_{v}\) where \(\beta_{v}=v\cdots v\in[n]^{k-1}\) for \(v\in V(H)\). Let \(\mathcal{F}_{d}(H)=\{f\in\mathcal{F}_{d}|\ f\) is corresponding to \(H\}\). From Lemma 3.1, we have \[\{f\in\mathcal{F}^{(3)}_{d}||W(f)|\neq 0\}=\bigcup_{z=1}^{d-1}\bigcup_{H\in \mathcal{V}_{z}(\mathcal{H})}\mathcal{F}_{d}(H).\] For simplicity, \(\tau(D(f))\) is abbreviated to \(\tau(f)\), which is the number of arborescences in multi-digraph \(D(f)\). According to the above process, the formula for the \(d\)-th order Laplacian spectral moment of \(k\)-uniform hypergraphs is given as follows. **Theorem 3.2**.: Let \(\mathcal{H}\) be a \(k\)-uniform hypergraph with \(n\) vertices. And the degree sequence of \(\mathcal{H}\) is \(d_{1},d_{2},\ldots,d_{n}\). Then \[\mathrm{Tr}_{d}(\mathcal{L}_{\mathcal{H}})=(k-1)^{n-1}\sum_{i=1}^{n}d_{i}^{d}+(- 1)^{d}\mathrm{Tr}_{d}(\mathcal{A}_{\mathcal{H}})+d(k-1)^{n}\sum_{z=1}^{d-1} \sum_{H\in\mathcal{V}_{z}(\mathcal{H})}\sum_{f\in\mathcal{F}_{d}(H)}\frac{ \tau(f)\pi_{f}(\mathcal{L}_{\mathcal{H}})}{\prod\limits_{v\in V(f)}\!\!d^{+}( v)}.\] Proof.: From Eq.(2.3), the \(d\)-th Laplacian spectral moments of \(\mathcal{H}\) is \[\mathrm{Tr}_{d}(\mathcal{L}_{\mathcal{H}})=(k-1)^{n-1}\sum_{j=1}^{3}\sum_{f \in\mathcal{F}_{d}^{(j)}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{L}_{\mathcal{H}}) |W(f)|. \tag{3.1}\] For \(f\in\mathcal{F}_{d}^{(3)}\), let \(\widetilde{D}(f)\) be the digraph obtained by removing all repeated arcs of \(D(f)\). Then \(c(f)=\prod_{v\in V(f)}d^{+}(v)!\), \(b(f)=\prod_{e\in\widetilde{D}(f)}w(e)!\) and \(|E(f)|=d(k-1)\). From Theorem 6 in [15], the number of Eulerian closed walks in \(D(f)\) is \[|W(f)|=\frac{|E(f)|}{b(f)}|\mathfrak{E}(f)|, \tag{3.2}\] where \(|\mathfrak{E}(f)|\) is the number of the Eulerian circuits in \(D(f)\). From BEST Theorem [16, 17], the number of the Eulerian circuits in \(D(f)\) is \[|\mathfrak{E}(f)|=\tau(f)\prod_{v\in V(f)}(d^{+}(v)-1)!. \tag{3.3}\] According to Eq.(3.2) and Eq.(3.3), we have \[(k-1)^{n-1}\sum_{f\in\mathcal{F}_{d}^{(3)}}\frac{b(f)}{c(f)}\pi_ {f}(\mathcal{L}_{\mathcal{H}})|W(f)| \tag{3.4}\] \[= (k-1)^{n-1}\sum_{z=1}^{d-1}\sum_{H\in\mathcal{V}_{z}(\mathcal{H} )}\sum_{f\in\mathcal{F}_{d}(H)}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{L}_{ \mathcal{H}})|W(f)|\] \[= d(k-1)^{n}\sum_{z=1}^{d-1}\sum_{H\in\mathcal{V}_{z}(\mathcal{H} )}\sum_{f\in\mathcal{F}_{d}(H)}\frac{\tau(f)}{\prod\limits_{v\in V(f)}\!\!d^{ +}(v)}\pi_{f}(\mathcal{L}_{\mathcal{H}}).\] Then we can obtain the expression for the \(d\)-th order Laplacian spectral moment of \(\mathcal{H}\) by substituting Eq.(2.4), Eq.(2.5) and Eq.(3.4) into Eq.(3.1). 
**Remark 3.3**.: Since \(\mathrm{Tr}_{d}(\mathcal{A}_{\mathcal{H}})=0\) (\(d=1,\ldots,k-1\)) and \(\mathrm{Tr}_{k}(\mathcal{A}_{\mathcal{H}})=k^{k-1}(k-1)^{n-k}|E(\mathcal{H})|\) [9], and since \(\mathcal{V}_{d}(\mathcal{H})=\emptyset\) for \(d=1,\ldots,k-1\), the expressions of the first \(k\) orders Laplacian spectral moments of a \(k\)-uniform hypergraph \(\mathcal{H}\) can be obtained directly by using the formulas given in Theorem 3.2. And these expressions have been given in [3]. **Remark 3.4**.: Since \(\mathrm{Tr}_{k+1}(\mathcal{A}_{\mathcal{H}})=(k+1)(k-1)^{n-k}C_{k}\,(\#\text{ of simplices in }\mathcal{H})\) [9] and \(\mathcal{V}_{k}(\mathcal{H})=\{ke|\ e\in E(\mathcal{H})\}\), the expression of the \((k+1)\)-st order Laplacian spectral moment of a \(k\)-uniform hypergraph \(\mathcal{H}\) can be obtained by using the formulas given in Theorem 3.2. And this expression has been given in [8]. If the \(k\)-uniform hypergraph in Remark 3.3 and Remark 3.4 is the \(k\)-power hypergraph \(G^{(k)}\) of a graph \(G\), the expressions of the first \(k+1\) orders Laplacian spectral moments for \(G^{(k)}\) can be obtained.
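The results that follow are stated for the \(k\)-power hypergraph \(G^{(k)}\) introduced in Section 2. For concreteness, a minimal sketch of this construction from an edge list is given below; representing hyperedges as tuples and the labelling of the added vertices are implementation choices, not part of the definition.

```python
def power_hypergraph(edges, k):
    """Build the edge list of G^(k): each edge {u, v} of G is enlarged with
    k - 2 new vertices of degree 1 (labelled here by the edge index)."""
    hyperedges = []
    for idx, (u, v) in enumerate(edges):
        new_vertices = [f"w_{idx}_{t}" for t in range(k - 2)]
        hyperedges.append(tuple([u, v] + new_vertices))
    return hyperedges

# Example: the triangle K_3 turned into a 3-uniform power hypergraph.
print(power_hypergraph([(1, 2), (2, 3), (1, 3)], k=3))
```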
For a graph \(G\), the expressions of the first \(2k\) orders Laplacian spectral moments of its \(k\)-power hypergraph \(G^{(k)}\) can be given by considering the formulas shown in Theorem 3.1, and these expressions are represented by some parameters of \(G\). **Theorem 3.5**.: Let \(G\) be a graph with \(n\) vertices and \(m\) edges. Let \(d_{i}\) denote the degree of vertex \(i\) in \(G\)\((i=1,\ldots,n)\). For the \(k\)-power hypergraph \(G^{(k)}\) of \(G\), then \[\mathrm{Tr}_{d}(\mathcal{L}_{G^{(k)}}) =(k-1)^{N-1}\sum_{i=1}^{n}d_{i}^{d}+(-1)^{k}dk^{k-2}(k-1)^{N-k} \Big{(}\sum_{i-1}^{n}d_{i}^{d-k+1}+\sum_{\{i,j\}\in E(G)}N_{d-k}(d_{i},d_{j}) \Big{)}\] \[+(k-1)^{N-k}\big{(}(k-1)^{k-1}+(-1)^{k}dk\big{)}(k-2)m,\] for \(d=k+1,\ldots,2k-1\), and \[\mathrm{Tr}_{2k}(\mathcal{L}_{G^{(k)}}) =(k-1)^{N-1}\sum_{i=1}^{n}d_{i}^{2k}+(-1)^{k}2k^{k-1}(k-1)^{N-k} \Big{(}\sum_{i-1}^{n}d_{i}^{k+1}+\sum_{\{i,j\}\in E(G)}N_{k}(d_{i},d_{j}) \Big{)}\] \[+k^{2k-3}(k-1)^{N-2k+1}\sum_{i=1}^{n}d_{i}^{2}+\ell m,\] where \(N=n+m(k-2)\), \(N_{s}(d_{i},d_{j})=\sum_{\begin{subarray}{c}1\leq c_{i}+c_{j}\leq s\\ 0\leq c_{i},c_{j}<s\end{subarray}}d_{i}^{c_{i}}d_{j}^{c_{j}}\big{(}s=1,\ldots,k \big{)}\), \(\ell=(k-1)^{N-k}\big{(}(k-1)^{k-1}(k-2)+(-1)^{k}2k^{k-1}(k-2)+k^{k-1}-2k^{2k-3} (k-1)^{1-k}\big{)}\). Proof.: Let \(\mathcal{G}=G^{(k)}\), then \(|V(\mathcal{G})|=N=n+m(k-2)\) and \(|E(\mathcal{G})|=m\). Since \(\mathcal{G}\) is the \(k\)-power hypergraph of \(G\), from the definition of Veblen hypergraph, we know that \(\mathcal{V}_{d}(\mathcal{G})=\emptyset\) for \(k\nmid d\) and \(d\in[2k]\). For a Veblen multi-subgraph \(H\in\mathcal{V}_{k}(\mathcal{G})\), it is easy to see that \(\underline{H}\in E(\mathcal{G})\). For convenience, let \(ke\) denote the connected Veblen multi-subgraph with \(k\) edges and \(\underline{ke}=e\in E(\mathcal{G})\). Then, \(\mathcal{V}_{k}(\mathcal{G})=\{ke|\ e\in E(\mathcal{G})\}\). For \(d=k+1,\ldots,2k\), we have \[\mathrm{Tr}_{d}(\mathcal{L}_{\mathcal{G}})= (k-1)^{N-1}\Big{(}m(k-2)+\sum_{i=1}^{n}d_{i}^{d}\Big{)}+(-1)^{d} \mathrm{Tr}_{d}(\mathcal{A}_{\mathcal{G}})\] \[+d(k-1)^{N}\sum_{e\in E(\mathcal{G})}\sum_{f\in\mathcal{F}_{d}(ke )}\frac{\tau(f)}{\prod\limits_{v\in V(f)}d^{+}(v)}\pi_{f}(\mathcal{L}_{ \mathcal{G}}).\] For \(f\in\mathcal{F}_{d}(ke)\) (where \(e=\{i_{1},i_{2},\ldots,i_{k}\}\in E(\mathcal{G})\)), let \[f=((i_{1}\beta_{1})^{c_{1}},i_{1}\alpha_{1},(i_{2}\beta_{2})^{c_{2}},i_{2} \alpha_{2},\ldots,(i_{k}\beta_{k})^{c_{k}},i_{k}\alpha_{k}),\] where \(i_{1}<i_{2}<\cdots<i_{k}\). For any \(j\in[k]\), \(\alpha_{j}\in(\{i_{1},\ldots,i_{k}\}\setminus\{i_{j}\})^{k-1}\), \(\beta_{j}=i_{j}\cdots i_{j}\in[N]^{k-1}\), \(c_{j}\geq 0\) is the total number of times that \(i_{j}\beta_{j}\) appears in \(f\), and \(\sum_{j=1}^{k}c_{j}=d-k\). Next, we consider the following two cases for \(f\in\mathcal{F}_{d}(ke)\). 
Case 1: If there exists \(j\in[k]\) such that \(c_{j}=d-k\), then \[f=f_{e,i_{j}}=(i_{1}\alpha_{1},\ldots,i_{j-1}\alpha_{j-1},(i_{j}\beta_{j})^{d- k},i_{j}\alpha_{j},\ldots,i_{k}\alpha_{k})\in\mathcal{F}_{d}(ke).\] We have \[\tau(f)=k^{k-2},\ \prod_{v\in V(f)}d^{+}(v)=(d-k+1)(k-1)^{k},\] and there are \((d-k+1)((k-1)!)^{k}\) elements in \(\mathcal{F}_{d}\) which share the same arc multi-set as \(f_{e,i_{j}}\), then \[\sum_{j=1}^{k}\frac{\tau(f)}{\prod\limits_{v\in V(f)}d^{+}(v)}\pi_{f}( \mathcal{L}_{\mathcal{G}})=(-1)^{k}k^{k-2}(k-1)^{-k}\sum_{j=1}^{k}d_{i_{j}}^{d -k}.\] Case 2: If for any \(j\in[k]\), \(0\leq c_{j}<d-k\), then \[f=f_{e,\{c_{1},c_{2},\ldots,c_{k}\}}=((i_{1}\beta_{1})^{c_{1}},i_{1}\alpha_{1},(i_{2}\beta_{2})^{c_{2}},i_{2}\alpha_{2},\ldots,(i_{k}\beta_{k})^{c_{k}},i_{k} \alpha_{k})\in\mathcal{F}_{d}(ke).\] We have \[\tau(f)=k^{k-2},\ \prod_{v\in V(f)}d^{+}(v)=(k-1)^{k}\prod_{j=1}^{k}(c_{j}+1),\] and there are \(((k-1)!)^{k}\prod\limits_{j=1}^{k}(c_{j}+1)\) elements in \(\mathcal{F}_{d}\) which share the same arc multi-set as \(f_{e,\{c_{1},c_{2},\ldots,c_{k}\}}\), then \[\sum_{\begin{subarray}{c}c_{1}+\cdots+c_{k}=d-k\\ \forall j\in[k],0\leq c_{j}<d-k\end{subarray}}\frac{\tau(f)}{\prod\limits_{v\in V (f)}d^{+}(v)}\pi_{f}(\mathcal{L}_{\mathcal{G}})=(-1)^{k}k^{k-2}(k-1)^{-k}\sum_ {\begin{subarray}{c}c_{1}+\cdots+c_{k}=d-k\\ \forall j\in[k],0\leq c_{j}<d-k\end{subarray}}\prod\limits_{j=1}^{k}d_{i_{j}}^ {c_{j}}.\] Then \[\sum_{e\in E(\mathcal{G})}\sum_{f\in\mathcal{F}_{d}(ke)}\frac{ \tau(f)}{\prod\limits_{v\in V(f)}d^{+}(v)}\pi_{f}(\mathcal{L}_{\mathcal{G}})\] \[= (-1)^{k}k^{k-2}(k-1)^{-k}\sum_{\{i_{1},\ldots,i_{k}\}\in E( \mathcal{G})}\bigg{(}\sum_{j=1}^{k}d_{i_{j}}^{d-k}+\sum_{\begin{subarray}{c} c_{1}+\cdots+c_{k}=d-k\\ \forall j\in[k],0\leq c_{j}<d-k\end{subarray}}\prod\limits_{j=1}^{k}d_{i_{j}}^ {c_{j}}\bigg{)}\] \[= (-1)^{k}k^{k-2}(k-1)^{-k}\bigg{(}\sum_{i=1}^{N}\sum_{e\in E_{i}}d _{i}^{d-k}+\sum_{\begin{subarray}{c}\{i_{1},\ldots,i_{k}\}\in E(\mathcal{G}) \end{subarray}}\sum_{\begin{subarray}{c}c_{1}+\cdots+c_{k}=d-k\\ \forall j\in[k],0\leq c_{j}<d-k\end{subarray}}\prod\limits_{j=1}^{k}d_{i_{j}}^ {c_{j}}\bigg{)}\] \[= (-1)^{k}k^{k-2}(k-1)^{-k}\bigg{(}\sum_{i=1}^{N}d_{i}^{d-k+1}+\sum_ {\begin{subarray}{c}\{i_{1},\ldots,i_{k}\}\in E(\mathcal{G})\end{subarray}} \sum_{\begin{subarray}{c}c_{1}+\cdots+c_{k}=d-k\\ \forall j\in[k],0\leq c_{j}<d-k\end{subarray}}\prod\limits_{j=1}^{k}d_{i_{j}}^ {c_{j}}\bigg{)}.\] For \(e=\{i,j\}\in E(G)\), let \(e^{(k)}=\{i,j\}^{(k)}=\{i,j,v_{e,1},\ldots,v_{e,k-2}\}\in E(G^{(k)})\), where \(v_{e,l}\) are cored vertex (the vertex whose degree is \(1\)[14]), then \(d_{e,l}=1(l=1,\ldots,k-2)\) and \(1\leq c_{i}+c_{j}=d-k-\sum_{l=1}^{k-2}c_{e,l}\leq d-k\). Then, for \(d=k+1,\ldots,2k\), the \(d\)-th order Laplacian spectral moment of \(G^{(k)}\) is \[\mathrm{Tr}_{d}(\mathcal{L}_{G^{(k)}})= (-1)^{d}\mathrm{Tr}_{d}(\mathcal{A}_{G^{(k)}})+(k-1)^{N-1}\Big{(}m (k-2)+\sum_{i=1}^{n}d_{i}^{d}\Big{)}\] \[+ (-1)^{k}dk^{k-2}(k-1)^{N-k}\bigg{(}\sum_{i=1}^{N}d_{i}^{d-k+1}+ \sum_{\begin{subarray}{c}\{i,j\}\in E(G)\end{subarray}}\sum_{\begin{subarray}{ c}1\leq c_{i}+c_{j}\leq d-k\\ 0\leq c_{i},c_{j}<d-k\end{subarray}}d_{i}^{c_{i}}d_{j}^{c_{j}}\bigg{)}.\] By substituting Eq.(2.6) and Eq.(2.7) into the above equation, the expressions of \(\operatorname{Tr}_{d}(\mathcal{L}_{G^{(k)}})(d=k+1,\ldots,2k)\) can be obtained. Let \(\sum_{i=s}^{t}a_{i}=0\) if \(t<s\). Let \(G=(V(G),E(G))\) be a finite simple graph. Let \(d_{v}\) denote the degree of the vertex \(v\) in \(G\). 
The first and second Zagreb indices were introduced in [18, 19], which are \(M_{1}(G)=\sum_{v\in V(G)}d_{v}^{2}=\sum_{\{u,v\}\in E(G)}\big{(}d_{u}+d_{v} \big{)}\) and \(M_{2}(G)=\sum_{\{u,v\}\in E(G)}d_{u}d_{v}\), respectively. The first and second variable Zagreb indices were introduced in [20, 21], which are \(M_{1}^{(r)}(G)=\sum_{v\in V(G)}d_{v}^{r}=\sum_{\{u,v\}\in E(G)}\big{(}d_{u}^{r -1}+d_{v}^{r-1}\big{)}\) and \(M_{2}^{(r)}(G)=\sum_{\{u,v\}\in E(G)}\big{(}d_{u}d_{v}\big{)}^{r}\) (where \(r\) is a variable parameter), respectively. And the generalized Zagreb index \(M_{\{r,s\}}(G)=\sum_{\{u,v\}\in E(G)}\big{(}d_{u}^{r}d_{v}^{s}+d_{u}^{s}d_{v} ^{r}\big{)}\) (where \(r\) and \(s\) are variable parameters) was introduced in [22]. Then the expressions of Laplacian spectral moments of power hypergraphs given in Theorem 3.2 can be represented by the Zagreb indices of graphs. **Remark 3.6**.: Let \(G\) be a graph with \(n\) vertices and \(m\) edges. Let \(d_{i}\) denote the degree of vertex \(i\) in \(G\) (\(i=1,\ldots,n\)). Then \[\operatorname{Tr}_{d}(\mathcal{L}_{G^{(k)}}) =(k-1)^{N-k}\big{(}(k-1)^{k-1}+(-1)^{k}dk\big{)}(k-2)m+(k-1)^{N-1} M_{1}^{(d)}(G)\] \[+(-1)^{k}dk^{k-2}(k-1)^{N-k}\Bigg{(}\sum_{r=2}^{d-k+1}M_{1}^{(r)} (G)+\sum_{r=1}^{\lfloor\frac{d-k}{2}\rfloor}M_{2}^{(r)}(G)+\sum_{r=1}^{ \lfloor\frac{d-k}{2}\rfloor}\sum_{s=r+1}^{d-k-r}M_{\{r,s\}}(G)\Bigg{)},\] for \(d=k+1,\ldots,2k-1\), and \[\operatorname{Tr}_{2k}(\mathcal{L}_{G^{(k)}}) =\ell m+(k-1)^{N-1}M_{1}^{(2k)}(G)+k^{2k-3}(k-1)^{N-2k+1}M_{1}(G)\] \[+(-1)^{k}2k^{k-1}(k-1)^{N-k}\Bigg{(}\sum_{r=2}^{k+1}M_{1}^{(r)}( G)+\sum_{r=1}^{\lfloor\frac{k}{2}\rfloor}M_{2}^{(r)}(G)+\sum_{r=1}^{ \lfloor\frac{k}{2}\rfloor}\sum_{s=r+1}^{k-r}M_{\{r,s\}}(G)\Bigg{)},\] where \(N=n+m(k-2)\) and \(\ell=(k-1)^{N-k}\big{(}(k-1)^{k-1}(k-2)+(-1)^{k}2k^{k-1}(k-2)+k^{k-1}-2k^{2k-3 }(k-1)^{1-k}\big{)}\). Proof.: For the terms related to degree of vertex, we have \[\sum_{i=1}^{N}d_{i}^{d-k+1}+\sum_{\{i,j\}\in E(G)}\sum_{\begin{subarray}{c}1 \leq c_{i}+c_{j}\leq d-k\\ 0\leq c_{i},c_{j}<d-k\end{subarray}}d_{i}^{c_{i}}d_{j}^{c_{j}}\] \[=\sum_{\{i,j\}\in E(G)}\left(d_{i}^{d-k}+d_{j}^{d-k}+\sum_{ \begin{subarray}{c}1\leq c_{i}+c_{j}\leq d-k\\ 0\leq c_{i},c_{j}<d-k\end{subarray}}d_{i}^{c_{i}}d_{j}^{c_{j}}\right)\] \[=\sum_{\{i,j\}\in E(G)}\left(\sum_{r=1}^{d-k}\left(d_{i}^{r}+d_{j} ^{r}\right)+\sum_{r=1}^{\lfloor\frac{d-k}{2}\rfloor}(d_{i}d_{j})^{r}+\sum_{r=1 }^{\lfloor\frac{d-k}{2}\rfloor}\sum_{s=r+1}^{d-k-r}\left(d_{i}^{r}d_{j}^{s}+d_ {i}^{s}d_{j}^{r}\right)\right)\] \[=\sum_{r=1}^{d-k}\sum_{\{i,j\}\in E(G)}\left(d_{i}^{r}+d_{j}^{r} \right)+\sum_{r=1}^{\lfloor\frac{d-k}{2}\rfloor}\sum_{\{i,j\}\in E(G)}(d_{i}d _{j})^{r}+\sum_{r=1}^{\lfloor\frac{d-k}{2}\rfloor}\sum_{s=r+1}^{d-k-r}\sum_{\{ i,j\}\in E(G)}\left(d_{i}^{r}d_{j}^{s}+d_{i}^{s}d_{j}^{r}\right)\] \[=\sum_{r=2}^{d-k+1}M_{1}^{(r)}(G)+\sum_{r=1}^{\lfloor\frac{d-k}{ 2}\rfloor}M_{2}^{(r)}(G)+\sum_{r=1}^{\lfloor\frac{d-k}{2}\rfloor}\sum_{s=r+1 }^{d-k-r}M_{\{r,s\}}(G),\text{ for }d=k+1,\ldots,2k.\] Then the expressions shown in Theorem 3.2 can be represented by the Zagreb indices of graphs. Given a \(k\)-uniform hypergraph \(\mathcal{H}\), the _signless Laplacian tensor_ of \(\mathcal{H}\) is \(\mathcal{Q}_{\mathcal{H}}=\mathcal{D}_{\mathcal{H}}+\mathcal{A}_{\mathcal{H}}\). And the \(d\)-th order signless Laplacian spectral moment of \(\mathcal{H}\) is equal to the \(d\)-th order trace of \(\mathcal{Q}_{\mathcal{H}}\). 
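Before turning to the signless Laplacian analogues, the following minimal Python sketch spells out the Zagreb-type sums used in Remark 3.6. It is an illustration only: the example graph and all function names are our own choices, not part of the paper. It also checks that the vertex-sum and edge-sum forms of \(M_{1}^{(r)}\) quoted above agree.

```
# A minimal sketch (plain Python) of the Zagreb-type sums defined above;
# the example graph is an arbitrary illustrative choice.
from collections import Counter

edges = [(1, 2), (2, 3), (3, 4), (2, 4)]          # a small simple graph
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

def M1(r=2):
    """First (variable) Zagreb index: sum of deg(v)**r over vertices."""
    return sum(d ** r for d in deg.values())

def M2(r=1):
    """Second (variable) Zagreb index: sum of (deg(u)*deg(v))**r over edges."""
    return sum((deg[u] * deg[v]) ** r for u, v in edges)

def M_gen(r, s):
    """Generalized Zagreb index: sum over edges of deg(u)^r deg(v)^s + deg(u)^s deg(v)^r."""
    return sum(deg[u] ** r * deg[v] ** s + deg[u] ** s * deg[v] ** r for u, v in edges)

def M1_edge_form(r=2):
    """Edge form of M1^(r) quoted above: sum over edges of deg(u)^(r-1) + deg(v)^(r-1)."""
    return sum(deg[u] ** (r - 1) + deg[v] ** (r - 1) for u, v in edges)

assert M1(2) == M1_edge_form(2)   # the two stated expressions for M1 agree
print(M1(2), M2(1), M_gen(1, 2))
```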
For the signless Laplacian spectral moments of hypergraphs, conclusions similar to Theorem 3.1 and Theorem 3.2 can be obtained by the same method, as shown below. **Theorem 3.7**.: Let \(\mathcal{H}\) be a \(k\)-uniform hypergraph with \(n\) vertices and degree sequence \(d_{1},d_{2},\ldots,d_{n}\). Then \[\operatorname{Tr}_{d}(\mathcal{Q}_{\mathcal{H}})=(k-1)^{n-1}\sum_{i=1}^{n}d_{i}^{d}+\operatorname{Tr}_{d}(\mathcal{A}_{\mathcal{H}})+d(k-1)^{n}\sum_{z=1}^{d-1}\sum_{H\in\mathcal{V}_{z}(\mathcal{H})}\sum_{f\in\mathcal{F}_{d}(H)}\frac{\tau(f)\pi_{f}(\mathcal{Q}_{\mathcal{H}})}{\prod\limits_{v\in V(f)}d^{+}(v)}.\] **Theorem 3.8**.: Let \(G\) be a graph with \(n\) vertices and \(m\) edges. Let \(d_{i}\) denote the degree of vertex \(i\) in \(G\) (\(i=1,\ldots,n\)). For the \(k\)-power hypergraph \(G^{(k)}\) of \(G\), we have \[\operatorname{Tr}_{d}(\mathcal{Q}_{G^{(k)}}) =(k-1)^{N-1}\sum_{i=1}^{n}d_{i}^{d}+dk^{k-2}(k-1)^{N-k}\Big{(}\sum_{i=1}^{n}d_{i}^{d-k+1}+\sum_{\{i,j\}\in E(G)}N_{d-k}(d_{i},d_{j})\Big{)}\] \[+(k-1)^{N-k}\big{(}(k-1)^{k-1}+dk\big{)}(k-2)m,\text{ for }d=k+1,\ldots,2k-1,\] \[\operatorname{Tr}_{2k}(\mathcal{Q}_{G^{(k)}}) =(k-1)^{N-1}\sum_{i=1}^{n}d_{i}^{2k}+2k^{k-1}(k-1)^{N-k}\Big{(}\sum_{i=1}^{n}d_{i}^{k+1}+\sum_{\{i,j\}\in E(G)}N_{k}(d_{i},d_{j})\Big{)}\] \[+k^{2k-3}(k-1)^{N-2k+1}\sum_{i=1}^{n}d_{i}^{2}+qm,\] where \(N=n+m(k-2)\), \(N_{s}(d_{i},d_{j})=\sum_{\begin{subarray}{c}1\leq c_{i}+c_{j}\leq s\\ 0\leq c_{i},c_{j}<s\end{subarray}}d_{i}^{c_{i}}d_{j}^{c_{j}}\big{(}s=1,\ldots,k\big{)}\), and \(q=(k-1)^{N-k}\big{(}(k-1)^{k-1}(k-2)+2k^{k-1}(k-2)+k^{k-1}-2k^{2k-3}(k-1)^{1-k}\big{)}\). The signless Laplacian spectral moments of the \(k\)-power hypergraph \(G^{(k)}\) can also be represented by the Zagreb indices of \(G\). Next, we introduce some concepts for the high-order (signless) Laplacian spectrum of graphs. For a graph \(G\) and an integer \(k\geq 2\), the (signless) Laplacian spectrum of \(G^{(k)}\) is called the _\(k\)-th order (signless) Laplacian spectrum_ of \(G\). A graph \(G\) is said to be determined by its high-order (signless) Laplacian spectrum if there is no non-isomorphic graph \(H\) that has the same \(k\)-th order (signless) Laplacian spectrum as \(G\) for all \(k\geq 2\). We give the following examples to show that some (signless) Laplacian cospectral graphs can be distinguished by their high-order (signless) Laplacian spectrum. **Remark 3.9**.: The graphs shown in Figure 1 are non-isomorphic Laplacian cospectral graphs. By the third-order Laplacian spectral moments of \(3\)-power hypergraphs, we have \(\operatorname{Tr}_{3}(\mathcal{L}_{(G_{1})^{(3)}})\neq\operatorname{Tr}_{3}(\mathcal{L}_{(G_{2})^{(3)}})\), so \((G_{1})^{(3)}\) and \((G_{2})^{(3)}\) have different Laplacian spectra, and \(G_{1}\) and \(G_{2}\) can be distinguished by their high-order Laplacian spectrum.
Figure 1: Non-isomorphic Laplacian cospectral graphs can be distinguished by their high-order Laplacian spectrum.
The graphs \(K_{3}\cup K_{1}\) and \(K_{1,3}\) are non-isomorphic signless Laplacian cospectral graphs. By the third-order signless Laplacian spectral moments of \(3\)-power hypergraphs, we have \(\operatorname{Tr}_{3}(\mathcal{Q}_{(K_{3}\cup K_{1})^{(3)}})\neq\operatorname{Tr}_{3}(\mathcal{Q}_{(K_{1,3})^{(3)}})\), so \((K_{3}\cup K_{1})^{(3)}\) and \((K_{1,3})^{(3)}\) have different signless Laplacian spectra, and \(K_{3}\cup K_{1}\) and \(K_{1,3}\) can be distinguished by their high-order signless Laplacian spectrum.
## Acknowledgements
This work is supported by the National Natural Science Foundation of China (No.
11801115, No. 12071097, No. 12042103, No. 12242105 and No. 12371344), the Natural Science Foundation of the Heilongjiang Province (No. QC2018002) and the Fundamental Research Funds for the Central Universities.
2304.03141
For-Each Operations in Collaborative Apps
Conflict-free Replicated Data Types (CRDTs) allow collaborative access to an app's data. We describe a novel CRDT operation, for-each on the list of CRDTs, and demonstrate its use in collaborative apps. Our for-each operation applies a given mutation to each element of a list, including elements inserted concurrently. This often preserves user intention in a way that would otherwise require custom CRDT algorithms. We give example applications of our for-each operation to collaborative rich-text, recipe, and slideshow editors.
Matthew Weidner, Ria Pradeep, Benito Geordie, Heather Miller
2023-04-06T15:23:50Z
http://arxiv.org/abs/2304.03141v1
# For-Each Operations in Collaborative Apps

###### Abstract.

Conflict-free Replicated Data Types (CRDTs) allow collaborative access to an app's data. We describe a novel CRDT operation, for-each on the list of CRDTs, and demonstrate its use in collaborative apps. Our for-each operation applies a given mutation to each element of a list, including elements inserted concurrently. This often preserves user intention in a way that would otherwise require custom CRDT algorithms. We give example applications of our for-each operation to collaborative rich-text, recipe, and slideshow editors.

Keywords: collaboration, CRDTs, concurrency
Using our for-each operation, we easily implement Figure 1's intended behavior: issue a for-each operation with the mutation "if the character is in the range, set its 'bold' formatting attribute to true". See Section 4.1 for details. We hope that the intended semantics (i.e., user-visible behavior) of the list of \(C\)s and our for-each operation are already clear. However, some technicalities arise, especially when applying for-each to concurrently-inserted elements. Sections 2 and 3 discuss these technical details, including algorithms for all of our constructions. A hurried reader may skip directly to Section 4, which applies for-each operations to example collaborative apps.

### Background

We assume familiarity with the causal order on CRDT operations [(8)].
The terms "(causally) prior", "concurrent", and "(causally) future" reference this order. Our algorithms use vector clocks [(10; 4)] to query the causal order relationship between CRDT operations. Throughout the paper, we use the language of operation-based CRDTs [(15)], although our constructions can easily be reformulated as state-based CRDTs. Each CRDT operation is described in terms of a _generator_ and an _effector_. The generator is called to handle user input on the user's local replica, and it returns a _message_ to be broadcast to other replicas. Each replica, including the sender, applies the operation by passing this message to the corresponding effector; the sender does so atomically with the generator call. We assume that messages are received exactly once on each replica, and in causal order. ## 2. List of CRDTs We begin with a formal description of the _list of CRDTs_. It is modeled on Yjs's Y.Array shared type [(6)]. First, a _list CRDT_ is a classic CRDT type whose external interface is a list (ordered sequence) of immutable values, e.g., the characters in a text document [(1; 15)]. Since the same value may appear multiple times in a list, we use _element_ to refer to a unique instance of a value. A list CRDT has operations to insert and delete elements. A _list of CRDTs_ is a more general CRDT in which the list values are themselves mutable CRDTs. Specifically, let \(C\) be an operation-based CRDT. The external interface of a _list of Cs_ is a list of mutable values of type \(C\). The operations on the list are: * \(\mathsf{insert}(i,\sigma)\): Inserts a new element with initial value \(\sigma-\mathsf{a}\) state of \(C-\)into the list at index \(i\), between the existing elements at indices \(i-1\) and \(i\). All later elements (\(\mathsf{index}\geq i\)) shift to an incremented index. * \(\mathsf{delete}(i)\): Deletes the element at index \(i\). All later elements (\(\mathsf{index}\geq i+1\)) shift to a decremented index. * \(\mathsf{apply}(i,\sigma)\): Applies a \(C\) operation \(o\) to the element at index \(i\). All replicas update their copy of the element's value (a state of \(C\)) in the usual way for \(C\) operations. A concurrent delete operation may cause a replica to receive the apply message after deleting the element; in this case, it ignores the apply message (the delete "wins"). **Example 2.1**.: As a running example, consider a collaborative rich-text document, such as a Google Doc. We can represent one character using a _rich character CRDT_. Its state is a pair (_char_, _attrs_), where _char_ is an immutable character and _attrs_ is a map CRDT [(15)] for formatting attributes. Then a _list of rich character CRDTs_ models the entire rich-text document's state. E.g., the text "\(\mathbf{ab}\)" is represented as \[[\{char:\text{``a''},\mathit{attrs}:\{\}\},\{\}],\] \[\{char:\text{``b''},\mathit{attrs}:\{\text{``bold''},\mathit{true} \}\}]\] We construct the list of \(C\)s using \(C\) and an ordinary list CRDT \(\mathcal{L}\). See Algorithm 1 for pseudocode. Specifically, we assume that \(\mathcal{L}\) produces _positions_ that are unique, immutable, and drawn from a dense total order \(<\), e.g., Logoot's "position identifiers" [(18)].1 Then the list of \(C\)s is implemented as: Footnote 1: If \(\mathcal{L}\) uses extra state (e.g., tomstones) or messages to manage its positions, then those are implicitly added to the state or messages for the list of \(C\)s. 
**State**: A list of elements \((p,\sigma)\), where \(p\) is a position from \(\mathcal{L}\) and \(\sigma\) is a state of \(C\), sorted by \(p\). An application using the list usually only looks at the values \(\sigma\), but it may also use the positions, e.g., for cursor locations. Figure 2. Light cone diagram for a forEach operation. **Insert, delete**: Similar to \(\mathcal{L}\). **Apply**: Similar to \(\mathcal{C}\), except that the message sent to remote replicas is tagged with the element's position \(p\). In the pseudocode, we use \(\mathcal{C}.\mathsf{gen}(o,\sigma)\) to represent \(\mathcal{C}\)'s generator for an operation \(o\), and we use \(\mathcal{C}.\mathsf{eff}(m,\sigma)\) to represent \(\mathcal{C}\)'s effector for a message \(m\). ## 3. For-Each Operation We now define our new CRDT operation, _for-each_, on the list of \(\mathcal{C}\)s. Let \(O\) denote the set of all \(\mathcal{C}\) operations. For technical reasons (described in Section 3.1 below), we restrict for-each to pure operations, where an operation is _pure_ if its generated message is just the operation itself. Formally: Definition 3.1 ().: Let \(\mathcal{C}.\mathsf{gen}(o,\sigma)\) denote \(\mathcal{C}\)'s generator. An operation \(o\in O\) is _pure_ if \(\mathcal{C}.\mathsf{gen}(o,\sigma)=o\) for all states \(\sigma\). Baquero et al. (Baquero et al., 2017) show that many classic CRDTs' operations are pure, at least with the relaxations discussed in Section 3.1. Let \(O_{P}\subset O\) denote the subset of pure \(\mathcal{C}\) operations. Let \[f:(p,prior)\to O_{P}\cup\{\mathsf{del},\mathsf{null}\}\] be a function that takes as input a list element's position \(p\) and a boolean \(prior\) described below, and returns one of: * \(o\in O_{P}\): a pure \(\mathcal{C}\) operation to apply to the element. * del: an instruction to delete the element. * null: an instruction to do nothing. Then the operation \(\mathsf{forEach}(f)\) loops over \(\mathit{elts}\), applies \(f\) to each element, then performs the operation specified by \(f\). Specifically, it loops over all elements that are inserted causally prior or concurrently to the for-each operation itself, but not causally future elements. It also computes the argument \(prior\) for \(f\), which indicates whether each element is causally prior (true) or concurrent (false). Example 3.2 ().: In a rich text document, a user holds a range of text. Let \(\mathit{start}\) and \(\mathit{end}\) be the positions of the first and last-plus-1 characters in the range, so that the range is \([\mathit{start},\mathit{end})\). Define: ``` function\(f(p,prior)\) if\(\mathit{start}\leq p<\mathit{end}\)then return\((\mathit{rich}\mapsto\mathit{rich}.\mathit{attrs}.\mathsf{set}(\mathsf{``bold}\mathsf{"},\mathsf{true}))\) else returnnull ``` Then for\(\mathsf{Each}(f)\) implements the intended behavior in Figure 1: all characters in the range are bolded, including those inserted concurrently. Example 3.3 ().: Again in a rich-text document, a user deletes a range of text \([\mathit{start},\mathit{end})\). To delete only existing characters, we consult \(prior\): Figure 3. Typical user intention for a delete-range operation (top) concurrent to text insertion (bottom): the concurrent text is not deleted, to avoid data loss. function\(f(p,prior)\) if\(prior\)and\(start\leq p<\mathit{end}\)then returndel elsereturnnull ``` Then forEach(\(f\)) implements the behavior shown in Figure 3. Note that we could instead use a forEachPrior operation, i.e., an ordinary loop on the initiating replica. 
However, forEach(\(f\)) generates less network traffic: a single forEach message for the entire range, instead of a separate delete message per deleted character. Algorithm 2 gives a pseudocode implementation of for each, which we now describe. We first modify the list of \(\mathcal{C}\)s to track each element's logical insertion time \(t-\)namely, its sender's vector clock entry.2 We also add a list \(\mathit{buffer}\) to the internal state. Footnote 2: Some authors call this a _equal dot_. When a user calls forEach(\(f\)), their replica broadcasts \(f\) together with the operation's vector clock \(w\). Upon receiving this message, a replica first loops over its current elements. For each element \(\mathit{elt}\), the replica computes \(f(\mathit{elt.p},\mathit{prior})\) and does as instructed, but only locally; it does not broadcast any new messages. Here \(\mathit{prior}\) indicates whether \(\mathit{elt}\) was inserted causally prior to forEach(\(f\)). Note that we do not apply \(f\) to elements that were already deleted on this replica, including by concurrent delete operations (the delete "wins"). Next, the receiving replica stores the message in its \(\mathit{buffer}\). In the future, whenever the replica receives an insert message, it checks whether the insert operation is concurrent to forEach(\(f\)). If so, the replica computes \(f(\mathit{elt.p},\mathsf{false})\) and does as instructed, again only locally. Note that the message stays in the buffer forever, although in principle it could be discarded once all concurrent operations are received (i.e., it is causally stable (Brandt et al., 2015)). ### On Pure Operations Our restriction to pure operations is not arbitrary: we need to know what message to pass to \(\mathcal{C}\).eff, even for concurrent elements. Such elements did not yet exist on the initiating replica, hence the replica could not pass their states to \(\mathcal{C}\).gen. With pure operations, we know that the generated message is just \(o\) itself, as used on line 9. In practice, we can relax the pure restriction by passing additional metadata to \(\mathcal{C}\).eff. In particular, we may pass in \(f\)'s vector clock: line 9 of execute becomes \[\mathit{elt.\sigma}\leftarrow\mathcal{C}.\mathsf{eff}((\mathit{op},\mathsf{w} ),\mathit{elt.\sigma}).\] This does not threaten strong eventual consistency because \(w\) is consistent across replicas. \(\mathcal{C}\) can use the provided vector clocks to query the causal order on operations. That is sufficient to implement most CRDTs using only pure operations (Brandt et al., 2015). List CRDTs' insert operations are a notable exception. 
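The prior/concurrent decision described above reduces to ordinary vector-clock comparisons. The following Python sketch is an illustrative assumption about how such a clock could be represented (it is not code from the paper or from the Collabs library): an element whose insertion entry is dominated by the forEach's vector clock is causally prior; otherwise, under causal delivery, it is concurrent.

```
# Minimal vector-clock sketch (illustrative; not the paper's or Collabs' implementation).
class VectorClock:
    def __init__(self, entries=None):
        self.entries = dict(entries or {})   # replicaID -> counter

    def tick(self, replica_id):
        """Advance this replica's entry and return the new entry (a 'causal dot')."""
        self.entries[replica_id] = self.entries.get(replica_id, 0) + 1
        return (replica_id, self.entries[replica_id])

    def merge(self, other):
        """Pointwise maximum, applied when receiving a remote message."""
        for r, c in other.entries.items():
            self.entries[r] = max(self.entries.get(r, 0), c)

    def dominates(self, entry):
        """True iff the operation tagged with `entry` is causally known to this clock."""
        replica_id, counter = entry
        return self.entries.get(replica_id, 0) >= counter

# Deciding prior vs. concurrent, as in the for-each effector: an element with
# insertion entry t is causally prior to a forEach with vector clock w exactly
# when w dominates t; otherwise (with causal delivery) it is concurrent.
w = VectorClock({"A": 3, "B": 1})      # vector clock sent with forEach(f)
t_prior = ("B", 1)                      # insert already seen by the sender
t_concurrent = ("B", 2)                 # insert the sender had not seen
assert w.dominates(t_prior) and not w.dominates(t_concurrent)
```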
```
per-replica CRDT state:
    elts: A list of elements (p, σ, t), where p is a position from L, σ is a state of C,
        and t = (senderID, clock) is a vector clock entry; sorted by p
    buffer: A list of pairs (f, u), where f is the forEach argument and u is a vector clock entry
    vc: the local vector clock, in the form of a function from replica IDs to ℕ; initially the all-0 function
    replicaID: the unique ID of this replica

function execute(f, elt, prior)
    op ← f(elt.p, prior)
    if op ∈ O then
        elt.σ ← C.eff(op, elt.σ)
    else if op = del then
        delete elt from elts

update insert
    generator(i, σ)
        p ← new L position between the positions at indices i−1 and i in elts
        v ← copy of vc
        v[replicaID] ← v[replicaID] + 1
        return (insert, p, σ, v, replicaID)
```
**Algorithm 2** List of \(\mathcal{C}\)s with our for-each operation. Blocks not shown here are the same as in Algorithm 1 (elements, delete, and apply).

### Correctness

Informally, we claim that Algorithm 2 matches the semantics described in the introduction. That is, a for-each operation's \(f\) is applied to exactly the causally prior and concurrent elements, minus deleted elements, regardless of message order. We defer a precise correctness claim and proof sketch to Appendix A (Theorem A.2 and Corollary A.3).

### Other Data Structures

For-each works equally well if we ignore the list order but still assign a unique ID \(p\) to each element. That is, we can define a for-each operation on a _set of CRDTs_ in which each added element is assigned a unique ID. Likewise, one can define for-each on a CRDT-valued map in which each key-value pair is assigned a unique ID when set, like Yjs's Y.Map shared type (Becker, 1997).
However, our construction does not work with a Riak-style map (Rikolov, 2001) in which a key's value CRDT is created on first use instead of explicitly set: two users may create the same key's value CRDT concurrently, complicating the choice of which for-each operations to apply (Kleppmann and Beresford, 2001; Klemm and Beresford, 2001). We expect similar issues for the list of CRDTs in Kleppmann and Beresford's JSON CRDT (Kleppmann and Beresford, 2001), in which an element may reappear after deletion. We leave full descriptions to future work. ## 4. Examples We now describe example uses of our for-each operation in collaborative apps, at a high level. As in Section 3, we write a for-each operation as \(\mathsf{forEach}(f)\), where \[f:(p,prior)\to O_{P}\cup\{\mathsf{del},\mathsf{null}\}\] is a function that takes as input a list element's position \(p\) and whether it is causally prior (else concurrent), and outputs an instruction for that element: apply a (pure) operation \(o\in O_{P}\), delete the element, or do nothing. ### Rich-Text Editor Let us begin with a collaborative rich-text editor, as described in the introduction. To recap Examples 2.1 and 3.2, we can represent a rich-text document as a list of _rich character CRDTs_ (\(char\), \(attrs\)), where \(char\) is an immutable character and \(attrs\) is a map CRDT for formatting attributes. Given list positions \(start\) and \(end\), define: ``` function\(f(p,prior)\) if\(start\leq p<end\)then return\((rich\mapsto rich.attrs.\mathsf{set}(\mathsf{``bold"},\mathsf{true}))\) elsereturnnull ``` Then for\(\mathsf{Each}(f)\) holds the range \([start,end)\) with the intended behavior in Figure 1: all characters in the range are bolded, including concurrently-inserted ones. It is possible to use a closed interval \([start,end^{\prime}]\) instead of the half-open interval \([start,end)\). Here \(end^{\prime}\) is the position of the last character in the original range, while \(end\) is the last-plus-one position. The difference is that \([start,end)\) will also format concurrently-inserted characters at the end of the range, while \([start,end^{\prime}]\) will not. The latter behavior is typical for hyperlink formatting (Becker, 1997). Other formatting attributes are similar. However, for deletions, one typically deletes only causally prior characters, as in Example 3.3. This is safer because deletions are monotonic (permanent), making unintended deletions harder to undo. Note that a literal list of rich character CRDTs is memory-inefficient, since it stores a map CRDT per character. However, one can use this theoretical model as a guide, then implement an equivalent but more efficient CRDT. For example, one can store \(attrs\)'s state explicitly only when it differs from the previous character, like in Peritext (Becker, 1997). ### Recipe Editor A collaborative recipe editor allows multiple users to view and edit a recipe for a meal. Let us consider in particular the list of ingredients. We can model it as a list of _ingredient CRDTs_, where each ingredient CRDT has sub-CRDTs for its name and amount. Suppose we add a "scale recipe" button that multiplies every amount by a given value. If one user scales the recipe, while concurrently, another user inserts a new ingredient, then it is important that the new ingredient's amount is also scaled. Otherwise, it will be out of proportion with the other ingredients. To implement such a "scale recipe" operation, let \(s\) be the scaling amount. 
Define \(f\) by: ``` function\(f(p,prior)\) return\((ingredient\to ingredient.amount.\mathsf{mult}(s))\) ``` Then for\(\mathsf{Each}(f)\) scales every ingredient's amount, including ingredients inserted concurrently.3 Footnote 3: Here we assume an operation \(\mathsf{mult}(s)\) on the “ingredient amount” CRDT. This is nontrivial if you also allow \(\mathsf{set}(value)\) operations, but it can be implemented using another list with for-each operations; we omit the details. ### Slideshow Editor A slideshow editor is another collaborative app that can use for-each operations. A single slide might contain multiple images, shapes, or text boxes. These objects can be edited individually or together. For example, a user might translate (shift) a single object while another simultaneously rotates all objects on the slide. To implement these translations and rotations, each translation on an object can be represented as a translation vector. Then the object's position is represented by a list CRDT \(\mathcal{L}\) containing all translations made so far; the actual position is the sum of all the vectors in that list. The entire slide can be represented as a list CRDT \(\mathcal{L}^{\prime}\), where each element is a list CRDT \(\mathcal{L}_{p}\) of an object \(p\)'s translation vectors.4 Footnote 4: Since the objects on a slide are unordered, we use their positions \(p\) merely as IDs, ignoring their total order. A user shifts an object \(p\) by appending a translation vector to its list \(\mathcal{L}_{p}\), and rotates an object \(p\) by multiplying corresponding translation vectors in \(\mathcal{L}_{p}\) by a rotation matrix. When a group of objects are edited, the same operation should be applied to each object's list. For example, say a user rotates a group of objects 30 degrees clockwise, like the bottom operation of Figure 4. We want to rotate each object in the group. To keep objects aligned within the group, we also want to rotate any concurrent translation of those objects, such as the top translation of Figure 4. To implement this, let \(updatedObjects\) be a set of the positions \(p\) of the selected objects. Define \(g\) and \(f\) as: ``` function\(g(q,prior)\) return\(\left(vector\mapsto vector.\text{mult}\left(\begin{bmatrix}\cos 30&\sin 30\\ -\sin 30&\cos 30\end{bmatrix}\right)\right)\) function\(f(p,prior)\) if\(p\in updatedObjects\)then return\(\left(object\mapsto object.forEach(g)\right)\) else return null ``` Then forEach(\(f\)) rotates all \(updatedObjects\) 30 degrees clockwise. The degree of rotation can also be stored to render the rotated object correctly. ## 5. Related Work Dataflow programming and stream processing both perform operations "for each" element of a stream. Unlike this work, they typically apply a for-each operation to all regions in Figure 2, including the causal future. In particular, FlowPools ((12)) allow issuing a for-each operation after a FlowPool (stream) begins; they apply the operation to all existing elements immediately, then store it as a callback for concurrent or future elements, similar to our algorithm (Algorithm 2). Operational Transformation ((13)) allows every operation on a collaborative app to perform a transformation for each concurrent operation. In contrast, we allow for-each operations to transform list elements (equivalently, insert operations) but not each other. Thus we do not need complicated algebraic rules to ensure eventual consistency. 
The semidirect product of CRDTs ((16)) combines the operations of two CRDTs, \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\), in a single CRDT. It essentially implements the rule: to apply a \(\mathcal{C}_{2}\) operation, "act on" each prior and concurrent \(\mathcal{C}_{1}\) operation in some way, then reduce over those \(\mathcal{C}_{1}\) operations to get the current state. However, instead of storing the literal list of \(\mathcal{C}_{1}\) operations, it only stores their reduced form (the actual state). Thus one can view the semidirect product as an optimized but less intuitive version of our list with for-each operations. ## 6. Conclusions We formalized the list of CRDTs and described a novel for-each operation on this list. The resulting CRDT models a list of mutable values in a collaborative app, equipped with the operation: for each element of the list, including ones inserted concurrently, apply some operation to that element. We gave several examples in which our for-each operation matches user intention better than a literal for-each loop. For future work, we plan to implement our for-each operation in the Collabs CRDT library ((17)). ## Acknowledgments We thank James Riely for insightful questions about a previous paper (16) that inspired this work. We also thank the anonymous PaPoC reviewers for helpful feedback. Matthew Weidner was supported by an NDSEG Fellowship sponsored by the US Office of Naval Research. Benito Geordie was supported by an REU sponsored by the US National Science Foundation.
2308.12399
Symmetric Nonnegative Trifactorization of Pattern Matrices
A factorization of an $n \times n$ nonnegative symmetric matrix $A$ of the form $BCB^T$, where $C$ is a $k \times k$ symmetric matrix, and both $B$ and $C$ are required to be nonnegative, is called the Symmetric Nonnegative Matrix Trifactorization (SN-Trifactorization). The SNT-rank of $A$ is the minimal $k$ for which such factorization exists. The SNT-rank of a simple graph $G$ that allows loops is defined to be the minimal possible SNT-rank of all symmetric nonnegative matrices whose zero-nonzero pattern is prescribed by a given graph. We define set-join covers of graphs, and show that finding the SNT-rank of $G$ is equivalent to finding the minimal order of a set-join cover of $G$. Using this insight we develop basic properties of the SNT-rank for graphs and compute it for trees and cycles without loops. We show the equivalence between the SNT-rank for complete graphs and the Katona problem, and discuss uniqueness of patterns of matrices in the factorization.
Damjana Kokol Bukovšek, Helena Šmigoc
2023-08-23T19:37:43Z
http://arxiv.org/abs/2308.12399v1
# Symmetric Nonnegative Trifactorization of Pattern Matrices ###### Abstract A factorization of an \(n\times n\) nonnegative symmetric matrix \(A\) of the form \(BCB^{T}\), where \(C\) is a \(k\times k\) symmetric matrix, and both \(B\) and \(C\) are required to be nonnegative, is called the Symmetric Nonnegative Matrix Tric factorization (SN-Trifactorization). The SNT-rank of \(A\) is the minimal \(k\) for which such factorization exists. The SNT-rank of a simple graph \(G\) that allows loops is defined to be the minimal possible SNT-rank of all symmetric nonnegative matrices whose zero-nonzero pattern is prescribed by a given graph. We define set-join covers of graphs, and show that finding the SNT-rank of \(G\) is equivalent to finding the minimal order of a set-join cover of \(G\). Using this insight we develop basic properties of the SNT-rank for graphs and compute it for trees and cycles without loops. We show the equivalence between the SNT-rank for complete graphs and the Katona problem, and discuss uniqueness of patterns of matrices in the factorization. keywords: Nonnegative Matrix Factorization; Nonnegative Symmetric Matrices; Symmetric Nonnegative Trifactorization; Pattern Matrices Msc: [2020] 15A23, 15B48 + Footnote †: journal: ## 1 Introduction and Notation Factorizations of matrices where the factors are required to be entry-wise nonnegative provide a powerful tool in analysing nonnegative data. In this paper we consider a factorization of nonnegative symmetric matrices, which takes into account symmetry, nonnegativity and low rank of a matrix. Denote by \(\mathbb{R}_{+}\) the set of nonnegative real numbers, by \(\mathbb{R}^{n\times m}\) the set of \(n\times m\) real matrices, and by \(\mathbb{R}_{+}^{n\times m}\) the set of \(n\times m\) entry-wise nonnegative matrices. Furthermore, we denote \[\mathcal{S}_{n}^{+}=\{A\in\mathbb{R}_{+}^{n\times n};A=A^{T}\}.\] **Definition 1.1**.: A factorization of \(A\in\mathcal{S}_{n}^{+}\) of the form \(BCB^{T}\), where \(B\in\mathbb{R}_{+}^{n\times k}\) and \(C\in\mathcal{S}_{k}^{+}\), is called _Symmetric Nonnegative Trifactorization_ of \(A\) (_SN-Trifactorization_ for short). Minimal possible \(k\) in such factorization is called the _SNT-rank_ of \(A\), and is denoted by \(\mathrm{st}_{+}(A)\). The SN-Trifactorization was studied in [7], and is closely related to two other well known factorizations that feature nonnegative factors: Nonnegative Matrix Factorization and Completely Positive Factorization. We refer the reader to [4] for the background on the Nonnegative Matrix Factorization and to [1] for the background on the Completely Positive Factorization. The zero-nonzero pattern of a nonnegative matrix \(A\in\mathcal{S}_{n}^{+}\) poses constrains on the zero-nonzero pattern of nonnegative matrices \(B\in\mathbb{R}_{+}^{n\times k}\) and \(C\in\mathcal{S}_{k}^{+}\) satisfying \(A=BCB^{T}\). The aim of this paper is to better understand those restrictions. In the context of matrix patterns, graphs arise naturally. In this work, a graph \(G=(V(G),E(G))\) will always be a simple undirected graph that allows loops. Hence, \(E(G)\subseteq\{\{i,j\};i,j\in V(G)\}\), where \(\{i\}\in E(G)\) corresponds to a loop on the vertex \(i\) in \(G\). A vertex \(i\in V(G)\) is _isolated_, if \(\{i,j\}\not\in E(G)\) for all \(j\in V(G)\). In particular, \(i\) does not have a loop. The vertex set \(V(G)\) will often be \([n]:=\{1,2,\ldots,n\}\). Let \(G\) and \(H\) be two simple graphs that allow loops. 
We denote by \(G\cup H\) their disjoint union, by \(tG\) the union of \(t\) copies of \(G\) and by \(G\lor H\) the join of \(G\) and \(H\), i.e. the graph with \(V(G\lor H)=V(G)\cup V(H)\) and \(E(G\lor H)=E(G)\cup E(H)\cup\{\{i,j\}\mid i\in V(G),j\in V(H)\}\). We use standard notation for graphs without any loops: we denote by \(K_{n}\) the complete graph on \(n\) vertices, by \(K_{m,n}\) the bipartite graph on \(m+n\) vertices, by \(P_{n}\) the path, and by \(C_{n}\) the cycle on \(n\) vertices. We denote by \(K_{n}^{\ell}\) the complete graph on \(n\) vertices with all possible loops. We denote matrices by capital letters, \(A\in\mathbb{R}^{n\times m}\), vectors by bold letters, \(\mathbf{a}\in\mathbb{R}^{n}\), the zero matrix in \(\mathbb{R}^{n\times m}\) by \(0_{n\times m}\) and the zero vector in \(\mathbb{R}^{n}\) by \(\mathbf{0}_{n}\). Let \(A,B\in\mathbb{R}^{n\times m}\). Then \(A>B\) means that \(A-B\) is entry-wise positive, and \(A\geq B\) denotes entry-wise nonnegativity of \(A-B\). _The support_ of a vector \(\mathbf{a}\in\mathbb{R}^{n}\) is the set of all indices in \(i\in[n]\) for which \(a_{i}\neq 0\). For \(A\in\mathbb{R}^{n\times m}\) and \(\mathcal{S}\subset[n],\mathcal{T}\subset[m]\), we denote by \(A[\mathcal{S},\mathcal{T}]\) the submatrix of \(A\) containing entries \(a_{ij}\) for all \(i\in\mathcal{S},j\in\mathcal{T}\). After a brief introduction and development of notation in Section 1, we dedicate Section 2 to the definition of the SNT-rank and set-join covers of simple graphs that allow loops. We prove that finding the SNT-rank of a graph \(G\) is equivalent to determining minimal order of the set-join cover of \(G\). Using this insight we develop basic properties of SNT-rank of graphs. We conclude Section 2 by defining uniqueness of optimal set-join covers of graphs. In Section 3 we find the SNT-rank of trees without loops and consider the unicyclic graphs without loops. Section 4 is dedicated to the SNT-rank and optimal set-join covers of complete graphs without loops. Section 5 offers a selection of possible applications and further research directions motivated by the work in this paper. ## 2 Set-join covers and \(\mathbf{st_{+}(G)}\) In this work we will for the most part take aside the actual values of matrices and only consider their patterns. Our main focus will be the question, how small can \(\mathrm{st}_{+}(A)\) be among all matrices \(A\) with a given zero-nonzero pattern. ### Setup We begin by defining \(\mathrm{st}_{+}(G)\) for a graph \(G\), and developing a combinatorial question that is equivalent to finding \(\mathrm{st}_{+}(G)\). **Definition 2.1**.: Let \(A\in\mathbb{R}_{+}^{n\times m}\). The _pattern matrix_ of \(A\) is the matrix \(\mathrm{sign}(A)\in\{0,1\}^{n\times m}\) defined by \[\mathrm{sign}(A)_{ij}=\begin{cases}1;&\text{if }a_{ij}>0,\\ 0;&\text{if }a_{ij}=0.\end{cases}\] (The pattern matrix is in some literature called _the derangement matrix_.) **Definition 2.2**.: _The pattern graph \(G(A)=(V(G),E(G))\) of a matrix \(A\in\mathcal{S}_{n}^{+}\) is defined by \(V(G)=[n]\), and \(\{i,j\}\in E(G)\) precisely when \(a_{ij}>0\)._ Let \(G\) be a graph with \(|V(G)|=n\). By \(\mathcal{S}^{+}(G)\) we denote the set of all matrices in \(\mathcal{S}_{n}^{+}\) with the pattern graph \(G\), and ask the question how small can the SNT-rank be on this set. **Definition 2.3**.: Let \(G=(V(G),E(G))\) be a simple graph with loops. We define: \(\mathrm{st}_{+}(G):=\min\{\mathrm{st}_{+}(A);A\in\mathcal{S}^{+}(G)\}\). 
**Remark 2.4**.: A factorization of \(A\in\mathbb{R}_{+}^{n\times m}\) of the form \(UV^{T}\), where \(U\in\mathbb{R}_{+}^{n\times k}\) and \(V\in\mathbb{R}_{+}^{m\times k}\), is called Nonnegative Matrix Factorization of \(A\). Minimal possible \(k\) in such factorization is called the NMF-rank of \(A\), and is denoted by \(\mathrm{rk}_{+}(A)\) (see [4]). In this context also Boolean rank is studied. It is defined as \[\mathrm{rk}_{01}(A)=\min\{\mathrm{rk}_{+}(B);B\in\mathbb{R}_{+}^{n\times m}, \mathrm{sign}(B)=\mathrm{sign}(A)\}.\] It is easy to see that Boolean rank is equal to rectangle covering bound, i.e. the minimum number of rectangles needed to cover all nonzero entries in \(A\) (see [4]). We proceed to develop basic properties of \(\operatorname{st}_{+}(G)\). We start with a simple proposition which follows from the property that for \(A\in\mathcal{S}_{n}^{+},B\in\mathcal{S}_{m}^{+}\) we have \(\operatorname{st}_{+}(A\oplus B)=\operatorname{st}_{+}(A)+\operatorname{st}_{+} (B)\), see [7]. **Proposition 2.5**.: _For any graphs \(G,H\) we have \(\operatorname{st}_{+}(G\cup H)=\operatorname{st}_{+}(G)+\operatorname{st}_{+} (H)\). In particular, \(\operatorname{st}_{+}(G\cup K_{1})=\operatorname{st}_{+}(G)\) as \(\operatorname{st}_{+}(K_{1})=0\)._ The next result connects the patterns of nonnegative matrices \(A\), \(B\) and \(C\), provided \(A=BCB^{T}\). **Proposition 2.6**.: _Let \(A=BCB^{T}\) be SN-Trifactorization of \(A=(a_{ij})\in\mathcal{S}_{n}^{+}\). For \(i\in[n]\) let \(r_{i}(B)^{T}\in\mathbb{R}^{1\times k}\) be the \(i\)-th row of \(B\), and \(\mathcal{R}_{i}\subseteq\{1,\ldots,k\}\) the support of \(r_{i}(B)\), i.e. \(\mathcal{R}_{i}:=\{j;(r_{i}(B))_{j}\neq 0\}\). Then \(C[\mathcal{R}_{i},\mathcal{R}_{j}]=0\) if and only if \(a_{ij}=0\)._ Proof.: Since \(a_{ij}=r_{i}(B)^{T}Cr_{j}(B)=(r_{i}(B)[\mathcal{R}_{i}])^{T}C[\mathcal{R}_{i}, \mathcal{R}_{j}](r_{j}(B)[\mathcal{R}_{j}])\), and \(r_{s}(B)[\mathcal{R}_{s}]>0\) by the definition of \(\mathcal{R}_{s}\), the conclusion follows. Note that \(\operatorname{sign}(B)\) in Proposition 2.6 is the incidence matrix of \(\mathbf{R}=(\mathcal{R}_{1},\ldots,\mathcal{R}_{n})\). We recall the definition of the incidence matrix below. **Definition 2.7**.: Let \(\mathbf{R}=(\mathcal{R}_{1},\ldots,\mathcal{R}_{n})\) be a list of sets \(\mathcal{R}_{i}\subseteq\{1,\ldots,k\}\). _The incidence matrix_ of \(\mathbf{R}\) is an \(n\times k\) matrix \(\iota(\mathbf{R})\) with \[\iota(\mathbf{R})_{ij}=\begin{cases}1;&j\in\mathcal{R}_{i},\\ 0;&j\not\in\mathcal{R}_{i}.\end{cases}\] Since \(\operatorname{st}_{+}(G)\) is a parameter of the graph \(G\), we would like to determine it directly from \(G\). For this we need to introduce the set-join and the set-join cover. **Definition 2.8**.: Let \(\mathcal{S}\) be a finite set and \(\mathcal{K},\mathcal{L}\) two nonempty subsets of \(\mathcal{S}\) with possibly nonempty intersection. We define _the set-join of \(\mathcal{K}\) and \(\mathcal{L}\) on \(\mathcal{S}\)_, denoted by \(\mathcal{K}\vee_{\mathcal{S}}\mathcal{L}\), to be the graph with \(V(\mathcal{K}\vee_{\mathcal{S}}\mathcal{L})=\mathcal{S}\), and \(E(\mathcal{K}\vee_{\mathcal{S}}\mathcal{L})=\{\{i,j\};i\in\mathcal{K},j\in \mathcal{L}\}\). Note that \(\mathcal{K}\vee_{\mathcal{S}}\mathcal{L}\) has a loop on vertex \(i\) precisely when \(i\in\mathcal{K}\cap\mathcal{L}\), and that if \(\mathcal{K}\cup\mathcal{L}\neq\mathcal{S}\), then \(\mathcal{K}\vee_{\mathcal{S}}\mathcal{L}\) contains some isolated vertices without loops. 
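As a small illustration of Definition 2.8 (the sets \(\mathcal{S}\), \(\mathcal{K}\), \(\mathcal{L}\) below are arbitrary choices, not taken from the paper), the following Python sketch builds the edge set of \(\mathcal{K}\vee_{\mathcal{S}}\mathcal{L}\) and checks that loops occur exactly on \(\mathcal{K}\cap\mathcal{L}\) and that vertices outside \(\mathcal{K}\cup\mathcal{L}\) are isolated.

```
# Minimal sketch of the set-join of Definition 2.8 (illustrative example sets).
S = {1, 2, 3, 4, 5}
K = {1, 2, 3}
L = {3, 4}

# Edges of K ∨_S L: all unordered pairs {i, j} with i in K and j in L
# (a pair {i, i} is a loop on vertex i).
edges = {frozenset((i, j)) for i in K for j in L}

loops = {next(iter(e)) for e in edges if len(e) == 1}
assert loops == K & L                      # loops exactly on K ∩ L
isolated = S - (K | L)                     # vertices left isolated by this set-join
print(sorted(tuple(sorted(e)) for e in edges), "isolated:", sorted(isolated))
```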
**Definition 2.9**.: Let \(G\) be a graph and \(\mathcal{K}_{i},\mathcal{L}_{i}\subseteq V(G)\). We say that \[\mathscr{C}=\{\mathcal{K}_{i}\vee_{V(G)}\mathcal{L}_{i},i\in[t]\}\] is _a set-join cover_ of \(G\) if \(E(G)=\bigcup_{i=1}^{t}E(\mathcal{K}_{i}\vee_{V(G)}\mathcal{L}_{i})\). For a set-join cover \(\mathscr{C}\) we define: _the component set_ of \(\mathscr{C}\) to be: \[V(\mathscr{C})=\{\mathcal{K}_{i};i\in[t]\}\cup\{\mathcal{L}_{i};i\in[t]\},\] _the order_ of \(\mathscr{C}\) to be \(|\mathscr{C}|=|V(\mathscr{C})|\), _the graph_ of \(\mathscr{C}\) denoted by \(G(\mathscr{C})\) to be a graph with \(V(G(\mathscr{C}))=V(\mathscr{C})\) and \(\{\mathcal{K}_{i},\mathcal{K}_{j}\}\in E(G(\mathscr{C}))\) if and only if \(\mathcal{K}_{i}\vee_{V(G)}\mathcal{K}_{j}\in\mathscr{C}\). Note that in the above definition we allow \(\mathcal{K}_{i}=\mathcal{K}_{j}\) or \(\mathcal{K}_{i}=\mathcal{L}_{j}\), so \(|\mathscr{C}|\) can be smaller than \(2t\), and can in the extreme case even be equal to \(1\). **Definition 2.10**.: Let \(G\) be graph with a set-join cover \(\mathscr{C}\), and \(\mathcal{S}\subseteq V(G)\). _The restriction of \(\mathscr{C}\) to \(\mathcal{S}\)_ is the set \[\mathscr{C}[\mathcal{S}]=\{(\mathcal{K}\cap\mathcal{S})\vee_{V(G)\cap \mathcal{S}}(\mathcal{L}\cap\mathcal{S});\mathcal{K}\vee_{V(G)}\mathcal{L}\in \mathscr{C},\mathcal{K}\cap\mathcal{S}\neq\emptyset,\mathcal{L}\cap\mathcal{S} \neq\emptyset\}.\] In the definition above, \(\mathscr{C}[\mathcal{S}]\) is a set-join cover of a subgraph of \(G\) induced on \(V(G)\cap\mathcal{S}\) that satisfies \(|\mathscr{C}[\mathcal{S}]|\leq|\mathscr{C}|\). **Theorem 2.11**.: _Let \(G\) be a graph. Then \(\mathrm{st}_{+}(G)=\min\{|\mathscr{C}|;\,\mathscr{C}\) a set-join cover of \(G\}\)._ Proof.: Let \(G\) be a graph with \(V(G)=[n]\), and \(A\in\mathcal{S}^{+}(G)\) with SN-Trifactorization \(A=BCB^{T}\), where \(C\in\mathcal{S}^{+}_{k}\). For \(i\in[k]\), let \(c_{i}(B)\) be the \(i\)-th column of \(B\), and \(\mathcal{L}_{i}\) the support of the \(c_{i}(B)\). Clearly, \(c_{i}(B)c_{j}(B)^{T}+c_{j}(B)c_{i}(B)^{T}\in\mathcal{S}^{+}(\mathcal{L}_{i} \vee_{[n]}\mathcal{L}_{j})\), and \[\mathrm{sign}(A)=\mathrm{sign}\left\{\sum_{\{i,j\}\in E(G(C))}\left(c_{i}(B)c _{j}(B)^{T}+c_{j}(B)c_{i}(B)^{T}\right)\right\}.\] We deduce that \(\mathscr{C}=\{\mathcal{L}_{i}\vee_{[n]}\mathcal{L}_{j};\{i,j\}\in E(G(C))\}\) is a set-join cover of \(G\) with \(|\mathscr{C}|=k\). Conversely, let \(\mathscr{C}=\{\mathcal{L}_{i}\vee_{[n]}\mathcal{K}_{i};i\in[t]\}\) be a set-join cover of \(G\), and let us list all the elements in \(V(\mathscr{C})\) in some fixed order: \((\widehat{\mathcal{L}}_{1},\widehat{\mathcal{L}}_{2},\ldots,\widehat{ \mathcal{L}}_{|\mathscr{C}|})\). We define \(C\) to be a \(|\mathscr{C}|\times|\mathscr{C}|\) matrix with: \[c_{ij}=\begin{cases}1;&\text{if }\widehat{\mathcal{L}}_{i}\vee_{[n]}\widehat{ \mathcal{L}}_{j}\in\mathscr{C},\\ 0;&\text{otherwise},\end{cases}\] and \(B\) to be an \(n\times|\mathscr{C}|\) matrix with \[b_{ij}=\begin{cases}1,&\text{if }i\in\widehat{\mathcal{L}}_{j}\\ 0,&\text{otherwise}.\end{cases}\] In other words, \(C\) is the zero-one matrix in \(\mathcal{S}^{+}(G(\mathscr{C}))\), and \(B^{T}\) is the incidence matrix of \((\widehat{\mathcal{L}}_{1},\widehat{\mathcal{L}}_{2},\ldots,\widehat{\mathcal{ L}}_{|\mathscr{C}|})\). The proof is completed by noting that \(BCB^{T}\in\mathcal{S}^{+}(G)\). 
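The constructive direction of this proof is easy to run by hand. The sketch below (plain Python; the graph, a 4-cycle, and its order-2 set-join cover are our own illustrative choices) builds the 0-1 matrices \(B\) and \(C\) described above and checks that \(BCB^{T}\) has the prescribed pattern.

```
# Sketch of the constructive direction of Theorem 2.11 (illustrative example:
# the 4-cycle covered by the single set-join {1,3} ∨ {2,4}).
V = [1, 2, 3, 4]
cover_components = [{1, 3}, {2, 4}]            # V(C), listed in a fixed order
cover_edges = [(0, 1)]                          # {1,3} ∨ {2,4} is the only set-join

k = len(cover_components)
# C: 0-1 matrix with c_ij = 1 iff component i joins component j in the cover.
C = [[1 if (i, j) in cover_edges or (j, i) in cover_edges else 0
      for j in range(k)] for i in range(k)]
# B: transposed incidence matrix of the components.
B = [[1 if v in comp else 0 for comp in cover_components] for v in V]

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = matmul(matmul(B, C), [list(row) for row in zip(*B)])    # B C B^T
expected_edges = {(1, 2), (2, 3), (3, 4), (1, 4)}           # E(C4), no loops
pattern = {(V[i], V[j]) for i in range(len(V)) for j in range(i, len(V)) if A[i][j] > 0}
assert pattern == expected_edges                            # BCB^T has the pattern of G
```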
**Definition 2.12**.: If \(\mathscr{C}\) is a set-join cover of \(G\) with of order \(\operatorname{st}_{+}(G)\), then we say that \(\mathscr{C}\) is an _optimal set-join cover_ for \(G\). We will use the abbreviation _OSJ cover_ for \(G\). **Proposition 2.13**.: _An OSJ cover \(\mathscr{C}\) of \(G\) satisfies_ \[\operatorname{st}_{+}(G(\mathscr{C}))=|\mathscr{C}|=|V(G(\mathscr{C}))|.\] Proof.: Let \(\mathscr{C}\) be an OSJ cover of \(G\), \(C\in\mathcal{S}^{+}(G(\mathscr{C}))\) and \(B=\iota(V(\mathscr{C}))^{T}\) the transposed incidence matrix of \(V(\mathscr{C})\). Then \(A=BCB^{T}\in\mathcal{S}^{+}(G)\). If \(\operatorname{st}_{+}(G(\mathscr{C}))<|\mathscr{C}|\), then the matrix \(C\in\mathcal{S}^{+}(G(\mathscr{C}))\) can be chosen so that \(\operatorname{st}_{+}(C)<|\mathscr{C}|\). Hence, \(C=B_{1}C_{1}B_{1}^{T}\) where \(C_{1}\in\mathcal{S}_{k}^{+}\) with \(k<|\mathscr{C}|\). From \(A=(BB_{1})C_{1}(BB_{1})^{T}\) we conclude \(\operatorname{st}_{+}(A)\leq k<|\mathscr{C}|=\operatorname{st}_{+}(G)\), a contradiction. **Remark 2.14**.: A set-join cover \(\mathscr{C}\) of a graph can be interpreted in the following way. Consider a set of items \(V\) that are either required or forbidden to interact. The interactions are organised by meetings of certain subgroups of \(V\). If two subgroups \(V_{1},V_{2}\subseteq V\) meet, then all the items from \(V_{1}\) interact with all the items from \(V_{2}\). Hence, if \(i,j\in V\) are forbidden to interact, and \(V_{1}\) and \(V_{2}\) are two subgroups that meet, then \(i\in V_{1}\) and \(j\in V_{2}\) is not allowed. The desired interactions can clearly be organised by meetings of singletons, and we are asking what is the minimal number of subgroups that need to be formed, to be able to organise the desired interactions in such a way that no forbidden interactions occur. Let \(G\) be a graph that records which interactions are required and which are forbidden: \(V(G)=V\), \(\{i,j\}\in E(G)\) if and only if \(\{i,j\}\) are required to interact. Any set-join cover \(\mathscr{C}\) of \(G\) gives us possible way of organising required interactions, and \(\operatorname{st}_{+}(G)\) is the minimal number of groups that need to be formed. We illustrate Theorem 2.11 by the following example. **Example 2.15**.: Let \(A=BCB^{T}\) with \[B=\left(\begin{array}{cccc}1&0&1&0&0\\ 1&0&0&1&0\\ 1&0&0&0&1\\ 0&1&1&0&0\\ 0&1&0&1&0\\ 0&1&0&0&1\end{array}\right),\quad C=\left(\begin{array}{cccc}0&1&0&0&0\\ 1&0&0&0&0\\ 0&0&0&1&1\\ 0&0&1&0&1\\ 0&0&1&1&0\end{array}\right).\] Then \(G(A)=K_{6}\). This proves that \(\operatorname{st}_{+}(K_{6})\leq 5\). (In Section 4 we will show that \(\operatorname{st}_{+}(K_{6})=5\).) The patterns of \(B\) and \(C\) determine the set-join cover of \(K_{6}\): \[\mathcal{K}_{1}=\{1,2,3\},\mathcal{K}_{2}=\{4,5,6\},\mathcal{K}_{3}=\{1,4\}, \mathcal{K}_{4}=\{2,5\},\mathcal{K}_{5}=\{3,6\},\] and \(\mathscr{C}=\{\mathcal{K}_{1}\vee_{[6]}\mathcal{K}_{2},\mathcal{K}_{3}\vee_{[ 6]}\mathcal{K}_{4},\mathcal{K}_{4}\vee_{[6]}\mathcal{K}_{5},\mathcal{K}_{5} \vee_{[6]}\mathcal{K}_{3}\}\). Notice that \(G(\mathscr{C})=G(C)=K_{2}\cup K_{3}\). **Example 2.16**.: Let \(G\) be a graph with the adjacency matrix \[A=\left(\begin{array}{ccccc}0&1&1&0&0\\ 1&1&1&1&0\\ 1&1&1&1&1\\ 0&1&1&1&1\\ 0&0&1&1&0\end{array}\right).\] Then \(\operatorname{st}_{+}(G)=4\). Indeed, any matrix \(X\) with pattern matrix \(\operatorname{sign}(X)=A\) has rank at least \(4\). 
The first three rows of \(X\) clearly have to be linearly independent, and the last row cannot be written as a linear combination of the first three. Since \(\operatorname{st}_{+}(X)\geq\operatorname{rk}(X)\) for any \(X\in\mathcal{S}^{+}(G)\), we have \(\operatorname{st}_{+}(G)\geq 4\). Let \[\mathcal{K}_{1}=\{1,2\},\mathcal{K}_{2}=\{2,3\},\mathcal{K}_{3}=\{3,4\},\mathcal{K}_{4}=\{4,5\},\] and observe that \(\mathscr{C}=\{\mathcal{K}_{1}\vee_{[5]}\mathcal{K}_{2},\mathcal{K}_{2}\vee_{[5]}\mathcal{K}_{3},\mathcal{K}_{3}\vee_{[5]}\mathcal{K}_{4}\}\) is a set-join cover of \(G\) with \(|\mathscr{C}|=4\), proving \(\operatorname{st}_{+}(G)\leq 4\). Let \(B^{T}=\iota(\mathbf{R})\) be the incidence matrix of the list \(\mathbf{R}=(\mathcal{K}_{1},\mathcal{K}_{2},\mathcal{K}_{3},\mathcal{K}_{4})\), \(C\) the adjacency matrix of the path on \(4\) vertices, and \(X=BCB^{T}\). Then \(X\in\mathcal{S}^{+}(G)\) and \(\operatorname{st}_{+}(X)=4\). **Remark 2.17**.: A matrix \(A\in\mathcal{S}^{+}_{n}\) is completely positive if it can be written as \(BB^{T}\) for some matrix \(B\in\mathbb{R}^{n\times k}_{+}\). Such a factorization is called a CP-Factorization of \(A\). The minimal possible \(k\) in a CP-Factorization as above is called the CP-rank of \(A\), and is denoted by \(\operatorname{cp}(A)\), see [1]. The set \(\mathcal{S}^{+}_{n}(G)\) contains matrices with CP-rank \(k\) if and only if there exists a set-join cover of \(G\) of the form \(\mathscr{C}=\{\mathcal{K}_{i}\vee_{V(G)}\mathcal{K}_{i},i=1,\ldots,k\}\), since any CP-factorization \(A=BB^{T}\) can be viewed as an SN-Trifactorization \(A=BCB^{T}\), with the middle matrix \(C\) equal to the identity. So the graph \(G(\mathscr{C})\) is equal to \(kK_{1}^{\ell}\). Hence, all the edges of \(G\) can be covered with \(k\) complete graphs with all the loops. The lowest CP-rank that a nonnegative symmetric matrix with prescribed zero-nonzero pattern can have is equal to the clique cover number of the pattern graph. **Lemma 2.18**.: _Let \(\mathscr{C}\) be a set-join cover for a graph \(G\) with \(V(G)=[n]\). Assume there exist:_ * \(V^{\prime}\subseteq V(\mathscr{C})\) _with_ \(|V^{\prime}|=t\)_, and_ * \(V=\{\mathcal{L}_{i}\subseteq[n];\,i\in[s]\}\)_,_ _so that \(s<t\) and every element of \(V^{\prime}\) can be written as the union of some elements of \(V\). Then \(\mathscr{C}\) is not an OSJ cover of \(G\)._ Proof.: We will prove the lemma by constructing a set-join cover \(\widehat{\mathscr{C}}\) of \(G\) with \(V(\widehat{\mathscr{C}})=(V(\mathscr{C})\setminus V^{\prime})\cup V\). This will prove our claim as \(|(V(\mathscr{C})\setminus V^{\prime})\cup V|\leq|\mathscr{C}|-t+s<|\mathscr{C}|\). We define \(\widehat{\mathscr{C}}\) as the union of the following three sets: * \(\mathscr{C}_{1}:=\{\mathcal{K}\vee_{[n]}\mathcal{K}^{\prime};\,\mathcal{K}\vee_{[n]}\mathcal{K}^{\prime}\in\mathscr{C}\text{ and }\mathcal{K},\mathcal{K}^{\prime}\in(V(\mathscr{C})\setminus V^{\prime})\}\). * \(\mathscr{C}_{2}:=\{\mathcal{K}\vee_{[n]}\mathcal{L};\,\mathcal{K}\in(V(\mathscr{C})\setminus V^{\prime}),\,\mathcal{L}\in V\text{ and there exists }\mathcal{K}\vee_{[n]}\mathcal{K}^{\prime}\in\mathscr{C}\text{ with }\mathcal{L}\subseteq\mathcal{K}^{\prime}\}\), * \(\mathscr{C}_{3}:=\{\mathcal{L}\vee_{[n]}\mathcal{L}^{\prime};\,\mathcal{L},\mathcal{L}^{\prime}\in V\text{ and there exists }\mathcal{K}\vee_{[n]}\mathcal{K}^{\prime}\in\mathscr{C}\text{ with }\mathcal{L}\subseteq\mathcal{K}\text{ and }\mathcal{L}^{\prime}\subseteq\mathcal{K}^{\prime}\}\). 
To prove that \(\widehat{\mathscr{C}}\) is a set-join cover of \(G\) we need to show that \(\widehat{\mathscr{C}}\) covers all the edges of \(G\), and that it does not cover any edges that are not in \(G\). Since \(\mathscr{C}\) is a cover of \(G\), it is clear that \(\mathscr{C}_{1}\) does not cover any edges that are not in \(E(G)\). If \(\mathcal{K}\in(V(\mathscr{C})\setminus V^{\prime})\), \(\mathcal{L}\in V\) and there exists \(\mathcal{K}\vee_{[n]}\mathcal{K}^{\prime}\in\mathscr{C}\) with \(\mathcal{L}\subseteq\mathcal{K}^{\prime}\), then \(E(\mathcal{K}\vee_{[n]}\mathcal{L})\subseteq E(\mathcal{K}\vee_{[n]}\mathcal{K}^{\prime})\subseteq E(G).\) Hence, \(\mathscr{C}_{2}\) does not cover any edges not in \(E(G)\). The claim for \(\mathscr{C}_{3}\) is proved in a similar way. Now let \(e\in E(G)\) and \(\mathcal{K}\vee_{[n]}\mathcal{K}^{\prime}\in\mathscr{C}\) with \(e\in E(\mathcal{K}\vee_{[n]}\mathcal{K}^{\prime})\). If \(\mathcal{K},\mathcal{K}^{\prime}\in V(\mathscr{C})\setminus V^{\prime}\), then \(\mathcal{K}\vee_{[n]}\mathcal{K}^{\prime}\in\mathscr{C}_{1}\). If \(\mathcal{K}\in V(\mathscr{C})\setminus V^{\prime}\) and \(\mathcal{K}^{\prime}\in V^{\prime}\), then \(\mathcal{K}^{\prime}\) is the union of some elements from \(V\), hence there exists \(\mathcal{L}\in V\) so that \(\mathcal{L}\subseteq\mathcal{K}^{\prime}\) and \(e\in E(\mathcal{K}\vee_{[n]}\mathcal{L})\), hence \(e\) is covered by \(\mathscr{C}_{2}\). The case when \(\mathcal{K},\mathcal{K}^{\prime}\in V^{\prime}\) is proved in a similar way. Let \(\mathscr{C}\) be a set-join cover of a graph \(G\) with the component set \(V(\mathscr{C})=\{\mathcal{K}_{i},i\in[t]\}\). Recall that _a system of distinct representatives_ (SDR for short) of \(V(\mathscr{C})\) is a set \(\{x_{i},i\in[t]\}\) with the property that \(x_{i}\in\mathcal{K}_{i}\) and the \(x_{i}\) are distinct, [2, Section 1.2]. **Lemma 2.19**.: _Let \(\mathscr{C}\) be an optimal set-join cover of a graph \(G\). Then \(V(\mathscr{C})\) has a system of distinct representatives._ Proof.: It is well known that a family of subsets \(V(\mathscr{C})\) has an SDR if and only if the permanent of the incidence matrix \(\iota(V(\mathscr{C}))\) is not zero, [2, Section 7.5]. Assume then that \(\mathscr{C}\) is an optimal cover of \(G\) with \(\operatorname{per}(\iota(V(\mathscr{C})))=0\). Since \(\operatorname{st}_{+}(G)\leq n\), this is equivalent to \(\iota(V(\mathscr{C}))\) having a \(w\times t\) zero submatrix, where \(w+t=n+1\), by the Frobenius–König Theorem, see for example [2, Section 1.2]. Hence, there exists \(V^{\prime}\subset V(\mathscr{C})\) with \(|V^{\prime}|=t\) so that \(|\cup_{\mathcal{K}_{i}\in V^{\prime}}\mathcal{K}_{i}|\leq n-w=t-1\). Let \(V=\{\{x\};x\in\cup_{\mathcal{K}_{i}\in V^{\prime}}\mathcal{K}_{i}\}\) be the set of singletons from \(\cup_{\mathcal{K}_{i}\in V^{\prime}}\mathcal{K}_{i}\). Then \(V^{\prime}\) and \(V\) satisfy the conditions of Lemma 2.18, thus \(\mathscr{C}\) is not optimal, a contradiction. **Theorem 2.20**.: _Let \(\mathscr{C}\) be an optimal set-join cover of a graph \(G\). Then \(G\) contains a subgraph that is isomorphic to \(G(\mathscr{C})\)._ Proof.: Let \(\mathscr{C}\) be an optimal cover for \(G\) and \(\mathcal{S}=\{x_{i};i=1,\ldots,|\mathscr{C}|\}\) an SDR for \(V(\mathscr{C})\) that exists by Lemma 2.19. Clearly, there exists a (not necessarily induced) subgraph of \(G\) on vertices from \(\mathcal{S}\) that is isomorphic to \(G(\mathscr{C})\). 
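On small examples the conclusion of Lemma 2.19 can be verified directly. The sketch below (ours; a plain backtracking search, exponential in the worst case but sufficient for small component sets) looks for a system of distinct representatives of a given family of sets, and is run on the component set of the order-\(5\) cover of \(K_{6}\) used earlier, relabelled to start at \(0\).

```python
# Backtracking search for a system of distinct representatives (SDR):
# returns one representative per set, all distinct, or None if no SDR exists.
def find_sdr(sets, chosen=()):
    if len(chosen) == len(sets):
        return list(chosen)
    for x in sorted(sets[len(chosen)]):      # try representatives for the next set
        if x not in chosen:
            sdr = find_sdr(sets, chosen + (x,))
            if sdr is not None:
                return sdr
    return None

# Component set of an optimal set-join cover of K_6 (0-based labels).
components = [{0, 1, 2}, {3, 4, 5}, {0, 3}, {1, 4}, {2, 5}]
print(find_sdr(components))                  # an SDR exists, e.g. [0, 4, 3, 1, 2]
```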
**Remark 2.21**.: A graph property \(P\) is _monotone_ if every subgraph of a graph with property \(P\) also has property \(P\). Theorem 2.20 shows that if \(G\) has a monotone graph property and \(\mathscr{C}\) is an OSJ cover of \(G\), then \(G(\mathscr{C})\) has it also. For example: * If \(G\) is a forest, then \(G(\mathscr{C})\) is a forest. * If \(G\) is triangle free, then \(G(\mathscr{C})\) is also. * If \(G\) is bipartite, then \(G(\mathscr{C})\) is also. ### Uniqueness After we establish \(\operatorname{st}_{+}(G)\), we can ask, if the set-join cover of \(G\) of order \(\operatorname{st}_{+}(G)\) is unique. We will consider three different types of uniqueness as described in the definition below. **Definition 2.22**.: A graph \(G\) has _unique optimal set-join cover_ (_unique OSJ cover_ for short), if the OSJ cover of \(G\) of order \(\operatorname{st}_{+}(G)\) is unique. A graph \(G\) has _essentially unique OSJ cover_, if for any two covers \(\mathscr{C}\) and \(\mathscr{C}^{\prime}\) of \(G\) satisfying \(|\mathscr{C}|=|\mathscr{C}^{\prime}|=\operatorname{st}_{+}(G)\), there exists an automorphism \(\sigma:V(G)\to V(G)\) of \(G\) so that \(\sigma(\mathscr{C})=\mathscr{C}^{\prime}\). (For a cover \(\mathscr{C}=\{\mathcal{K}_{i}\vee_{V(G)}\mathcal{L}_{i},i=1\ldots, \operatorname{st}_{+}(G)\}\), we define \(\sigma(\mathscr{C})\) to be the cover \(\{\sigma(\mathcal{K}_{i})\ \vee_{V(G)}\ \sigma(\mathcal{L}_{i}),i=1\ldots, \operatorname{st}_{+}(G)\}\).) A graph \(G\) has the _unique OSJ cover graph_, if all covers \(\mathscr{C}\) of \(G\) of order \(\operatorname{st}_{+}(G)\) have the same \(G(\mathscr{C})\) up to isomorphism of graphs. **Example 2.23**.: Let \(G_{1}=(3K_{1}^{\ell})\lor K_{1}^{\ell}\) be the star graph on 4 vertices with all the loops, and denote the vertices of \(3K_{1}^{\ell}\) by 1, 2, 3 and the central vertex by 4. Then \(\operatorname{st}_{+}(G_{1})=3\) and \(G_{1}\) has the unique OSJ cover \(\mathscr{C}_{1}\), with the components: \[\mathcal{K}_{1}=\{1,4\},\mathcal{K}_{2}=\{2,4\},\mathcal{K}_{3}=\{3,4\},\] \(\mathscr{C}_{1}=\{\mathcal{K}_{1}\vee_{[4]}\mathcal{K}_{1},\mathcal{K}_{2} \vee_{[4]}\mathcal{K}_{2},\mathcal{K}_{3}\vee_{[4]}\mathcal{K}_{3}\}\) and \(G(\mathscr{C}_{1})=3K_{1}^{\ell}\). Next, let \(G_{2}\) be defined by \(V(G_{2})=[4]\) and adjacency matrix \[A_{2}=\left(\begin{array}{cccc}1&1&1&1\\ 1&1&0&1\\ 1&0&0&1\\ 1&1&1&0\end{array}\right).\] Again \(\operatorname{st}_{+}(G_{2})=3\), but \(G_{2}\) does not even have the unique OSJ cover graph. Indeed, \(G_{2}\) has two OSJ covers. Both covers have components \[\mathcal{K}_{1}=\{1,2\},\mathcal{K}_{2}=\{1,4\},\mathcal{K}_{3}=\{1,2,3\},\] \(\mathscr{C}_{2}=\{\mathcal{K}_{1}\vee_{[4]}\mathcal{K}_{1},\mathcal{K}_{1} \vee_{[4]}\mathcal{K}_{2},\mathcal{K}_{2}\vee_{[4]}\mathcal{K}_{3}\}\), and \(\mathscr{C}_{3}=\{\mathcal{K}_{1}\vee_{[4]}\mathcal{K}_{1},\mathcal{K}_{2} \vee_{[4]}\mathcal{K}_{3}\}\). Note that \(G(\mathscr{C}_{2})\) is a path \(P_{3}\) with a loop on one pendant vertex, and \(G(\mathscr{C}_{3})\) is \(K_{1}^{\ell}\cup P_{2}\). Finally, let \(G_{3}=K_{3}\lor K_{1}^{\ell}\) be a graph on 4 vertices. Denote the vertices of \(K_{3}\) by 1, 2, 3 and the vertex with the loop by 4. Again \(\operatorname{st}_{+}(G_{3})=3\). This time \(G_{3}\) has the unique OSJ cover graph \(K_{3}\), but OSJ cover is not unique nor essentially unique. 
Let \[\mathcal{K}_{1}=\{1,4\},\mathcal{K}_{2}=\{2,4\},\mathcal{K}_{3}=\{3,4\},\mathcal{K}_{3}^{\prime}=\{3\}.\] The covers \(\mathscr{C}_{4}=\{\mathcal{K}_{1}\vee_{[4]}\mathcal{K}_{2},\mathcal{K}_{2}\vee_{[4]}\mathcal{K}_{3},\mathcal{K}_{3}\vee_{[4]}\mathcal{K}_{1}\}\) and \(\mathscr{C}_{5}=\{\mathcal{K}_{1}\vee_{[4]}\mathcal{K}_{2},\mathcal{K}_{2}\vee_{[4]}\mathcal{K}_{3}^{\prime},\mathcal{K}_{3}^{\prime}\vee_{[4]}\mathcal{K}_{1}\}\) are not isomorphic since the cardinalities of their components do not match. In Example 4.10 we will see that \(K_{6}\) has essentially unique but not unique OSJ cover. ### Operations on graphs that preserve \(\operatorname{st}_{+}(G)\) Observations so far make it clear that if \(G\) has two vertices with the same set of neighbours, then removing one of those vertices will not change \(\operatorname{st}_{+}(G)\). This gives us an operation on graphs whose effect on SNT-rank is easily understood. To make this observation more precise, we need the definition below. **Definition 2.24**.: Let \(G\) be a graph and \(v\in V(G)\). Then \[N_{G}(v):=\{w;\{v,w\}\in E(G)\}\] is called the _neighbourhood_ of \(v\). Note that \(v\in N_{G}(v)\) precisely when \(v\) has a loop in \(G\). **Definition 2.25**.: Two vertices \(v,w\in V(G)\) are called _twins_ if \(N_{G}(v)=N_{G}(w)\). A graph \(G\) is _twin-free_ if no two vertices in \(V(G)\) are twins. By \(F_{tw}(G)\) we denote the largest twin-free subgraph of \(G\). Note that \(F_{tw}(G)\) is obtained from \(G\) by removing all but one vertex from every set of twins in \(G\). **Definition 2.26**.: Let \(A\in\mathcal{S}_{n}^{+}\). The graph \(F_{tw}(G(A))\) is called _the twin-free graph_ of \(A\). _The twin-free pattern matrix_ of \(A\), denoted by \(F_{tw}(\operatorname{sign}(A))\), is the adjacency matrix of \(F_{tw}(G(A))\) and can be obtained from \(\operatorname{sign}(A)\) by removing any duplicate rows and columns from \(\operatorname{sign}(A)\). With this new terminology we can restate our earlier observation. **Proposition 2.27**.: _Let \(G\) be a graph. Then \(\operatorname{st}_{+}(G)=\operatorname{st}_{+}(F_{tw}(G))\). Moreover, \(G\) has unique OSJ cover if and only if \(F_{tw}(G)\) has._ Proof.: Let \(G\) be a graph, \(v\in V(G)\), and \(\widehat{G}\) a graph obtained from \(G\) by duplicating the vertex \(v\) into vertices \(v_{1}\) and \(v_{2}\). Hence, \(V(\widehat{G})=(V(G)\setminus\{v\})\cup\{v_{1},v_{2}\}\), and \(N_{\widehat{G}}(v_{i})\), \(i=1,2\), is equal to \(N_{G}(v)\) if \(v\not\in N_{G}(v)\), and is equal to \((N_{G}(v)\setminus\{v\})\cup\{v_{1},v_{2}\}\) otherwise. A set-join cover of \(\widehat{G}\) can be obtained from a set-join cover \(\mathscr{C}\) of \(G\) by replacing \(v\) in each of the components of \(\mathscr{C}\) by \(v_{1}\) and \(v_{2}\). Note that two distinct set-join covers of \(G\) result in two distinct set-join covers of \(\widehat{G}\) in this process. Hence, \(\operatorname{st}_{+}(G)\geq\operatorname{st}_{+}(\widehat{G})\) and unique OSJ cover of \(\widehat{G}\) implies unique OSJ cover of \(G\). In order to prove that unique OSJ cover of \(G\) also implies unique OSJ cover of \(\widehat{G}\), let us assume that \(G\) has unique OSJ cover, and let \(\widehat{\mathscr{C}}\) be some OSJ cover of \(\widehat{G}\). We claim that for \(\mathcal{K}\in V(\widehat{\mathscr{C}})\), \(v_{1}\in\mathcal{K}\) if and only if \(v_{2}\in\mathcal{K}\). 
If this is not true, then one set-join cover \(\mathscr{C}_{1}\) of \(G\) can be obtained from \(\widehat{\mathscr{C}}\) by removing \(v_{2}\) and replacing \(v_{1}\) in each of the components of \(\widehat{\mathscr{C}}\) by \(v\), and a different set-join cover \(\mathscr{C}_{2}\) of \(G\) can be obtained from \(\widehat{\mathscr{C}}\) by removing \(v_{1}\) and replacing \(v_{2}\) in each of the components of \(\widehat{\mathscr{C}}\) by \(v\). In particular, this proves that every OSJ cover of \(\widehat{G}\) can be obtained from some OSJ cover of \(G\) by the process outlined above. Since the OSJ cover of \(G\) is by our assumption unique this establishes the uniqueness of OSJ cover of \(\widehat{G}\) **Lemma 2.28**.: _If \(\mathscr{C}\) is an optimal set-join cover for a graph \(G\), then \(G(\mathscr{C})\) is twin-free._ Proof.: The assertion follows directly from Propositions 2.13 and 2.27. **Remark 2.29**.: Suppose that \(G(\mathscr{C})\) contains twin vertices \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\). Let \(\mathcal{K}_{3}=\mathcal{K}_{1}\cup\mathcal{K}_{2}\) and \[\mathscr{C}_{1} =\{\mathcal{K}\vee_{V(G)}\mathcal{K}^{\prime};\mathcal{K}, \mathcal{K}^{\prime}\in(V(\mathscr{C})\setminus\{\mathcal{K}_{1},\mathcal{K}_ {2}\}),\mathcal{K}\vee_{V(G)}\mathcal{K}^{\prime}\in\mathscr{C}\},\] \[\mathscr{C}_{2} =\{\mathcal{K}_{3}\vee_{V(G)}\mathcal{K}^{\prime};\mathcal{K}^{ \prime}\in(V(\mathscr{C})\setminus\{\mathcal{K}_{1},\mathcal{K}_{2}\}), \mathcal{K}_{1}\vee_{V(G)}\mathcal{K}^{\prime}\in\mathscr{C}\}.\] \[\mathscr{C}_{3} =\{\mathcal{K}_{3}\vee_{V(G)}\mathcal{K}_{3}\}.\] If \(\mathcal{K}_{1}\vee_{V(G)}\mathcal{K}_{2}\in\mathscr{C}\) let \(\widehat{\mathscr{C}}=\mathscr{C}_{1}\cup\mathscr{C}_{2}\cup\mathscr{C}_{3}\), otherwise let \(\widehat{\mathscr{C}}=\mathscr{C}_{1}\cup\mathscr{C}_{2}\). It is straightforward to check that \(\widehat{\mathscr{C}}\) is set-join cover of \(G\) with \(V(\widehat{\mathscr{C}})=(V(\mathscr{C})\setminus\{\mathcal{K}_{1},\mathcal{K} _{2}\})\cup\{\mathcal{K}_{3}\}\), so \(\mathscr{C}\) is not an optimal set-join cover of \(G\). **Proposition 2.30**.: _Let \(G\) be a graph and let \(\widehat{G}=G\lor K_{1}^{\ell}\). Then:_ \[\operatorname{st}_{+}(\widehat{G})=\begin{cases}\operatorname{st}_{+}(G);& \text{ if }|N_{G}(v)|\geq 1\text{ for all }v\in V(G),\\ \operatorname{st}_{+}(H)+2;&\text{ if }G=H\cup tK_{1}.\end{cases}\] _Moreover, if \(G\) is a graph without isolated vertices, then \(G\) has unique OSJ cover graph if and only if \(\widehat{G}\) has one._ Proof.: Since any \(A\in\mathcal{S}_{+}(G\lor K_{1}^{\ell})\) has \(A_{1}\in\mathcal{S}_{+}(G)\) as a principal sub-matrix \(\operatorname{st}_{+}(G)\leq\operatorname{st}_{+}(G\lor K_{1}^{\ell})\) holds. Assume that \(G\) has no isolated vertices. Let \(V(G)=[n]\) and \(V(\widehat{G})=[n+1]\). Replacing every component \(\mathcal{K}\) in a set-join cover of \(G\) by \(\widehat{\mathcal{K}}:=\mathcal{K}\cup\{n+1\}\) produces a set-join cover of \(\widehat{G}\). Conversely, replacing every component \(\mathcal{K}\) in a set-join cover of \(\widehat{G}\) by \(\mathcal{K}^{\prime}:=\mathcal{K}\setminus\{n+1\}\) produces a set-join cover of \(G\). Observe that \(\{n+1\}\) is not a component of any OSJ cover of \(\widehat{G}\). 
Indeed, if a cover \(\widehat{\mathscr{C}}\) of \(\widehat{G}\) contains \(\{n+1\}\) as a component, then we can construct a set-join cover \(\widehat{\mathscr{C}}^{\prime}\) of \(\widehat{G}\) satisfying \(|\widehat{\mathscr{C}}^{\prime}|=|\widehat{\mathscr{C}}|-1\) by removing any set-joins involving \(\{n+1\}\) and adding \(n+1\) to all other components of \(\widehat{\mathscr{C}}\). Since the process outlined above preserves the order and the graphs of the corresponding OSJ covers, the statement for graphs without isolated vertices follows. (We note in passing that \(G\) may have unique OSJ cover, but \(\widehat{G}\) doesn't, since the vertex \(n+1\) is not necessarily contained in all the components of an OSJ cover of \(G\lor K_{1}^{\ell}\).) Now, let \(G=H\cup tK_{1}\), where we assume that \(H\) does not contain any isolated vertices (without loops). Let \(V(G)=V(H)\cup\mathcal{I}=[n]\), and \(V(\widehat{G})=[n+1]\) as before. Defining \(\widehat{\mathcal{K}}\) as above produces from a set-join cover \(\mathscr{C}\) of \(G\) a set-join cover \(\widehat{\mathscr{C}}\) of \((H\lor K_{1}^{\ell})\cup tK_{1}\), since the elements of \(\mathcal{I}\) are not contained in any component in this construction. Adding \(\mathcal{I}\vee_{[n+1]}\{n+1\}\) to \(\widehat{\mathscr{C}}\) results in a set-join cover of \(\widehat{G}\), and introduces \(2\) new components. We conclude that \(\operatorname{st}_{+}(\widehat{G})\leq\operatorname{st}_{+}(H)+2\). Finally, let \(\mathscr{C}^{\prime}\) be an OSJ cover of \(\widehat{G}\). Since the elements of \(\mathcal{I}\) are connected only to \(\{n+1\}\) in \(\widehat{G}\), we observe that \(\mathscr{C}^{\prime}\) has to contain \(\mathcal{K}_{0}\vee_{[n+1]}\{n+1\}\), where \(\mathcal{I}\subseteq\mathcal{K}_{0}\), and \(\mathcal{K}_{0}\) is not a component of any other set-join in \(\mathscr{C}^{\prime}\). Let \(\mathscr{C}^{\prime\prime}\) be obtained from \(\mathscr{C}^{\prime}\) by removing all set-joins with a component \(\{n+1\}\). Note that \(|\mathscr{C}^{\prime\prime}|\leq|\mathscr{C}^{\prime}|-2\), since \(V(\mathscr{C}^{\prime\prime})\) does not contain \(\{n+1\}\) nor \(\mathcal{K}_{0}\). Replacing every \(\mathcal{K}\vee_{V(\widehat{G})}\mathcal{L}\in\mathscr{C}^{\prime\prime}\) by \(\mathcal{K}^{\prime}\vee_{V(H)}\mathcal{L}^{\prime}\), where \(\mathcal{K}^{\prime}:=\mathcal{K}\setminus\{n+1\}\) as above, gives us a set-join cover of \(H\), proving \(\operatorname{st}_{+}(\widehat{G})=\operatorname{st}_{+}(H)+2\). **Remark 2.31**.: Note that \(G=K_{2}\cup K_{1}\) has unique OSJ cover graph, but \(\widehat{G}=G\lor K_{1}^{\ell}\) doesn't. From Propositions 2.5, 2.27 and 2.30 we see that in order to understand the SNT-rank of graphs, we can from now on consider only connected twin-free graphs that are not of the form \(G\lor K_{1}^{\ell}\). **Example 2.32**.: Threshold graphs are a family of graphs (without loops) that can be constructed by repeating two operations, adding an isolated vertex \((G\cup K_{1})\), and joining a vertex \((G\lor K_{1})\). Here we extend this definition to _threshold graphs with loops_ to be all graphs that can be constructed by repeating the following two operations on \(G\): * \(G\cup K_{1}\) * \(G\lor K_{1}^{\ell}\) To obtain a twin-free graph the two operations have to alternate. Considering only connected twin-free graphs resulting from this process, we get the following sequence of graphs: \[T_{1}:=K_{1}\lor K_{1}^{\ell},\,T_{i+1}:=(T_{i}\cup K_{1})\lor K_{1}^{\ell}.\] We have \(\operatorname{st}_{+}(T_{i})=2i\) by Proposition 2.30. 
This is not surprising, as it is not difficult to see that every matrix \(A\in\mathcal{S}^{+}(T_{i})\) has \(\operatorname{rk}(A)=2i\). **Remark 2.33**.: Let \(G\) be a graph with a cut edge \(\{u_{1},u_{2}\}\), so that \(G\) with this edge removed is equal to \(G_{1}\cup G_{2}\), where \(u_{i}\in V(G_{i})\), \(i=1,2\). Then: \[\operatorname{st}_{+}(G_{1})+\operatorname{st}_{+}(G_{2})\leq\operatorname{ st}_{+}(G)\leq\operatorname{st}_{+}(G_{1})+\operatorname{st}_{+}(G_{2})+2.\] Moreover, \(\operatorname{st}_{+}(G)=\operatorname{st}_{+}(G_{1})+\operatorname{st}_{+}(G _{2})\) if and only if for \(i=1,2\) there exist OSJ covers \(\mathscr{C}_{i}\) with \(\{u_{i}\}\in V(\mathscr{C}_{i})\), and \(\operatorname{st}_{+}(G)=\operatorname{st}_{+}(G_{1})+\operatorname{st}_{+}(G _{2})+1\) if such cover exists for either \(i=1\) or \(i=2\), but not both. ## 3 Trees and cycles without loops Let \(G\) be a graph without loops that does not contain any four cycles. Then a set-join cover of \(G\) can contain only elements of the form \(\mathcal{K}\vee_{V(G)}\{v\}\) for some \(v\in V(G)\), since all set-joins \(\mathcal{K}\vee_{V(G)}\mathcal{L}\) with \(|\mathcal{K}|\geq 2\) and \(|\mathcal{L}|\geq 2\) contain a four cycle. Hence, for graphs without loops and four cycles, a set-join cover is equivalent to an edge star cover, as defined below. **Definition 3.1**.: Let \(G\) be a simple graph without loops. A family of simple stars \(\{S_{1},S_{2},...,\,S_{k}\}\) is _an edge star cover_ of \(G\) if \(E(G)=\cup_{i=1}^{k}E(S_{i})\). The _edge star cover number_\(\operatorname{star}(G)\) of \(G\) is the minimal number of stars in any edge star cover of \(G\). Clearly, \(\operatorname{st}_{+}(G)\leq 2\operatorname{star}(G)\) for any graph \(G\), and we will show that this is an equality for trees. For any tree \(T\) it is known that \(\operatorname{rk}(A)\) equals twice the matching number of \(T\) for all \(A\in\mathcal{S}^{+}(T)\), [5]. Since for a tree \(T\) the matching number equals \(\operatorname{star}(T)\), the inequality \(\operatorname{st}_{+}(T)\geq 2\operatorname{star}(T)\) clearly holds. In this work we use the edge star cover number due to its immediate connection to set-join covers. We summarise this observation in the next proposition. **Proposition 3.2**.: _Let \(T\) be a forest (without loops) and \(A\in\mathcal{S}^{+}(T)\). Then \(\operatorname{rk}(A)=\operatorname{st}_{+}(A)=2\operatorname{star}(T)\). In particular, \(\operatorname{st}_{+}(T)=2\operatorname{star}(T)\)._ In the next lemma we see that if a graph \(G\) contains a leaf, then any set-join cover has to contain at least one element of the form \(\mathcal{K}\vee\{v\}\) for some \(v\in V(G)\), i.e. it contains at least one star. **Lemma 3.3**.: _Let \(G\) be a graph and let \(L(G)\subset V(G)\) be the set of all leaves without a loop in \(G\). Let \(\ell\in L(G)\), \(w\in V(G)\) its unique neighbour, and \(G^{\prime}\) the graph obtained from \(G\) by deleting all edges \(\{w,v^{\prime}\}\) with \(v^{\prime}\in N_{G}(w)\) (and all singletons that result after this deletion). Then \(\operatorname{st}_{+}(G)=\operatorname{st}_{+}(G^{\prime})+2\) and for any optimal set-join cover \(\mathscr{C}\) of \(G\), we have \(\{w\}\vee_{V(G)}\mathcal{N}\in\mathscr{C}\), where \(N_{G}(w)\cap L(G)\subseteq\mathcal{N}\subseteq N_{G}(w)\)._ Proof.: Let \(\mathscr{C}\) be an optimal set-join cover of \(G\). 
Since \(N_{G}(\ell)=\{w\}\), there exists \(\mathcal{N}\subseteq N_{G}(w)\) with \(\ell\in\mathcal{N}\), so that \(\{w\}\vee_{V(G)}\mathcal{N}\in\mathscr{C}\). If there exists \(\ell^{\prime}\in(N_{G}(w)\cap L(G))\setminus\mathcal{N}\), then we also have \(\{w\}\vee_{V(G)}\mathcal{N}^{\prime}\in\mathscr{C}\) for some \(\mathcal{N}^{\prime}\) with \(\ell^{\prime}\in\mathcal{N}^{\prime}\). Since \(\mathscr{C}_{1}:=(\mathscr{C}\cup\{\{w\}\vee_{V(G)}N_{G}(w)\})\setminus\{\{w\}\vee_{V(G)}\mathcal{N},\{w\}\vee_{V(G)}\mathcal{N}^{\prime}\}\) is a set-join cover of \(G\) with \(|\mathscr{C}_{1}|\leq|\mathscr{C}|-1\), we get a contradiction with the assumption that \(\mathscr{C}\) is optimal. Hence, \(N_{G}(w)\cap L(G)\subseteq\mathcal{N}\). Observe that \(\mathscr{C}^{\prime}:=\mathscr{C}\setminus\{\{w\}\vee_{V(G)}\mathcal{N}\}\) is a set-join cover for \(G^{\prime}\cup tK_{1}\) for some \(t\in\mathbb{N}\). Since \(\operatorname{st}_{+}(G^{\prime})=\operatorname{st}_{+}(G^{\prime}\cup tK_{1})\), this implies \(\operatorname{st}_{+}(G^{\prime})\leq\operatorname{st}_{+}(G)-2\). On the other hand, \(\mathscr{C}=\{\{w\}\vee_{V(G)}N_{G}(w)\}\cup\mathscr{C}^{\prime}\) is a set-join cover of \(G\) for any set-join cover \(\mathscr{C}^{\prime}\) of \(G^{\prime}\). Hence, \(\operatorname{st}_{+}(G)\leq\operatorname{st}_{+}(G^{\prime})+2\), as desired. **Example 3.4**.: For paths we have \(\operatorname{st}_{+}(P_{2k})=\operatorname{st}_{+}(P_{2k+1})=2k\) by inductive application of Lemma 3.3. In the result below we resolve the question of uniqueness of OSJ covers for trees without loops. **Theorem 3.5**.: _Let \(T\) be a tree with \(|V(T)|\geq 3\). Then \(T\) has unique OSJ cover if and only if the distance between any two leaves in \(T\) is even._ Proof.: Assume the distance between any two leaves in \(T\) is even, and let \(V_{0}\) be the set of vertices in \(T\) at an odd distance to all leaves. An inductive application of Lemma 3.3 shows that \[\mathscr{C}_{0}:=\{\{v\}\lor N_{T}(v);v\in V_{0}\}\] is an OSJ cover of \(T\). Let \(\mathscr{C}^{\prime}\) be an OSJ cover of \(T\) and \(V^{\prime}\) the set of all central vertices of stars in \(\mathscr{C}^{\prime}\). Then \(|\mathscr{C}^{\prime}|=2|V^{\prime}|=2|V_{0}|\). Since for any \(v\in V(T)\setminus V^{\prime}\) there exists \(w\in V^{\prime}\cap N_{T}(v)\), we necessarily have \(V^{\prime}=V_{0}\). Since no two vertices in \(V_{0}\) are connected in \(T\), \(\mathscr{C}_{0}\) is the only collection of stars with central vertices \(V_{0}\) that covers \(T\). We conclude that the cover is unique. We further remark that the unique OSJ cover graph of \(T\) is \(|V_{0}|P_{2}\). Assume now that there are two leaves \(\ell_{1}\) and \(\ell_{2}\) in a tree \(T\) at odd distance \(d\), \(d\geq 3\). Observe that in any set-join cover \(\mathscr{C}\) of \(T\) at least every second vertex of the path between \(\ell_{1}\) and \(\ell_{2}\) is a central vertex of a star in that cover, hence a component in \(\mathscr{C}\). Since \(d\) is odd, we have \(\{v_{1}\}\vee\mathcal{S}_{1}\) and \(\{v_{2}\}\vee\mathcal{S}_{2}\) in \(\mathscr{C}\), such that \(v_{1}\) and \(v_{2}\) are neighbours in \(T\), and at least one of the inclusions \(v_{1}\in\mathcal{S}_{2}\) and \(v_{2}\in\mathcal{S}_{1}\) holds. Assume now that \(\mathscr{C}\) is an OSJ cover, and without loss of generality that \(v_{1}\in\mathcal{S}_{2}\). 
Then we get a different OSJ cover of \(T\) by replacing \(\{v_{1}\}\vee\mathcal{S}_{1}\) and \(\{v_{2}\}\vee\mathcal{S}_{2}\) by \(\{v_{1}\}\vee(\mathcal{S}_{1}\cup\{v_{2}\})\) and \(\{v_{2}\}\vee(\mathcal{S}_{2}\setminus\{v_{1}\})\). **Example 3.6**.: Note that the only tree with \(|V(T)|=2\) is a path \(P_{2}\) and it has the unique OSJ cover. Moreover, a path \(P_{n}\) has unique OSJ cover if and only if \(n\) is odd or \(n=2\), see Theorem 3.5. For integers \(t\geq 3\) and \(k_{i}\geq 1\), \(i\in[t]\), we denote by \(\operatorname{star}(k_{1},k_{2},...,k_{t})\) the graph with the central vertex \(v\) and \(t\) arms, where each arm is a path on \(k_{i}+1\) vertices and one of its end-vertices is \(v\). Hence, \(\operatorname{star}(k_{1},k_{2},...,k_{t})\) is a generalised star with \(\sum_{i=1}^{t}k_{i}+1\) vertices. If \(k_{i}=1\) for all \(i\), then \(\operatorname{star}(1,1,...,1)=K_{1,t}\) is a star graph with \(t\) leaves. **Example 3.7**.: Let \(G=\operatorname{star}(k_{1},k_{2},...,k_{t})\), \(t\geq 3\), be a generalised star. Then \[\operatorname{st}_{+}(G)=\begin{cases}|V(G)|+1-|\{k_{i};k_{i}\text{ odd}\}|;& \text{if at least one $k_{i}$ is odd},\\ |V(G)|-1;&\text{if all $k_{i}$ are even}.\end{cases}\] We will prove this claim by induction on \[\psi(G):=\sum_{i=1}^{t}k_{i}-t.\] We first assume \(\psi(G)=0\). This implies \(G=G_{0}(t):=\operatorname{star}(1,1,\ldots,1)\), i.e. a star with \(t\) leaves. Clearly, \(\operatorname{st}_{+}(G_{0}(t))=2\), and the claim holds. For \(\psi(G)=1\), we have \(G=G_{1}(t):=\operatorname{star}(2,1,\ldots,1)\), i.e. a generalised star with \(t\) arms. Applying Lemma 3.3 for the leaf on the long arm, we get: \[\operatorname{st}_{+}(G_{1}(3))=\operatorname{st}_{+}(P_{3})+2=4\] by Example 3.4, and for \(t>3\): \[\operatorname{st}_{+}(G_{1}(t))=\operatorname{st}_{+}(G_{0}(t-1))+2=4.\] Since \(G_{1}(t)\) has \(t+2\) vertices and \(t-1\) arms of odd length, this establishes the base of induction. Now assume the claim holds for all generalised stars \(G^{\prime}=\operatorname{star}(k_{1}^{\prime},k_{2}^{\prime},...,k_{t^{\prime} }^{\prime})\) with \(\psi(G^{\prime})\leq\psi\), and let \(G:=\operatorname{star}(k_{1},k_{2},...,k_{t})\) with \(\psi(G)=\psi+1\geq 2\). Without loss of generality we can assume \(k_{1}\geq 2\). By Lemma 3.3 we have \(\operatorname{st}_{+}(G)=\operatorname{st}_{+}(G^{\prime})+2\), where: * \(G^{\prime}:=\operatorname{star}(k_{2},\ldots,k_{t})\) with \(\psi(G^{\prime})=\psi(G)-1\geq 1\) for \(k_{1}=2\), * \(G^{\prime}:=\operatorname{star}(k_{1}-2,k_{2},\ldots,k_{t})\) with \(\psi(G^{\prime})=\psi(G)-2\) for \(k_{1}\geq 3\). If \(k_{1}=2\) and \(t=3\), then \(G^{\prime}=P_{k_{2}+k_{t}+1}\), and the claim holds for \(G\) by Example 3.4. In all other cases \(G^{\prime}\) is again a generalised star, so we can use the induction hypothesis. Noticing that \(V(G)-V(G^{\prime})=2\) and that \(G\) and \(G^{\prime}\) have same number of arms of odd length, we establish the claim. The graph \(G\) has unique OSJ cover if and only if all \(k_{i}\)'s are odd or all even by Theorem 3.5. A vertex \(v\) is a _cut_ vertex of a connected graph \(G\), if by removing \(v\) and all edges \(\{v,w\}\), \(w\in N_{G}(v)\), from \(G\) we get a disconnected graph that we will denote by \(G\setminus\{v\}\). (We allow for a cut vertex to have a loop.) **Proposition 3.8**.: _Let \(v\) be a cut vertex of a connected graph \(G\), where \(k\geq 2\) and \(G\setminus\{v\}=\cup_{i=1}^{k}G_{i}\) for some connected graphs \(G_{i}\). 
Then \(\operatorname{st}_{+}(G)\leq 2+\sum_{i=1}^{k}\operatorname{st}_{+}(G_{i})\)._ Proof.: Let \(\mathscr{C}_{i}\) be a set-join cover of \(G_{i}\), \(i\in[k]\). Then \[\mathscr{C}:=\cup_{i=1}^{k}\mathscr{C}_{i}\cup\{\{v\}\vee_{V(G)}N_{G}(v)\}\] is a set-join cover of \(G\). Since \(|\mathscr{C}|=2+\sum_{i=1}^{k}|\mathscr{C}_{i}|\), the claim follows. **Example 3.9**.: The inequality of Proposition 3.8 is not always an equality, but it cannot be improved in general. 1. Let \(G=\operatorname{star}(2,2,2)\). Then \(\operatorname{st}_{+}(G)=6\) by Example 3.7 and \(G\setminus\{v\}=3P_{2}\), so \(2+\sum_{i=1}^{k}\operatorname{st}_{+}(G_{i})=8\). 2. Let \(G=\operatorname{star}(3,3,3)\). Then \(\operatorname{st}_{+}(G)=8\) by Example 3.7 and \(G\setminus\{v\}=3P_{3}\), so \(2+\sum_{i=1}^{k}\operatorname{st}_{+}(G_{i})=8\). We now consider the SNT-rank of cycles. **Proposition 3.10**.: _Let \(n\geq 3,n\neq 4\). Then \(\operatorname{st}_{+}(C_{n})=n\). Furthermore, \(\operatorname{st}_{+}(C_{4})=2\)._ Proof.: If \(n=3\), then the determinant of any matrix \(A\in\mathcal{S}^{+}(C_{3})\) is positive, so \(\operatorname{rk}(A)=3\), thus \(\operatorname{st}_{+}(C_{3})=3\). If \(n=4\), then \[\mathscr{C}=\{\{1,3\}\vee_{[4]}\{2,4\}\}\] is a set-join cover of \(C_{4}\), so \(\operatorname{st}_{+}(C_{4})=2\). Now let \(n\geq 5\) and let \(\mathscr{C}\) be a set-join cover of \(C_{n}\). If every component in \(\mathscr{C}\) is a singleton, then \(|\mathscr{C}|=n\). If \(\mathscr{C}\) contains a component \(\mathcal{K}_{1}\), which is not a singleton, then \(\mathcal{K}_{1}=N_{C_{n}}(v)\) for some \(v\in V(C_{n})\) and \(\{v\}\vee_{[n]}\mathcal{K}_{1}\in\mathscr{C}\). Since \(n\geq 5\) the components \(\{v\}\) and \(\mathcal{K}_{1}\) do not appear anywhere else in \(\mathscr{C}\). This implies that \(\mathscr{C}\setminus\{\{v\}\vee_{[n]}\mathcal{K}_{1}\}\) is a set-join cover of \(P_{n-1}\), so \(|\mathscr{C}|\geq 2+\operatorname{st}_{+}(P_{n-1})\geq n\), proving \(\operatorname{st}_{+}(C_{n})=n\). **Remark 3.11**.: Let \(G\) be a unicyclic graph, i.e. a connected simple graph without loops containing exactly one cycle. Then \(G\) is either a cycle or it contains at least one leaf. We can find the SNT-rank of \(G\) by repeatedly applying Lemma 3.3 on a leaf. In each step we obtain a disjoint union of trees and at most one unicyclic graph. Eventually we obtain a disjoint union of trees and at most one cycle. For this graph the SNT-rank can be computed by Proposition 3.2 and Proposition 3.10. ## 4 Complete graphs without loops Let \(\mathscr{C}\) be a set-join cover of \(K_{n}\) with \(V(K_{n})=[n]\). If \(\mathcal{K}\vee_{[n]}\mathcal{L}\in\mathscr{C}\), then \(\mathcal{K}\cap\mathcal{L}=\emptyset\), since \(K_{n}\) has no loops. On the other hand, for any pair \(i,j\in[n]\), \(i\neq j\), there exists \(\mathcal{K}\vee_{[n]}\mathcal{L}\in\mathscr{C}\) so that \(i\in\mathcal{K}\), \(j\in\mathcal{L}\). **Definition 4.1**.: Let \(\mathbf{T}\) be a \(k\)-tuple of subsets of \([n]\). If for any pair \(i,j\in[n]\), \(i\neq j\), there exist \(\mathcal{K},\mathcal{L}\in\mathbf{T}\) so that \(i\in\mathcal{K}\), \(j\in\mathcal{L}\) and \(\mathcal{K}\cap\mathcal{L}=\emptyset\), then \(\mathbf{T}\) is called _a separating cover_ of \(n\) elements with \(k\) sets. Hence, if \(\mathscr{C}\) is a set-join cover of \(K_{n}\), then \(V(\mathscr{C})\) is a separating cover of \(n\) elements with \(|V(\mathscr{C})|\) sets. 
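Definition 4.1 is easy to test by brute force on small instances. The following sketch (the function name is ours and purely illustrative) checks the separating condition for a family of subsets of \(\{0,\ldots,n-1\}\); as input we use the component set of the order-\(5\) cover of \(K_{6}\) from Example 2.15, relabelled to start at \(0\).

```python
from itertools import combinations

# Brute-force test of Definition 4.1: a family of subsets of {0, ..., n-1}
# is a separating cover if every pair of distinct elements i, j lies in some
# pair of disjoint members K, L with i in K and j in L.
def is_separating_cover(n, family):
    fam = [frozenset(S) for S in family]
    return all(
        any(i in K and j in L and K.isdisjoint(L) for K in fam for L in fam)
        for i, j in combinations(range(n), 2)
    )

# Five sets separating 6 elements (the component set of the cover of K_6);
# dropping one of them breaks the separating property for this family.
print(is_separating_cover(6, [{0, 1, 2}, {3, 4, 5}, {0, 3}, {1, 4}, {2, 5}]))  # True
print(is_separating_cover(6, [{0, 1, 2}, {3, 4, 5}, {0, 3}, {1, 4}]))          # False
```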
To determine \(\operatorname{st}_{+}(K_{n})\) for a given \(n\), we need to find a minimal separating cover of \(n\) elements. This problem is known as the Katona problem and has been solved independently by A. C. C. Yao [9] and M. C. Cai [3], hence the problem of determining \(\operatorname{st}_{+}(K_{n})\) is resolved. In this section we expand on the work in [9] by showing how various set-join covers of \(K_{n}\) can be constructed, and by investigating when these covers are optimal and when they are essentially unique. **Theorem 4.2** ([9]).: _Let \(n\in\mathbb{N}\), and let \(s(n):=\min\{k;\text{ there is a separating cover of }n\text{ elements with }k\text{ sets}\}\). For all \(n\geq 2\),_ \[s(n)=\begin{cases}3i&\text{if }\,2\cdot 3^{i-1}<n\leq 3^{i}\\ 3i+1&\text{if }\,3^{i}<n\leq 4\cdot 3^{i-1}\\ 3i+2&\text{if }\,4\cdot 3^{i-1}<n\leq 2\cdot 3^{i},\end{cases} \tag{1}\] _while \(s(1)=0\)._ **Corollary 4.3**.: _The SNT-rank of a complete graph is \(\operatorname{st}_{+}(K_{n})=s(n)\), where \(s(n)\) is defined in (1)._ To give some intuition on how a set-join cover for \(K_{n}\) can be constructed, we offer the following example. **Example 4.4**.: Suppose that \(n\in\mathbb{N}\) is written as a product of two factors \(n=p\cdot q\) with \(p,q\geq 2\). Let \(V(K_{n})=[n]\), and define: \[\mathcal{L}_{i}=\{(i-1)p+s;s=1,\ldots,p\}\ \text{ for every }i\in[q],\] \[\mathcal{K}_{j}=\{j+tp;t=0,\ldots,q-1\}\ \text{ for every }j\in[p].\] Then \[\mathscr{C}=\{\mathcal{L}_{i}\vee_{[n]}\mathcal{L}_{j},i,j\in[q],i\neq j\}\cup\{\mathcal{K}_{i}\vee_{[n]}\mathcal{K}_{j},i,j\in[p],i\neq j\}\] is a set-join cover of \(K_{n}\). It follows that \(\operatorname{st}_{+}(K_{n})\leq p+q\). A generalisation of this example leads to an OSJ cover of \(K_{n}\). **Proposition 4.5**.: _Let \(n\in\mathbb{N}\) and \(q_{i}\in\mathbb{N}\), \(i\in[t]\), satisfy \(q_{i}\geq 2\) and \(n\leq\prod_{k=1}^{t}q_{k}\). Then there exists a set-join cover \(\mathscr{C}\) of \(K_{n}\) with \(|\mathscr{C}|=\sum_{i=1}^{t}q_{i}\) and \(G(\mathscr{C})=\cup_{i=1}^{t}K_{q_{i}}\). Furthermore,_ \[\operatorname{st}_{+}(K_{n})=\min\left\{\sum_{i=1}^{t}q_{i};n\leq\prod_{i=1}^{t}q_{i}\right\}. \tag{2}\] Proof.: We prove the first claim by induction on \(t\). If \(t=1\), then the claim is trivial. If \(n\leq m\), then \(K_{n}\) is a subgraph of \(K_{m}\) and a restriction of any set-join cover of \(K_{m}\) to the elements of \(K_{n}\) results in a set-join cover of \(K_{n}\); in particular, \(\operatorname{st}_{+}(K_{n})\leq\operatorname{st}_{+}(K_{m})\). This observation allows us to assume \(n=\prod_{k=1}^{t}q_{k}\). The claim for \(t=2\) is proved in Example 4.4 and establishes the base of induction. Let \(n_{1}=\prod_{k=1}^{t-1}q_{k}\), \(V(K_{n})=[n]\) and \(V(K_{n_{1}})=[n_{1}]\). Let \(\mathscr{C}\) be a set-join cover of \(K_{n_{1}}\) with \(|\mathscr{C}|=\sum_{k=1}^{t-1}q_{k}\). For every \(\mathcal{K}\in V(\mathscr{C})\) let \[\mathcal{K}^{\prime}=\cup_{i\in\mathcal{K}}\{i+tn_{1};t=0,\dots,q_{t}-1\}\subset[n].\] Furthermore, let \[\mathcal{L}_{i}=\{(i-1)n_{1}+s;s=1,\dots,n_{1}\}\ \ \text{for every}\ i\in[q_{t}].\] Then \[\mathscr{C}^{\prime}=\{\mathcal{K}_{1}^{\prime}\vee_{[n]}\mathcal{K}_{2}^{\prime};\mathcal{K}_{1}\vee_{[n_{1}]}\mathcal{K}_{2}\in\mathscr{C}\}\] covers all the edges of \(K_{n}\) within the sets \(\mathcal{L}_{1},\mathcal{L}_{2},...,\mathcal{L}_{q_{t}}\). 
So \[\widehat{\mathscr{C}}=\mathscr{C}^{\prime}\cup\{\mathcal{L}_{i}\vee_{[n]}\mathcal{L}_{j},i,j\in[q_{t}],i\neq j\}\] is a set-join cover of \(K_{n}\) with \(|\widehat{\mathscr{C}}|=\sum_{i=1}^{t}q_{i}\). To complete the proof, we need to show \(\operatorname{st}_{+}(K_{n})\geq\min\{\sum_{i=1}^{t}q_{i};n\leq\prod_{i=1}^{t}q_{i}\}\). Let us fix \(n\) and define \(m=\max\{t;s(t)=s(n)\}\). From Theorem 4.2 we know that \(m\) is the largest positive integer that satisfies \(\operatorname{st}_{+}(K_{n})=\operatorname{st}_{+}(K_{m})\). In particular, \(m\) has one of the forms \(m=3^{i}\), \(m=4\cdot 3^{i-1}\), or \(m=2\cdot 3^{i}\). In each case we can write \(m=\prod_{k=1}^{t}q_{k}\), where \(q_{j}\in\{2,3,4\}\) for all \(j\in[t]\). It is straightforward to check that the equality \(\operatorname{st}_{+}(K_{m})=\sum_{i=1}^{t}q_{i}\) holds in each of these cases. **Remark 4.6**.: An optimal set-join cover of \(K_{6}\) is given in Example 2.15. Following the construction outlined in the proof of Proposition 4.5, we can produce examples of optimal set-join covers of \(K_{n}\) for every \(n\geq 2\). If \(n=2,3,4,5\), then \(\operatorname{st}_{+}(K_{n})=n\), and the set-join cover \(\mathscr{C}\), where all the components of \(\mathscr{C}\) are singletons and \(G(\mathscr{C})=K_{n}\), is an OSJ cover of \(K_{n}\). Note that for \(n=2\) and for \(n=3\) these OSJ covers are unique. For \(n=4\) this set-join cover is not unique, since the factorization \(4=2^{2}\) gives rise to an OSJ cover \(\mathscr{C}^{\prime}\) of \(K_{4}\) with \(G(\mathscr{C}^{\prime})=2K_{2}\). In the case \(n=5\) this set-join cover is also not unique, since the factorization \(6=2\cdot 3\) gives rise to an OSJ cover \(\mathscr{C}^{\prime}\) of \(K_{5}\) with \(G(\mathscr{C}^{\prime})=K_{2}\cup K_{3}\). Suppose that \(n\geq 6\) and let \(m=\max\{t;s(t)=s(n)\}\). An inductive construction outlined in the proof of Proposition 4.5 gives an OSJ cover \(\mathscr{C}\) of \(K_{m}\), which we can restrict to \(K_{n}\) to obtain an OSJ cover of \(K_{n}\). Let us look at some properties of an OSJ cover \(\mathscr{C}\) constructed in this way, for a few choices of \(n\): * For \(n=3^{i}\), all components of \(\mathscr{C}\) have cardinality \(3^{i-1}\), and for each \(\mathcal{K}\in V(\mathscr{C})\) there exist two other components \(\mathcal{K}^{\prime},\mathcal{K}^{\prime\prime}\in V(\mathscr{C})\), so that \(\mathcal{K},\mathcal{K}^{\prime}\) and \(\mathcal{K}^{\prime\prime}\) are pairwise disjoint. * For \(n=3^{i}-1\), \(2i\) components have cardinality \(3^{i-1}\), and \(i\) components have cardinality \(3^{i-1}-1\). For each \(\mathcal{K}\in V(\mathscr{C})\) there exist two other components \(\mathcal{K}^{\prime},\mathcal{K}^{\prime\prime}\in V(\mathscr{C})\), so that \(\mathcal{K},\mathcal{K}^{\prime}\) and \(\mathcal{K}^{\prime\prime}\) are pairwise disjoint. * For \(n=2\cdot 3^{i}\), the largest two components \(\mathcal{K},\mathcal{K}^{\prime}\in V(\mathscr{C})\) are disjoint and have cardinality \(3^{i}\). All other components have cardinality \(2\cdot 3^{i-1}\). * For \(n=2\cdot 3^{i}-1\), the largest two components in \(V(\mathscr{C})\) are disjoint, one has \(3^{i}\) and the other \(3^{i}-1\) elements. There are \(2i\) components of cardinality \(2\cdot 3^{i-1}\), and \(i\) components of cardinality \(2\cdot 3^{i-1}-1\). Having constructed an OSJ cover of \(K_{n}\) for each \(n\in\mathbb{N}\), we want to consider what properties any OSJ cover \(\mathscr{C}\) of \(K_{n}\) has to have. 
In particular, we consider the components of minimal and maximal cardinality. **Lemma 4.7**.: _Let \(m,n\in\mathbb{N}\), \(n<m\), \(\mathrm{st}_{+}(K_{n})=\mathrm{st}_{+}(K_{m})\), and let \(\mathscr{C}\) be an OSJ cover of \(K_{m}\). Then any component \(\mathcal{K}\in V(\mathscr{C})\) satisfies \(|\mathcal{K}|>m-n\)._ Proof.: Let \(\mathscr{C}\) be an OSJ cover of \(K_{m}\), and \(\mathcal{K}\in V(\mathscr{C})\). Let \(\mathscr{C}^{\prime}\) be obtained from \(\mathscr{C}\) by removing \(\mathcal{K}\) and deleting the elements of \(\mathcal{K}\) from all other components of \(\mathscr{C}\). Then \(\mathscr{C}^{\prime}\) is a set-join cover of \(K_{m-|\mathcal{K}|}\) with \(|\mathscr{C}^{\prime}|\leq|\mathscr{C}|-1\). From \(\mathrm{st}_{+}(K_{n})=\mathrm{st}_{+}(K_{m})\) we get \(m-|\mathcal{K}|<n\), as required. **Corollary 4.8**.: _Let \(\mathscr{C}\) be an OSJ cover of \(K_{n}\) and \(\mathcal{K}\in V(\mathscr{C})\)._ 1. _If_ \(n=3^{i}\)_, then_ \(|\mathcal{K}|\geq 3^{i-1}\)_._ 2. _If_ \(n=4\cdot 3^{i}\)_, then_ \(|\mathcal{K}|\geq 3^{i}\)_._ 3. _If_ \(n=2\cdot 3^{i}\)_, then_ \(|\mathcal{K}|\geq 2\cdot 3^{i-1}\)_._ Proof.: If \(n=3^{i}\), then \(\mathrm{st}_{+}(K_{3^{i}})=\mathrm{st}_{+}(K_{2\cdot 3^{i-1}+1})\) by Theorem 4.2. Hence, \(|\mathcal{K}|>3^{i-1}-1\) by Lemma 4.7. The other two items are proved in the same way. **Proposition 4.9**.: _Suppose that \(n\in\mathbb{N}\) satisfies one of the following conditions: \(n\in\{4,5,7,8\}\), or \(3^{i}+1\leq n\leq 2\cdot 3^{i}-2\) for some \(i\geq 2\), or \(2\cdot 3^{i}+1\leq n\leq 3^{i+1}-2\) for some \(i\geq 2\). Then an OSJ cover of \(K_{n}\) is not essentially unique._ Proof.: In Remark 4.6 we have already seen that an OSJ cover of \(K_{n}\) is not essentially unique for \(n\in\{4,5\}\). \(K_{8}\) has two essentially different OSJ covers: one arising from the factorization \(8=2\cdot 4\), and the other from \(8\leq 3^{2}\). In the first case the cover has the graph \(K_{2}\cup K_{4}\), and in the second case the cover has the graph \(2K_{3}\). Since \(s(8)=6\) both covers are optimal. Restrictions of those covers to the elements of \(K_{7}\) give two OSJ covers of \(K_{7}\). Suppose that \(n+2\leq m\) and \(s(n)=s(m)\). We will show that in this case an OSJ cover of \(K_{n}\) is not essentially unique. Let \(\mathscr{C}\) be an OSJ cover for \(K_{m}\) and let \(\mathcal{K}\in V(\mathscr{C})\) be a component of \(\mathscr{C}\) of minimal cardinality. Then \(|\mathcal{K}|>m-n\geq 2\), by Lemma 4.7. Now, choose \(m-n\) vertices in \(\mathcal{K}\) and remove them from \(K_{m}\) and \(\mathscr{C}\) to obtain a set-join cover \(\mathscr{C}^{\prime}\) of \(K_{n}\). The set-join cover \(\mathscr{C}^{\prime}\) contains a component of cardinality \(|\mathcal{K}|-(m-n)\). Next, choose \(m-n\) vertices in \(K_{m}\) so that they do not all belong to the same component of \(\mathscr{C}\). This is possible since \(m-n\geq 2\), and \(m\geq 9\), so \(m\geq|V(\mathscr{C})|+3\). Remove them from \(K_{m}\) and \(\mathscr{C}\) to obtain a set-join cover \(\mathscr{C}^{\prime\prime}\) of \(K_{n}\). The set-join cover \(\mathscr{C}^{\prime\prime}\) does not contain a component of cardinality \(|\mathcal{K}|-(m-n)\), so it is essentially different from \(\mathscr{C}^{\prime}\). For \(n=4\cdot 3^{i}=2^{2}\cdot 3^{i}\) the construction in Proposition 4.5 gives rise to two set-join covers \(\mathscr{C}_{1}\) and \(\mathscr{C}_{2}\), one with the graph \(K_{4}\cup iK_{3}\), and one with the graph \(2K_{2}\cup iK_{3}\). 
Both covers are optimal, proving that in this case OSJ cover is not unique. Now let \(m\) satisfy \(m<n\) and \(s(m)=s(n)\). By restricting \(\mathscr{C}_{1}\) and \(\mathscr{C}_{2}\) to \([m]\), we get two OSJ covers of \(K_{m}\) with different graphs, proving that OSJ covers of \(K_{m}\) are not essentially unique. Before we move to more general cases, we show in the example below that \(K_{6}\) has essentially unique OSJ cover. **Example 4.10**.: Let \(V(K_{6})=[6]\) and let \(\mathscr{C}\), with \(V(\mathscr{C})=\{\mathcal{K}_{i};i=1,\ldots,5\}\), be an OSJ cover of \(K_{6}\). In addition, we assume \(|\mathcal{K}_{1}|\geq|\mathcal{K}_{i}|\), \(i=2,3,4,5\), and \(\mathcal{K}_{1}\vee_{[6]}\mathcal{K}_{2}\in\mathscr{C}\). Next we show that \(|\mathcal{K}_{1}|=3\). Since \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) are disjoint, the set-joins of \(\mathscr{C}\) with components \(\mathcal{K}_{3}\), \(\mathcal{K}_{4}\), and \(\mathcal{K}_{5}\), need to cover all the edges between the elements of \(\mathcal{K}_{1}\). In particular, the set \(V_{1}=\{\mathcal{K}_{3}\cap\mathcal{K}_{1},\mathcal{K}_{4}\cap\mathcal{K}_{1},\mathcal{K}_{5}\cap\mathcal{K}_{1}\}\) needs to contain a component set of a set-cover for a complete graph with the vertex set \(\mathcal{K}_{1}\). Since \(s(4)=4\), this excludes \(|\mathcal{K}_{1}|\geq 4\). On the other hand, \(s(6)=s(5)\) implies that \(V(\mathscr{C})\) doesn't contain singletons by Lemma 4.7. If \(|\mathcal{K}_{i}|=2\) for all \(i\in[5]\), then we may without loss of generality assume that \(\mathcal{K}_{1}=\{1,2\}\) and \(\mathcal{K}_{2}\subset\{3,4,5,6\}\). Since the edge \(\{1,2\}\) needs to be covered, we may also assume that \(\mathcal{K}_{3}=\{1,3\}\) and \(\mathcal{K}_{4}=\{2,4\}\). Similarly, since the edge \(\{5,6\}\) needs to be covered, we may assume \(5\in\mathcal{K}_{2}\), \(6\in\mathcal{K}_{5}\) and \(\mathcal{K}_{2}\cap\mathcal{K}_{5}=\emptyset\). Now we have \(j\in\mathcal{K}_{2}\) for \(j=3\) or \(j=4\). In either case, the edge \(\{j,5\}\) cannot be covered by the components \(\mathcal{K}_{i}\), \(i=1,\ldots,5\). At this point we can assume \(\mathcal{K}_{1}=\{1,2,3\}\), and deduce that \(V_{1}\) is a component set of an OSJ cover for the complete graph on \(\mathcal{K}_{1}\), hence \(V_{1}=\{\{1\},\{2\},\{3\}\}\) and \(\{\mathcal{K}_{3}\vee_{[6]}\mathcal{K}_{4},\mathcal{K}_{4}\vee_{[6]}\mathcal{ K}_{5},\mathcal{K}_{5}\vee_{[6]}\mathcal{K}_{3}\}\subset\mathscr{C}\). Now we know that \(\mathcal{K}_{3}\), \(\mathcal{K}_{4}\) and \(\mathcal{K}_{5}\) are pairwise disjoint, and without loss of generality we may assume \(\mathcal{K}_{1}\cap\mathcal{K}_{3}=\{1\}\), \(\mathcal{K}_{1}\cap\mathcal{K}_{4}=\{2\}\) and \(\mathcal{K}_{1}\cap\mathcal{K}_{5}=\{3\}\). Next we argue that \(\mathcal{K}_{2}=\{4,5,6\}\). Assume that \(4\not\in\mathcal{K}_{2}\), and note that \(4\) can be contained in at most one of \(\mathcal{K}_{3}\), \(\mathcal{K}_{4}\) and \(\mathcal{K}_{5}\). Without loss of generality let \(4\in\mathcal{K}_{3}\). This leads to a contradiction, since the edge \(\{1,4\}\) clearly cannot be covered. Arguing as above, we prove that \(\{\mathcal{K}_{3}\cap\mathcal{K}_{2},\mathcal{K}_{4}\cap\mathcal{K}_{2}, \mathcal{K}_{5}\cap\mathcal{K}_{2}\}=\{\{4\},\{5\},\{6\}\}\). We may assume without loss of generality that \(\mathcal{K}_{3}=\{1,4\},\mathcal{K}_{4}=\{2,5\}\), and \(\mathcal{K}_{5}=\{3,6\}\). 
We obtain that \[\mathscr{C}=\{\mathcal{K}_{1}\vee_{[6]}\mathcal{K}_{2},\mathcal{K}_{3}\vee_{[ 6]}\mathcal{K}_{4},\mathcal{K}_{4}\vee_{[6]}\mathcal{K}_{5},\mathcal{K}_{5} \vee_{[6]}\mathcal{K}_{3}\}\] and \(G(\mathscr{C})=K_{2}\cup K_{3}\). Notice that an OSJ cover of \(K_{6}\) constructed above is not unique, since by permuting the vertices of \(K_{6}\) we can obtain a different set-join cover of \(K_{6}\). To identify \(K_{n}\) with essentially unique OSJ covers, we need to extend the arguments seen in Example 4.10. To do so, we depend on a selection of observations, that we do not prove here, but that can be deduced from the proof of Theorem 4.2 given in [6, Ch. 18]. **Lemma 4.11**.: _([9, 6]) Let \(\mathscr{C}\) be an OSJ cover of \(K_{n}\) for some \(n\geq 4\). Then_ \[s(n)=\min\left\{k+s\left(\left\lceil\frac{n}{k}\right\rceil\right);k=2,3,4,5 \right\}. \tag{3}\] _Let \(\mathcal{M}_{n}\) be the set of all \(k\in\{2,3,4,5\}\) for which the minimum is achieved, and \(\mathcal{K}_{1}\) a component of \(\mathscr{C}\) of maximal cardinality. Then there exists \(k_{0}\in\mathcal{M}_{n}\) so that:_ 1. \(\mathcal{K}_{1}\) _is disjoint with precisely_ \(k_{0}-1\) _components_ \(\mathcal{K}_{2},\ldots,\mathcal{K}_{k_{0}}\in V(\mathscr{C})\)_._ 2. _The restriction of_ \(\mathscr{C}\) _to_ \(\mathcal{K}_{1}\) _is an OSJ cover of the complete graph on_ \(\mathcal{K}_{1}\)_, and the restriction of_ \(\mathscr{C}\) _to_ \(\mathcal{S}=[n]\setminus\cup_{j=2}^{k_{0}}\mathcal{K}_{i}\) _is an OSJ cover of the complete graph on_ \(\mathcal{S}\)_._ 3. \(s(|\mathcal{S}|)=s(|\mathcal{K}_{1}|)=s(\lceil\frac{n}{k_{0}}\rceil)=s(n)-k_{0}\)_._ **Proposition 4.12**.: _Suppose that \(n\in\mathbb{N}\), \(n=3^{i}\) or \(n=2\cdot 3^{i}\) for some \(i\geq 1\). Then \(K_{n}\) has essentially unique OSJ cover._ Proof.: We will prove the claim by induction on \(i\). Since we have already seen that \(K_{3}\) and \(K_{6}\) have essentially unique OSJ covers, the base of induction is established. We assume \(i\geq 2\), and that the claim holds for \(n=3^{i-1}\) and \(n=2\cdot 3^{i-1}\). Throughout the proof we also assume that \(\mathscr{C}\) is an OSJ cover of \(K_{n}\), and \(\mathcal{K}_{1}\in V(\mathscr{C})\) is one of its components with maximal cardinality. First we want to determine the parameter \(k_{0}\) from Lemma 4.11. Combining (1) and (3) it is straightforward to see that for \(i\geq 2\) we have \(\mathcal{M}_{3^{i}}=\{3\}\) and \(\mathcal{M}_{2\cdot 3^{i}}=\{2,3\}\). Suppose that for \(n=2\cdot 3^{i}\) we have \(k_{0}=3\). Applying Lemma 4.11 we get \(s(|\mathcal{K}_{1}|)=s(|\mathcal{S}|)=s(\lceil\frac{n}{3}\rceil)=s(2\cdot 3 ^{i-1})=3i-1\). This restricts \(4\cdot 3^{i-2}<|\mathcal{K}_{1}|\leq|\mathcal{S}|\leq 2\cdot 3^{i-1}\). From \[|\mathcal{K}_{2}\cup\mathcal{K}_{3}| \leq 2|\mathcal{K}_{1}|\leq 4\cdot 3^{i-1},\] \[|\mathcal{K}_{2}\cup\mathcal{K}_{3}| =n-|\mathcal{S}|\geq 2\cdot 3^{i}-2\cdot 3^{i-1}=4\cdot 3^{i-1}\] we conclude that \(|\mathcal{K}_{2}\cup\mathcal{K}_{3}|=4\cdot 3^{i-1}\). Hence, \(\mathcal{K}_{2}\cap\mathcal{K}_{3}=\emptyset\), and \(|\mathcal{K}_{i}|=2\cdot 3^{i-1}\) for \(i=1,2,3\). Note that for \(i=1,2,3\), \(|\mathscr{C}[\mathcal{K}_{i}]|=|\mathscr{C}|-3\), hence \(\mathcal{C}[\mathcal{K}_{i}]\) is an OSJ cover of \(\mathcal{K}_{i}\), and by the induction hypothesis essentially unique. By Remark 4.6, \(\mathscr{C}[\mathcal{K}_{i}]\) has two components of cardinality \(3^{i-1}\), and all other components of cardinality \(2\cdot 3^{i-2}\). 
Let \(\mathcal{K}_{4}\) be one of the components of \(\mathscr{C}\) with the largest intersection with \(\mathcal{K}_{1}\). From \(|\mathscr{C}[\mathcal{K}_{i}]|=|\mathscr{C}|-3\) for \(i=2,3\), we deduce that \(\mathcal{K}_{4}\cap\mathcal{K}_{i}\neq\emptyset\) for \(i=2,3\). Then \(|\mathcal{K}_{4}\cap\mathcal{K}_{1}|=3^{i-1}\) and \(|\mathcal{K}_{4}\cap\mathcal{K}_{i}|\geq 2\cdot 3^{i-2}\) for \(i=2,3\). It follows that \(|\mathcal{K}_{4}|\geq 3^{i-1}+2\cdot 3^{i-2}+2\cdot 3^{i-2}>2\cdot 3^{i-1}=| \mathcal{K}_{1}|\), a contradiction with maximality of \(|\mathcal{K}_{1}|\). Let \(n=3^{i}\) for \(i\geq 2\), and hence \(s(n)=3i\) and \(k_{0}=3\). From Lemma 4.11 we get \(s(|\mathcal{K}_{1}|)=s(|\mathcal{S}|)=3(i-1)\), implying \(|\mathcal{K}_{1}|\leq|\mathcal{S}|\leq 3^{i-1}\). Hence, \[|\mathcal{K}_{2}\cup\mathcal{K}_{3}| \leq 2|\mathcal{K}_{1}|\leq 2\cdot 3^{i-1},\] \[|\mathcal{K}_{2}\cup\mathcal{K}_{3}| =n-|\mathcal{S}|\geq 3^{i}-3^{i-1}=2\cdot 3^{i-1}.\] As above, we conclude that \(\mathcal{K}_{1}\), \(\mathcal{K}_{2}\), and \(\mathcal{K}_{3}\) are pairwise disjoint, their union is \([n]\), and they all have \(\frac{n}{3}=3^{i-1}\) elements. Since we are allowing isomorphism of graphs, \(\mathcal{K}_{j}\), \(j\in[3]\), can be taken to be any three subsets of \([n]\) that satisfy those conditions. Furthermore, \(\mathscr{C}[\mathcal{K}_{j}]\), \(j\in[3]\), is an OSJ cover of the complete graph on \(\mathcal{K}_{j}\) and by the induction hypothesis essentially unique. Since \(\mathcal{K}_{j}\), \(j\in[3]\), are pairwise disjoint and \(\cup_{j\in[3]}\mathcal{K}_{j}=[n]\) this implies essential uniqueness of \(\mathscr{C}\). The proof for \(n=2\cdot 3^{i}\), \(i\geq 2\) with \(s(n)=2+3i\) and \(k_{0}=2\) is very similar and we leave it to the reader. **Remark 4.13**.: With arguments, akin to the ones in the proof of Proposition 4.12, it can be shown that for \(n=4\cdot 3^{i}\) the graph \(K_{n}\) has exactly two essentially different OSJ covers. In particular, the components of any OSJ cover of \(K_{n}\) either satisfy \(|\mathcal{K}_{1}|=\ldots=|\mathcal{K}_{4}|=2\cdot 3^{i}\), \(|\mathcal{K}_{5}|=\ldots=|\mathcal{K}_{3i+4}|=4\cdot 3^{i-1}\) or \(|\mathcal{K}_{1}|=\ldots=|\mathcal{K}_{3i}|=4\cdot 3^{i-1}\), \(|\mathcal{K}_{3i+1}|=\ldots=|\mathcal{K}_{3i+4}|=3^{i}\). Notice that in this case two possible values of \(k_{0}\in\mathcal{M}_{n}\) can be attained, namely in the first cover we have \(k_{0}=2\) and in the second cover \(k_{0}=3\). The following proposition covers the remaining cases with essentially unique OSJ cover. Only a sketch of the proof is given. The proof with all the technical details included would be rather long, and would not give significant new insights to the reader. **Proposition 4.14**.: _Let \(n\in\mathbb{N}\) be of the form \(n=3^{i}-1\) for some \(i\geq 3\), or of the form \(n=2\cdot 3^{i}-1\) for some \(i\geq 2\). Then \(K_{n}\) has essentially unique OSJ cover._ Sketch of the proof.: We claim that \(K_{n}\) has essentially unique OSJ cover for every \(n\) in the set \[\mathcal{N}=\{n\in\mathbb{N},n=3^{i}-1,i\geq 3\}\cup\{n\in\mathbb{N},n=2\cdot 3 ^{i}-1,i\geq 2\}.\] This claim can be proved by induction, where the base of induction needs to be established for \(n\in\{17,26\}\). (This is left to the reader.) Next we fix \(n\), and assume that the claim holds for \(m\in\mathcal{N}\), with \(m<n\). We assume notation and definitions from Lemma 4.11. For \(n=3^{i}-1,i\geq 3\), we have \(\mathcal{M}_{n}=\{3\}\). 
Lemma 4.11 implies \(s(|\mathcal{K}_{1}|)=s(|\mathcal{S}|)=3(i-1)\). From here we get: \[|\mathcal{K}_{2}\cup\mathcal{K}_{3}|\geq n-|\mathcal{S}|\geq 2\cdot 3^{i-1}-1.\] Since \(\mathcal{K}_{1}\cap(\mathcal{K}_{2}\cup\mathcal{K}_{3})=\emptyset\) and \(\mathcal{K}_{1}\cup\mathcal{K}_{2}\cup\mathcal{K}_{3}=[3^{i}-1]\), we conclude that \(|\mathcal{K}_{1}|=3^{i-1}\) and \(|\mathcal{K}_{2}\cup\mathcal{K}_{3}|=2\cdot 3^{i-1}-1\). Note that \(|\mathscr{C}[\mathcal{K}_{2}\cup\mathcal{K}_{3}]|=|\mathcal{C}|-1=3i-1\), since \(\mathcal{K}_{1}\cap(\mathcal{K}_{2}\cup\mathcal{K}_{3})=\emptyset\). Hence \(\mathscr{C}[\mathcal{K}_{2}\cup\mathcal{K}_{3}]\) is an OSJ set-join cover of the complete graph on \(\mathcal{K}_{2}\cup\mathcal{K}_{3}\), and it is by the induction hypothesis essentially unique. By Remark 4.6 we know that its largest components are disjoint (and of order \(3^{i-1}\) and \(3^{i-1}-1\)). This implies that \(\mathcal{K}_{2}\) and \(\mathcal{K}_{3}\) are disjoint, \(|\mathcal{K}_{1}|=|\mathcal{K}_{2}|=3^{i-1}\), and \(|\mathcal{K}_{3}|=3^{i-1}-1\). The claim is proved by noting that \(\mathscr{C}[\mathcal{K}_{j}]\) is an essentially unique OSJ cover of the complete graph on \(\mathcal{K}_{j}\) for \(j\in[3]\). For \(n=2\cdot 3^{i}-1,i\geq 2\), we have \(\mathcal{M}_{n}=\{2,3\}\). The case \(k_{0}=3\) can be excluded in a similar way, as was done in the proof of Proposition 4.12 for the case \(n=2\cdot 3^{i}\). The rest of the proof of this case is very similar to the proof for the case \(n=3^{i}-1\) above, and it is left to the reader. Propositions 4.9, 4.12, and 4.14 completely decide the question of essential uniqueness of OSJ covers of \(K_{n}\). The result is summarized in the theorem below. **Theorem 4.15**.: _Let \(n\in\mathbb{N}\), \(n\geq 2\). The graph \(K_{n}\) has essentially unique OSJ cover if and only if \(n=3^{i}\) for some \(i\geq 1\), \(n=2\cdot 3^{i}\) for some \(i\geq 1\), \(n=3^{i}-1\) for some \(i\geq 3\), or \(n=2\cdot 3^{i}-1\) for some \(i\geq 2\)._ **Remark 4.16**.: Equations (1), (2) and (3) express \(\mathrm{st}_{+}(K_{n})\) in three different ways: \[\mathrm{st}_{+}(K_{n}) =\begin{cases}3i&\text{if}\,\,\,2\cdot 3^{i-1}<n\leq 3^{i}\\ 3i+1&\text{if}\,\,\,3^{i}<n\leq 4\cdot 3^{i-1}\\ 3i+2&\text{if}\,\,\,4\cdot 3^{i-1}<n\leq 2\cdot 3^{i}\end{cases}\] \[=\min\left\{\sum_{i=1}^{t}q_{i};n\leq\prod_{i=1}^{t}q_{i}\right\}\] \[=\min\left\{k+\mathrm{st}_{+}\left(K_{\left\lceil\frac{n}{k} \right\rceil}\right);k=2,3,4,5\right\}.\] Through those expressions we have seen how OSJ covers can be constructed recursively. **Remark 4.17**.: In this section, an idea of constructing set-join covers from set-join covers of smaller graphs was repeatedly used. This idea can be formalised using co-normal products of graphs. Let \(G\) and \(H\) be simple graphs (with loops). The co-normal product of \(G\) and \(H\), denoted by \(G*H\), is the graph with the vertex set \(V(G)\times V(H)\), and the edge set \[E(G*H)=\left\{\{(g,h),(g^{\prime},h^{\prime})\};\{g,g^{\prime}\}\in E(G)\text{ or }\{h,h^{\prime}\}\in E(H)\right\}.\] Let \(\mathscr{C}_{G}\) and \(\mathscr{C}_{H}\) be set-join covers of \(G\) and \(H\), respectively. Using \(\mathscr{C}_{G}\) and \(\mathscr{C}_{H}\) we can build a set-join cover \(\mathscr{C}^{\prime}\) of \(G*H\) as follows. For every \(\mathcal{K}\in V(\mathscr{C}_{G})\) define \(\mathcal{K}^{\prime}:=\{(g,h);g\in\mathcal{K},h\in V(H)\}\) and for every \(\mathcal{L}\in V(\mathscr{C}_{H})\) define \(\mathcal{L}^{\prime}:=\{(g,h);g\in V(G),h\in\mathcal{L}\}\). 
Then \[\mathscr{C}^{\prime}:=\{\mathcal{K}^{\prime}_{1}\vee_{V(G*H)}\mathcal{K}^{ \prime}_{2};\,\mathcal{K}_{1}\vee_{V(G)}\mathcal{K}_{2}\in\mathscr{C}_{G}\} \cup\{\mathcal{L}^{\prime}_{1}\vee_{V(G*H)}\mathcal{L}^{\prime}_{2};\, \mathcal{L}_{1}\vee_{V(H)}\mathcal{L}_{2}\in\mathscr{C}_{H}\}\] is a set-join cover of \(G*H\). In particular, \[\mathrm{st}_{+}(G*H)\leq\mathrm{st}_{+}(G)+\mathrm{st}_{+}(H). \tag{4}\] Since \(K_{p}*K_{q}\) is isomorphic to \(K_{pq}\), we see from above, that the inequality (4) can be strict (for example for \(p=q=5\)), but it cannot be improved in general (it is equality for example for \(p=3^{i}\) and \(q=3^{j}\)). **Remark 4.18**.: If \(A\in\mathcal{S}_{n}^{+}\), then its Boolean rank is the NMF-rank of the pattern graph of the matrix \(A\) (see [4]): \[\operatorname{rk}_{01}(A)=\min\{\operatorname{rk}_{+}(B);B\in\mathcal{S}^{+}(G( A))\}.\] It follows immediately from (1) that \(\operatorname{st}_{+}(K_{n})\) behaves asymptotically as \(\operatorname{st}_{+}(K_{n})\sim\frac{3}{\log 3}\log n\). On the other hand, if \(A\in\mathcal{S}^{+}(K_{n})\) its Boolean rank \(\operatorname{rk}_{01}(A)\) is the minimal \(k\) such that \(n\leq\binom{k}{\lfloor k/2\rfloor}\), [4]. It follows that NMF-rank of such matrix is asymptotically bounded below by \(\frac{1}{\log 2}\log n\). It means that the minimal SNT-rank and the minimal NMF-rank that can be achieved on the complete graphs have the same order of asymptotic behaviour, but the constant differs by factor 1.893. ## 5 Concluding remarks Finding \(\operatorname{st}_{+}(G)\) for a given graph \(G\) is well-defined combinatorial optimisation problem, that is of independent interest. In this section we highlight a few settings, where the theory of this paper naturally appears and provides relevant groundwork. The most immediate application of \(\operatorname{st}_{+}(G)\) is the study \(\operatorname{st}_{+}(A)\) and SN-Trifactorizations of \(A\in\mathcal{S}_{n}^{+}\). Below we offer an example how this can be done for a given matrix. **Example 5.1**.: The matrix \[A=\left(\begin{array}{cccc}4&1&1&4\\ 1&1&2&0\\ 1&2&0&3\\ 4&0&3&1\end{array}\right)\] has rank 3. Let \(G:=G(A)\), and let \(H\) be an induced subgraph of \(G\) on the set \(\{2,3,4\}\). Since any matrix in \(\mathcal{S}^{+}(H)\) is invertible, we have \(\operatorname{st}_{+}(G)\geq 3\). Note also that \(H\) has the unique OSJ cover \(\mathscr{C}_{H}\), where each component in \(V(\mathscr{C}_{H})\) is a singleton. The set-join cover \(\mathscr{C}_{H}\) can be extended to a set-join cover of \(G\): \[\mathscr{C}=\{\mathcal{K}_{1}\vee_{[4]}\mathcal{K}_{1},\mathcal{K}_{1}\vee_{[ 4]}\mathcal{K}_{2},\mathcal{K}_{2}\vee_{[4]}\mathcal{K}_{3},\mathcal{K}_{3} \vee_{[4]}\mathcal{K}_{3}\},\] where \[2\in\mathcal{K}_{1},3\in\mathcal{K}_{2},4\in\mathcal{K}_{3},\] and 1 is an element of at least two of \(\mathcal{K}_{i}\), \(i=1,2,3\). This describes all possible set-join covers of \(G\), and, in particular, shows that \(\operatorname{st}_{+}(G)=3\). Suppose that \(\mathrm{st}_{+}(A)=3\) and let \(A=BCB^{T}\) be an optimal SN-Trifactorization. By observations above and Theorem 2.11 the matrix \(B\) has the form \[B=\left(\begin{array}{c}\mathbf{b}^{T}\\ D\end{array}\right),\] where \(\mathbf{b}\in\mathbb{R}_{+}^{3}\) has at least two nonzero entries and \(D\) is an invertible nonnegative diagonal matrix. 
It follows that \[A=BCB^{T}=\left(\begin{array}{c}\mathbf{b}^{T}\\ D\end{array}\right)C\left(\begin{array}{cc}\mathbf{b}&D\end{array}\right)= \left(\begin{array}{cc}\mathbf{b}^{T}C\mathbf{b}&\mathbf{b}^{T}CD\\ DC\mathbf{b}&DCD\end{array}\right),\] so \[DCD=\left(\begin{array}{ccc}1&2&0\\ 2&0&3\\ 0&3&1\end{array}\right)\text{ and }DC\mathbf{b}=\left(\begin{array}{c}1\\ 1\\ 4\end{array}\right).\] Thus \[\mathbf{b}=D(DCD)^{-1}(DC\mathbf{b})=D\left(\begin{array}{ccc}1&2&0\\ 2&0&3\\ 0&3&1\end{array}\right)^{-1}\left(\begin{array}{c}1\\ 1\\ 4\end{array}\right)=D\left(\begin{array}{c}-1\\ 1\\ 1\end{array}\right),\] which means that vector \(\mathbf{b}\) has a negative entry, a contradiction. It follows that \(\mathrm{st}_{+}(A)=4\). Given an \(n\times n\) nonnegative symmetric matrix \(A\) there are existing algorithms that given \(k\) find matrices \(B\in\mathbb{R}_{+}^{n\times k}\) and \(C\in\mathcal{S}_{k}^{+}\) that minimize \(\left\|A-BCB^{T}\right\|_{F}\), [8]. Using the theory developed in this paper it should be possible to extend such algorithms to find matrices \(B\in\mathbb{R}_{+}^{n\times k}\) and \(C\in\mathcal{S}_{k}^{+}\) where \(BCB^{T}\) has some prescribed zero-nonzero pattern. In Remark 2.14 we have seen an interpretation of \(\mathrm{st}_{+}(G)\) as the minimal number of subsets of a given set \(V\) that need to be formed if we want to organize required and forbidden interactions as given by the graph \(G\). This interpretation motivates variations of the question considered in this paper. For example, minimizing the number of meetings corresponds to finding set-join covers \(\mathscr{C}\) containing minimal number of set-joins. Posing restrictions on the size of groups would mean looking for set-join covers \(\mathscr{C}\) with restrictions on the cardinalities of the elements in \(V(\mathscr{C})\). In addition to having pairs of elements that are required and pairs that are forbidden to interact, we may have pairs of elements that can (but are not required to) interact. To solve this question we would be looking for partial set-join covers of \(G\). ## Acknowledgments Damjana Kokol Bukovsek acknowledges financial support from the ARIS (Slovenian Research and Innovation Agency, research core funding No. P1-0222).
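The closed form in Remark 4.16 and the arithmetic in Example 5.1 above can be checked mechanically. The following sketch is ours and is not part of the original text: it assumes \(\operatorname{st}_{+}(K_{1})=0\) as the base case of the recursion, verifies that the recursive expression agrees with the piecewise formula for small \(n\), and reproduces the computation \((DCD)^{-1}(DC\mathbf{b})=(-1,1,1)^{T}\) that rules out \(\operatorname{st}_{+}(A)=3\) in Example 5.1.

```python
# Numerical companion to Remark 4.16 and Example 5.1 (ours, not part of the paper).
from functools import lru_cache
from math import ceil
import numpy as np

@lru_cache(maxsize=None)
def st_plus_recursive(n: int) -> int:
    """Recursion from Remark 4.16: st_+(K_n) = min_{k=2..5} (k + st_+(K_ceil(n/k))).
    The base case st_+(K_1) = 0 is an assumption of this sketch."""
    if n <= 1:
        return 0
    return min(k + st_plus_recursive(ceil(n / k)) for k in range(2, 6))

def st_plus_closed_form(n: int) -> int:
    """Piecewise formula (1): 3i, 3i+1 or 3i+2 depending on where n falls
    between 2*3^(i-1) and 2*3^i."""
    i = 0
    while True:
        if 2 * 3 ** (i - 1) < n <= 3 ** i:
            return 3 * i
        if 3 ** i < n <= 4 * 3 ** (i - 1):
            return 3 * i + 1
        if 4 * 3 ** (i - 1) < n <= 2 * 3 ** i:
            return 3 * i + 2
        i += 1

assert all(st_plus_recursive(n) == st_plus_closed_form(n) for n in range(2, 2000))

# Example 5.1: rank(A) = 3, yet an SN-Trifactorization with inner size 3 would force
# the vector b = D (DCD)^{-1} (DC b) to have a negative entry.
A = np.array([[4, 1, 1, 4], [1, 1, 2, 0], [1, 2, 0, 3], [4, 0, 3, 1]], float)
DCD = A[1:, 1:]                       # the matrix denoted DCD in Example 5.1
DCb = A[1:, 0]                        # the vector denoted DC b
x = np.linalg.solve(DCD, DCb)         # = (DCD)^{-1} (DC b)
print(np.linalg.matrix_rank(A), x)    # 3, approximately [-1.  1.  1.]
```

Since \(D\) is a positive diagonal matrix, the sign pattern of \(\mathbf{b}=D\mathbf{x}\) equals that of \(\mathbf{x}\), so the negative first entry confirms the contradiction derived above.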
2307.05060
Satisfiability of Arbitrary Public Announcement Logic with Common Knowledge is $Σ^1_1$-hard
Arbitrary Public Announcement Logic with Common Knowledge (APALC) is an extension of Public Announcement Logic with common knowledge modality and quantifiers over announcements. We show that the satisfiability problem of APALC on S5-models, as well as that of two other related logics with quantification and common knowledge, is $\Sigma^1_1$-hard. This implies that neither the validities nor the satisfiable formulas of APALC are recursively enumerable. Which, in turn, implies that APALC is not finitely axiomatisable.
Rustam Galimullin, Louwe B. Kuijer
2023-07-11T07:10:01Z
http://arxiv.org/abs/2307.05060v1
Satisfiability of Arbitrary Public Announcement Logic with Common Knowledge is \(\Sigma^{1}_{1}\)-hard ###### Abstract Arbitrary Public Announcement Logic with Common Knowledge (APALC) is an extension of Public Announcement Logic with common knowledge modality and quantifiers over announcements. We show that the satisfiability problem of APALC on \(S5\)-models, as well as that of two other related logics with quantification and common knowledge, is \(\Sigma^{1}_{1}\)-hard. This implies that neither the validities nor the satisfiable formulas of APALC are recursively enumerable. Which, in turn, implies that APALC is not finitely axiomatisable. ## 1 Introduction **Quantified Public Announcement Logics**. _Epistemic logic_ (EL) [22] is one of the better-known formalisms for reasoning about knowledge of agents in multi-agent systems. It extends the language of propositional logic with constructs \(\Box_{a}\varphi\) meaning that 'agent \(a\) knows \(\varphi\)'. Formulas of EL are interpreted on epistemic models (or, equivalently, \(S5\)-models) that comprise a set of states, equivalence relations for each agent between states, and a valuation function that specifies in which states propositional variables are true. However, EL provides only a static description of distribution of knowledge in a system. Extensions of the logic that allow one to reason about how information of individual agents and groups thereof changes as a result of some epistemic event are generally collectively known as _dynamic epistemic logics_ (DELs) [11]. The prime example of a DEL and arguably the most well-studied logic in the family is _public announcement logic_ (PAL) [26]. A public announcement is an event of all agents publicly and simultaneously receiving the same piece of information. The language of PAL extends that of EL with formulas \([\psi]\varphi\) that are read as 'after public announcement of \(\psi\), \(\varphi\) is true'. Quantification over various epistemic actions, and in particular over public announcements, has been explored in the last 15 or so years [10]. Adding quantification over public announcements allows one to shift the emphasis from the effects of a particular announcement to the question of (non-)existence of an announcement leading to a desired epistemic goal. In this paper, we focus on the three, perhaps most well-known, _quantified PALs_ (QPALs). The first of the three is _arbitrary PAL_ (APAL) [7] that extends the language of PAL with constructs \([!]\varphi\) meaning 'after _any_ public announcement, \(\varphi\) is true'. A formula with the dual existential quantifier \(\langle!\rangle\varphi\) is read as '_there is_ a public announcement, after which \(\varphi\) is true'. Observe that quantifiers of APAL do not specify whether an announcement can be made by any of the agents, or groups thereof, modelled in a system. Hence, a more 'agent-centric' quantified PAL was proposed. _Group announcement logic_ (GAL) [2] extends the language of PAL with formulas \([G]\varphi\) meaning 'after _any_ announcement by agents from group \(G\), \(\varphi\) is true'. A formula with the dual of the universal GAL quantifier is \(\langle G\rangle\varphi\) that is read '_there is_ an announcement by agents from group \(G\) that makes \(\varphi\) true'. Once we start reasoning about what groups of agents can achieve by making public announcements, it is only too natural to consider their abilities in a game-theoretic setting. 
In particular, we may let agents outside of the group make their own announcements in an attempt to preclude the group from reaching their epistemic goals. A QPAL with such a competitive flavour to it is called _coalition announcement logic_ (CAL) [3, 16]. The logic extends PAL with modalities \([\langle G\rangle]\varphi\) that are read as _'whatever_ agents from coalition \(G\) announce, _there is_ a counter-announcement by the anti-coalition that makes \(\varphi\) true'. The diamond version \(\langle\![G]\!\rangle\varphi\) is then means that _'there is_ an announcement by coalition \(G\), such that _whatever_ the anti-coalition announces at the same time, they cannot avoid \(\varphi\)'. Observe, that compared to APAL and GAL, modalities of CAL contain double quantification: \(\forall\exists\) and \(\exists\forall\) correspondingly. As the name of the logic suggests, modalities of CAL were inspired by coalition logic [25], and they capture game-theoretic notions of \(\alpha\)- and \(\beta\)-effectivity [6]. **Some Logical Properties of QPALs**. One of the most pressing open problems in the area is the existence of finitary axiomatisations of QPALs. Both finitary and infinitary axiom systems for APAL were proposed in [7], but later the finitary version was shown to be unsound [20]. The infinitary axiomatisation is, however, sound and complete [8]. As the axiomatisation of GAL [2] is quite similar to that of APAL, its finitary version is also not sound [14, Footnote 4], and its infinitary version can be shown to be sound and complete by a modification of the proof from [8]. To the best of our knowledge, there are no known sound and complete proof systems, finitary or infinitary, for CAL1. Footnote 1: A complete infinitary axiomatisation with CAL modalities and additional operators was given in [17] The satisfiability problem for QPALs is known to be undecidable [4]. The result is achieved by a reduction from the classic tiling problem that consists in answering the question whether a given finite set of tiles can tile the \(\mathbb{N}\times\mathbb{N}\) plane. Since this problem is co-RE-complete [9, 18], or, equivalently, \(\Pi^{0}_{1}\)-complete, the reduction amounts to the fact that the satisfiability problem for QPALs is co-RE-hard (or \(\Pi^{0}_{1}\)-hard). Note that this result does not rule out the existence of finitary axiomatisations of QPALs. A prime example of a logic with a co-RE-complete satisfiability problem and a finitary axiomatisation is first-order logic. **Overview of the paper and our result.** In this paper we consider extensions of QPALs with _common knowledge_[13], which is a classic variant of group knowledge in multi-agent systems. Its intuitive meaning is that '\(\varphi\) is common knowledge among agents in group \(G\) if everyone in \(G\) knows \(\varphi\), everyone in \(G\) knows that everyone in \(G\) knows \(\varphi\) and so on ad infinitum'. Semantically, common knowledge among agents from \(G\) corresponds to the reflexive transitive closure of equivalence relations of all agents from group \(G\). We call extensions of APAL, GAL, and CAL with common knowledge APALC [5], GALC, and CALC, correspondingly, or QPALCs if we refer to all of them at the same time. The result we prove in this paper is that the satisfiability problems for QPALCs are \(\Sigma^{1}_{1}\)-hard. We do this by showing that the _recurring tiling problem_, which is known to be \(\Sigma^{1}_{1}\)-complete [19], can be reduced to satisfiability of QPALC formulas. 
Because the satisfiability problems are \(\Sigma^{1}_{1}\)-hard, it follows that, in particular, the set of valid QPALC formulas is not recursively enumerable. That, in turn, implies that QPALCs have no finitary axiomatisations. The non-existence of a finitary axiomatisation of a somewhat related arbitrary arrow update logic [12] with common knowledge was shown in [21] by the reduction from the non-halting problem. Moreover, the recurring tiling problem was used in [23] to demonstrate that the satisfiability problem of PAL with iterated announcements and common knowledge is \(\Sigma^{1}_{1}\)-complete. The use of common knowledge is instrumental in our paper, since it allows us to have a 'tighter' grid than the ones from [4] and [15]. We deem our result important in at least two ways. First, the non-existence of finitary axiomatisations of QPALCs is interesting in its own right as it demonstrates that presence of common knowledge in QPALCs is a sufficient condition for \(\Sigma^{1}_{1}\)-hardness. Second, having both our construction (with common knowledge) and the constructions from [4] and [15] side by side, allows one to flesh out crucial differences between \(\Sigma^{1}_{1}\)-hardness and \(\Sigma^{0}_{1}\)-hardness arguments, and, hopefully, move closer to tackling the open problem of (non-)existence of finitary axiomatisations of QPALs. **Outline of the paper.** The rest of the paper is organised as follows. In Section 2 we cover the background on QPALCs. After that, in Section 3, we prove the main claim of this paper, and, finally, we conclude in Section 4. ## 2 Quantified Public Announcement Logics with Common Knowledge Let \(A\) be a finite set of agents, and \(P\) be a countable set of propositional variables. **Definition 2.1**.: The _languages of arbitrary public announcement logic with common knowledge_\(\mathsf{APALC}\), _group announcement logic with common knowledge_\(\mathsf{GALC}\), and _coalition announcement logic with common knowledge_\(\mathsf{CALC}\) are inductively defined as \[\mathsf{APALC} \ni\ \varphi\ \mathrel{\mathop{:}}=p\mid\neg\varphi\mid(\varphi \land\varphi)\mid\Box_{a}\varphi\mid[\varphi]\varphi\mid\blacksquare_{G}\varphi\mid [!]\varphi\] \[\mathsf{GALC} \ni\ \varphi\ \mathrel{\mathop{:}}=p\mid\neg\varphi\mid(\varphi \land\varphi)\mid\Box_{a}\varphi\mid[\varphi]\varphi\mid\blacksquare_{G}\varphi\mid [G]\varphi\] \[\mathsf{CALC} \ni\ \varphi\ \mathrel{\mathop{:}}=p\mid\neg\varphi\mid(\varphi \land\varphi)\mid\Box_{a}\varphi\mid[\varphi]\varphi\mid\blacksquare_{G}\varphi\mid [\langle G\rangle]\varphi\] where \(p\in P\), \(a\in A\), and \(G\subseteq A\). Duals are defined as \(\Diamond_{a}\varphi\mathrel{\mathop{:}}=\neg\Box_{a}\neg\varphi\), \(\langle\psi\rangle\varphi\mathrel{\mathop{:}}=\neg[\psi]\neg\varphi\), \(\blacklozenge_{G}\varphi\mathrel{\mathop{:}}=\neg\blacksquare_{G}\neg\varphi\), \(\langle i\rangle\varphi\mathrel{\mathop{:}}=\neg[!]\neg\varphi\), \(\langle G\rangle\varphi\mathrel{\mathop{:}}=\neg[G]\neg\varphi\) and \(\langle[G]\rangle\varphi\mathrel{\mathop{:}}=\neg[\langle G\rangle]\neg\varphi\). The fragment of \(\mathsf{APALC}\) without \([!]\varphi\) is called _public announcement logic with common knowledge_\(\mathsf{PACC}\); the latter without \([\varphi]\varphi\) is _epistemic logic with common knowledge_\(\mathsf{ELC}\); \(\mathsf{PACL}\) and \(\mathsf{ELC}\) minus \(\blacksquare_{G}\varphi\) are, correspondingly, _public announcement logic_\(\mathsf{PACL}\) and _epistemic logic_\(\mathsf{EL}\). 
Finally, fragments of \(\mathsf{APALC}\), \(\mathsf{GALC}\) and \(\mathsf{CALC}\) without \(\blacksquare_{G}\varphi\) are called _arbitrary public announcement logic_\(\mathsf{APAL}\), _group announcement logic_\(\mathsf{GAL}\) and _coalition announcement logic_\(\mathsf{CAL}\) respectively. **Definition 2.2**.: A _model \(M\)_ is a tuple \((S,\sim,V)\), where \(S\) is a non-empty set of states, \(\sim:A\to 2^{S\times S}\) gives an equivalence relation for each agent, and \(V:P\to 2^{S}\) is the valuation function. By \(\sim_{G}\) we mean reflexive transitive closure of \(\bigcup_{a\in G}\sim_{a}\). We will denote model \(M\) with a distinguished state \(s\) as \(M_{s}\). We would like to stress that agent relations in our models are _equivalence relations_ (and hence our models are \(S5\) models). The results of this paper do not generalise to arbitrary agent relations in any obvious way. It is assumed that for group announcements, agents know the formulas they announce. In the following, we write \(\mathsf{PALC}^{G}=\{\bigwedge_{i\in G}\Box_{i}\psi_{i}\mid\text{ for all }i\in G,\psi_{i}\in\mathsf{PALC}\}\) to denote the set of all possible announcements by agents from group \(G\). We will use \(\psi_{G}\) to denote arbitrary elements of \(\mathsf{PALC}^{G}\). **Definition 2.3**.: Let \(M_{s}=(S,R,V)\) be a model, \(p\in P\), \(G\subseteq A\), and \(\varphi,\psi\in\mathsf{APALC}\cup\mathsf{GALC}\cup\mathsf{CALC}\). \[M_{s}\models p \text{iff}\quad s\in V(p)\] \[M_{s}\models\neg\varphi \text{iff}\quad M_{s}\not\models\varphi\] \[M_{s}\models\varphi\land\psi \text{iff}\quad M_{s}\models\varphi\text{ and }M_{s}\models\psi\] \[M_{s}\models\square_{a}\varphi \text{iff}\quad\forall t\in S:s\sim_{a}t\text{ implies }M_{t}\models\varphi\] \[M_{s}\models\blacksquare_{G}\varphi \text{iff}\quad\forall t\in S:s\sim_{G}t\text{ implies }M_{t}\models\varphi\] \[M_{s}\models[\psi]\varphi \text{iff}\quad M_{s}\models\psi\text{ implies }M_{s}^{\psi}\models\varphi\] \[M_{s}\models[t!]\varphi \text{iff}\quad\forall\psi\in\mathsf{PALC}:M_{s}\models[\psi]\varphi\] \[M_{s}\models[G]\varphi \text{iff}\quad\forall\psi_{G}\in\mathsf{PALC}^{G}:M_{s}\models[ \psi_{G}]\varphi\] \[M_{s}\models[\langle G\rangle]\varphi \text{iff}\quad\forall\psi_{G}\in\mathsf{PALC}^{G},\exists\chi_{ A\setminus G}\in\mathsf{PALC}^{A\setminus G}:M_{s}\models\psi_{G}\text{ implies }M_{s}\models\langle\psi_{G}\land\chi_{A\setminus G}\rangle\varphi\] where \(M_{s}^{\psi}=(S^{\psi},R^{\psi},V^{\psi})\) with \(S^{\psi}=\{s\in S\mid M_{s}\models\psi\}\), \(R^{\psi}(a)\) is the restriction of \(R(a)\) to \(S^{\psi}\) for all \(a\in A\), and \(V^{\psi}(p)=V(p)\cap S^{\psi}\) for all \(p\in P\). Observe, that it follows from the definition of the semantics that in the case of the grand coalition \(A\), \(M_{s}\models[A]\varphi\) if and only if \(M_{s}\models[\langle A\rangle]\varphi\). For the case of the empty group \(\varnothing\), we assume that the conjunction of an empty set of formulas is a tautology. **Remark 1**.: For APAL, GAL, and CAL, we assume that quantification ranges over a quantifier-free fragment of the language, i.e. over PAL, which is equally expressive as EL [26]. This is, however, not as straightforward once we consider ELC and PALC. The latter is strictly more expressive than ELC [11, Theorem 8.48], and ELC, in its turn, is strictly more expressive than EL, and thus it matters, expressivity-wise, which quantifer-free fragment of a QPALC the quantification ranges over. 
These matters are explored in [5], where also infinitary axiomatisations of APALC and GALC are given. For our current purposes, though, the difference in the range of quantification does not play a role. ## 3 The Satisfiability Problem of QPALCs is \(\Sigma^{1}_{1}\)-hard We prove the \(\Sigma^{1}_{1}\)-hardness of the satisfiability problem of QPALCs via a reduction from the recurring tiling problem [18]. **Definition 3.1**.: Let \(C\) be a finite set of _colours_. A _tile_ is a function \(\tau:\{\mathsf{north},\mathsf{south},\mathsf{east},\mathsf{west}\}\to C\). A finite set of tiles T is called an _instance_ of the tiling problem. A _solution_ to an instance of the tiling problem is a function2\(f:\mathbb{N}\times\mathbb{N}\to\text{T}\) such that for all \((i,j)\in\mathbb{N}\times\mathbb{N}\), Footnote 2: Throughout the paper we assume that \(0\in\mathbb{N}\). \[f(i,j)(\mathsf{north})=f(i,j+1)(\mathsf{south})\text{ and }f(i,j)(\mathsf{east })=f(i+1,j)(\mathsf{west}).\] **Definition 3.2**.: Let T be a finite set of tiles with a designated tile \(\tau^{*}\in\text{T}\). The _recurring tiling problem_ is the problem to determine whether there is a solution to instance T of the tiling problem such that \(\tau^{*}\) appears _infinitely_ often in the first column. We assume without loss of generality that the designated tile \(\tau^{*}\) occurs only in the first column. ### Encoding a Tiling For our construction we will require five propositional variables -- north, south, east, west and centre -- to designate the corresponding sides of tiles. Additionally, we will have designated propositional variables for each colour in \(C\), and for each tile \(\tau_{i}\in\mathrm{T}\) there is a propositional variable \(p_{i}\) that represents this tile. Finally, we will use \(p^{*}\) for the special \(\tau^{*}\). In our construction, we will represent each tile with (at least) five states: one for each of the four sides of a tile, and one for the centre. As for agents, we require only three of them for our construction. Agent \(s\), for square, cannot distinguish states within the same tile. Agent \(v\), for vertical, cannot distinguish between the northern part of one tile and the southern part of the tile above. Similarly, the _h_orizontal agent \(h\) cannot distinguish between the eastern and western parts of adjacent tiles. See Figure 1 for the depiction of an intended grid-like model. Let an instance \(\mathrm{T}\) of the recurring tiling problem be given. We start by construction of formula \(\Psi_{\mathrm{T}}\) that will be satisfied in a given model if and only if the model is grid-like. We will build up \(\Psi_{\mathrm{T}}\) step-by-step, defining useful subformulas along the way. Let \(\mathsf{Position}\) be the following set \(\mathsf{Position}:=\{\mathsf{north},\allowbreak\mathsf{south},\allowbreak \mathsf{east},\allowbreak\mathsf{west},\mathsf{centre}\}\). The first constraint, expressed by formula _one_colour_, is that each state is coloured by exactly one colour. To ensure that all five parts -- north, south, east, west, and centre -- are present in a current square, we state in _all_parts_ that in all squares the square agent \(s\) has access to all five relevant states. 
\[\mathit{one\_colour}:=\bigvee_{c\in C}\left(c\wedge\bigwedge_{d\in C\setminus\{c\}}\neg d\right)\qquad\qquad\mathit{all\_parts}:=\square_{s}\bigvee_{q\in\mathsf{Position}}q\wedge\bigwedge_{q\in\mathsf{Position}}\lozenge_{s}q\] The formulas _hor_ and _vert_ state that the relation \(h\) only allows us to move between east and west states, while \(v\) only allows movement between north and south states. \[\mathit{hor}:=\bigwedge_{q\in\{\mathsf{north},\mathsf{south},\mathsf{centre}\}}(q\to\square_{h}q)\qquad\qquad\mathit{vert}:=\bigwedge_{q\in\{\mathsf{east},\mathsf{west},\mathsf{centre}\}}(q\to\square_{v}q)\] Figure 1: Left: a representation of a single tile \(\tau_{i}\), where agent \(s\) has the universal relation within the dashed square, relations \(h\) and \(v\) are equivalences, and reflexive arrows are omitted. Each state is labelled by a set of propositional variables that are true there. Right: an example of a grid-like model that we construct in our proof. Each tile \(\tau\) has a similar structure as presented on the left of the figure. With _one_pos_ we force each state to satisfy exactly one propositional variable from Position, and with _one_tile_ we ensure that all states within the same tile are labelled by the tile proposition. \[\textit{one\_pos}:=\bigvee_{q\in\mathsf{Position}}\left(q\wedge\bigwedge_{q^{\prime}\in\mathsf{Position}\setminus\{q\}}\neg q^{\prime}\right)\qquad\textit{one\_tile}:=\bigvee_{\tau_{i}\in\mathsf{T}}\left(p_{i}\wedge\square_{s}p_{i}\wedge\bigwedge_{\tau_{j}\in\mathsf{T}\setminus\{\tau_{i}\}}\neg p_{j}\right)\] Next, we force each state in a tile to satisfy exactly one atom corresponding to its designated colour: \[\textit{state\_col}:=\bigwedge_{\tau_{i}\in\mathsf{T}}\left(p_{i}\to\bigwedge_{q\in\mathsf{Position}\setminus\{\mathsf{centre}\}}(q\to\tau_{i}(q))\right),\] where \(\tau_{i}(q)\) is the colour of the tile \(\tau_{i}\) on the side \(q\) (e.g. \(\tau_{i}(\mathsf{south})\) is the bottom colour of tile \(\tau_{i}\)). All the formulas considered so far deal with the representation of a single tile. We will use the following abbreviation: \[\psi_{\textit{tile}}:=\textit{one\_colour}\wedge\textit{all\_parts}\wedge\textit{hor}\wedge\textit{vert}\wedge\textit{one\_pos}\wedge\textit{one\_tile}\wedge\textit{state\_col}\] Adjoining tiles are required to have the same colour on the sides facing each other; we simulate this by requiring that agents \(h\) and \(v\) consider a current colour in the top and right directions. In such a way we also ensure that the grid is infinite in the positive quadrant. \[\textit{adj\_tiles}:=\bigwedge_{c\in\mathcal{C}}\left((\mathsf{north}\wedge c\to\Diamond_{v}\mathsf{south}\wedge\square_{v}c)\wedge(\mathsf{east}\wedge c\to\Diamond_{h}\mathsf{west}\wedge\square_{h}c)\right)\] We are concerned with the reduction from the \(\mathbb{N}\times\mathbb{N}\) recurring tiling problem, i.e. our grid will have left and bottom edges. We force the existence of a tile at position \((0,0)\) with a formula _init_. We also use four quantified formulas _up_, _right_, _right\&up_, and _up\&right_. Writing \(\Diamond_{up}\varphi\) for \(\Diamond_{s}(\mathsf{north}\wedge\Diamond_{v}(\mathsf{south}\wedge\varphi))\) and \(\Diamond_{right}\varphi\) for \(\Diamond_{s}(\mathsf{east}\wedge\Diamond_{h}(\mathsf{west}\wedge\varphi))\), with \(\square_{up}\) and \(\square_{right}\) the corresponding boxes, the formula _up_ requires that \(\Diamond_{up}\Diamond_{s}\mathsf{centre}\to\square_{up}\Diamond_{s}\mathsf{centre}\) holds after any announcement (this is the form used in the proof of Lemma 1 below), _right_ states the analogous property for \(\Diamond_{right}\), and _right\&up_ and _up\&right_ state it for the two orders of a combined right-and-up move (as used in the proof of Claim 2 below). Finally, we use the formula _no\_change_: \[no\_change:=\bigwedge_{q,q^{\prime}\in\mathsf{Position}}[!]((q\wedge\Diamond_{s}q^{\prime})\to(\square_{h}(q\to\Diamond_{s}q^{\prime})\wedge\square_{v}(q\to\Diamond_{s}q^{\prime})))\] The formula _hor_ states that unless we are in an east or west position, we cannot go to a different position using \(h\). Similarly, _vert_ states that unless we are in a north or south position we cannot use \(v\) to change position. The formula _no\_change_ then states that any move by relation \(h\) or \(v\) that does not change the position must lead to an indistinguishable tile. We abbreviate the formulas with quantifiers as \[\psi_{x\&y}:=up\wedge right\wedge right\&up\wedge up\&right\wedge no\_change\] In our reduction, we are interested in grids where a special tile appears infinitely often in the first column of the grid. The following formula requires that the special tile appears only in the leftmost column: \[tile\_left:=p^{*}\to\square_{s}(\mathsf{west}\to\square_{h}\mathsf{west})\] All of this completes the necessary requirements for the grid. Now, by adding a common knowledge modality for all agents, we force all of the aforementioned formulas to hold everywhere in the grid. \[\Psi_{\mathsf{T}}:=\blacksquare_{\{h,v,s\}}\left(\psi_{tile}\wedge adj\_tiles\wedge init\wedge\psi_{x\&y}\wedge tile\_left\right)\] Observe that \(\Psi_{\mathsf{T}}\) does not say anything about the special tile \(\tau^{*}\) appearing infinitely often in the first column. The formula merely requires that if there is a special tile, then it should appear in the first column. We first show that \(\Psi_{\mathsf{T}}\) forces a grid-like model, and only after that will we consider the (in)finite number of occurrences of the special tile. **Lemma 1**.: Let \(\mathsf{T}\) be an instance of the recurring tiling problem. If \(\mathsf{T}\) can tile \(\mathbb{N}\times\mathbb{N}\), then \(\Psi_{\mathsf{T}}\) is satisfiable. Proof.: Assume that there is a tiling of the \(\mathbb{N}\times\mathbb{N}\) plane with a finite set of tiles \(\mathsf{T}\). We construct model \(M=(S,\sim,V)\) satisfying \(\Psi_{\mathsf{T}}\) directly from the given tiling.
In particular,

* \(S=\mathbb{N}\times\mathbb{N}\times\{\mathsf{n},\mathsf{s},\mathsf{e},\mathsf{w},\mathsf{c}\}\),
* \(\sim_{s}=\{((i,j,\mathsf{t}),(i^{\prime},j^{\prime},\mathsf{t}^{\prime}))\mid i=i^{\prime}\text{ and }j=j^{\prime}\}\),
* \(\sim_{v}\) is the reflexive and symmetric closure of \(\{((i,j,\mathsf{n}),(i,j+1,\mathsf{s}))\mid i,j\in\mathbb{N}\}\),
* \(\sim_{h}\) is the reflexive and symmetric closure of \(\{((i,j,\mathsf{e}),(i+1,j,\mathsf{w}))\mid i,j\in\mathbb{N}\}\),
* for all \(\tau_{k}\in\mathsf{T}\), \(V(p_{k})=\{(i,j,\mathsf{t})\mid\tau_{k}\text{ is at }(i,j)\}\),
* for all \(c\in\mathcal{C}\), \(V(c)=\{(i,j,\mathsf{t})\mid\mathsf{t}\neq\mathsf{c}\text{ and the tile at }(i,j)\text{ has colour }c\text{ on side }\mathsf{t}\}\),
* for all \(l\in\mathsf{Position}\), \(V(l)=\{(i,j,\mathsf{t})\mid l\text{ corresponds to }\mathsf{t}\}\).

To argue that \(M_{(0,0,\mathsf{c})}\models\Psi_{\mathsf{T}}\) we first notice that, due to the fact that \(\mathsf{T}\) tiles the \(\mathbb{N}\times\mathbb{N}\) plane and by the construction of \(M\), subformulas of \(\Psi_{\mathsf{T}}\) that do not involve arbitrary announcements are straightforwardly satisfied. Now, consider the formula \(up\). For every \((i,j,\mathsf{t})\), there is at most one \((i^{\prime},j^{\prime},\mathsf{t}^{\prime})\) that is reachable by taking an \(s\)-step to a north state followed by a \(v\)-step to a south state, namely \((i^{\prime},j^{\prime},\mathsf{t}^{\prime})=(i,j+1,\mathsf{s})\). Furthermore, this property is retained in any submodel of \(M\). As a consequence, in any state of any submodel of \(M\), \(\Diamond_{up}\chi\) implies \(\square_{up}\chi\), for every \(\chi\). In particular, it follows that \(M_{(i,j,\mathsf{t})}\models[!](\Diamond_{up}\Diamond_{s}\mathsf{centre}\to\square_{up}\Diamond_{s}\mathsf{centre})\), i.e., \(M_{(i,j,\mathsf{t})}\models up\). Similar reasoning shows that \((i,j,\mathsf{t})\) satisfies the other conjuncts of \(\psi_{x\&y}\). Hence \(M_{(i,j,\mathsf{t})}\models\psi_{tile}\wedge adj\_tiles\wedge init\wedge\psi_{x\&y}\wedge tile\_left\) for all \((i,j,\mathsf{t})\), and thus \(M_{(0,0,\mathsf{c})}\models\Psi_{\mathsf{T}}\). The more complex part of the reduction is to show that if \(\Psi_{\mathsf{T}}\) is satisfiable, then a tiling exists. **Lemma 2**.: Let \(\mathsf{T}\) be an instance of the recurring tiling problem. If \(\Psi_{\mathsf{T}}\) is satisfiable, then \(\mathsf{T}\) can tile \(\mathbb{N}\times\mathbb{N}\). Proof.: Let \(M\) be such that \(M_{s}\models\Psi_{\mathsf{T}}\). The model \(M\) is partitioned by \(\sim_{s}\); we refer to these partitions as grid points, and label these points as follows.

* The grid point containing \(s\) is labelled \((0,0)\).
* If \(A\) and \(B\) are grid points, \(A\) is labelled \((i,j)\) and there is a north-state in \(A\) that is \(v\)-indistinguishable from a south-state in \(B\), then \(B\) is labelled \((i,j+1)\).
* If \(A\) and \(B\) are grid points, \(A\) is labelled \((i,j)\) and there is an east-state in \(A\) that is \(h\)-indistinguishable from a west-state in \(B\), then \(B\) is labelled \((i+1,j)\).

Note that a single grid point might have multiple labels. We say that \((i,j)\) is tiled with \(\tau_{i}\) if there is some grid point labelled with \((i,j)\) that contains a state where \(p_{i}\) holds. We start by noting that because the main connective of \(\Psi_{\mathsf{T}}\) is \(\blacksquare_{\{h,v,s\}}\), the formula holds in every labelled grid point. For every labelled grid point \(X\) and every \(x\in X\), we therefore have \(M_{x}\models\psi_{tile}\).
So \(X\) contains states for every direction, each labelled with exactly one colour that corresponds to the tile that holds on \(X\). We continue by proving the following claim. **Claim 1:** Let \(X\), \(A\) and \(B\) be grid points where \(X\) is labeled \((i,j)\) while \(A\) and \(B\) are both labeled \((i,j+k)\) by virtue of being \(k\)-steps to the north of \(X\). Then \(A\) and \(B\) are PALC-indistinguishable, in the sense that for every \(\chi\in\mathsf{PALC}\), if there is an \(a\in A\) such that \(M_{a}\models\chi\) then there is a \(b\in B\) such \(M_{b}\models\chi\) (and vice versa). **Proof of Claim 1:** By induction on \(k\). As base case, let \(k=1\) and suppose towards a contradiction that, for some \(\chi\in\mathsf{PALC}\) and \(a\in A\), \(M_{a}\models\chi\) while for every \(b\in B\), \(M_{b}\not\models\chi\). Consider then the formula \(\mathsf{centre}\to\Diamond_{s}\chi\). Every centre state in \(A\) satisfies this formula, while none of the centre states in \(B\) do. Hence, for every state \(x\in X\), \(M_{x}\models[\mathsf{centre}\to\Diamond_{s}\chi](\Diamond_{up}\Diamond_{s} \mathsf{centre}\wedge\neg\Box_{up}\Diamond_{s}\mathsf{centre})\). But that contradicts \(M_{x}\models up\). From this contradiction, we prove the base case \(k=1\). Now, suppose as induction hypothesis that \(k>1\) and that the claim holds for all \(k^{\prime}<k\). Again, suppose towards a contradiction that \(M_{a}\models\chi\) while \(M_{b}\not\models\chi\) for all \(b\in B\). Let \(A^{\prime}\) and \(B^{\prime}\) be grid points that lie \(k-1\) steps to the north of \(X\) and one step to the south of \(A\) and \(B\), respectively. Then for every \(a^{\prime}\in A^{\prime}\) and \(b^{\prime}\in B^{\prime}\), \(M_{a^{\prime}}\models\Diamond_{up}\Diamond_{s}\chi\) and \(M_{b^{\prime}}\models\Diamond_{up}\neg\Diamond_{s}\chi\). By the induction hypothesis, \(A^{\prime}\) and \(B^{\prime}\) are indistinguishable, so \(M_{a^{\prime}}\models\Diamond_{up}\Diamond_{s}\chi\wedge\Diamond_{up} \neg\Diamond_{s}\chi\). But then there are distinguishable grid points one step to the north of \(A^{\prime}\), contradicting the induction hypothesis. From this contradiction, we prove the induction step and thereby the claim. Similar reasoning shows that any two grid points \(A,B\) that are labeled \((i+k,j)\) by virtue of being \(k\) steps to the right of the same grid point \(X\) are indistinguishable. Now, we can prove the next claim. **Claim 2:** Let \(X\), \(A\) and \(B\) be grid points, where \(X\) is labelled \((i,j)\), \(A\) is labelled \((i+1,j+1)\) by virtue of being above \(A^{\prime}\) which is to the right of \(X\), and \(B\) is labelled \((i+1,j+1)\) by virtue of being to the right of \(B^{\prime}\) which is above \(B\). Then \(A\) and \(B\) are PALC-indistinguishable. **Proof of claim 2:** Suppose towards a contradiction that for some \(\chi\in\mathsf{PALC}\) and \(a\in A\) we have \(M_{a}\models\chi\), while \(M_{b}\not\models\chi\) for all \(b\in B\). Then for \(x\in X\) we have \(M_{x}\models[\mathsf{centre}\to\Diamond_{s}\chi](\Diamond_{right}\Diamond_{ up}\Diamond_{s}\mathsf{centre}\wedge\Diamond_{up}\Diamond_{ right}\neg\Diamond_{s}\mathsf{centre})\), contradicting \(M_{x}\models right\&up\). From Claim 1 it follows that any \(A\) and \(B\) that are labelled \((i,j)\) by virtue of being \(i\) steps to the right and then \(j\) steps up from \((0,0)\) are PALC-indistinguishable. Claim 2 then lets us commute the "up" and "right" moves. 
Any path to \((i,j)\) can be obtained from the path that first goes right \(i\) steps then up \(j\) steps by a finite sequence of such commutations. Hence any grid points \(A\) and \(B\) that are labelled \((i,j)\) are PALC-indistinguishable. The tile formulas \(p_{i}\), for every \(\tau_{i}\in\mathrm{T}\), are PALC-formulas, so there is exactly one tile \(\tau_{i}\) that is assigned to the grid point \((i,j)\). Furthermore, _state_col_ then guarantees that each side of a grid point has the colour corresponding to the tile, and _adj_tiles_ guaranteees that the tile colours match. This shows that if \(\Psi_{\mathrm{T}}\) is satisfiable, then \(\mathrm{T}\) can tile \(\mathbb{N}\times\mathbb{N}\). ### Encoding the Recurring Tile The final formula that is satisfied in a grid model if and only if a given tiling has a tile that occurs infinitely often in the first column would be \[\Psi_{\mathrm{T}}\wedge\blacksquare_{\{v,s\}}[\blacksquare_{\{h,s\}}\neg p^{*}]\neg \Psi_{\mathrm{T}}.\] In other words, the recurring tiling problem can be reduced to the APALC-satisfiability problem, where the reduction maps the instance \((\mathrm{T},\tau^{*})\) of the recurring tiling problem to the satisfiability of \(\Psi_{\mathrm{T}}\wedge\blacksquare_{\{v,s\}}[\blacksquare_{\{h,s\}}\neg p^{*}]\neg \Psi_{\mathrm{T}}\). Intuitively, the formula states that if we remove all rows with the special tile, then our model is no longer a grid. See Figure 2, where on the left we have a grid with the special grey tile \(\tau^{*}\) appearing infinitely often in the first column (every other tile in the first column is grey). Formula \(\blacksquare_{\{h,s\}}\neg p^{*}\) holds only in those squares of the grid that lie on rows without the special tile. Thus, announcing \(\blacksquare_{\{h,s\}}\neg p^{*}\) removes all rows that has the grey tile (see the right part of Figure 2). Since the grey tile appears infinitely often in the original grid, we have to remove an infinite number of rows after the announcement of \(\blacksquare_{\{h,s\}}\neg p^{*}\), thus ensuring that what is left of the original model is not a grid. **Theorem 1**.: Let \(\mathrm{T}\) be an instance of the tiling problem with a special tile \(\tau^{*}\in\mathrm{T}\). Set \(\mathrm{T}\) can tile \(\mathbb{N}\times\mathbb{N}\) with \(\tau^{*}\) appearing infinitely often in the first column if and only if \(\Psi_{\mathrm{T}}\wedge\blacksquare_{\{v,s\}}[\blacksquare_{\{h,s\}}\neg p^{*}]\neg \Psi_{\mathrm{T}}\) is satisfiable. Proof.: First, let us can extend the labelling from the proof of Lemma 2 as follows: * For every \(q\in\mathsf{Position}\), if \(A\) and \(B\) are grid points, \(A\) is labeled \((i,j)\) and there is a \(q\) state in \(A\) that is \(v\) or \(h\)-indistinguishable from a \(q\) state in \(B\), then \(B\) is labeled \((i,j)\). Figure 2: Left: An original grid with a special grey tile \(\tau^{*}\) appearing infinitely often in the first column. Right: The grid after the public announcement of \(\blacksquare_{\{h,s\}}\neg p^{*}\). Crossed-out rows are not preserved after the announcement. It follows from _no_change_ that this extended labelling retains the property that any two grid points with the same label are PALC-indistinguishable. Furthermore, from _hor_ and _vert_ it follows that every grid point that is reachable by \(h\), \(v\) and \(s\) is now labelled with some coordinates \((i,j)\). Hence we can identify the \(\{h,v,s\}\)-reachable grid points in any model of \(\Psi_{\text{T}}\) with \(\mathbb{N}\times\mathbb{N}\). 
Now, assume that the set \(\text{T}\) cannot tile the \(\mathbb{N}\times\mathbb{N}\) plane with the special tile \(\tau^{*}\in\text{T}\) appearing infinitely often in the first column. We argue that in this case \(\Psi_{\text{T}}\wedge\blacksquare_{\{v,s\}}[\blacksquare_{\{h,s\}}\neg p^{*}]\neg\Psi_{\text{T}}\) is not satisfiable. The first conjunct is straightforward. If \(\text{T}\) cannot tile the \(\mathbb{N}\times\mathbb{N}\) plane, then, by Lemma 2, \(\Psi_{\text{T}}\) is not satisfiable. So suppose that \(\text{T}\) can tile the plane, but only in such a way that \(\tau^{*}\) occurs finitely often. For every model \(M_{(0,0,\mathsf{c})}\) of \(\Psi_{\text{T}}\), there is then some \(k\in\mathbb{N}\) that is the last row in which \(p^{*}\) is true. The formula \(\blacksquare_{\{h,s\}}\neg p^{*}\) holds exactly on those rows where \(p^{*}\) does not hold in the first column. As a result, the update \([\blacksquare_{\{h,s\}}\neg p^{*}]\) does not remove any rows past row \(k\). The grid points \(\mathbb{N}\times\mathbb{N}_{>k}\) then still form a grid that is isomorphic to \(\mathbb{N}\times\mathbb{N}\), and that is tiled. See Figure 3 for a depiction of the situation. It follows that \(M_{(0,k,\mathsf{c})}\not\models[\blacksquare_{\{h,s\}}\neg p^{*}]\neg\Psi_{\text{T}}\), and therefore \(M_{(0,0,\mathsf{c})}\not\models\blacksquare_{\{v,s\}}[\blacksquare_{\{h,s\}}\neg p^{*}]\neg\Psi_{\text{T}}\). This is true for every model of \(\Psi_{\text{T}}\), so \(\Psi_{\text{T}}\wedge\blacksquare_{\{v,s\}}[\blacksquare_{\{h,s\}}\neg p^{*}]\neg\Psi_{\text{T}}\) is not satisfiable. If, on the other hand, \(\text{T}\) can tile the plane in such a way that \(\tau^{*}\) occurs infinitely often in the first column, then there is a model of \(\Psi_{\text{T}}\) where the modality \([\blacksquare_{\{h,s\}}\neg p^{*}]\) removes infinitely many rows, and therefore does not leave any infinite grid. So \(\Psi_{\text{T}}\wedge\blacksquare_{\{v,s\}}[\blacksquare_{\{h,s\}}\neg p^{*}]\neg\Psi_{\text{T}}\) is satisfiable. In the construction of \(\Psi_{\text{T}}\) and the proofs of Lemmas 1 and 2, we used the APALC quantifier \([!]\). We can prove similar results for the GALC and CALC quantifiers by substituting \([!]\) with \([\{h,v,s\}]\) and \([\langle\{h,v,s\}\rangle]\) correspondingly, and substituting \(\mathsf{PALC}\) with \(\mathsf{PALC}^{\{h,v,s\}}\). We get the hardness result from the \(\Sigma^{1}_{1}\)-completeness of the recurring tiling problem [19]. **Theorem 2**.: The satisfiability problem of QPALCs is \(\Sigma^{1}_{1}\)-hard. The \(\Sigma^{1}_{1}\)-hardness of the satisfiability problems of QPALCs, together with the fact that the class of \(\Sigma^{1}_{1}\) problems is strictly larger than the class of co-RE problems [24, Chapter 4], implies that the sets of validities of the logics are not RE, which, in turn, implies that QPALCs are not finitely axiomatisable. **Corollary 1**.: The set of valid formulas of QPALCs is neither RE nor co-RE. **Corollary 2**.: QPALCs do not have finitary axiomatisations. Figure 3: Left: An original grid with a special grey tile \(\tau^{*}\) appearing finitely often in the first column. Right: The grid after the public announcement of \(\blacksquare_{\{h,s\}}\neg p^{*}\). Crossed-out rows are not preserved after the announcement. A full \(\mathbb{N}\times\mathbb{N}\) grid that is still available after the announcement is depicted with thick lines. ## 4 Discussion The existence of finitary axiomatisations of any of APAL, GAL, and CAL is a long-standing open problem.
In this paper, we have showed that the satisfiability problem of the logics extended with common knowledge modality is \(\Sigma^{1}_{1}\)-hard, and thus they do not admit of finitary axiomatisations. Table 1 contains the overview of the known results, including those shown in this paper, and open questions. It is important to point out that the use of common knowledge is instrumental in our construction. Arguments from [15, 4] did not rely on common knowledge to enforce local grid properties globally, and instead the authors used an agent with the universal relation over the set of states. This approach is good enough if one wants to demonstrate the existence of a grid-like model. However, if we also require that the grid satisfies some property, like a special tile occurring infinitely often in the first column, then the presence of the global agent makes it harder to ensure this. The problem is that such an unrestrained relation may access other grids within the same model, and thus we may end up in the situation when the property is satisfied by a set of grids taken together and not by any single grid. Our construction is 'tighter' than those in [15, 4]. In particular, our vertical and _h_orizontal agents can'see' only one step ahead. This guarantees that we stay within the same grid. In order to force grid properties globally, we use common knowledge operators that allow us to traverse a given grid-like model in all directions. It is not yet clear how to have a 'tight' grid and still be able to traverse the model without common knowledge. With this work, apart from showing that QPALCs are \(\Sigma^{1}_{1}\)-hard, we also hope to have elucidated the exact obstacle one has to overcome in order to claim the same about QPALs. ### Acknowledgements We would like to thank the three anonymous reviewers for their encouraging comments and constructive suggestions, which helped us to improve the presentation of our result.
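To make the shape of the models used in the reduction concrete, the following toy sketch is ours and is not part of the paper: for a trivially consistent one-tile tiling it builds a finite fragment of the grid-like model from the proof of Lemma 1 and checks, by direct enumeration rather than with a modal model checker, the conditions expressed by the formulas _hor_, _vert_ and _adj_tiles_. All names in the code are our own.

```python
# Toy fragment of the Lemma 1 grid model (ours), with direct checks of the
# constraints corresponding to hor, vert and adj_tiles on a finite patch.
from itertools import product

N = 4                                    # (N+1)x(N+1) patch; checks run on the NxN core
SIDES = ("n", "s", "e", "w", "c")
tile = {"n": "a", "s": "a", "e": "b", "w": "b"}          # one tile with matching colours
tiling = {(i, j): tile for i, j in product(range(N + 1), repeat=2)}
states = [(i, j, t) for (i, j) in tiling for t in SIDES]

def succ_v(i, j, t):                     # agent v: north of a tile ~ south of the tile above
    out = [(i, j, t)]
    if t == "n" and (i, j + 1) in tiling: out.append((i, j + 1, "s"))
    if t == "s" and (i, j - 1) in tiling: out.append((i, j - 1, "n"))
    return out

def succ_h(i, j, t):                     # agent h: east of a tile ~ west of the tile to the right
    out = [(i, j, t)]
    if t == "e" and (i + 1, j) in tiling: out.append((i + 1, j, "w"))
    if t == "w" and (i - 1, j) in tiling: out.append((i - 1, j, "e"))
    return out

def colour(state):
    i, j, t = state
    return tiling[(i, j)].get(t)         # centre states carry no colour atom

# hor / vert: an h-step (v-step) from a north/south/centre (east/west/centre)
# state never changes the position atom.
hor_ok = all(t2 == t for (i, j, t) in states if t in ("n", "s", "c")
             for (_, _, t2) in succ_h(i, j, t))
vert_ok = all(t2 == t for (i, j, t) in states if t in ("e", "w", "c")
              for (_, _, t2) in succ_v(i, j, t))

# adj_tiles on the core: every north (east) state sees a south (west) state of
# the neighbouring tile, and that state carries the same colour.
def adj_ok(i, j):
    up = [w for w in succ_v(i, j, "n") if w[2] == "s"]
    right = [w for w in succ_h(i, j, "e") if w[2] == "w"]
    return bool(up and right
                and all(colour(w) == colour((i, j, "n")) for w in up)
                and all(colour(w) == colour((i, j, "e")) for w in right))

adj_ok_all = all(adj_ok(i, j) for i in range(N) for j in range(N))
print(hor_ok, vert_ok, adj_ok_all)       # True True True for a proper tiling
```

Replacing `tile` by one whose north and south colours differ makes the last check fail on the patch, mirroring how the conjunct _adj_tiles_ of \(\Psi_{\mathsf{T}}\) becomes unsatisfiable when no consistent tiling exists.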
2303.17193
Characterization of random features of chaotic eigenfunctions in unperturbed basis
In this paper, we study random features manifested in components of energy eigenfunctions of quantum chaotic systems, given in the basis of unperturbed, integrable systems. Based on semiclassical analysis, particularly on Berry's conjecture, it is shown that the components in classically allowed regions can be regarded as Gaussian random numbers in certain sense, when appropriately rescaled with respect to the average shape of the eigenfunctions. This suggests that, when a perturbed system changes from integrable to chaotic, deviation of the distribution of rescaled components in classically allowed regions from the Gaussian distribution may be employed as a measure for the ``distance'' to quantum chaos. Numerical simulations performed in the LMG model and the Dicke model show that this deviation coincides with the deviation of the nearest-level-spacing distribution from the prediction of random-matrix theory. Similar numerical results are also obtained in two models without classical counterpart.
Jiaozi Wang, Wen-ge Wang
2023-03-30T07:15:59Z
http://arxiv.org/abs/2303.17193v1
# Characterization of random features of chaotic eigenfunctions in unperturbed basis ###### Abstract In this paper, we study random features manifested in components of energy eigenfunctions of quantum chaotic systems, given in the basis of unperturbed, integrable systems. Based on semiclassical analysis, particularly on Berry's conjecture, it is shown that the components in classically allowed regions can be regarded as Gaussian random numbers in certain sense, when appropriately rescaled with respect to the average shape of the eigenfunctions. This suggests that, when a perturbed system changes from integrable to chaotic, deviation of the distribution of rescaled components in classically allowed regions from the Gaussian distribution may be employed as a measure for the "distance" to quantum chaos. Numerical simulations performed in the LMG model and the Dicke model show that this deviation coincides with the deviation of the nearest-level-spacing distribution from the prediction of random-matrix theory. Similar numerical results are also obtained in two models without classical counterpart. ## I Introduction A commonsense in the field of quantum chaos is that energy eigenfunctions (EFs) of chaotic systems should show certain random feature [1; 2; 3; 4], though their Hamiltonian matrices are deterministic and some of them even may show a sparse structure. This property has vast applications in various fields [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. In particular, it is of relevance to thermalization [16; 17; 18; 19; 20; 21; 22; 23; 24], a topic which has attracted renewed interest in recent years. According to Berry's conjecture, for EFs of chaotic systems expressed in the configuration space, their components in classically allowed regions can be regarded as being given from certain Gaussian random numbers [1]. Based on this conjecture, it is natural to expect that, when expanded in the bases of unperturbed integrable systems, the EFs should show certain random feature as well within appropriate regions. Indeed, numerical simulations revealed such a feature for main bodies of EFs (see, e.g., Ref.[25]). However, more detailed study showed that the distribution of components of EFs usually exhibit notable deviation from the Gaussian distribution, which is predicted by the random-matrix theory (RMT) (see, e.g., Ref.[5]). More recently, numerical simulations show that, if EFs are rescaled with respect to their average shape, the above-discussed deviation can be considerably reduced [26; 27]. This gives a clue to a solution to a long-standing problem in the field of quantum chaos, that is, in which way statistical properties of EFs may be employed to give a quantitative measure for the "distance" to chaos. The above-mentioned numerical simulations suggest that deviation of the distribution of rescaled components of EFs from the Gaussian distribution should be a candidate for such a measure of "distance". However, presently, the situation is not completely clear, because in some cases this measure shows notable deviations from results obtained from statistical properties of spectra, e.g., from deviation of the nearest-level-spacing distribution from the prediction of the RMT [26; 27]. In this paper, based on semiclassical analysis, particularly on the Berry's conjecture, we study random features manifested in components of EFs of chaotic systems in integrable bases. 
Our analysis shows that the distribution of the components in classically-allowed regions indeed should have a Gaussian form, under a rescaling procedure which is more appropriate than that adopted in Refs.[26; 27]. Our numerical simulations performed in the Lipkin-Meshkov-Glick (LMG) model and the Dicke model show that, adopting this new rescaling procedure, deviation of the distribution of components from the Gaussian distribution coincides quite well with that obtained from the statistics of spectra. We also study some models without any classical counterpart and find similar results. The paper is organised as follows. In Sec.II, a detailed semiclassical analysis is carried out for random features manifested in components of EFs of chaotic systems in integrable bases. Numerical simulations in two models with classical counterparts are discussed in Sec.III. Then, in Sec.IV, we discuss numerical simulations performed in two models without classical counterpart. Finally, conclusions and discussions are given in Sec.V. ## II Random features of chaotic EFs In Sec.II.1, based on semiclassical analysis we discuss random features of chaotic EFs. Then, making use of results obtained, we discuss a quantitative characterization of the random feature in Sec.II.2. ### Semiclassical analysis of chaotic EFs Consider a quantum system, which has an \(f\)-dimensional classical counterpart, with a Hamiltonian \[H=H_{0}+\lambda V, \tag{1}\] where \(H_{0}\) indicates the Hamiltonian of an integrable system and \(V\) is a perturbation. Within certain regime of the parameter \(\lambda\), the classical counterpart of the system \(H\) undergoes a chaotic motion. In this section, we consider a chaotic system \(H\). In terms of action-angle variables, \(H_{0}\) is written as \[H_{0}=\mathbf{d}\cdot\mathbf{I}+c_{0}, \tag{2}\] where \(\mathbf{I}=(I_{1},I_{2},\cdots,I_{f})\) is the action variable, \(\mathbf{d}\) is a parameter vector, \(\mathbf{d}=(d_{1},d_{2},\cdots,d_{f})\), and \(c_{0}\) is a constant parameter. In the quantum case, we use \(|\mathbf{n}\rangle\) to denote the eigenbasis of \(\mathbf{I}\), with \(\mathbf{I}|\mathbf{n}\rangle=\mathbf{I}_{\mathbf{n}}|\mathbf{n}\rangle\), where \(\mathbf{n}=(n_{1},n_{2},\cdots,n_{f})\) is an integer vector and \(\mathbf{I}_{\mathbf{n}}=\mathbf{n}\hbar\). The Hamiltonian \(H_{0}\) has a diagonal form in this basis with eigenvalues denoted by \(E_{\mathbf{n}}^{0}\), \[H_{0}|\mathbf{n}\rangle=E_{\mathbf{n}}^{0}|\mathbf{n}\rangle. \tag{3}\] We use \(|E_{\alpha}\rangle\) to denote eigenstates of \(H\) with eigenvalues \(E_{\alpha}\) in energy order, \[H|E_{\alpha}\rangle=E_{\alpha}|E_{\alpha}\rangle. \tag{4}\] The expansion of \(|E_{\alpha}\rangle\) in the basis \(|\mathbf{n}\rangle\) is written as \[|E_{\alpha}\rangle=\sum_{\mathbf{n}}C_{\alpha\mathbf{n}}|\mathbf{n}\rangle, \tag{5}\] with \(C_{\alpha\mathbf{n}}=\langle\mathbf{n}|E_{\alpha}\rangle\). Below in this section, we discuss random features manifested in the components \(C_{\alpha\mathbf{n}}\) and their statistical properties in chaotic systems. In terms of the wave functions of \(|E_{\alpha}\rangle\) and \(|\mathbf{n}\rangle\) in the momentum space, denoted by \(\psi_{\alpha}(\mathbf{p})\) and \(\psi_{\mathbf{n}}^{0}(\mathbf{p})\), respectively, the components \(C_{\alpha\mathbf{n}}\) are written as \[C_{\alpha\mathbf{n}}=\int(\psi_{\mathbf{n}}^{0}(\mathbf{p}))^{*}\psi_{\alpha}(\mathbf{p})d\bm {p}. 
\tag{6}\] Generically, a wave function \(\psi_{\alpha}(\mathbf{p})\) can be written in the following form, \[\psi_{\alpha}(\mathbf{p})=A_{\alpha}(\mathbf{p})\sqrt{\Pi_{\alpha}(\mathbf{p})}, \tag{7}\] where \(\Pi_{\alpha}(\mathbf{p})\) indicates local average of \(|\psi_{\alpha}(\mathbf{p})|^{2}\). Then, \(C_{\alpha\mathbf{n}}\) is written as \[C_{\alpha\mathbf{n}}=\int A_{\alpha}(\mathbf{p})(\psi_{\mathbf{n}}^{0}(\mathbf{p}))^{*}\sqrt{ \Pi_{\alpha}(\mathbf{p})}d\mathbf{p}. \tag{8}\] According to the Berry's conjecture [1], in a chaotic system the quantity \(A_{\alpha}(\mathbf{p})\) should have random phases. This implies that the components \(C_{\alpha\mathbf{n}}\) can be effectively regarded as some random numbers. In realistic physical models, the average shape of \(|C_{\alpha\mathbf{n}}|^{2}\) is usually not uniform with respect to the perturbed and unperturbed energies. Due to this nonuniformity, the statistical distribution of the components \(C_{\alpha\mathbf{n}}\) can not have a Gaussian shape [5]. But, if they are rescaled such that the effect of average shape of EFs is appropriately taken into account, it should be possible for their distribution to have a Gaussian form. Below, we derive a semiclassical expression for the average shape of \(|C_{\alpha\mathbf{n}}|^{2}\) that is suitable for this purpose. The Wigner function supplies a useful tool in semiclassical analysis of eigenstates. We use \(\psi_{\alpha}(\mathbf{r})\) and \(\psi_{\mathbf{n}}^{0}(\mathbf{r})\) to indicate the wave functions of \(|E_{\alpha}\rangle\) and \(|\mathbf{n}\rangle\) in the coordinate space, respectively. The Wigner function corresponding to \(\psi_{\alpha}(\mathbf{r})\), denoted by \(W_{\alpha}(\mathbf{p},\mathbf{q})\), is written as \[W_{\alpha}(\mathbf{p},\mathbf{q})=\frac{1}{(2\pi\hbar)^{f}}\int_{-\infty}^{\infty} \psi_{\alpha}^{*}(\mathbf{q}+\frac{\mathbf{r}}{2})\psi_{\alpha}(\mathbf{q}-\frac{\mathbf{r}}{2 })e^{i\mathbf{p}\cdot\mathbf{r}/\hbar}d\mathbf{r}, \tag{9}\] and similar for the Wigner function corresponding to \(\psi_{\mathbf{n}}^{0}(\mathbf{r})\), denoted by \(W_{\mathbf{n}}^{0}(\mathbf{p},\mathbf{q})\). As is known, in a chaotic system, the averaged Wigner function, with average taken within certain small regions of the phase space, has the following expression, [1; 2; 28; 29] \[\overline{W}_{\alpha}(\mathbf{p},\mathbf{q})=\frac{\delta(H(\mathbf{p},\mathbf{q})-E_{\alpha} )}{S(E_{\alpha})}, \tag{10}\] where \(S(E)\) represents the area of an energy surface with \(H(\mathbf{p},\mathbf{q})=E\), \[S(E)=\int d\mathbf{p}d\mathbf{q}\delta(E-H(\mathbf{p},\mathbf{q})). \tag{11}\] Equation (10) gives that \[\Pi_{\alpha}(\mathbf{p})=\frac{1}{S(E_{\alpha})}\int\delta(E_{\alpha}-H(\mathbf{p}, \mathbf{q}))d\mathbf{q}. \tag{12}\] Equation (10) implies that most eigenstates within a narrow energy window in a chaotic system should have close shapes. Therefore, when computing the average shape of \(|C_{\alpha\mathbf{n}}|^{2}\) for the purpose discussed above, one may perform an average within such a narrow energy window. For the convenience in discussion, we write a coarse-grained \(\delta\)-function as \(\delta_{\epsilon}(E)\), \[\delta_{\epsilon}(E)=\begin{cases}\frac{1}{\epsilon}&E\in[-\frac{\epsilon}{2},\frac{\epsilon}{2}],\\ 0&\text{otherwise},\end{cases} \tag{13}\] where \(\epsilon\) is a small parameter. 
The choice of energy window \(\epsilon\) should satisfy the following requirements: It is small in the classical case such that the energy surface almost does not change within the window, while, it is sufficiently large in the quantum case such that many energy levels are included within the window. Then, the average shape of EFs, denoted by \(\langle|C_{\alpha\mathbf{n}}|^{2}\rangle\), is computed by \[\langle|C_{\alpha\mathbf{n}}|^{2}\rangle=\frac{1}{N_{E_{\alpha}}}\sum_{\alpha^{ \prime}}|C_{\alpha^{\prime}\mathbf{n}}|^{2}\delta_{\epsilon}(E_{\alpha^{\prime}}-E _{\alpha}), \tag{14}\] where \[N_{E_{\alpha}}=\sum_{\alpha^{\prime}}\delta_{\epsilon}(E_{\alpha^{\prime}}-E _{\alpha}). \tag{15}\] In order to derive an explicit expression for \(\langle|C_{\alpha\mathbf{n}}|^{2}\rangle\), we make use of the following well-known expression of \(|C_{\alpha\mathbf{n}}|^{2}\), \[|C_{\alpha\mathbf{n}}|^{2}=(2\pi\hbar)^{f}\int d\mathbf{p}d\mathbf{q}W_{\alpha}(\mathbf{p},\bm {q})W^{0}_{\mathbf{n}}(\mathbf{p},\mathbf{q}). \tag{16}\] Let us divide the phase space into small cells, denoted by \(c_{\sigma}\) with a label \(\sigma\), each having a volume \(\delta\Omega\), meanwhile, keep the ratio \(\delta\Omega/\hbar^{f}\) large such that there are many quantum states "lying" within each small cell. Then, \(|C_{\alpha\mathbf{n}}|^{2}\) is written as \[|C_{\alpha\mathbf{n}}|^{2}=(2\pi\hbar)^{f}\sum_{\sigma}\int_{c_{\sigma}}d\mathbf{p}d \mathbf{q}W_{\alpha}(\mathbf{p},\mathbf{q})W^{0}_{\mathbf{n}}(\mathbf{p},\mathbf{q}). \tag{17}\] Substituting Eq.(17) into Eq.(14) and performing the summation over the perturbed states \(|E_{\alpha^{\prime}}\rangle\), one gets that \[\langle|C_{\alpha\mathbf{n}}|^{2}\rangle=(2\pi\hbar)^{f}\sum_{\sigma}\int_{c_{ \sigma}}d\mathbf{p}d\mathbf{q}\langle W_{\alpha}(\mathbf{p},\mathbf{q})\rangle W^{0}_{\mathbf{n}} (\mathbf{p},\mathbf{q}), \tag{18}\] where \(\langle W_{\alpha}(\mathbf{p},\mathbf{q})\rangle\) indicates the average of \(W_{\alpha}(\mathbf{p},\mathbf{q})\) over perturbed states within a small energy window \(\epsilon\). As many energy levels are included within the window \(\epsilon\), \(\langle W_{\alpha}(\mathbf{p},\mathbf{q})\rangle\) should vary slowly within each small cell \(c_{\sigma}\), such that \[\langle W_{\alpha}(\mathbf{p},\mathbf{q})\rangle\simeq\langle W_{\alpha}(\mathbf{p}_{ \sigma},\mathbf{q}_{\sigma})\rangle\quad\text{ for }(\mathbf{p},\mathbf{q})\in c_{\sigma}, \tag{19}\] where \((\mathbf{q}_{\sigma},\mathbf{p}_{\sigma})\) indicates the center of \(c_{\sigma}\). Then, Eq.(18) gives that \[\langle|C_{\alpha\mathbf{n}}|^{2}\rangle\simeq(2\pi\hbar)^{f}\sum_{\sigma}\langle W _{\alpha}(\mathbf{p}_{\sigma},\mathbf{q}_{\sigma})\rangle\overline{W}^{0}_{\mathbf{n}}( \mathbf{p}_{\sigma},\mathbf{q}_{\sigma})\delta\Omega, \tag{20}\] where \(\overline{W}^{0}_{\mathbf{n}}(\mathbf{p}_{\sigma},\mathbf{q}_{\sigma})\) represents the average of the Wigner function of the integrable system within the cell \(c_{\sigma}\), \[\overline{W}^{0}_{\mathbf{n}}(\mathbf{p}_{\sigma},\mathbf{q}_{\sigma})=\frac{1}{\delta \Omega}\int_{c_{\sigma}}W^{0}_{\mathbf{n}}(\mathbf{p},\mathbf{q})d\mathbf{p}d\mathbf{q}. 
\tag{21}\] Due to the classical smallness and quantum mechanical largeness of the energy windows \(\epsilon\) discussed previously, \(\langle W_{\alpha}(\mathbf{p},\mathbf{q})\rangle\) in Eq.(18) obeys Eq.(10) in an approximate way, with \[\langle W_{\alpha}(\mathbf{p},\mathbf{q})\rangle\simeq\overline{W}_{\alpha}(\mathbf{p},\mathbf{q}), \tag{22}\] and its dependence on \(\epsilon\) can be neglected. It is known that [1] \[\overline{W}^{0}_{\mathbf{n}}(\mathbf{p},\mathbf{q})=\frac{\delta(\mathbf{I}(\mathbf{p},\mathbf{q})-\mathbf{I}_{\mathbf{n}})}{(2\pi)^{f}}. \tag{23}\] Substituting Eqs.(22) and (23) into Eq.(20) and noting that the smallness of the cells \(c_{\sigma}\) enables one to change the summation over \(\sigma\) back to the integration over phase space, one gets the following semiclassical expression, \[\langle|C_{\alpha\mathbf{n}}|^{2}\rangle\simeq\hbar^{f}\ \Pi(E_{\alpha},\mathbf{I}_{\mathbf{n}}), \tag{24}\] where \[\Pi(E,\mathbf{I})=\frac{S(E,\mathbf{I})}{S(E)}, \tag{25}\] \[S(E,\mathbf{I})=\int d\mathbf{p}d\mathbf{q}\delta(E-H(\mathbf{p},\mathbf{q}))\delta(\mathbf{I}-\mathbf{I}(\mathbf{p},\mathbf{q})). \tag{26}\] Here, \(S(E,\mathbf{I})\) indicates the overlap of the energy surface of \(H(\mathbf{p},\mathbf{q})=E\) and the torus of \(\mathbf{I}(\mathbf{p},\mathbf{q})=\mathbf{I}\). Since Eq.(10) works for classically allowed regions only, so does Eq.(24). Sometimes, quantities like \(\Pi(E,\mathbf{I})\) are called the classical analog of averaged EFs [26; 30; 31]. Finally, we consider rescaled components denoted by \(R_{\alpha\mathbf{n}}\), defined by \[R_{\alpha\mathbf{n}}=\frac{C_{\alpha\mathbf{n}}}{\sqrt{\langle|C_{\alpha\mathbf{n}}|^{2}\rangle}}. \tag{27}\] The discussions given above show that this quantity \(R_{\alpha\mathbf{n}}\) can be regarded as a Gaussian random number with mean zero. Note that \(\langle|R_{\alpha\mathbf{n}}|^{2}\rangle=1\) according to Eq.(27). ### A measure for "distance" to quantum chaos Let us use \(f(R)\) to denote the distribution of \(R_{\alpha\mathbf{n}}\). According to the results given in the above section, for a chaotic system, \(f(R)\) should have a Gaussian form, i.e., \[f(R)=f_{G}(R), \tag{28}\] where \(f_{G}(R)\) is the Gaussian distribution, \[f_{G}(R)=\frac{1}{\sqrt{2\pi}}\exp(-R^{2}/2). \tag{29}\] In the RMT, the Gaussian distribution is predicted directly for components of EFs [4]. But, for Hamiltonians in realistic models with chaotic classical counterparts, as discussed above, it is the distribution of the rescaled components \(R_{\alpha\mathbf{n}}\) that should have a Gaussian form. On the other hand, in a nearly integrable system, the quantity \(A_{\alpha}(\mathbf{p})\) on the rhs of Eq.(8) does not have random phases and EFs with close energies may have quite different shapes. As a result, the distribution \(f(R)\) in nearly integrable systems should usually show notable deviation from \(f_{G}(R)\). The above discussions suggest that the deviation of \(f(R)\) from \(f_{G}(R)\) may be employed as a measure for the "distance" to quantum chaos. In order to quantitatively characterize the deviation, one may consider a quantity \(\Delta_{EF}\) defined by \[\Delta_{EF}=\int|I_{f}(R)-I_{f_{G}}(R)|dR, \tag{30}\] where \(I_{f}(R)\) and \(I_{f_{G}}(R)\) indicate the cumulative distributions of \(f(R)\) and \(f_{G}(R)\), respectively, e.g., \(I_{f}(R)=\int_{-\infty}^{R}drf(r)\). As is well known, cumulative distributions usually exhibit smaller fluctuations than the original distributions.
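To make this procedure concrete, the block below gives a minimal numerical sketch (an illustration, not code from this work) of the rescaling in Eqs. (14) and (27) and of the measure \(\Delta_{EF}\) in Eq. (30). It assumes the components are stored as a matrix `C[alpha, n]` together with the perturbed energies `E[alpha]`; the choice of the window \(\epsilon\) and the restriction to classically allowed components are left to the user (the nonzero-average cut below is only a stand-in for that restriction).

```python
import numpy as np
from math import erf

def rescaled_components(E, C, alpha, eps):
    """Rescaled components R_{alpha n} of Eq. (27).

    The average shape <|C_{alpha n}|^2> is computed as in Eq. (14): a mean over
    perturbed states alpha' with |E_{alpha'} - E_alpha| <= eps/2.
    C has shape (num_alpha, num_n); E has shape (num_alpha,)."""
    window = np.abs(E - E[alpha]) <= eps / 2
    avg_shape = np.mean(np.abs(C[window]) ** 2, axis=0)
    keep = avg_shape > 0          # stand-in for the classically allowed region
    return C[alpha, keep] / np.sqrt(avg_shape[keep])

def delta_EF(R, grid=np.linspace(-6, 6, 2001)):
    """Measure of Eq. (30): integrated difference between the empirical
    cumulative distribution of R and the cumulative Gaussian distribution."""
    I_f = np.array([np.mean(R <= r) for r in grid])
    I_G = np.array([0.5 * (1 + erf(r / np.sqrt(2))) for r in grid])
    return np.trapz(np.abs(I_f - I_G), grid)
```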
In the field of quantum chaos, the most-often used criterion for quantum chaos is given by statistical properties of spectra, e.g., by closeness of the nearest-level-spacing distribution \(P(s)\) to the prediction of RMT [4]. It is known that the following distribution \(P_{W}(s)\), which is obtained from Wigner's surmise, \[P_{W}(s)=\frac{\pi}{2}s\exp(-\frac{\pi}{4}s^{2}), \tag{31}\] gives a good approximation to the nearest-level-spacing distribution of the Gaussian orthogonal ensemble (GOE) in the large size limit. Quantitatively, the above-discussed closeness can be characterized by the following quantity \(\Delta_{E}\), \[\Delta_{E}=\int|I_{P}(s)-I_{P_{W}}(s)|ds, \tag{32}\] where \(I_{P}(s)\) and \(I_{P_{W}}(s)\) are cumulative distributions of \(P(s)\) and \(P_{W}(s)\), respectively. In previous numerical simulations, deviation of the distribution of another rescaled components, denoted by \(R^{\prime}_{\alpha\mathbf{n}}\), from the Gaussian distribution was studied as a measure for the "distance" to chaos, where \(R^{\prime}_{\alpha\mathbf{n}}=C_{\alpha\mathbf{n}}/\sqrt{\langle|C_{\alpha\mathbf{n}}|^{2} \rangle^{\prime}}\)[26; 27; 36]. Here, in the computation of \(\langle|C_{\alpha\mathbf{n}}|^{2}\rangle^{\prime}\), in addition to an average over perturbed states with energies close to \(E_{\alpha}\), a further average is taken over unperturbed states \(|\mathbf{n}^{\prime}\rangle\) with unperturbed energies close to \(E_{\mathbf{n}}^{0}\) by a small quantity \(\epsilon_{0}\). Specifically, \[\langle|C_{\alpha\mathbf{n}}|^{2}\rangle^{\prime}=\sum_{\alpha^{\prime},\mathbf{n}^{ \prime}}\frac{|C_{\alpha^{\prime}\mathbf{n}^{\prime}}|^{2}}{N_{E_{\alpha}}N_{E_{ \mathbf{n}}}}\delta_{\epsilon}(E_{\alpha^{\prime}}-E_{\alpha})\delta_{\epsilon_{0} }(E_{\mathbf{n}^{\prime}}-E_{\mathbf{n}}), \tag{33}\] where \(N_{E_{\alpha}}\) is defined in Eq.(15) and \[N_{E_{\mathbf{n}}}=\sum_{\mathbf{n}^{\prime}}\delta_{\epsilon_{0}}(E_{\mathbf{n}^{\prime} }-E_{\mathbf{n}}). \tag{34}\] It was found that, in some cases (not rare) in which the classical counterparts undergo chaotic motion and the distributions \(P(s)\) are quite close to \(P_{W}(s)\), the distributions of \(R^{\prime}_{\alpha\mathbf{n}}\), denoted by \(g(R^{\prime})\), deviate notably from the Gaussian distribution. In view of the semiclassical analysis given in the previous section, it is understandable that deviation of \(g(R^{\prime})\) from \(f_{G}(R^{\prime})\) may be larger than that of \(f(R)\) from \(f_{G}(R)\). In fact, unperturbed basis states \(|\mathbf{n}\rangle\) with close energies \(E_{\mathbf{n}}^{0}\) may correspond to quite different values of \(\mathbf{I_{\mathbf{n}}}\), meanwhile, according to Eq.(24), the values of \(\langle|C_{\alpha\mathbf{n}}|^{2}\rangle\) of those \(\mathbf{n}\), for which \(\mathbf{I_{\mathbf{n}}}\) are far from each other, are usually quite different. As a result, taking average over unperturbed basis states with close \(E_{\mathbf{n}}^{0}\) may drive the distribution of rescaled components away from the Gaussian distribution. Therefore, in order to obtain rescaled components that have a Gaussian distribution, no average should be taken over the unperturbed energies \(E_{\mathbf{n}}^{0}\). We would remark that, when the torus of \(\mathbf{I}=\mathbf{I_{\mathbf{n}}}\) does not change rapidly with \(\mathbf{n}\), an average over a neighborhood of \(\mathbf{n}\) is allowed. We did not mention this averaging procedure in the above discussions, because it is unnecessary in the derivation of Eq.(24). 
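For comparison, the spectral measure \(\Delta_{E}\) of Eq. (32) can be estimated along the same lines; the following sketch (again only illustrative) assumes the energy levels have already been unfolded to unit mean spacing and uses the closed-form cumulative distribution of the Wigner surmise in Eq. (31), \(I_{P_{W}}(s)=1-\exp(-\pi s^{2}/4)\).

```python
import numpy as np

def delta_E(unfolded_levels, grid=np.linspace(0.0, 5.0, 2001)):
    """Measure of Eq. (32): integrated difference between the cumulative
    nearest-level-spacing distribution and the cumulative Wigner surmise.

    `unfolded_levels` must already be unfolded to unit mean spacing."""
    s = np.diff(np.sort(unfolded_levels))            # nearest-level spacings
    I_P = np.array([np.mean(s <= x) for x in grid])  # empirical cumulative
    I_W = 1.0 - np.exp(-np.pi * grid**2 / 4.0)       # cumulative of Eq. (31)
    return np.trapz(np.abs(I_P - I_W), grid)
```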
## III Numerical simulations in two models with classical counterparts In order to test the above analytical results, numerical simulations have been performed in two models possessing classical counterparts, the LMG model and the Dicke model. In this section, we first briefly discuss the two models and then present numerical results about the distribution \(f(R)\) and about the suggested "distance" to chaos, namely, \(\Delta_{EF}\), in comparison with other "distances". ### Models The first model we employ is a three-orbital LMG model [32]. This model is composed of \(\Omega\) particles, occupying three energy levels labeled by \(r=0,1,2\), each with \(\Omega\)-degeneracy. Here, we are interested in the collective motion of this model, for which the dimension of the Hilbert space is \(\frac{1}{2}(\Omega+1)(\Omega+2)\). We use \(\epsilon_{r}\) to denote the energy of the \(r\)-th level and, for brevity, we set \(\epsilon_{0}=0\). The Hamiltonian of the model, in the form of Eq.(1), has [7] \[H_{0}=\epsilon_{1}K_{11}+\epsilon_{2}K_{22}, \tag{35}\] \[V=\sum_{t=1}^{4}\mu_{t}V^{(t)}, \tag{36}\] where \[V^{(1)}=K_{10}K_{10}+K_{01}K_{01},\ V^{(2)}=K_{20}K_{20}+K_{02}K_{02},\] \[V^{(3)}=K_{21}K_{20}+K_{02}K_{12},\ V^{(4)}=K_{12}K_{10}+K_{01}K_{21}. \tag{37}\] Here, the operators \(K_{rs}\) are defined by \[K_{rs}=\sum_{\gamma=1}^{\Omega}a_{r\gamma}^{\dagger}a_{s\gamma},\quad r,s=0,1,2, \tag{38}\] where \(a_{r\gamma}^{\dagger}\) and \(a_{r\gamma}\) are fermionic creation and annihilation operators obeying the usual anti-commutation relations. For symmetric states, the operators \(K_{rs}\) can be written in terms of bosonic creation and annihilation operators \(b_{r}^{\dagger}\) and \(b_{r}\) [37], \[K_{rs}=b_{r}^{\dagger}b_{s},\quad K_{r0}=K_{0r}^{\dagger}=b_{r}^{\dagger}\sqrt{\Omega-b_{1}^{\dagger}b_{1}-b_{2}^{\dagger}b_{2}} \tag{39}\] for \(r,s=1,2\). Under the transformation, \[b_{r}^{\dagger}=\sqrt{\frac{\Omega}{2}}(q_{r}-ip_{r}),\quad b_{r}=\sqrt{\frac{\Omega}{2}}(q_{r}+ip_{r}) \tag{40}\] for \(r=1,2\), it is easy to verify that \(q_{r}\) and \(p_{s}\) obey the following commutation relation, \[[q_{r},p_{s}]=\frac{i}{\Omega}\delta_{rs}. \tag{41}\] Hence, \(1/\Omega\) plays the role of an effective Planck constant, \[\hbar_{\rm eff}=\frac{1}{\Omega}. \tag{42}\]
Figure 3: The distribution \(f(R)\) (open circles) and \(g(R^{\prime})\) (solid blocks with dashed lines) for \(\lambda=1\) in the LMG model with (a) \(\Omega=80\) and (b) \(\Omega=1000\). The solid curves indicate the Gaussian distribution \(f_{G}(R)\).
Figure 4: Similar to Fig.3, but for the Dicke model with \(\lambda=1\), (a) \(N=80\), (b) \(N=1000\).
It is straightforward to find that the classical counterpart of the model has the following Hamiltonian [5; 7], \[H(\mathbf{p},\mathbf{q})=H_{0}(\mathbf{p},\mathbf{q})+\lambda V(\mathbf{p},\mathbf{q}), \tag{43}\] where \[H_{0}(\mathbf{p},\mathbf{q})=\frac{\epsilon_{1}^{\prime}}{2}(p_{1}^{2}+q_{1}^{2})+\frac{\epsilon_{2}^{\prime}}{2}(p_{2}^{2}+q_{2}^{2}),\] \[V(\mathbf{p},\mathbf{q})=\mu_{1}^{\prime}(q_{1}^{2}-p_{1}^{2})(1-G/2)+\mu_{2}^{\prime}(q_{2}^{2}-p_{2}^{2})(1-G/2)\] \[+\frac{\mu_{3}^{\prime}}{\sqrt{2}}[(q_{2}^{2}-p_{2}^{2})q_{1}-2q_{2}p_{1}p_{2}]\sqrt{1-G/2}\] \[+\frac{\mu_{4}^{\prime}}{\sqrt{2}}[(q_{1}^{2}-p_{1}^{2})q_{2}-2q_{1}p_{1}p_{2}]\sqrt{1-G/2}, \tag{44}\] with \(G=q_{1}^{2}+p_{1}^{2}+q_{2}^{2}+p_{2}^{2}\leq 2\).
Here, \(\epsilon_{1}^{\prime}=\epsilon_{1}\Omega,\epsilon_{2}^{\prime}=\epsilon_{2}\Omega,\mu_{1}^{\prime}=\mu_{1}\Omega^{2},\mu_{2}^{\prime}=\mu_{2}\Omega^{2},\mu_{3}^{\prime}=\mu_{3}\Omega^{2}\), and \(\mu_{4}^{\prime}=\mu_{4}\Omega^{2}\). In our numerical simulations, we set \(\epsilon_{1}^{\prime}=44.1,\epsilon_{2}^{\prime}=64.5,\mu_{1}^{\prime}=62.1,\mu_{2}^{\prime}=70.2,\mu_{3}^{\prime}=76.5\), and \(\mu_{4}^{\prime}=65.7\). Under this choice of the parameters, for a fixed value of \(\lambda\), different values of \(\Omega\) correspond to the same classical counterpart. The second model is a single-mode Dicke model [33; 34], which describes the interaction between a single bosonic mode and a collection of \(N\) two-level atoms. The system can be described in terms of the collective operator \(\mathbf{J}\) for the \(N\) atoms, with \[J_{z}=\sum_{i=1}^{N}s_{z}^{(i)},\ \ J_{\pm}=\sum_{i=1}^{N}s_{\pm}^{(i)}, \tag{45}\] where \(s_{\pm}^{(i)}=s_{x}^{(i)}\pm is_{y}^{(i)}\) and \(s_{x(y,z)}^{(i)}\) are Pauli matrices divided by 2 for the \(i\)-th atom. The Dicke Hamiltonian is written as [34] \[H=\omega_{0}J_{z}+\omega a^{\dagger}a+\frac{\lambda}{\sqrt{N}}\mu(a^{\dagger}+a)(J_{+}+J_{-}), \tag{46}\] which can also be written in the form of \(H=H_{0}+\lambda V\). The operators \(J_{z}\) and \(J_{\pm}\) obey the usual commutation rules for the angular momentum, \[[J_{z},J_{\pm}]=\pm J_{\pm},\ \ [J_{+},J_{-}]=2J_{z}. \tag{47}\] The Hilbert space of this model is spanned by vectors \(|j,m\rangle\) with \(m=-j,-j+1,\cdots,j-1,j\), known as Dicke states. They are eigenstates of \(\mathbf{J}^{2}\) and \(J_{z}\), with \(J_{z}|j,m\rangle=m|j,m\rangle\) and \(\mathbf{J}^{2}|j,m\rangle=j(j+1)|j,m\rangle\). Below, we take \(j\) as its maximal value, namely, \(j=N/2\); it is a constant of motion, since \([\mathbf{J}^{2},H]=0\). Another conserved observable in the Dicke model is the parity \(\Pi\), given by \(\Pi=\exp(i\pi\hat{N})\), where \(\hat{N}=a^{\dagger}a+J_{z}+j\) is an operator for the "excitation number", counting the excitation quanta in the system. In our numerical study, we consider the subspace with \(\Pi=+1\).
Figure 5: “Distance” to chaos in the LMG model and the Dicke model. The measures \(\Delta_{EF}\) (solid squares) in Eq.(30) and \(\Delta_{EF}^{\prime}\) (open circles) in Eq.(59) are computed from statistical properties of EFs and the measure \(\Delta_{E}\) (open triangles) in Eq.(32) is computed from the statistics of spectra. (a) The LMG model with \(\Omega=80\), (b) the LMG model with \(\Omega=1000\), (c) the Dicke model with \(N=80\), and (d) the Dicke model with \(N=1000\). The effective Planck constants are given by \(1/\Omega\) and \(1/N\), respectively, in the two models. The two measures \(\Delta_{EF}\) and \(\Delta_{E}\) give almost the same results for the “distance” to chaos, when the systems are not far from chaos.
Figure 6: “Distances” to classical and quantum chaos in the LMG model and the Dicke model. The measure \(\Delta_{EF}\) (solid squares) is the same as that in Fig.5. The measure \(\Delta_{cl}\) (open diamonds) is defined in Eq.(60) and was computed from properties of the corresponding classical phase spaces. (a) The LMG model with \(\Omega=80\), (b) the LMG model with \(\Omega=1000\), (c) the Dicke model with \(N=80\), and (d) the Dicke model with \(N=1000\).
Making use of the Holstein-Primakoff representation of the angular momentum operators, \[J_{+}=b^{\dagger}\sqrt{2j-b^{\dagger}b},\quad J_{-}=\sqrt{2j-b^{\dagger}b}\;b,\] \[J_{z}=(b^{\dagger}b-j), \tag{48}\] where \(b\) and \(b^{\dagger}\) are bosonic operators satisfying \([b,b^{\dagger}]=1\), the Hamiltonian can be further written in the following form, \[H=\omega_{0}(b^{\dagger}b-j)+\omega a^{\dagger}a\] \[+\lambda\mu(a^{\dagger}+a)\left(b^{\dagger}\sqrt{1-\frac{b^{\dagger}b}{2j}}+\sqrt{1-\frac{b^{\dagger}b}{2j}}b\right). \tag{49}\] We write Fock states related to \(a^{\dagger}\) and \(b^{\dagger}\) as \(|n_{a}\rangle\) and \(|n_{b}\rangle\), respectively, for which \[a^{\dagger}a|n_{a}\rangle=n_{a}|n_{a}\rangle,\quad b^{\dagger}b|n_{b}\rangle=n_{b}|n_{b}\rangle. \tag{50}\] According to Eq.(48), \(n_{b}\) should be truncated at \((n_{b})_{\rm max}=N\). In numerical simulations, we set \((n_{a})_{\rm max}=N\). Other parameters are \(\omega_{0}=\omega=1/N\) and \(\mu=1/N\). Under the transformation \[\begin{cases}b^{\dagger}=\sqrt{\frac{N}{2}}(q_{1}-ip_{1}),\quad b=\sqrt{\frac{N}{2}}(q_{1}+ip_{1}),\\ a^{\dagger}=\sqrt{\frac{N}{2}}(q_{2}-ip_{2}),\quad a=\sqrt{\frac{N}{2}}(q_{2}+ip_{2}),\end{cases} \tag{51}\] one finds that \[[q_{r},p_{s}]=\frac{i}{N}\delta_{rs} \tag{52}\] for \(r=1,2\), and, hence, gets an effective Planck constant \[\hbar_{\rm eff}=\frac{1}{N}. \tag{53}\] The Hamiltonian of the classical counterpart of the model is written as \[H(\mathbf{p},\mathbf{q})=H_{0}(\mathbf{p},\mathbf{q})+\lambda V(\mathbf{p},\mathbf{q}), \tag{54}\] where \[H_{0}(\mathbf{p},\mathbf{q})=\frac{1}{2}(q_{1}^{2}+p_{1}^{2}+q_{2}^{2}+p_{2}^{2}-1),\] \[V(\mathbf{p},\mathbf{q})=2q_{1}q_{2}\sqrt{1-\frac{q_{1}^{2}+p_{1}^{2}}{2}}. \tag{55}\] ### Numerical results In this section, we discuss results of numerical simulations performed in the LMG model and the Dicke model. We first test the validity of the semiclassical result given in Eq.(24). As seen in Fig.1 and Fig.2, the average shape \(\langle|C_{\alpha\mathbf{n}}|^{2}\rangle\) and its classical analog \(\Pi(E_{\alpha},\mathbf{I_{n}})\) indeed show similar features in these two models. We have computed the difference between the two shapes, given by \[d_{c}=\sum_{\mathbf{n}}|\langle|C_{\alpha\mathbf{n}}|^{2}\rangle-\Pi_{N}(E_{\alpha},\mathbf{I_{n}})|, \tag{56}\] where \(\Pi_{N}(E_{\alpha},\mathbf{I_{n}})\) is the normalised \(\Pi(E_{\alpha},\mathbf{I_{n}})\). We found that \(d_{c}=0.065\) in the LMG model and \(d_{c}=0.08\) in the Dicke model. Then, we discuss properties of the distribution \(f(R)\) for components \(R_{\alpha\mathbf{n}}\) in classically allowed regions with \(\Pi(E_{\alpha},\mathbf{I_{n}})\neq 0\). We found that this distribution is indeed quite close to the Gaussian form \(f_{G}(R)\), when the underlying classical dynamics is chaotic, as illustrated in Fig.3 and Fig.4 with \(\lambda=1\). In the computation of \(f(R)\), 100 EFs in the middle energy region were used. The energy windows \(\epsilon\) are as follows: In the LMG model, \(\epsilon\approx 3\) for \(\Omega=80\) and \(\epsilon\approx 0.02\) for \(\Omega=1000\), in contrast to the total energy domain \(\Delta E=64.5\) in the unperturbed system; in the Dicke model, \(\epsilon\approx 0.2\) for \(N=80\) and \(\epsilon\approx 0.002\) for \(N=1000\), in contrast to the total energy domain \(\Delta E=2\).
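For readers who wish to set up such computations, the block below is a minimal, illustrative sketch (not the code used for the results reported here) of how the truncated Dicke Hamiltonian of Eq. (49) could be assembled as a dense matrix in the product Fock basis \(|n_{a}\rangle\otimes|n_{b}\rangle\), with \(j=N/2\), the cutoffs \((n_{a})_{\rm max}=(n_{b})_{\rm max}=N\), and \(\omega_{0}=\omega=\mu=1/N\) as stated above; the restriction to the parity sector \(\Pi=+1\) is omitted for brevity.

```python
import numpy as np

def dicke_hamiltonian(N, lam):
    """Truncated Dicke Hamiltonian of Eq. (49) in the basis |n_a> x |n_b>."""
    j, omega0, omega, mu = N / 2.0, 1.0 / N, 1.0 / N, 1.0 / N
    dim = N + 1
    a = np.diag(np.sqrt(np.arange(1, dim)), 1)   # single-mode annihilation operator
    b = a.copy()                                  # same truncation for the b mode
    nb = b.conj().T @ b
    # Diagonal operator sqrt(1 - b^dag b / (2j)), clipped for numerical safety.
    root = np.diag(np.sqrt(np.clip(1.0 - np.arange(dim) / (2 * j), 0.0, None)))
    Ia, Ib = np.eye(dim), np.eye(dim)
    # Promote single-mode operators to the product space |n_a> x |n_b>.
    A, B = np.kron(a, Ib), np.kron(Ia, b)
    Root, Nb = np.kron(Ia, root), np.kron(Ia, nb)
    H = (omega0 * (Nb - j * np.eye(dim**2)) + omega * (A.conj().T @ A)
         + lam * mu * (A.conj().T + A) @ (B.conj().T @ Root + Root @ B))
    return H

# Example: eigenvalues for a small system, N = 20, lambda = 1.
# energies = np.linalg.eigvalsh(dicke_hamiltonian(20, 1.0))
```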
For comparison, we have also computed the distribution \(g(R^{\prime})\) given by another rescaling procedure, in which an average over unperturbed states is also performed (see discussions given in Sec.II.2). In this rescaling procedure, as discussed in Refs.[26; 31], the average shape of EFs is expected to have the following semiclassical approximation, \[\langle|C_{\alpha\mathbf{n}}|^{2}\rangle^{\prime}\simeq\frac{S(E,E_{0})}{(2\pi \hbar)^{f}\rho(E)\rho_{0}(E_{0})}, \tag{57}\] where \(\rho_{0}(E_{0})\) and \(\rho(E)\) are the density of states of the two systems \(H_{0}\) and \(H\), respectively, and \(S(E,E_{0})\) indicates the overlap of the perturbed energy surface of \(H=E\) and the unperturbed energy surface of \(H_{0}=E_{0}\), \[S(E,E_{0})=\int d\mathbf{q}d\mathbf{p}\delta(E-H(\mathbf{p},\mathbf{q}))\delta(E_{0}-H_{0}(\bm {p},\mathbf{q})). \tag{58}\] The difference between \(\langle|C_{\alpha\mathbf{n}}|^{2}\rangle^{\prime}\) in Eq.(57) and \(\langle|C_{\alpha\mathbf{n}}|^{2}\rangle\) in Eq.(24) is quite clear. In the computation of \(g(R^{\prime})\), only those rescaled components \(R^{\prime}_{\alpha\mathbf{n}}\) in the region with nonzero \(S(E_{\alpha},E_{\mathbf{n}})\) were used. We found that, unlike the case of \(f(R)\) discussed above, the distribution \(g(R^{\prime})\) usually shows obvious deviation from \(f_{G}(R^{\prime})\) when the classical system is in the chaotic regime (Fig.3 and Fig.4). Here, in the additional average for the unperturbed system, 100 EFs in the middle energy region were used, with \(\epsilon_{0}\approx 0.645\) in the LMG model and \(\epsilon_{0}\approx 0.02\) in the Dicke model. Variation of the measure \(\Delta_{EF}\) in Eq.(30) with the controlling parameter \(\lambda\) is given in Fig.5, together with the often-used measure given by \(\Delta_{E}\) of the statistics of spectra. In order to improve the statistics, for each value of \(\lambda\), we used data obtained from several values of \(\lambda^{\prime}\) in a neighborhood of \(\lambda\), \(\lambda^{\prime}\in[\lambda-0.05,\lambda+0.05]\). The agreement of the two measures \(\Delta_{EF}\) and \(\Delta_{E}\) is already good in the case of not quite large \(\Omega\) in the LMG model [Fig.5(a) with \(\Omega=80\)]. The agreement becomes better, when the value of \(\Omega\) is increased such that the system becomes closer to its classical limit [Fig.5(b)]. Similar results were also found in the Dicke model [Fig.5(c) and (d)]. Therefore, in these two models, the difference \(\Delta_{EF}\) can be regarded as a good measure for the "distance" to chaos. For comparison, we have also computed the difference \(\Delta^{\prime}_{EF}\) given by the other rescaling procedure, \[\Delta^{\prime}_{EF}=\int|I_{g}(R^{\prime})-I_{f_{G}}(R^{\prime})|dR^{\prime}, \tag{59}\] where \(I_{g}(R^{\prime})\) denotes the cumulative distribution for \(g(R^{\prime})\). Due to the obvious difference between the distribution \(g(R^{\prime})\) and the Gaussian distribution shown in Fig.3 and Fig.4, one expects a notable difference between \(\Delta^{\prime}_{EF}\) and \(\Delta_{E}\). Indeed, as shown in Fig.5, unlike the case with \(\Delta_{EF}\) discussed above, the agreement between \(\Delta^{\prime}_{EF}\) and \(\Delta_{E}\) is not good. We have also computed a "distance" to chaos in the classical counterparts, denoted by \(\Delta_{cl}\), which measures the proportion of regular region in energy surface. 
The measure is defined by \[\Delta_{cl}=\lim_{N_{T}\rightarrow\infty}\frac{N_{R}}{N_{T}}, \tag{60}\] where \(N_{T}\) is the total number of points taken randomly in an energy surface of interest and \(N_{R}\) is the number of the points for which \(\lambda_{L}<\lambda_{m}\). Here, \(\lambda_{m}\) is some small quantity and \(\lambda_{L}\) is the Lyapunov exponent, defined as follows in the long time limit, \[\lambda_{L}=\lim_{t\rightarrow\infty}\lim_{d_{0}\to 0}\frac{1}{t}\ln\frac{|d_{t}|}{|d_{0}|}, \tag{61}\] where \(d_{0}\) denotes the initial phase-space distance and \(d_{t}\) denotes the distance at a time \(t\). In our numerical simulation, we took \(t=1000\), \(N_{T}=5000\), \(\lambda_{m}=0.02\) in the LMG model and \(t=50000\), \(N_{T}=5000\), \(\lambda_{m}=0.001\) in the Dicke model. In Fig.6, it is seen that the agreement between the distances to quantum and classical chaos, characterized by \(\Delta_{EF}\) and \(\Delta_{cl}\), respectively, is quite good. Some examples of Poincaré surfaces of section in the two models are shown in Fig.7 and Fig.8. ## IV Numerical simulations in models without classical counterparts In this section, we study the distribution of rescaled components of EFs in models without any classical counterpart. It seems reasonable to expect that the final result of Sec.II.1, that is, that the distribution of appropriately rescaled components should have a Gaussian form, may be valid to some extent in this type of model as well. Here, a major problem is the determination of the region of components that should be taken into account. As discussed previously, in a system with a classical counterpart, this region corresponds to the classically energetically allowed region. For a system without any classical counterpart, this is a highly nontrivial problem. In this paper, we do not intend to solve this problem, but rather circumvent it by restricting ourselves to models whose EFs occupy almost the whole unperturbed energy region. In such models, one can simply use all components of the EFs when computing \(f(R)\).
Figure 9: The distribution \(f(R)\) (open circles) and \(g(R^{\prime})\) (solid blocks with dashed lines) for \(\lambda=0.5\) in the defect Ising model and the defect XXZ model. The solid curve indicates the Gaussian distribution \(f_{G}(R)\). (a) The defect Ising model with \(N=10\); (b) the defect Ising model with \(N=15\); (c) the defect XXZ model with \(N=12\) and \(S_{z}=-1\); (d) the defect XXZ model with \(N=19\) and \(S_{z}=-3.5\).
Specifically, we study a defect XXZ model and a defect Ising model, adopting a periodic boundary condition in numerical simulations. The defect XXZ model [35] is a modified XXZ model, in which an external magnetic field is applied on two sites of the \(N\) spins. The unperturbed Hamiltonian and the perturbation have the following forms, \[H_{0}=\sum_{i=1}^{N}\left(s_{x}^{i}s_{x}^{i+1}+s_{y}^{i}s_{y}^{i+1}\right)+\mu_{z}\sum_{i=1}^{N}s_{z}^{i}s_{z}^{i+1}, \tag{62}\] \[V=\mu_{1}s_{z}^{1}+\mu_{4}s_{z}^{4}, \tag{63}\] where the periodic boundary condition implies that \(s_{a}^{N+1}=s_{a}^{1}\) for \(a=x,y,z\). The system is quantum chaotic for \(\lambda\) within an appropriate regime, while it exhibits the so-called many-body localisation for \(\lambda\) sufficiently large. The Hamiltonian \(H\) commutes with \(S_{z}\), the \(z\)-component of the total spin, and we consider a subspace with a definite value of \(S_{z}\) in our numerical study. Other parameters used in this model are \(\mu_{1}=1.11\), \(\mu_{4}=1.61\), and \(\mu_{z}=1\).
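A minimal construction sketch for this model is given below (illustrative only, not the code used in this work): it builds the defect XXZ Hamiltonian of Eqs. (62) and (63) from Kronecker products of spin-1/2 operators \(s_{a}=\sigma_{a}/2\), with periodic boundary conditions and the defect fields on sites 1 and 4; the restriction to a fixed-\(S_{z}\) sector is left out for brevity.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, i, N):
    """Embed the single-site operator `op` on site i (0-based) of an N-site chain."""
    mats = [np.eye(2, dtype=complex)] * N
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def defect_xxz(N, lam, mu1=1.11, mu4=1.61, mu_z=1.0):
    """H = H0 + lam*V of Eqs. (62)-(63) with periodic boundary conditions.

    Assumes N >= 4 so that the defect fields sit on sites 1 and 4."""
    dim = 2**N
    H0 = np.zeros((dim, dim), dtype=complex)
    for i in range(N):                     # periodic: site N+1 is identified with site 1
        nxt = (i + 1) % N
        H0 += (site_op(sx, i, N) @ site_op(sx, nxt, N)
               + site_op(sy, i, N) @ site_op(sy, nxt, N)
               + mu_z * site_op(sz, i, N) @ site_op(sz, nxt, N))
    V = mu1 * site_op(sz, 0, N) + mu4 * site_op(sz, 3, N)   # defect fields on sites 1 and 4
    return H0 + lam * V
```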
The defect Ising model is a transverse Ising model, in which an additional magnetic field is applied on two sites of the \(N\) spins, with \[H_{0}=\sum_{i=1}^{N}s_{z}^{i}s_{z}^{i+1}+\mu_{x}\sum_{i=1}^{N}s_{x}^{i}, \tag{64}\] \[V=\mu_{1}s_{z}^{1}+\mu_{4}s_{z}^{4}. \tag{65}\] Similarly, it is a quantum chaotic system for \(\lambda\) in an appropriate regime and exhibits many-body localisation for \(\lambda\) sufficiently large. The parameters used are \(\mu_{1}=1.11\), \(\mu_{4}=1.61\), and \(\mu_{x}=0.6\). Our numerical simulations reveal that, in these two models, the distributions \(f(R)\) are also quite close to the Gaussian form \(f_{G}(R)\), when the statistics of the spectra is close to the prediction of RMT, as illustrated in Fig.9 with \(\lambda=0.5\). Unlike the two models discussed in the previous section, the distributions \(g(R^{\prime})\) are also close to the Gaussian form at \(\lambda=0.5\). In the computation of \(f(R)\), 50 EFs in the middle energy region were used. The energy windows \(\epsilon\) are as follows: In the defect Ising model, \(\epsilon\approx 0.2\) and \(\epsilon_{0}=0.02\) for \(N=10\) in contrast to the total energy domain \(\Delta E=7.11\), and \(\epsilon\approx 0.07\) for \(N=15\) in contrast to \(\Delta E=10.64\); in the defect XXZ model, \(\epsilon\approx 0.3\) and \(\epsilon_{0}=0.02\) for \(N=12\), \(S_{z}=-1\) in contrast to \(\Delta E=8.03\), and \(\epsilon\approx 0.01\) for \(N=19\), \(S_{z}=-3.5\) in contrast to the total energy domain \(\Delta E=10.66\). The two measures \(\Delta_{EF}\) and \(\Delta_{E}\) exhibit similar behaviors, like the cases discussed in the previous section for the LMG and Dicke models (Fig.10 and Fig.11). Thus, at least in these two models, the difference \(\Delta_{EF}\) can be regarded as a good measure for the "distance" to chaos. Consistent with the behaviors of the distribution \(g(R^{\prime})\) illustrated in Fig.9, the two quantities \(\Delta^{\prime}_{EF}\) and \(\Delta_{EF}\) are close in most regions where the systems are chaotic according to their spectral statistics. That is, in most cases, an average over the unperturbed energy does not bring much difference in the defect Ising and defect XXZ models. This may be partially related to the fact that EFs in these two models occupy almost the whole energy region for \(\lambda\) not small. There are still some regions of \(\lambda\) in Fig.10(b) and Fig.11(b) with relatively large Hilbert spaces, in which \(\Delta^{\prime}_{EF}\) shows some notable deviation from \(\Delta_{EF}\) and \(\Delta_{E}\). Some examples of the distributions \(f(R)\) and \(g(R^{\prime})\) in this case are shown in Fig.12 and Fig.13, together with the corresponding distributions of \(I_{P}(s)\) and \(I_{P_{W}}(s)\).
Figure 10: Similar to Fig.5, but for the defect Ising model with (a) \(N=10\) and (b) \(N=15\).
Figure 11: Similar to Fig.5, but for the defect XXZ model with (a) \(N=12\), \(S_{z}=-1\), and (b) \(N=19\), \(S_{z}=-3.5\).
Figure 12: (a) The distributions of \(f(R)\) (open circles) and \(g(R^{\prime})\) (solid blocks with dashed lines) in the defect Ising model with \(N=15\) and \(\lambda=0.06\). (b) The corresponding cumulative distribution of the nearest-level-spacing distribution (open circles). The solid curve indicates the cumulative distribution given by the Wigner surmise.
Figure 13: Similar to Fig.12, but for the defect XXZ model with \(N=19\), \(S_{z}=-7\) and \(\lambda=0.12\).
## V Conclusions In this paper, based on semiclassical analysis, it has been shown that those components of EFs of quantum chaotic systems, which lie in classically-allowed regions of integrable bases, can be regarded as random numbers in a sense similar to that stated in Berry's conjecture.
For the distribution \(f(R)\) of these components to have a Gaussian form, which is predicted by the RMT, an appropriate rescaling procedure with respect to the average shape of EFs is needed, where the average should be taken over perturbed states with neighbouring energies. It is found that an additional average over unperturbed basis states with neighbouring unperturbed energies may cause deviation of the distribution of rescaled components of EFs from the Gaussian form. The above results suggest that deviation of the distribution \(f(R)\) from the Gaussian distribution may be used as a measure for the "distance" to quantum chaos. In two models possessing classical counterparts, when the perturbed system goes from integrable to chaotic with the increase of perturbation strength, our numerical simulations show that this deviation coincides with the deviation of the nearest-level-spacing distribution from the prediction of RMT. It is known that specific dynamics of the underlying classical systems may induce certain modifications to Berry's conjecture [38; 39; 40; 41; 42; 43; 44; 45]. Since the main result of this paper is based on this conjecture, specific underlying classical dynamics may have some influence on the results of this paper as well. In particular, it may induce some deviation of the distribution \(f(R)\) for some EFs from the Gaussian distribution. However, if sufficiently many EFs are used in the computation of \(f(R)\), it is reasonable to expect that the induced deviation should be small. In two models without a simple classical counterpart, we have found similar numerical results about the distribution \(f(R)\). An analytical explanation of this point is still lacking. It seems that the following feature of these two models may be of relevance. That is, in both models the matrices of the perturbations \(V\) in the unperturbed bases do not have a clear band structure; in other words, the perturbation couples basis vectors far separated in energy. We hope that these numerical results may stimulate more investigations. ###### Acknowledgements. This work was partially supported by the Natural Science Foundation of China under Grant Nos. 11275179, 11535011, and 11775210.
2310.18398
Maglev for Dark Matter: Dark-photon and axion dark matter sensing with levitated superconductors
Ultraprecise mechanical sensors offer an exciting avenue for testing new physics. While many of these sensors are tailored to detect inertial forces, magnetically levitated (Maglev) systems are particularly interesting, in that they are also sensitive to electromagnetic forces. In this work, we propose the use of magnetically levitated superconductors to detect dark-photon and axion dark matter through their couplings to electromagnetism. Several existing laboratory experiments search for these dark-matter candidates at high frequencies, but few are sensitive to frequencies below $\mathrm{1\,kHz}$ (corresponding to dark-matter masses $m_\mathrm{DM}\lesssim10^{-12}\,\mathrm{eV}$). As a mechanical resonator, magnetically levitated superconductors are sensitive to lower frequencies, and so can probe parameter space currently unexplored by laboratory experiments. Dark-photon and axion dark matter can source an oscillating magnetic field that drives the motion of a magnetically levitated superconductor. This motion is resonantly enhanced when the dark matter Compton frequency matches the levitated superconductor's trapping frequency. We outline the necessary modifications to make magnetically levitated superconductors sensitive to dark matter, including specifications for both broadband and resonant schemes. We show that in the $\mathrm{Hz}\lesssim f_\mathrm{DM}\lesssim\mathrm{kHz}$ frequency range our technique can achieve the leading sensitivity amongst laboratory probes of both dark-photon and axion dark matter.
Gerard Higgins, Saarik Kalia, Zhen Liu
2023-10-27T18:00:03Z
http://arxiv.org/abs/2310.18398v2
# Maglev for Dark Matter: Dark-photon and axion dark matter sensing with levitated superconductors ###### Abstract Ultraprecise mechanical sensors offer an exciting avenue for testing new physics. While many of these sensors are tailored to detect inertial forces, magnetically levitated (Maglev) systems are particularly interesting, in that they are also sensitive to electromagnetic forces. In this work, we propose the use of magnetically levitated superconductors to detect dark-photon and axion dark matter through their couplings to electromagnetism. Several existing laboratory experiments search for these dark-matter candidates at high frequencies, but few are sensitive to frequencies below \(1\,\mathrm{kHz}\) (corresponding to dark-matter masses \(m_{\mathrm{DM}}\lesssim 10^{-12}\,\mathrm{eV}\)). As a mechanical resonator, magnetically levitated superconductors are sensitive to lower frequencies, and so can probe parameter space currently unexplored by laboratory experiments. Dark-photon and axion dark matter can source an oscillating magnetic field that drives the motion of a magnetically levitated superconductor. This motion is resonantly enhanced when the dark matter Compton frequency matches the levitated superconductor's trapping frequency. We outline the necessary modifications to make magnetically levitated superconductors sensitive to dark matter, including specifications for both broadband and resonant schemes. We show that in the \(\mathrm{Hz}\lesssim f_{\mathrm{DM}}\lesssim\mathrm{kHz}\) frequency range our technique can achieve the leading sensitivity amongst laboratory probes of both dark-photon and axion dark matter. + Footnote †: preprint: FERMILAB-PUB-23-624-SQMS ## I Introduction Discerning the nature of dark matter (DM) remains one of the major outstanding problems in fundamental physics. The mass of the particles which constitute DM is largely unconstrained, and so numerous candidates have been proposed over the years, but one class which has garnered increased attention lately is ultralight bosonic DM [1; 2]. This class consists of DM candidates with masses \(\lesssim 1\,\mathrm{eV}\). As the local energy density of dark matter has been measured to be \(\rho_{\mathrm{DM}}\approx 0.3\,\mathrm{GeV/cm}^{3}\)[3], these candidates, in turn, have large number densities. This necessitates that these candidates must be bosonic, and moreover, should behave like classical fields [4; 5]. Some of the most popular ultralight DM candidates include QCD axions [6; 7; 8], axionlike particles [9; 10], and dark photons [11; 12; 13]. These candidates are particularly intriguing because the QCD axion can solve the strong CP problem [14; 15; 16], while axionlike particles and dark photons are predicted by a variety of string compactifications [17; 18; 19]. These ultralight candidates may possess couplings to electromagnetism [11; 20], and a variety of laboratory experiments have been proposed to search for such couplings [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. Many of these experiments search for electromagnetic fields sourced by ultralight DM. In particular, in the regime where the Compton wavelength of the dark matter \(\lambda_{\mathrm{DM}}\) is much larger than the size of the experiment, the typical signal that these ultralight DM candidates would produce is an oscillating magnetic field [45; 27]. Various experiments searching for ultralight DM utilize systems which take advantage of resonant enhancements, e.g.
lumped-element circuits [27; 37], resonant cavities [41; 22], or layers of dielectric disks [29; 31; 40], in order to increase their sensitivity to DM of a particular Compton frequency (mass). The frequency range to which each of these experiments is sensitive is set, respectively, by: the inductance and capacitance of the circuit, the size of the cavity, and the spacing between the layers. It is thus difficult for any of these techniques to probe frequencies below \(1\,\mathrm{kHz}\), corresponding to DM masses \(m_{\mathrm{DM}}\lesssim 10^{-12}\,\mathrm{eV}\). In this work, we propose to utilize a mechanical resonator, specifically a magnetically-levitated superconducting particle (SCP), in order to detect the oscillating magnetic field sourced by DM, at frequencies in the Hz to kHz range. Magnetically levitated superconductors function as ultraprecise accelerometers [46; 47; 48], and can be employed in a wide range of precision sensing applications. In comparison with optical levitation, magnetostatic levitation allows for the suspension of significantly larger loads [46], up to even train-scale objects [49]. Magnetically levitated superconductors have been utilized for gravimetry [46; 47], and have the potential to test quantum physics on macroscopic scales [50; 51]. The usage of accelerometers to detect \(B-L\) dark matter has been actively explored in recent years [28; 52; 53; 54; 55; 56], and magnetically levitated systems have been proposed as one promising candidate, due to their excellent acceleration sensitivity [54; 55; 57; 58]. Here, we highlight that magnetically levitated systems are also excellent magnetometers, and as such, can be sensitive to electromagnetically coupled ultralight DM. The fundamental property underlying the magnetic levitation of a superconductor is its superdiamagnetism, which means that nearly all magnetic fields are expelled out of its interior [59]. In the presence of an external magnetic field, currents are driven along the surface of the SCP which screen the interior from the magnetic field. These surface currents then experience a Lorentz force from the external magnetic field, leading to a net force on the SCP (see Fig. 1). This principle can be used to trap a SCP near the center of an applied static quadrupole field, with trapping frequencies typically in the Hz to kHz range [48; 60; 61]. The levitation apparatus must be surrounded by magnetic shielding in order to isolate the SCP from environmental fields. Inside this shield, ultralight DM can source an oscillating magnetic field signal, similar to the one sourced in experiments like DM Radio [27]. If the apparatus is positioned off-center within the shield, this signal can be nonzero in the vicinity of the apparatus. This additional field can then perturb the equilibrium position of the SCP, leading to oscillatory motion of the particle. If the frequency of this oscillation, which is set by the DM mass, matches the trapping frequency, then the motion will be resonantly enhanced. Magnetically levitated SCPs thus provide an excellent context in which to resonantly search for electromagnetic couplings of DM with \(m_{\rm DM}\lesssim 10^{-12}\,\mathrm{eV}\). In this work, we will explore both resonant and broadband detection schemes to search for ultralight DM in this mass range. This work is structured as follows. In Sec. II, we outline how a superconductor can be magnetically levitated. 
We discuss the physics of trapping a SCP, as well as the possible readout schemes and potential range of system parameters. In Sec. III, we review the physics of the ultralight DM candidates considered in this work. These include dark-photon dark matter (DPDM) and axion DM.1 In Sec. IV, we discuss the relevant noise sources for our setup and project sensitivities to both DM candidates. We consider both a broadband scheme using a single experiment and a scanning scheme using several resonant experiments, and outline the parameter choices relevant to each of these schemes. Finally, in Sec. V, we discuss our results and possible future improvements. We also perform detailed computations in our appendices. In Appendix A, we derive the response of a spherical SCP to an applied magnetic field. In Appendix B, we derive the axion DM magnetic field signal sourced inside a rectilinear magnetic shield. We make all the code used in this work publicly available on Github [62]. Footnote 1: Throughout this work, we simply use “axion” to refer to both the QCD axion and axionlike particles. ## II Levitated superconductors In this section, we discuss the magnetic levitation of a SCP. First, we show how a SCP can be trapped near the center of a static quadrupolar magnetic field. Next, we discuss various methods of reading out the motion of a SCP inside the trap. Finally, we outline the physical limitations of the setup, and the resulting range of parameters that can be achieved with such a system. ### Trapping The magnetic trap is formed by a quadrupolar field, which confines the SCP near its center, at the point where the magnetic field vanishes. Such a magnetic field can be created by two coils carrying currents in opposite directions, also known as an anti-Helmholtz-like configuration (see Fig. 1). To understand the effect of the trap on the SCP, let us expand the magnetic field in the vicinity of the trap center to linear order as \[B_{i}(\mathbf{x},t)=B_{0,i}(t)+b_{ij}(t)x_{j}, \tag{1}\] where the Einstein summation convention is implicit in the second term. Here \(\mathbf{B}_{0}\) represents the magnetic field at the center of the coordinate system, while \(b_{ij}\) describes the magnetic field gradients near the center. (Note that Gauss's law of magnetism enforces \(\sum_{i}b_{ii}=0\)). In the absence of beyond-the-Standard-Model effects, the only contribution to \(\mathbf{B}\) is the applied quadrupole trap, which is static. Because of this, we can choose a coordinate system in which the magnetic field vanishes at the origin, i.e. \(\mathbf{B}_{0}(t)=0\), and in which \(b_{ij}\) is diagonal. Then Eq. (1) simplifies to \[\mathbf{B}_{\rm trap}(\mathbf{x},t)=b_{xx}x\mathbf{\hat{x}}+b_{yy}y\mathbf{\hat{y}}+b_{zz}z\bm {\hat{z}}. \tag{2}\] When we introduce a DM signal, the total magnetic field will not take this simple form, as \(\mathbf{B}\) will exhibit a time dependence. A superconducting sphere of volume \(V\), located at position \(\mathbf{x}\) within the magnetic field in Eq. (1), will experience a force2 Footnote 2: Throughout, we use natural units \(\hbar=c=k_{B}=\mu_{0}=1\). \[F_{i}(\mathbf{x},t)=-\frac{3}{2}Vb_{ji}B_{j}(\mathbf{x},t) \tag{3}\] (see Appendix A or Refs. [48; 63] for derivation). Microscopically, this force occurs because the local magnetic field drives surface currents on the SCP, in order to screen the magnetic field out of its interior. These currents then experience a Lorentz force in the presence of the magnetic field (see Fig. 1). 
Note that because the net force is given by the difference between the Lorentz forces on either side of the sphere, Eq. (3) depends not only on the magnetic field, but also on its gradient \(b_{ij}\) across the sphere. This force can alternatively be understood by rewriting Eq. (3) as \(\mathbf{F}=-\nabla U\), where \[U=\frac{3}{4}V|\mathbf{B}|^{2}. \tag{4}\] Heuristically, this potential can be interpreted as the amount of energy that it takes for the superconducting sphere to screen out the local magnetic field. The sphere will therefore settle at the point of lowest total magnetic field. In the case of a static quadrupole field, this will be the center of the trap \(\mathbf{x}=0\). This can be seen even more directly by plugging Eq. (2) into Eq. (3) to find \[\mathbf{F}_{\text{trap}}(\mathbf{x},t)=-\frac{3}{2}V\left(b_{xx}^{2}x\hat{\mathbf{x}}+b_{yy}^{2}y\hat{\mathbf{y}}+b_{zz}^{2}z\hat{\mathbf{z}}\right). \tag{5}\] This expression makes it clear that the trap creates a restoring force towards \(\mathbf{x}=0\), so that the system acts as a harmonic oscillator. The resonant frequencies of the trap are simply given by [48] \[f_{i}=\sqrt{\frac{3}{8\pi^{2}\rho}}\,b_{ii}, \tag{6}\] where \(\rho\) is the density of the sphere. As we will see in Sec. III, the magnetic field signal induced by ultralight DM can drive this harmonic oscillator. If the frequency of the driving signal (which is set by the ultralight DM mass) matches one of the trapping frequencies in Eq. (6), then the oscillator will ring up resonantly. ### Readout The motion of the levitated SCP can be read out in different fashions. One method relies on placing a pickup coil close to the particle. As the SCP moves, it distorts the magnetic trapping field, causing the magnetic flux threading the pickup coil to change. This flux can be transferred to a sensitive magnetometer, such as a SQUID [48; 50; 61] or a SQUID coupled to a microwave resonator [64], which outputs a signal describing the particle motion. Another method also makes use of a pickup coil close to the particle, but uses a different mechanism for sensing the particle motion. As the SCP moves, it changes the inductance of the pickup coil, due to the SCP's superdiamagnetism. This inductance change can be measured to probe the particle motion [47; 65]. Alternatively, the particle motion can be measured using optical interferometry [66]. Specifically, one can form a Michelson interferometer, with a reflective SCP acting as the mirror at the end of one of the interferometer arms. In principle, each of these methods allows the particle motion to be probed close to the standard quantum limit (SQL). In this work, we will primarily consider the SQUID readout. The sensitivity of this readout scheme will be discussed further in Sec. IV. ### Range of system parameters Here we discuss physical limitations of this levitation geometry, which set the viable range of parameters that can be achieved. First, Eq. (6) implies that the frequency range of our setup is constrained by the range of achievable magnetic field gradients and particle densities, namely3 Footnote 3: Throughout the rest of this work, we write \(f_{0}=\omega_{0}/2\pi\) and \(b_{0}\), rather than \(f_{i}\) and \(b_{ii}\), to refer to the trapping frequency and magnetic field gradient, in contexts where we are agnostic about which mode is being excited. These quantities are still related by Eq. (6). \[f_{0}\sim 170\,\text{Hz}\cdot\sqrt{\frac{0.1\,\text{g/cm}^{3}}{\rho}}\left(\frac{b_{0}}{10\,\text{T/m}}\right).
\tag{7}\] Densities of \(0.1\,\text{g/cm}^{3}\) can be achieved by using a hollow SCP. A SCP of mass \(1\,\text{g}\) and density \(0.1\,\text{g/cm}^{3}\) would require a thickness of \(\sim 50\,\mu\text{m}\) (see Ref. [46] for levitation of similarly sized hollow SCPs). Such a particle is around \(3\,\text{cm}\) across. Field gradients of up to \(\sim 100\,\text{T/m}\) have been produced in cm-scale traps [48], so we find it reasonable to consider trapping frequencies \(f_{0}\lesssim 100\,\text{Hz}\). Additionally, for sufficiently low trapping frequencies, gravity can displace the vertical equilibrium position of the SCP. By balancing the force of gravity \(\mathbf{F}_{g}=-mg\mathbf{\hat{z}}\) with Eq. (5), we see that the vertical displacement of the equilibrium will be \[\Delta z=\frac{g}{4\pi^{2}f_{z}^{2}}\sim 3\,\text{cm}\cdot\left(\frac{3\,\text{Hz }}{f_{z}}\right)^{2}. \tag{8}\] To avoid significant displacements from gravity, in this work, we will focus on the range of trapping frequencies \(3\,\text{Hz}\lesssim f_{0}\lesssim 100\,\text{Hz}\). The size of the SCP is also constrained by the critical fields of the superconducting material out of which it is made. As the size of the SCP is increased, the magnetic field strength at its surface will increase (due to the magnetic field gradients \(b_{ii}\)), and so its superconductivity can be broken if the SCP is too large. When the SCP is located at the center of the trap, the maximum magnetic field strength on its surface is given by \[B_{\text{max}} \sim b_{0}\mathcal{R} \tag{9}\] \[\sim 80\,\text{mT}\cdot\left(\frac{m}{1\,\text{g}}\right)^{1/3} \left(\frac{\rho}{0.1\,\text{g/cm}^{3}}\right)^{1/6}\left(\frac{f_{0}}{100\, \text{Hz}}\right), \tag{10}\] where \(\mathcal{R}\) is the characteristic length of the SCP. Typical type-I superconducting materials, such as Pb and Ta, have critical field strengths of up to \(80\,\mathrm{mT}\)[67, 68], so in this work we restrict ourselves to SCPs no larger than \(m=1\,\mathrm{g}\). We note, however, that thin films of TiN have been shown to have critical field strengths of up to \(5\,\mathrm{T}\)[69], so larger SCPs may be possible. Finally, as this system acts as a harmonic oscillator, it exhibits a characteristic dissipation rate \(\gamma\). We anticipate the main source of dissipation to be gas collisions with the SCP. The dissipation rate from gas collisions is given by [52, 70] \[\gamma\sim\frac{PA}{m\bar{v}_{\mathrm{gas}}} \sim 2\pi\cdot 10^{-8}\,\mathrm{Hz}\cdot\left(\frac{P}{10^{-7}\, \mathrm{Pa}}\right)\left(\frac{1\,\mathrm{g}}{m}\right)^{1/3}\] \[\cdot\left(\frac{0.1\,\mathrm{g/cm}^{3}}{\rho}\right)^{2/3} \sqrt{\left(\frac{m_{\mathrm{gas}}}{4\,\mathrm{Da}}\right)\left(\frac{10\, \mathrm{mK}}{T}\right)}, \tag{11}\] where \(P\) is the gas pressure, \(A\) is the cross-sectional area of the SCP, and \(\bar{v}_{\mathrm{gas}}\sim\sqrt{T/m_{\mathrm{gas}}}\) is the mean velocity of the gas molecules (which have mass \(m_{\mathrm{gas}}\)). Other potential sources of dissipation include flux creep and eddy current damping. Flux creep is the movement of unpinned flux lines within the SCP [48, 71]. Flux pinning occurs in type-II superconductors, and so flux creep can be eliminated by using a SCP made from a type-I superconducting material with few crystalline domains. Eddy current damping occurs when the motion of the SCP causes magnetic field changes which drive currents in nearby resistive conductors with nonzero resistance. 
This dissipation can be mitigated by surrounding the levitation apparatus by a superconducting shield (see Fig. 1), and ensuring all materials inside the shield are either superconductors or electrical insulators. We therefore expect \(\gamma\sim 2\pi\cdot 10^{-8}\,\mathrm{Hz}\) to be an achievable benchmark for the dissipation rate.[4] Figure 1: Magnetic levitation of a SCP. The levitation apparatus (shown on the left) consists of two current-carrying coils arranged in an anti-Helmholtz-like configuration, i.e. carrying currents in opposite directions. Together these coils source a quadrupole magnetic field (shown in purple), which can trap a SCP. If the SCP is displaced from the center of the trap (the point at which \(\mathbf{B}=0\)), surface currents (shown in light blue) will run on the SCP to screen the magnetic field out of its interior. These surface currents then experience a Lorentz force in the presence of the magnetic field, leading to a net restoring force (shown in red) which drives the SCP back to the center of the trap. The trap is typically located within a magnetic shield (shown on the right). Inside of this shield, ultralight DM can be parametrized by an effective current (shown in dark blue), which sources an oscillating magnetic field signal (shown in green). In the DPDM case, the direction of the effective current is given by the DPDM polarization. In the axion case, it is given by the quadrupole magnetic field trap. The DM-induced magnetic field can displace the equilibrium position of the trap, resulting in oscillatory motion of the SCP. Note that since this magnetic field signal vanishes at the center of the shield, the trap must be located off-center within the shield in order to be sensitive to DPDM or axion DM. ## III Dark matter signals In this section, we review two ultralight DM candidates, dark-photon dark matter (DPDM) and axion DM, and derive the signals that they can effect on a levitated SCP through their coupling to electromagnetism. As we will see, both DM candidates can be described by an effective current. Within the confines of the magnetic shield surrounding the levitation setup, this effective current sources an oscillating magnetic field signal, just as inside shielded experiments like DM Radio [27]. This magnetic field will then drive oscillatory motion of the SCP. ### Dark-photon dark matter A kinetically mixed dark photon \(A^{\prime}\) of mass \(m_{A^{\prime}}\) and kinetic mixing parameter \(\varepsilon\) is described by the Lagrangian5 Footnote 5: The Lagrangian for the mixed photon–dark-photon system can be written in multiple different bases (see Sec. II.1 and Appendix A of Ref. [45] for a detailed review). In this work, we operate only in the so-called “interaction basis,” in which the Lagrangian is given by Eq. (12). In this basis, only \(A\) interacts with SM currents at leading order. However, \(A\) and \(A^{\prime}\) are not propagation eigenstates, and so will mix as they propagate through vacuum. \[\mathcal{L}_{A^{\prime}} \supset-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{4}F^{\prime}_{\mu \nu}F^{\prime\mu\nu}+\frac{1}{2}m_{A^{\prime}}^{2}A^{\prime}_{\mu}A^{\prime\mu}\] \[\quad+\varepsilon m_{A^{\prime}}^{2}A_{\mu}A^{\prime\mu}-J_{ \rm EM}^{\mu}A_{\mu}, \tag{12}\] where \(F^{\prime}_{\mu\nu}=\partial_{\mu}A^{\prime}_{\nu}-\partial_{\nu}A^{\prime}_{\mu}\) is the field-strength tensor for the dark photon, and \(J_{\rm EM}^{\mu}\) is the Standard Model electromagnetic current. By comparing the last two terms in Eq. 
(12), we can see that \(A^{\prime}\) has a similar effect to a current. In particular, if we take \(\varepsilon\ll 1\) so that there is negligible backreaction on \(A^{\prime}\) and consider the limit where the DPDM is non-relativistic \(v_{\rm DM}\sim 10^{-3}\ll 1\), then the only effect \(A^{\prime}\) has on electromagnetism is to modify the Ampere-Maxwell law by [45]6 Footnote 6: Throughout, we use unbolded symbols \(A^{\prime}\) to denote four-vectors and bolded symbols \(\mathbf{A}^{\prime}\) to denote three-vectors. \[\nabla\times\mathbf{B}-\partial_{t}\mathbf{E}=\mathbf{J}_{\rm eff}, \tag{13}\] where \[\mathbf{J}_{\rm eff}=-\varepsilon m_{A^{\prime}}^{2}\mathbf{A}^{\prime} \tag{14}\] is the "effective current" induced by the DPDM. Naively, Eq. (13) implies that the DPDM may generate either an electric or magnetic field. A well-controlled magnetic levitation setup must however occur inside some magnetic shielding (see Fig. 1). This magnetic shield typically acts as a perfect conductor, and so the tangential electric field at its surface must vanish. The DM-induced signal will have a wavelength matching the Compton wavelength of the DM, \(\lambda_{\rm DM}\gtrsim 10^{7}\,\)m (for \(f_{\rm DM}\lesssim 100\,\)Hz). This wavelength sets the length scale on which the electric field can vary, and will be much larger than the characteristic size of the shielding. Therefore, since the tangential electric field vanishes at the walls of the shield, it will typically be small everywhere inside the shield. In other words, the dominant signal of DPDM inside the shield will typically be a _magnetic_ field (see Refs. [27; 45] for similar discussion and examples). Because the electric field can be neglected, this magnetic field signal should satisfy7 Footnote 7: Note that the \(\mathbf{B}\) predicted by Eq. (15) is the observable magnetic field associated with \(A\), not the dark magnetic field associated with \(A^{\prime}\). While the latter is suppressed by \(v_{\rm DM}\), \(\mathbf{B}\) need not be. \[\nabla\times\mathbf{B}\approx\mathbf{J}_{\rm eff}. \tag{15}\] As an example, let us consider the case where the shield is a cylinder of radius \(L\) (and arbitrary height). Suppose that the DPDM is polarized along the axis of the cylinder, which we will identify with the \(z\)-axis. That is, in the non-relativistic limit, the spatial components of \(A^{\prime}\) are given by \[\mathbf{A}^{\prime}(\mathbf{x},t)=A^{\prime}_{0}\cos(m_{A^{\prime}}t)\mathbf{\hat{z}}, \tag{16}\] (and the temporal component of \(A^{\prime}\) is suppressed by \(v_{\rm DM}\)). This corresponds to an effective current, given by Eq. (14). If \(m_{A^{\prime}}L\ll 1\), then Eq. (15) applies, and solving it yields the magnetic field signal [27; 45]8 Footnote 8: The DPDM amplitude is normalized by \(\frac{1}{2}m_{A^{\prime}}^{2}\langle|\mathbf{A}^{\prime}|^{2}\rangle=\rho_{\rm DM}\approx 0.3\,\)GeV/cm\({}^{3}\), where the average \(\langle\cdots\rangle\) is taken over many coherence times (the timescale over which the amplitude in Eq. (16) varies; see discussion in Sec. IV). Generically, \(\mathbf{A}^{\prime}\) can point in any direction, but will have some nonzero projection onto the \(z\)-axis. Therefore in this estimate and in the DPDM sensitivity in Fig. 3, we take \(m_{A^{\prime}}A^{\prime}_{0}\sim\sqrt{\frac{2\rho_{\rm DM}}{3}}\). The estimate in Eq. (25) and the axion sensitivity in Fig. 3, on the other hand, take \(m_{a}a_{0}\sim\sqrt{2\rho_{\rm DM}}\), since the axion DM has no inherent direction.
\[\mathbf{B}_{A^{\prime}}(\mathbf{x},t) =-\frac{1}{2}\varepsilon m_{A^{\prime}}^{2}A^{\prime}_{0}r\cos(m_{A^{\prime}}t)\mathbf{\hat{\phi}} \tag{17}\] \[\sim 5\times 10^{-20}\,\text{T}\left(\frac{\varepsilon}{10^{-8}} \right)\left(\frac{f_{A^{\prime}}}{100\,\text{Hz}}\right)\left(\frac{r}{1\,\text {m}}\right), \tag{18}\] where \(r\) denotes the distance from the axis of the cylindrical shield, and \(\mathbf{\hat{\phi}}\) denotes the azimuthal direction. Note that \(B_{A^{\prime}}\) vanishes at the center of the cylindrical shield \(r=0\). Therefore, in order to be sensitive to the DPDM signal, it will be important that the magnetic levitation setup is positioned _off-center_ within the magnetic shield. The total field that the magnetically levitated particle experiences will be a combination of the static quadrupole trap and the oscillating DPDM signal. In other words, Eq. (1) will consist of the terms in Eq. (2), along with an additional (time-dependent) contribution from the DPDM signal given by Eq. (17). As the quadrupole gradient \(b_{ij}\) is much larger than the gradient of Eq. (17), the second term in Eq. (1) will receive negligible corrections. In particular, this implies that the trapping frequencies will remain unchanged. Instead, the dominant effect of the DPDM signal in Eq. (17) will be to give a time-dependent contribution to the first term in Eq. (1). Concretely, let us choose coordinates similar to those used in Eq. (2), i.e. let \(\mathbf{x}=0\) denote the point for which the _time-averaged_ magnetic field vanishes, \(\langle\mathbf{B}_{0}(t)\rangle=0\). Moreover, we can take coordinates where \(b_{ij}\) is diagonal. Let us suppose the trap is oriented so that one of these coordinate directions is the \(z\)-direction (the axial direction of the shield). Then if the center of the trap \(\mathbf{x}=0\) is displaced by a distance \(r\) along the \(x\)-direction from the axis of the shield, the total magnetic field in the vicinity of the trap center will be \[\mathbf{B}(\mathbf{x},t)=\mathbf{B}_{\rm trap}(\mathbf{x},t)-\frac{1}{2}\varepsilon m_{A^{\prime}}^{2}A_{0}^{\prime}r\cos(m_{A^{\prime}}t)\mathbf{\hat{y}}, \tag{19}\] where \(\mathbf{B}_{\rm trap}\) is as in Eq. (2). Plugging this into Eq. (3), we find that the SCP experiences a force \[\mathbf{F}(\mathbf{x},t)=\mathbf{F}_{\rm trap}(\mathbf{x},t)+\frac{3}{4}\varepsilon m_{A^{\prime}}^{2}A_{0}^{\prime}\cdot Vb_{yy}r\cdot\cos(m_{A^{\prime}}t)\mathbf{\hat{y}}, \tag{20}\] where \(\mathbf{F}_{\rm trap}\) is the restoring force from Eq. (5). The second term represents a driving force, which will drive oscillatory motion along the \(y\)-direction. If \(m_{A^{\prime}}\approx 2\pi f_{y}\), this translational mode will be resonantly driven. ### Axion dark matter Levitated SCPs may also be sensitive to axion DM which couples to photons. An axionlike particle \(a\), with mass \(m_{a}\) and coupling \(g_{a\gamma}\) to photons, is described by the Lagrangian \[\mathcal{L}_{a}\supset\frac{1}{2}\partial_{\mu}a\partial^{\mu}a-\frac{1}{4}F _{\mu\nu}F^{\mu\nu}-\frac{1}{2}m_{a}^{2}a^{2}+\frac{1}{4}g_{a\gamma}aF_{\mu\nu} \tilde{F}^{\mu\nu}, \tag{21}\] where \(\tilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}\). In the non-relativistic limit, the axion DM is uniform in space and oscillates at its Compton frequency (corresponding to its mass \(m_{a}\)), i.e. it takes the form \[a(\mathbf{x},t)=a_{0}\cos(m_{a}t). \tag{22}\] Much like in the case of DPDM, in the non-relativistic limit, the only effect of the last term in Eq. 
(21) is to add an effective current to the Ampere-Maxwell law, as in Eq. (13). In the axion case, this current takes the form [72; 73; 20] \[\mathbf{J}_{\rm eff}=-g_{a\gamma}(\partial_{t}a)\mathbf{B}. \tag{23}\] One important difference from the DPDM case is that an applied magnetic field is required in order for the axion to convert into an electromagnetic signal [as can be seen from the presence of \(\mathbf{B}\) in Eq. (23)]. Conveniently, in our case, the quadrupole trap itself can act as the necessary applied magnetic field! As in the DPDM case, this current should produce an oscillating magnetic field inside the shield. However, in the axion case, the magnetic field response is much more difficult to compute. From Eq. (23), we see that in the axion case, the direction of the effective current is set by the static magnetic field. Therefore the effective current, in this case, will inherit the complicated shape of the trapping field (which depends on how exactly the trap is implemented). Moreover, just as in the DPDM case, the trap must be positioned off-center within the shield, otherwise the magnetic field sourced by \(\mathbf{J}_{\rm eff}\) will vanish at the center of the trap, by symmetry (see Appendix B). The computation thus amounts to determining the response of a cavity to a complicated asymmetric current distribution. In Appendix B, we compute the signal in the case where the shield is rectilinear and the trap is created by two coils in an anti-Helmholtz-like configuration. The exact signal must be computed numerically, but we can derive a parametric estimate analytically, in terms of the dimensions of the shield \(L\), the radius of the coils \(R\), and the distance between the coils \(2h\). We find that the axion-induced magnetic field response at the center of the trap should be \[B_{a}(0,t) \sim\mathcal{O}(0.1)\cdot\frac{g_{a\gamma}m_{a}a_{0}b_{0}\left(R^ {2}+h^{2}\right)^{5/2}}{L^{3}}\sin(m_{a}t) \tag{24}\] \[\sim 3\times 10^{-20}\,\mathrm{T}\left(\frac{g_{a\gamma}}{10^{-10} \,\mathrm{GeV}^{-1}}\right)\left(\frac{f_{0}}{100\,\mathrm{Hz}}\right)\] \[\cdot\sqrt{\frac{\rho}{0.1\,\mathrm{g/cm}^{3}}}\left(\frac{h}{10 \,\mathrm{cm}}\right)^{5}\left(\frac{100\,\mathrm{cm}}{L}\right)^{3}, \tag{25}\] where we have taken \(h\sim R\). The constant of proportionality in Eq. (24) depends on the exact position of the trap within the cavity (and as mentioned above, will be zero if the trap is positioned in a sufficiently symmetric location). As in the DPDM case, this magnetic field will drive the oscillatory motion of the SCP. ## IV Sensitivity In this section, we derive the sensitivity of levitated SCPs to ultralight DM. To do so, we must first discuss the relevant noise sources. This section discusses three primary sources: thermal noise, measurement imprecision noise, and measurement backaction noise. The latter two of these depend on the readout scheme that is used. This work considers a SQUID readout, although similar noise sources exist for other readout schemes. Once we have enumerated the noise sources, we discuss the trade-off between imprecision and backaction noise, controlled by the coupling strength of the readout scheme. We outline two possible choices in this trade-off, one corresponding to a broadband detection scheme and one corresponding to a resonant detection scheme. Finally, we estimate the sensitivity of both these schemes to DPDM and axion DM. ### Noise sources The first relevant noise source in our system is thermal noise. 
By the fluctuation-dissipation theorem, the thermal force noise acting on the SCP is given by \(S_{FF}^{\rm th}=4m\gamma T\), where \(m\) is the mass of the SCP, and \(\gamma\) and \(T\) are the dissipation rate and temperature of the system [74]. To compare with Eqs. (18) and (25), it will be useful to translate this into a noise power spectral density (PSD) for the magnetic field [via Eq. (3)] \[S_{BB}^{\rm th} =\frac{16m\gamma T}{9V^{2}b^{2}}=\frac{8\rho\gamma T}{3m\omega_{0} ^{2}} \tag{26}\] \[\sim 7\times 10^{-39}\,{\rm T}^{2}/{\rm Hz}\left(\frac{1\,{\rm g}}{m} \right)\left(\frac{\rho}{0.1\,{\rm g}/{\rm cm}^{3}}\right)\] \[\cdot\left(\frac{\gamma}{2\pi\cdot 10^{-8}\,{\rm Hz}}\right) \left(\frac{T}{10\,{\rm mK}}\right)\left(\frac{100\,{\rm Hz}}{f_{0}}\right)^{ 2}. \tag{27}\] The second noise source of interest is imprecision noise. As mentioned above, the details of this noise source depend on the readout scheme used. Here, we consider a SQUID readout, in which case imprecision noise arises from flux noise within the SQUID. The DM-induced magnetic field exerts a force on the SCP, causing it to move and distort the local magnetic field. This, in turn, changes the flux measured by the SQUID. Conversely, uncertainty in the measured flux of the SQUID results in uncertainty in the DM-induced magnetic field. Let us denote the internal flux noise of the SQUID by \(S_{\phi\phi}(\omega)\). We can parameterize the coupling between the position of the SCP and the measured flux of the SQUID by a parameter \(\eta\), which can be varied, e.g., by changing the inductance of the pickup coil or its position relative to the SCP [48]. The flux noise of the SQUID is then related to noise in the position of the SCP via \(S_{xx}^{\rm imp}=S_{\phi\phi}/\eta^{2}\). We can convert this position noise into a magnetic field noise PSD (as a function of frequency \(\omega\)) via \[S_{BB}^{\rm imp}(\omega) =\frac{4S_{FF}^{\rm imp}}{9V^{2}b^{2}}=\frac{4S_{xx}^{\rm imp}( \omega)}{9V^{2}b^{2}|\chi(\omega)|^{2}} \tag{28}\] \[=\frac{2\rho S_{\phi\phi}(\omega)}{3m^{2}\omega_{0}^{2}\eta^{2}| \chi(\omega)|^{2}}, \tag{29}\] where \[\chi(\omega)=\frac{1}{m(\omega_{0}^{2}-\omega^{2}-i\gamma\omega)} \tag{30}\] denotes the mechanical susceptibility. Note that while the thermal noise in Eq. (26) is frequency-independent, the imprecision noise in Eq. (29) does depend on frequency. In particular, the imprecision noise becomes significantly suppressed at the trapping frequency \(\omega=\omega_{0}\). The final relevant source of noise is back-action noise. This arises from current noise \(S_{JJ}(\omega)\) within the SQUID. A current \(J\) circulating in the SQUID will generate local magnetic fields which back-react on the SCP with a force \(-\eta J\)[48]. The larger the coupling \(\eta\) is, the stronger the back-reaction on the SCP will be. Therefore, when choosing \(\eta\), there exists a trade-off between imprecision noise and back-action noise. The magnetic field noise PSD associated with back-action noise is given by \[S_{BB}^{\rm back}(\omega)=\frac{2\rho\eta^{2}S_{JJ}(\omega)}{3m^{2}\omega_{0} ^{2}}. \tag{31}\] As with thermal noise, back-action noise is frequency-independent (up to any frequency dependence coming from \(S_{JJ}\); see next section). We also note one additional noise source, namely vibrational noise. External vibrations of the system lead to position noise \(S_{xx}^{\rm vib}\), and as in the case of imprecision noise, this will manifest as noise in the force and magnetic field. 
Vibrational noise is, however, not inherent to the readout scheme, and can be mitigated by various means. As in Ref. [48], the experimental apparatus can be hung from a vibration isolation system to reduce vibrational noise. Further, instead of utilizing just a single levitation apparatus, a second copy can be set up at the center of the same shield. Then both copies will experience the same external vibrations, while only the first will be sensitive to the DM signal. The relative displacement of the two sensors can then be used to isolate the DM signal from external vibrations. We leave a more detailed study of vibrational noise to future work. ### Choice of the coupling \(\eta\) Before we can estimate the size of the imprecision and back-action noise sources, we must decide on an appropriate choice for the coupling \(\eta\). First, let us observe that \(S_{\phi\phi}\) and \(S_{JJ}\) are described by an uncertainty relation \(\sqrt{S_{\phi\phi}S_{JJ}}=\kappa\), where \(\kappa\geq 1\) is referred to as the SQUID's energy resolution [75]. The limiting case \(\kappa=1\) corresponds to the SQL. State-of-the-art SQUIDs can achieve \(\kappa\approx 5\)[76; 77; 78]. We note that SQUIDs typically display \(1/f\) noise at frequencies \(\lesssim 10\,{\rm kHz}\), which would make \(S_{\phi\phi}\), \(S_{JJ}\), and \(\kappa\) frequency-dependent. This \(1/f\) noise can be avoided by up-converting the signal, using for instance a superconducting capacitor bridge transducer [79; 80] or a superconducting inductance bridge transducer [81]. In our subsequent estimates, we will assume that this upconversion can be achieved, so that we can treat \(\kappa\), \(S_{\phi\phi}\) and \(S_{JJ}\) as frequency-independent. In this case, the combination of all noise sources can be written as \[S_{BB}^{\rm tot} =S_{BB}^{\rm th}+S_{BB}^{\rm imp}+S_{BB}^{\rm back}\] \[=\frac{2\rho\left(4m\gamma T+\kappa\tilde{\eta}^{-2}|\chi(\omega) |^{-2}+\kappa\tilde{\eta}^{2}\right)}{3m^{2}\omega_{0}^{2}}, \tag{32}\] where \(\tilde{\eta}=\eta\sqrt[4]{\frac{S_{JJ}}{S_{\phi\phi}}}\). We can vary the relative sizes of these contributions by changing the coupling \(\eta\). As mentioned above, there is, however, a trade-off between imprecision noise and back-action noise, when we do so. Since both back-action and thermal noise are frequency-independent, there is no benefit in decreasing \(\eta\) beyond the point where thermal noise dominates over back-action noise. Thus, (if possible) we should always take \[\tilde{\eta}\geq\sqrt{\frac{4m\gamma T}{\kappa}}. \tag{33}\] On the other hand, at low frequencies \(\omega\ll\omega_{0}\), we have \(\chi\approx 1/(m\omega_{0}^{2})\), and so imprecision noise (for a fixed test mass \(m\)) is frequency-independent as well. Therefore, increasing \(\eta\) beyond the point where back-action noise dominates over imprecision noise is not beneficial at frequencies lower than the trapping frequency.9 In other words, we also want Footnote 9: Increasing \(\eta\) further can still be beneficial at high frequencies \(\omega\gg\omega_{0}\), but the sensitivity in this regime degrades rapidly (see Fig. 2), so we do not consider increasing \(\eta\) further to be a productive way of improving sensitivity. Instead, if one wants to probe higher frequencies, it is better to increase \(\omega_{0}\). \[\tilde{\eta}\leq\frac{1}{\sqrt{|\chi(0)|}}=\sqrt{m}\omega_{0}. 
\tag{34}\] We expect that it should generally be possible to saturate this upper bound by appropriate design of the readout; e.g., see the supplemental material of Ref. [48]. Meanwhile, the coupling can always be decreased by worsening the readout efficiency. Therefore, we expect that \(\tilde{\eta}\) can be varied across the entire range from Eq. (33) to Eq. (34). In our sensitivity calculations below, we consider two choices of \(\eta\), corresponding to the limiting cases in Eq. (33) and (34). We refer to these as the "resonant" and "broadband" choices, respectively, as the former maximizes sensitivity at \(\omega=\omega_{0}\), while the latter maximizes sensitivity at \(\omega\ll\omega_{0}\). Fig. 2 shows sample noise curves for these two different choices, along with the individual noise contributions in each case. ### Projections With these choices for \(\eta\), we can project sensitivity curves for DPDM and axion DM using our proposed setups. The simpler case is to utilize the broadband choice. In this case, good sensitivity to a wide range of masses can be achieved by running a single experiment with a fixed resonant frequency \(\omega_{0}\). From Eq. (32) with the choice of \(\eta\) as in Eq. (34), we can see that, in the regime where imprecision and backaction noise dominate over thermal noise (see Fig. 2), the total noise at low frequencies \(\omega\ll\omega_{0}\) is independent of \(\omega_{0}\). Therefore, our choice of the resonant frequency will not affect our sensitivity at low frequencies (in the DPDM case),10 and so it is best to choose \(\omega_{0}\) as large as possible to minimize the frequency range that suffers the high-frequency suppression. Our projections in Fig. 3 take \(f_{0}=100\,\)Hz. Figure 2: Noise curves for resonant (solid) and broadband (dashed) choices of \(\eta\). The black curves show the total noise, while the colored curves show the thermal (red), back-action (orange), and imprecision (blue) noise contributions. To compute these curves, we use the same parameter values as in Eq. (27), except with \(f_{0}=10\,\)Hz and \(\kappa=5\). Note that in the case of the resonant choice, the back-action noise coincides with the thermal noise. Moreover, thermal noise is independent of the choice of \(\eta\). Therefore, the red curve represents the thermal noise in both cases, as well as the back-action noise in the resonant case. For short integration times \(t_{\rm int}\), the signal-to-noise ratio (SNR) for such an experiment can be determined as \[{\rm SNR}=\frac{B_{\rm DM}^{2}}{S_{BB}^{\rm tot}/t_{\rm int}}, \tag{35}\] where \(B_{\rm DM}\) is the magnetic field signal in Eqs. (17) and (24) for the DPDM case and the axion case, respectively. However, Eqs. (16) and (22) [and so also Eqs. (17) and (24)] are only valid on timescales \(t_{\rm int}\) shorter than the coherence time \(t_{\rm coh}\sim 2\pi/(m_{\rm DM}v_{\rm DM}^{2})\sim 10^{6}/f_{\rm DM}\) of the DM. On timescales longer than this, the amplitudes \(A_{0}^{\prime}\) and \(a_{0}\) in Eqs. (17) and (24) vary stochastically (see footnote 8 for a discussion of their normalization). For \(t_{\rm int}>t_{\rm coh}\), we can then treat each coherence time as an independent experiment. To get the SNR for the full \(t_{\rm int}\), we sum the SNRs from each individual coherence time in quadrature [82] \[{\rm SNR}=\frac{B_{\rm DM}^{2}}{S_{BB}^{\rm tot}/t_{\rm coh}}\cdot\sqrt{\frac{ t_{\rm int}}{t_{\rm coh}}}. \tag{36}\] The blue and orange curves labeled "broadband" in Fig. 
3 show the projected sensitivities to DPDM and axion DM, respectively. These are computed by setting \({\rm SNR}=3\) in Eq. (36), utilizing the broadband choice for \(\eta\) in \(S_{BB}^{\rm tot}\), fixing a trapping frequency \(f_{0}=100\,{\rm Hz}\), and taking an integration time of \(t_{\rm int}=1\,{\rm yr}\). The blue curves in Fig. 3 show the sensitivities which can be achieved with parameters representative of an existing levitation setup, such as in Ref. [48], if they can improve their coupling strength close to the bound in Eq. (34). In principle, the only other modification required for such a setup to be sensitive to ultralight DM is to shift the trap off-center within the shield. The orange curves show the sensitivities that can be achieved with an improved setup. Most notably, this setup considers a SCP that is much larger and hollow, along with a reduced dissipation rate and larger apparatus dimensions. The parameter values used for these setups are shown in Table 1. In the DPDM case, we consider only the sensitivity to the \(z\)-component of \(\mathbf{A}^{\prime}\), for simplicity, but we note that marginally better sensitivity could be achieved by considering all three components. In the axion case, we take the trap to be located at a position \(\mathbf{r}_{0}=(0.7L,0.8L,0.5L)\) within the shield. In the resonant case, we achieve excellent sensitivity near \(\omega_{0}\), but worse sensitivity away from it. We will, therefore, need to perform several experiments of shorter durations, each with a different \(\omega_{0}\). The trapping frequency can be scanned e.g., by varying the current running through the coils, which will change \(b_{0}\). Each such experiment will only effectively probe some small range \(\delta\omega\) of frequency space. We can estimate this width by determining when \(S_{BB}^{\rm tot}\) doubles in size, that is \[S_{BB}^{\rm tot}\left(\omega_{0}+\frac{\delta\omega}{2}\right)=2S_{BB}^{\rm tot }(\omega_{0}) \tag{37}\] (using the resonant choice Eq. (33) for \(\eta\)). Assuming thermal noise and backaction noise dominate over imprecision noise at \(\omega=\omega_{0}\) (see Fig. 2), this implies \[\delta\omega =\frac{4\sqrt{2}\gamma T}{\kappa\omega_{0}} \tag{38}\] \[\sim 2\pi\cdot 0.2\,{\rm Hz}\left(\frac{\gamma}{2\pi\cdot 10^{-8} \,{\rm Hz}}\right)\] \[\cdot\left(\frac{T}{10\,{\rm mK}}\right)\left(\frac{5}{\kappa} \right)\left(\frac{10\,{\rm Hz}}{f_{0}}\right). \tag{39}\] We will, therefore, need to run experiments at several trapping frequencies \(\omega_{i}\), separated from each other by roughly \(\delta\omega_{i}=\delta\omega(\omega_{0}=\omega_{i})\) [so the \(\omega_{i}\) values will be closer together at higher frequencies]. As our sensitivity improves more slowly for \(t_{\rm int}>t_{\rm coh}\) [see Eq. 
(36)], we will fix the integration time of each experiment to be \(t_{\text{int},i}=t_{\rm coh}(m_{\rm DM}=\omega_{i})\).11 If we wish to scan over a total frequency range of \(\Delta\omega\), the total integration time will then be \[\sum_{i}t_{\text{int},i} =\sum_{i}\frac{\kappa\pi}{2\sqrt{2}\gamma Tv_{\text{DM}}^{2}}\delta \omega_{i}\] \[=\frac{\kappa\pi}{2\sqrt{2}\gamma Tv_{\text{DM}}^{2}}\Delta\omega \tag{41}\] \[\sim 1\,\text{yr}\left(\frac{\kappa}{5}\right)\left(\frac{2\pi \cdot 10^{-8}\,\text{Hz}}{\gamma}\right)\] \[\cdot\left(\frac{10\,\text{mK}}{T}\right)\left(\frac{\Delta f}{74 \,\text{Hz}}\right). \tag{42}\] \begin{table} \begin{tabular}{c c c} Parameter & Existing & Improved \\ \hline \hline SCP mass \(m\) & \(10\,\mu{\rm g}\) & \(1\,{\rm g}\) \\ SCP density \(\rho\) & \(10\,{\rm g}/{\rm cm}^{3}\) & \(0.1\,{\rm g}/{\rm cm}^{3}\) \\ Dissipation rate \(\gamma\) & \(2\pi\cdot 10^{-5}\,{\rm Hz}\) & \(2\pi\cdot 10^{-8}\,{\rm Hz}\) \\ Temperature \(T\) & \multicolumn{2}{c}{\(10\,{\rm mK}\)} \\ SQUID energy resolution \(\kappa\) & \multicolumn{2}{c}{\(5\)} \\ \hline Distance from axis \(r\) & \(10\,{\rm cm}\) & \(1\,{\rm m}\) \\ \hline Shield dimension \(L\) & \(10\,{\rm cm}\) & \(1\,{\rm m}\) \\ Coil radius \(R\) & \(1\,{\rm cm}\) & \(10\,{\rm cm}\) \\ Coil separation \(h\) & \(1\,{\rm cm}\) & \(10\,{\rm cm}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters used to compute the sensitivity curves in Fig. 3. One column shows parameter values representative of an existing setup, as in Ref. [48], while the other shows parameter values for an improved setup. The first set of parameters is common to both the DPDM and axion DM scenarios. The parameter \(r\) is relevant in the DPDM scenario [as in Eq. (17)], while the parameters \(L\), \(R\), and \(h\) are relevant in the axion DM scenario [as in Eq. (24)]. The solid red curves in Fig. 3 show the projected sensitivities for this scanning scheme (and the "improved" parameters mentioned above). We scan from \(f_{0}=3\,\text{Hz}\) up to \(77\,\text{Hz}\), so that the total integration time is \(1\,\text{yr}\). The SNR for the experiment with trapping frequency \(\omega_{i}\) is calculated using Eq. (36), with \(t_{\text{int}}=t_{\text{coh}}(\omega_{i})\) and the resonant choice for \(\eta\) in \(S_{\text{BB}}^{\text{tot}}\). The SNRs of the individual experiments are then combined in quadrature,12 i.e. Footnote 12: Adding the SNRs in quadrature is necessary when the DM signal is not coherent from one experiment to the next. Because the experiment with trapping frequency \(f_{i}\) integrates for \(t_{\text{coh}}(\omega_{i})\), the SNRs must be summed in quadrature for \(m_{\text{DM}}\geq\omega_{i}\). In principle, the SNRs can be summed linearly for \(m_{\text{DM}}<\omega_{i}\), but we expect the gain to be marginal as the sensitivity for masses \(m_{\text{DM}}\geq 2\pi\cdot 3\,\text{Hz}\) is dominated by the single experiment with trapping frequency near \(m_{\text{DM}}\). For simplicity, we therefore sum in quadrature for all masses. \[\text{SNR}^{2}=\sum_{i}\text{SNR}_{i}^{2}, \tag{43}\] where the index \(i\) runs over the individual experiments. The sensitivity in Fig. 3 takes a total \(\text{SNR}=3\). For \(3\,\text{Hz}<f_{\text{DM}}<77\,\text{Hz}\), the sensitivity is dominated by the peak sensitivity of the experiment with trapping frequency \(f_{0}=f_{\text{DM}}\). Outside this frequency range, the low/high-frequency tails of several experiments contribute to the combined sensitivity. 
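As a rough cross-check of this bookkeeping, the scan spacing of Eq. (39), the per-step coherence-time integration of Eq. (36), and the total-time estimate of Eq. (42) can be tallied numerically. The sketch below is illustrative only and is not the analysis code behind Fig. 3: it simply rescales the benchmark values quoted in Eqs. (18), (27), (36), and (39) using the "improved" parameters of Table 1, and the printed numbers should be read as order-of-magnitude estimates.

```r
# Rough order-of-magnitude bookkeeping (not the authors' analysis code) for the
# resonant scan, rescaling the benchmark values quoted in Eqs. (18), (27), (39),
# and (42) for the "improved" parameters of Table 1.
gamma_hz <- 1e-8   # dissipation rate gamma / (2*pi), in Hz
T_mK     <- 10     # temperature, in mK
kappa    <- 5      # SQUID energy resolution
m_g      <- 1      # SCP mass, in g
rho      <- 0.1    # SCP effective density, in g/cm^3
r_m      <- 1      # trap offset from the shield axis, in m
eps      <- 1e-8   # kinetic mixing, chosen only for illustration

# Thermal magnetic-field noise PSD, rescaled from the Eq. (27) benchmark [T^2/Hz]
S_th    <- function(f0) 7e-39 * (1 / m_g) * (rho / 0.1) * (gamma_hz / 1e-8) *
                         (T_mK / 10) * (100 / f0)^2
# DPDM signal field, rescaled from the Eq. (18) benchmark [T]
B_dpdm  <- function(f)  5e-20 * (eps / 1e-8) * (f / 100) * r_m
# Width of one resonant scan step, rescaled from the Eq. (39) benchmark [Hz]
df_step <- function(f0) 0.2 * (gamma_hz / 1e-8) * (T_mK / 10) * (5 / kappa) * (10 / f0)
# Coherence time t_coh ~ 10^6 / f_DM [s]
t_coh   <- function(f)  1e6 / f

# On-resonance SNR of one step integrated for one coherence time, as in Eq. (36),
# taking back-action noise equal to thermal noise (the "resonant" eta choice)
snr_step <- function(f0) B_dpdm(f0)^2 / (2 * S_th(f0) / t_coh(f0))

# Build the scan grid from 3 Hz to 77 Hz and tally the total integration time
freqs <- c()
f <- 3
while (f < 77) { freqs <- c(freqs, f); f <- f + df_step(f) }
cat(sprintf("scan steps: %d, total time: %.1f yr, per-step SNR at 10 Hz: %.1f\n",
            length(freqs), sum(t_coh(freqs)) / 3.15e7, snr_step(10)))
```

Running this grid gives a total integration time of order one year, consistent with the estimate in Eq. (42) for \(\Delta f = 74\,\text{Hz}\).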
The dashed red curves also show the sensitivities of a single experiment with \(f_{0}=10\,\text{Hz}\). In Fig. 3, we also show existing constraints in various shades of grey.13 The DPDM constraints include limits from: unshielded magnetometer measurements by the SNIPe Hunt collaboration [44]; a synchronized quantum sensor network (SQSN) inside a shielded room [43]; non-observation of CMB-photon conversion into (non-DM) dark photons by the FIRAS instrument [86]; heating of the dwarf galaxy Leo T [90]; and resonant conversion of DPDM during the dark ages [91]. The axion constraints include limits from: SNIPe Hunt; the CAST helioscope search for solar axions [30]; non-observation of gamma rays in coincidence with SN1987A [92]; and X-ray observations of the quasar H1821+643 from the Chandra telescope [93]. Laboratory constraints (SNIPe Hunt, SQSN, and CAST) are shown in darker shades of grey, while astrophysical/cosmological constraints are shown in lighter shades. Footnote 13: Several of these limits were acquired from Refs. [84, 85]. See also Refs. [88, 85, 86, 87] for other limits in this mass range which are not shown here, and Ref. [89] for a brief discussion of the caveats regarding those limits. ## V Discussion In this work, we explored the prospect of utilizing magnetically levitated superconductors to search for ultralight DM at frequencies below kHz. If ultralight DM couples to electromagnetism, it can source an oscillating magnetic field inside an experimental apparatus. Various experimental methods exist to probe such magnetic field signals at high frequencies, but few existing or proposed experiments are sensitive to DM with masses corresponding to frequencies \(f_{\text{DM}}\lesssim\text{kHz}\). We showed that levitated superconductors can function as excellent magnetometers, which are sensitive to signals in the Hz to kHz frequency range. This makes them well suited to detect ultralight DM in the \(4\times 10^{-15}\lesssim m_{\text{DM}}\lesssim 4\times 10^{-12}\,\text{eV}\) mass range. A superconductor immersed in a magnetic field configuration will tend to settle at the point of lowest magnetic field. This fact can trap a SCP at the center of a quadrupole magnetic field. Ultralight DM can source a nonzero oscillating magnetic field signal near the trap if such a trap is located off-center within a magnetic shield. This DM-sourced field can perturb the equilibrium position of the SCP, leading to oscillatory motion. If the frequency of this magnetic field signal matches the trapping frequency (typically in the Hz to kHz range), then the motion can be resonantly enhanced. This makes levitated superconductors unique among axion and dark-photon experiments in that they can resonantly search for these DM candidates for masses \(m_{\text{DM}}\lesssim 10^{-12}\,\text{eV}\). We discussed three primary noise sources for a levitated SCP experiment: thermal noise, imprecision noise, and back-action noise. The first is fixed by the experiment's dissipation rate and temperature, while the parameters of the readout system fix the latter two. In particular, a trade-off exists between imprecision and back-action noise, which allows for two different operation schemes of a levitated SCP experiment. In the "broadband" scheme (the blue and orange curves in Fig. 3), sensitivity to a wide range of frequencies is maximized by equating back-action noise with below-resonance imprecision noise. In this case, a single experiment run for a long duration can achieve excellent sensitivity at many DM masses. 
In the "resonant" scheme (the red curves in Fig. 3), sensitivity on-resonance is maximized by equating thermal and back-action noise. In this case, several shorter-duration experiments are required to scan a large range of DM masses. Fig. 3 shows that, with a strongly coupled readout, existing levitation experiments (blue curves) can already achieve sensitivity to DPDM comparable to other laboratory experiments in this mass range. A dedicated setup (orange and red curves) using larger hollow spheres, a lower dissipation rate, and a larger apparatus can achieve even better sensitivity. In particular, in the DPDM case, it can exceed the existing laboratory constraints and approach the best astrophysical heating constraints, while in the axion DM case, it can be the best laboratory probe and approach constraints from SN1987A. Since these astrophysical constraints can depend quite sensitively on the modeling of complex systems, it is valuable to have complementary laboratory probes. Both the broadband (orange) and scanning (red) schemes enable good sensitivities for this improved setup. While our projections already show that levitated superconductors can be promising ultralight DM detectors, this technique could potentially be improved in several ways. Firstly, the thermal noise floor can be decreased by further lowering the temperature of the system. Doing so will likely not affect the sensitivity of our broadband scheme, as thermal noise is typically subdominant, but it can improve the sensitivity of the resonant scheme. We also note that utilizing an array of sensors and/or a squeezed readout can further improve the sensitivity and scan rate of the experiment, which demands further investigation [94, 95, 96]. Secondly, different geometries of the levitation apparatus can be considered. Our projections in Fig. 3 assume a spherical SCP levitated between circular anti-Helmholtz-like coils, which would result in \(b_{xx}=b_{yy}=-\frac{1}{2}b_{zz}\). By utilizing elliptic coils, the degeneracy between \(b_{xx}\) and \(b_{yy}\) can be broken, potentially allowing for frequency hierarchies \(b_{xx}\ll b_{zz}\). This would enable the apparatus to probe lower frequencies while maintaining a small displacement of the equilibrium position due to gravity [see Eq. (8)]. A coaxial levitation geometry could also enable a hierarchy between the radial and axial frequencies [47]. Additionally, the SCP shape can be varied to decrease the effective density even further below the densities used in the improved setup of Fig. 3. For instance, a SCP in the shape of a ring can have a much smaller mass than a sphere of the same effective volume [97, 98]. Finally, a larger signal can be created in the axion case by utilizing a larger static magnetic field for axion-photon conversion. In this work, we have assumed that the magnetic field allowing the axion DM to convert is the same magnetic field that traps the SCP. However, an addi Figure 3: Sensitivity of levitated superconductors to DPDM (left) and axion DM (right). The blue curves show the sensitivity achievable with parameters representative of an existing setup (with increased readout efficiency), as in Ref. [48]. In contrast, the orange and red curves show the sensitivity of a new setup with improved parameters, including a larger hollow SCP. The parameter values for both setups are shown in Table 1. 
The blue and orange curves consider a single experiment conducted for \(t_{\text{int}}=1\,\text{yr}\), using a trapping frequency of \(f_{0}=100\,\text{Hz}\) and the “broadband” choice of coupling \(\eta\) (see main text). The dashed red curves represent a single experiment conducted for \(t_{\text{int}}=t_{\text{coh}}\sim 30\,\text{hr}\), using a trapping frequency of \(f_{0}=10\,\text{Hz}\) and the “resonant” choice of \(\eta\). The solid red curves show the aggregate sensitivity of scanning this resonant setup over many trapping frequencies from \(f_{0}=3\,\text{Hz}\) to \(77\,\text{Hz}\) (so that the total integration time is \(1\,\text{yr}\)). We also show existing constraints in various shades of grey (see main text for descriptions). Laboratory constraints (SNIPE Hunt, SQSN, and CAST) are shown in darker shades of grey, while astrophysical/cosmological constraints are shown in lighter shades. These sensitivity curves demonstrate that existing levitation setups with improved readout efficiencies are comparable to other laboratory probes of DPDM. In addition, a focused, dedicated setup can achieve the leading sensitivity amongst such probes of both DPDM and axion DM. tional magnetic field can be applied, which enhances the axion-photon conversion rate without affecting the trapping physics. The calculation in Appendix B shows that the axion signal in the vicinity of the trap is affected by the static magnetic field everywhere inside the shield (not just in the vicinity of the trap). It is, therefore, plausible that a large static magnetic field can be sourced at the opposite end of the shield so that it does not significantly affect the operation of the levitated SCP apparatus, but it has a large effect on the axion magnetic field signal. We leave a detailed study of this idea to future work. ###### Acknowledgements. We thank Asher Berlin, Dan Carney, Yifan Chen, Roni Harnik, Junwu Huang, Rafael Lang, Jackob Taylor, and Yue Zhao for their helpful discussions. Z.L. and S.K. are supported in part by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359. Z.L. is supported in part the DOE grant DE-SC0011842. Z.L. and S.K. acknowledge the support of the Aspen Center for Physics, supported by National Science Foundation grant PHY-2210452, where part of this work was completed. G.H. acknowledges support from the Swedish Research Council (Grant No. 2020-00381). The code used for this research is made publicly available through Github [62] under CC-BY-NC-SA.
2303.16299
Comparison of Methods that Combine Multiple Randomized Trials to Estimate Heterogeneous Treatment Effects
Individualized treatment decisions can improve health outcomes, but using data to make these decisions in a reliable, precise, and generalizable way is challenging with a single dataset. Leveraging multiple randomized controlled trials allows for the combination of datasets with unconfounded treatment assignment to better estimate heterogeneous treatment effects. This paper discusses several non-parametric approaches for estimating heterogeneous treatment effects using data from multiple trials. We extend single-study methods to a scenario with multiple trials and explore their performance through a simulation study, with data generation scenarios that have differing levels of cross-trial heterogeneity. The simulations demonstrate that methods that directly allow for heterogeneity of the treatment effect across trials perform better than methods that do not, and that the choice of single-study method matters based on the functional form of the treatment effect. Finally, we discuss which methods perform well in each setting and then apply them to four randomized controlled trials to examine effect heterogeneity of treatments for major depressive disorder.
Carly Lupton Brantner, Trang Quynh Nguyen, Tengjie Tang, Congwen Zhao, Hwanhee Hong, Elizabeth A. Stuart
2023-03-28T20:43:00Z
http://arxiv.org/abs/2303.16299v2
Comparing Machine Learning Methods for Estimating Heterogeneous Treatment Effects by Combining Data from Multiple Randomized Controlled Trials ###### Abstract Individualized treatment decisions can improve health outcomes, but using data to make these decisions in a reliable, precise, and generalizable way is challenging with a single dataset. Leveraging multiple randomized controlled trials allows for the combination of datasets with unconfounded treatment assignment to improve the power to estimate heterogeneous treatment effects. This paper discusses several non-parametric approaches for estimating heterogeneous treatment effects using data from multiple trials. We extend single-study methods to a scenario with multiple trials and explore their performance through a simulation study, with data generation scenarios that have differing levels of cross-trial heterogeneity. The simulations demonstrate that methods that directly allow for heterogeneity of the treatment effect across trials perform better than methods that do not, and that the choice of single-study method matters based on the functional form of the treatment effect. Finally, we discuss which methods perform well in each setting and then apply them to four randomized controlled trials to examine effect heterogeneity of treatments for major depressive disorder. treatment effect heterogeneity, combining data, personalized medicine, machine learning ## 1 Introduction When tailoring treatment regimens to individual patients, one must strive to understand how different treatment options might affect the specific patient based on their characteristics or context. Rather than using a one-size-fits-all approach, clinicians and researchers are turning more towards personalized medicine with the goal of improving clinical outcomes. In this setting, the focus of estimation becomes conditional average treatment effects, i.e., how well the treatment is expected to work conditional on the person's known characteristics. The benchmark for estimating treatment effects in an unbiased manner is most often a randomized controlled trial (RCT). In an RCT, participants are randomly assigned to treatment or control, therefore ensuring unconfounded treatment assignment and unbiased treatment effect estimates in the given sample. However, these trials often have sample sizes that are large enough to detect main effects but lack power to estimate heterogeneous treatment effects (Enderlein, 1988). To overcome these specific issues, researchers have started combining information from multiple studies to improve treatment effect estimation. Multiple studies allow for larger sample sizes and at times a more representative sample of the target population. In the setting with multiple RCTs, meta-analysis or hierarchical models are common techniques to combine studies and estimate treatment effects (Debray et al., 2015; Seo et al., 2021). These approaches often do not explicitly target conditional average treatment effects though, and often only use aggregate-level data which makes it challenging to estimate treatment effects conditional on individual-level characteristics. Furthermore, meta-analysis is commonly applied within a parametric framework, which is highly interpretable but requires prespecification of effect moderators and distributional assumptions for parameters. Non-parametric approaches are worth exploring in this setting because they allow for high levels of flexibility in outcome and treatment effect functions. 
Relationships between covariates and treatment effect can be complex and non-linear in reality, and non-parametric machine learning methods can better handle those scenarios. Many non-parametric approaches exist to estimate heterogeneous treatment effects (Kunzel et al., 2019; Athey et al., 2019; Green and Kern, 2012; Kennedy, 2020; Nie and Wager, 2021; Dandl et al., 2022); however, these approaches have generally been developed only for the single-study setting. Several of the common approaches are discussed in the section to follow (3.1), and we subsequently extend these methods for use in multiple studies. Recent research has investigated a few non-parametric approaches for the multiple study setting, mostly geared towards combining data from one RCT with a large observational dataset (Yang et al., 2020, 2022; Kallus et al., 2018; Rosenman et al., 2020). In that work, the focus is often on estimating the bias present in the observational data to determine the level at which the observational study estimates can be combined with the RCT estimates. These methods are therefore not as straightforward to use in the multiple RCT setting. With multiple RCTs, each individual trial has the benefit of unconfounded treatment assignment, but significant cross-trial heterogeneity could still exist due to both observed and unobserved factors. The focus in this case is no longer de-biasing one of the datasets, but instead determining the amount of heterogeneity present and how to account for it. Brantner and colleagues Brantner et al. (2023) wrote a comprehensive review of methods geared towards combining datasets to estimate treatment effect heterogeneity. That review included approaches for multiple RCTs; the most common were individual participant-level data one-stage meta-analyses (Debray et al., 2015). One alternative approach focuses on combining RCTs to estimate conditional average treatment effects in a non-parametric framework (Tan et al., 2021). That work by Tan and colleagues was done in the federated learning setting, in which individual-level data could not be shared across study sites and instead only aggregate results or models could be shared. In the sections to follow, we tailor Tan et al.'s method to when individual-level data can be shared across trials. To our knowledge, this paper is the first to describe and compare machine learning options for estimating heterogeneous treatment effects using data from multiple RCTs, in the setting in which all data can be shared across trials. Because not many methods exist to do this, we demonstrate several options for extending current methods for single studies to the multiple-study setting. We also build off of Tan et al.'s Tan et al. (2021) approach by adapting it to the case when individual-level data can be shared across trials. We conduct extensive simulations with varying data generation parameters to determine which of the single-study and aggregation methods perform best depending on different amounts of cross-trial heterogeneity in the effects. We then apply the approaches to a set of four RCTs of depression treatments and discuss the variability in estimates across the approaches and potential substantive conclusions that can be made. ## 2 Notation The estimand considered in this paper is the conditional average treatment effect (CATE), defined under Rubin's potential outcomes framework (Rubin, 1974). 
Let \(A\) denote a binary treatment indicator (often treatment versus control), \(\mathbf{X}\) represent covariates, and \(Y\) represent a continuous outcome. Under Rubin's framework, \(Y(0)\) and \(Y(1)\) denote the potential outcomes under control and treatment, respectively. Let \(S\) be a categorical variable representing the trial in which the individual participated and ranging from \(1\) to \(K\), where \(K\) is the total number of RCTs. Finally, represent the probability of receiving treatment given covariates and trial membership (propensity score) as \(\pi_{s}(\mathbf{X})=P(A=1|\mathbf{X},S=s)\). With a continuous outcome, the CATE is \[\tau(\mathbf{X})=E(Y(1)|\mathbf{X})-E(Y(0)|\mathbf{X}). \tag{1}\] In this paper, we note that the goal estimand is this "universal" CATE (1) built off of potential outcomes that are not dependent upon study membership. However, many methods in the following sections target a study-specific CATE: \[\tau_{s}(\mathbf{X})=E(Y(1)|\mathbf{X},S=s)-E(Y(0)|\mathbf{X},S=s). \tag{2}\] To identify the estimand when combining data across RCTs, many of the standard causal inference assumptions are required, including the Stable Unit Treatment Value Assumption (SUTVA) within each RCT. Other standard assumptions include: unconfoundedness (Assumption 1), consistency (Assumption 2) and positivity (Assumptions 3 and 4) [Tan et al., 2021]. Assumption 2 varies slightly depending on the estimand; under the universal CATE estimand (Equation 1), we assume overall consistency, while under the study-specific estimand (Equation 2), we assume consistency within each study. Assumption 4, which requires that any \(\mathbf{X}\) is possible to be observed in all studies, can be relaxed depending on the method. **Assumption 1**: \(\{Y(0),Y(1)\}\perp\!\!\!\perp A\mid\mathbf{X},S=s\;\;\text{for all studies $s$}\)_._ **Assumption 2**: \(Y=AY(1)+(1-A)Y(0)\;\;\text{almost surely (in each study)}\)_._ **Assumption 3**: _There exists a constant \(c>0\) such that \(c<\pi_{s}(\mathbf{x})<1-c\) for all studies \(s\) and for all \(\mathbf{x}\) values in each study._ **Assumption 4**: _(Can be relaxed) There exists a constant \(d>0\) such that \(d<P(S=s|\mathbf{X}=\mathbf{x})<1-d\) for all \(\mathbf{x}\) and \(s\)._ ## 3 Methods This paper includes methods developed for treatment effect estimation in a single study and aggregation approaches that apply these methods to multiple studies. We chose some methods based off of those explored in Tan et al.'s Tan et al. (2021) paper that have a similar goal to the present work. This section discusses three single-study methods and five aggregation options that apply the single-study methods to the multi-study setting. ### Single-Study Methods For a given RCT, many machine learning methods have been developed for CATE estimation. The single-study methods that exist can be grouped into multiple categories, as delineated by Brantner et al Brantner et al. (2023). For ease of comparison, three approaches are included that are common in practice, user-friendly, and have been shown to be effective in previous literature: the S-learner, X-learner (Kunzel et al., 2019), and causal forest (Athey et al., 2019). The first two approaches are multi-step procedures that involve first estimating the conditional outcome mean under treatment or control and then combining the two into one CATE function, while the causal forest involves tree-based partitioning of the covariate space by treatment effect. 
In this paper, we use random forests as the base learners for both the S-learner and the X-learner to best compare with the causal forest, which is inherently forest-based. #### 3.1.1 S-Learner The first single-study machine learning method used in this paper is called the "S-learner" (Kunzel et al., 2019). This method is classified as a "meta-learner" in that it combines base learners (i.e., regression models) of any form in a specific way (Kunzel et al., 2019). The S-learner uses a base learner (i.e., a random forest) to estimate a conditional outcome mean function given observed covariates and assigned treatment: \[\mu(\mathbf{X},A)=E(Y|\mathbf{X},A).\] The conditional outcome mean function in this approach is not specific to treatment group, but instead treatment is included together with the covariates as features to be used by the random forest. The CATE can then be directly estimated by plugging in \(0\) and \(1\) for the treatment indicator to obtain predicted outcomes under treatment and control for each individual and calculate \[\hat{\tau}(\mathbf{X})=\hat{\mu}(\mathbf{X},1)-\hat{\mu}(\mathbf{X},0).\] #### 3.1.2 X-Learner The second approach considered here is another meta-learner called the "X-learner" (Kunzel et al., 2019). The X-learner takes a similar approach as the S-learner by modeling the conditional outcome mean functions before estimating the CATE directly. However, rather than estimating one outcome mean function for \(Y(1)\) and \(Y(0)\) simultaneously, the X-learner estimates two functions separately and then imputes treatment effects for each treatment group. Specifically, the X-learner involves three steps. First, the conditional outcome mean functions are estimated using base learners (in this case, random forests) like in the S-learner, but separately by treatment group: \[\mu_{1}(\mathbf{X})=E(Y(1)|\mathbf{X})\;\;\text{ and }\;\;\mu_{0}(\mathbf{X})=E(Y(0)|\mathbf{X}).\] Next, the unobserved potential outcomes for individuals in the treatment and control groups are predicted using those models: \[\tilde{D}_{i:A=1}=Y_{i:A=1}-\hat{\mu}_{0}(\mathbf{X}_{i:A=1})\ \ \ \text{and}\ \ \ \tilde{D}_{i:A=0}=\hat{\mu}_{1}(\mathbf{X}_{i:A=0})-Y_{i:A=0}.\] Then \(\tilde{D}\) is regressed on \(\mathbf{X}\) to estimate \(\tau(\mathbf{X})\). This is done within each treatment group separately, resulting in two estimates, labeled \(\hat{\tau}_{1}(\mathbf{X})\) and \(\hat{\tau}_{0}(\mathbf{X})\). Finally, these are combined to obtain one CATE estimate: \[\hat{\tau}(\mathbf{X})=g(\mathbf{X})\hat{\tau}_{1}(\mathbf{X})+(1-g(\mathbf{X}))\hat{\tau}_{0}( \mathbf{X}),\] where the weight \(g(\mathbf{X})\) is often an estimate of the propensity score (the case in this paper) or can be chosen otherwise (Kunzel et al., 2019). #### 3.1.3 Causal Forest The third single-study approach is the causal forest (Athey et al., 2019). The causal forest is similar to a random forest, but the focal estimand is the treatment effect itself, rather than the outcome for a given individual. The causal forest is based off of a causal tree, which involves recursive partitioning of the covariates to best split based on treatment effect heterogeneity. Here, the treatment effect is estimated as the difference in average outcomes between the treatment and control group individuals within leaves. From there, the causal forest is the weighted aggregation of many causal trees. 
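To make the three single-study learners concrete, the sketch below shows one way they could be fit to a single trial's data in R, using ranger for the random-forest base learners and grf for the causal forest. This is an illustrative sketch rather than the implementation used in the paper (which relies on the causalToolbox and grf packages listed in Section 4); the data frame `dat`, with covariates X1-X5, a 0/1 treatment A, and outcome Y, is an assumed placeholder.

```r
# Illustrative sketch of the three single-study learners on one trial's data
# (not the authors' implementation). `dat` is an assumed data frame with
# covariates X1-X5, a 0/1 treatment A, and a continuous outcome Y for one RCT.
library(ranger)
library(grf)

covs <- paste0("X", 1:5)

## S-learner: one random forest with A included as an ordinary feature
s_fit <- ranger(Y ~ ., data = dat[, c(covs, "A", "Y")])
pred_with_A <- function(a) {
  newd <- dat[, c(covs, "A")]
  newd$A <- a
  predict(s_fit, data = newd)$predictions
}
tau_S <- pred_with_A(1) - pred_with_A(0)

## X-learner: group-specific outcome forests, imputed effects, then a blend
fit1 <- ranger(Y ~ ., data = dat[dat$A == 1, c(covs, "Y")])
fit0 <- ranger(Y ~ ., data = dat[dat$A == 0, c(covs, "Y")])
d1 <- dat[dat$A == 1, covs]
d1$D <- dat$Y[dat$A == 1] - predict(fit0, data = d1)$predictions
d0 <- dat[dat$A == 0, covs]
d0$D <- predict(fit1, data = d0)$predictions - dat$Y[dat$A == 0]
tau1_fit <- ranger(D ~ ., data = d1)
tau0_fit <- ranger(D ~ ., data = d0)
g <- mean(dat$A)  # propensity score, constant within a randomized trial
tau_X <- g * predict(tau1_fit, data = dat[, covs])$predictions +
  (1 - g) * predict(tau0_fit, data = dat[, covs])$predictions

## Causal forest (grf); honesty switched off, as in the primary simulations
cf <- causal_forest(X = as.matrix(dat[, covs]), Y = dat$Y, W = dat$A,
                    honesty = FALSE)
tau_CF <- predict(cf)$predictions
```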
One potential challenge with causal forests is that bias could occur when there is overlap between the data used to form the trees and data used to estimate the treatment effects within leaves. A solution to that problem, called "honesty", has been proposed (Wager and Athey, 2018).This concept ensures that for every individual involved in creating the tree, their outcome is used either for splitting the tree or estimating the treatment effect within a leaf, but not both. Honesty has been used some in the literature, but there is not a widespread conclusion as to whether trees should be fit with or without honesty depending on the scenario. Dandl and colleagues compared honesty versus adaptive (not honest) forests in their simulations including causal forests and found that in their setting that was meant to represent an RCT, the adaptive forests performed better (Dandl et al., 2022). Additionally, honesty requires large sample sizes. Thus, we do not include honesty in the causal forests in the primary simulations but do investigate it in a second round of method comparisons. ### Aggregation Methods In many contexts, there are multiple RCTs available that compare the same two treatments. It is then worth considering methods that allow combining across trials. When aggregating to the multi-study level, the question becomes: how much does the treatment effect vary based on study membership? This variability can range along a continuum, where on one end is the possibility that the trials are all very homogeneous in terms of the CATE, meaning that participants in trial \(j\) and in trial \(k\) who have the same covariate values would have the same treatment effect. At the other extreme, individuals with the same covariates but in different trials could have completely different treatment effects. These differences can be due to heterogeneity in the sites in which the trials were conducted, heterogeneity in trial procedures (including the treatment or control conditions themselves), heterogeneity in trial samples, or other reasons. The aggregation methods to follow take different approaches to incorporating trial membership into the treatment effect estimation, ranging from assuming trial membership does not matter at all, to allowing it to matter just as much as any other characteristic. #### 3.2.1 Complete Pooling A complete pooling approach is very straightforward: the researcher simply takes all data from each of the \(K\) RCTs, creates a single dataset, and then fits one of the three previously described methods (S-learner, X-learner, or causal forest) to the pooled dataset. This approach is quick and easy to do, but requires many assumptions. Namely, this approach assumes a high level of homogeneity across trials and specifically that the CATE function is shared across studies. This method is included because it represents a naive comparison point and because it provides universal CATE estimates (i.e., not study-specific). #### 3.2.2 Pooling with Trial Indicator An alternative pooling approach is to incorporate trial membership in the models but essentially still perform the pooling as before. Here, all of the individual data from each RCT is combined into one comprehensive dataset, but a categorical variable is included that represents the trial in which the individual participated. Then, the researcher can apply one of the single-study approaches to this full dataset, allowing for all of the covariates, including trial membership, to be involved in the treatment effect function. 
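The two pooling strategies can be sketched in the same style (again illustrative rather than the authors' code); `alldat` is an assumed data frame stacking all K trials, with a trial-membership column S alongside X1-X5, A, and Y.

```r
# Illustrative sketch of the two pooling strategies, shown with the causal
# forest (not the authors' code). `alldat` is an assumed data frame stacking
# all K trials, with covariates X1-X5, treatment A, outcome Y, and trial label S.
library(grf)

covs <- paste0("X", 1:5)

## Complete pooling: trial membership is ignored entirely
cf_pool  <- causal_forest(X = as.matrix(alldat[, covs]),
                          Y = alldat$Y, W = alldat$A, honesty = FALSE)
tau_pool <- predict(cf_pool)$predictions

## Pooling with trial indicator: dummy-coded trial membership enters as features
S_dum  <- model.matrix(~ factor(S) - 1, data = alldat)
cf_ind <- causal_forest(X = cbind(as.matrix(alldat[, covs]), S_dum),
                        Y = alldat$Y, W = alldat$A, honesty = FALSE)
tau_ind <- predict(cf_ind)$predictions  # trial-specific CATE estimates
```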
In this way, if trial membership is important for estimating effects, estimates should be somewhat informed by trial membership; otherwise, the treatment effect estimates should be similar across trials. While the previous complete pooling approach gives estimates that were not trial-specific, this approach yields trial-specific CATE estimates. #### 3.2.3 Ensemble Approach The next approach is based off of Tan and colleagues' (2021) methods for federated learning, originally developed for scenarios in which individual data cannot be shared across trial sites. Their original approach fits trial-specific models and then applies those models to data from a single coordinating site to derive an ensemble. We propose an adaptation of Tan's approach for settings where individual-level data from all trials are available to the analyst. This adaptation of Tan et al.'s approach involves three steps. 1. First, the researcher builds localized models for the CATE within each trial, using one of the three single-study methods previously discussed (S-learner, X-learner, or causal forest). 2. Next, they apply these localized models to each individual across all of the RCTs to get for each individual their _trial-specific CATE estimates_, i.e., the predicted effects had the individual been part of study 1, study 2 and so on. For \(K\) studies with a total of \(N\) individuals in all studies combined, there will be \(K\) trial-specific CATE models. Once each of these models are applied to all \(N\) data points, every individual will have \(K\) different estimates of their CATE. So there will ultimately be \(N*K\) CATE estimates in what Tan et al. define as an "augmented" dataset. The difference between the second step here and what Tan et al. did is that we apply the study-specific models to all data points in all trials, rather than having to restrict to a single coordinating site. 3. The third and final step is to fit an ensemble model to the augmented dataset that has CATE estimates for every individual crossed with every trial. In this ensemble model, the response variable is the CATE estimate, and the predictors are the individual covariates and a categorical variable indicating the local model that had been used to compute the CATE estimate. We use three different options for this final ensemble model fit to the augmented dataset: a regression tree, a random forest, and a lasso regression. The regression tree and random forest were explored in Tan et al.'s paper (2021), while the lasso regression was added to provide a parametric comparison point. The resulting functions from these ensemble approaches are trial-specific estimates of the CATE; however, they have been adapted based on the CATEs from the other trials. Therefore, this method allows for trial heterogeneity but incorporates information across trials to hopefully improve the model from each trial. #### 3.2.4 IPD Meta-Analysis As a comparison point in the simulations to follow, we also include an individual patient-level data (IPD) meta-analysis with a random intercept for trial membership. This method serves as a parametric comparison to the primarily non-parametric approaches outlined above. A meta-analysis does not employ a single-study method like the S-learner, X-learner, or causal forest; instead, all of the data is pooled together and trial-level relationships can be included as fixed or random effects. 
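Returning to the ensemble procedure of Section 3.2.3, its three steps can be sketched as follows before the meta-analysis model is specified. The sketch is illustrative rather than the authors' implementation; it reuses the assumed stacked data frame `alldat`, takes the causal forest as the localized learner, and uses a random forest (ranger) as the ensemble model.

```r
# Illustrative sketch of the ensemble aggregation of Section 3.2.3 (not the
# authors' implementation), using the causal forest as the trial-specific
# learner and a random forest (ranger) as the ensemble model.
library(grf)
library(ranger)

covs   <- paste0("X", 1:5)
trials <- sort(unique(alldat$S))

## Step 1: localized CATE model within each trial
local_fits <- lapply(trials, function(s) {
  d <- alldat[alldat$S == s, ]
  causal_forest(X = as.matrix(d[, covs]), Y = d$Y, W = d$A, honesty = FALSE)
})

## Step 2: apply every local model to every individual, giving an "augmented"
## dataset with N * K rows of trial-specific CATE estimates
aug <- do.call(rbind, lapply(seq_along(trials), function(k) {
  data.frame(alldat[, covs],
             model   = factor(trials[k], levels = trials),
             tau_hat = predict(local_fits[[k]],
                               newdata = as.matrix(alldat[, covs]))$predictions)
}))

## Step 3: ensemble model regressing the local estimates on the covariates plus
## the local-model indicator (a regression tree via rpart or a lasso via glmnet
## could be substituted here, as in the paper)
ens_fit <- ranger(tau_hat ~ ., data = aug)
## Trial-s-specific CATE for new covariates xnew (a one-row data frame):
## predict(ens_fit, data = cbind(xnew, model = factor(s, levels = trials)))$predictions
```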
The meta-analysis model can be parametrized in many different ways; in this paper we set it up to mostly mimic the setup of the first scenario in the simulation to follow: \[Y=(\alpha_{0}+a_{s})+(\alpha_{1}+b_{s})X_{1}+\alpha_{2}X_{2}+\alpha_{3}X_{3}+ \alpha_{4}X_{4}+(\zeta+z_{s})A+(\theta+t_{s})X_{1}A+\epsilon.\] In this model, we allow the intercept to include a fixed component (\(\alpha_{0}\)) and a random component by study (\(a_{s}\sim N(0,\sigma_{a}^{2})\)), and our residual error is \(\epsilon\sim N(0,\sigma^{2})\). The fixed effects are \(\boldsymbol{\alpha}=\{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\}\), the coefficients relating the covariates to the outcome; \(\zeta\), the coefficient for treatment; and \(\theta\), the coefficient of the interaction between treatment and a moderator \(X_{1}\) (Seo et al., 2021). The random effects by study are \(b_{s}\sim N(0,\sigma_{b}^{2})\), the random slope for the covariate \(X_{1}\); \(z_{s}\sim N(0,\sigma_{z}^{2})\), the random slope for treatment; and \(t_{s}\sim N(0,\sigma_{t}^{2})\), the random slope for the treatment-\(X_{1}\) interaction term. From here, the estimate of the conditional average treatment effect can be calculated as \(\hat{\tau}_{s}(\boldsymbol{X})=(\hat{\zeta}+\hat{z}_{s})+(\hat{\theta}+\hat{t}_{s})X_{1}\). The meta-analysis framework assumes that the CATE function is shared across studies, but that the mean potential outcome under control can differ across studies. Notably, this functional form of the CATE assumes linear relationships, and one must prespecify all variables that might be relevant to the main effect of the covariates and to the treatment effect. ## 4 Simulation Setup To compare both the single-study and aggregation methods, we performed a simulation study, simulating data from multiple randomized controlled trials and changing parameter values to compare which methods achieve the lowest mean squared error (MSE) between the estimated and true individual CATEs. Because there were three single-study methods and five aggregation methods being compared along with meta-analysis, there were \(3*5+1=16\) total combinations of methods applied to each simulated dataset. ### Data Generating Mechanism The data generating mechanism is based somewhat off of Tan et al. (2021) and Kunzel et al. (2019) since both used methods similar to those compared here. The potential outcomes are generated using the following model: \[Y_{i}(a)=m(\mathbf{x}_{i},s_{i})+\frac{2a-1}{2}*\tau(\mathbf{x}_{i},s_{i})+\epsilon_{i} \tag{3}\] where \(m(\mathbf{x}_{i},s_{i})\) represents the outcome mean conditional on covariates and trial, and \(\tau(\mathbf{x}_{i},s_{i})\) is the CATE. In the main setting for the data generation, we employed two options for \(m\) and \(\tau\). The first setup (1a) involves a linear \(m\) and piecewise linear \(\tau\): \[m(\mathbf{x},s)=x_{1}/2+\sum_{j=2}^{4}x_{j}+\beta_{s}+\delta_{s}*x_{1}\text{ and }\tau(\mathbf{x},s)=x_{1}*I(x_{1}>0)+\beta_{s}+\delta_{s}*x_{1}.\] The second setup (1b) involves a more complicated non-linear function for \(\tau\): \[m(\mathbf{x},s)=0\text{ and }\tau(\mathbf{x},s)=g(x_{1})g(x_{2})+\beta_{s}+\delta_{s}* x_{1}\] where \(g(x)=\frac{2}{1+\exp(-12(x-1/2))}\) (Kunzel et al., 2019). In both of these, the coefficients \(\beta_{s}\) represent trial-specific main effect coefficients, and \(\delta_{s}\) represent trial-specific interaction effect coefficients (interaction between trial and the moderator \(x_{1}\)). In both setups, \(x_{1}\) is an effect moderator, and in the second setup, \(x_{2}\) is as well. 
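A compact sketch of this data-generating process is given below (illustrative, not the authors' simulation code). It implements Eq. (3) with setups 1a and 1b exactly as written above; the number of trials, sample size, error variance (read here as 0.01), and the \(\sigma_{\beta}\), \(\sigma_{\delta}\) values follow the component list in the next paragraph.

```r
# Minimal sketch (not the authors' code) of the data generation in Eq. (3) for
# setups 1a and 1b; K, n, the error variance, and the sigma values follow the
# component list given in the text.
gen_trials <- function(K = 10, n = 500, sd_beta = 1, sd_delta = 0.5,
                       setup = c("1a", "1b")) {
  setup <- match.arg(setup)
  g     <- function(x) 2 / (1 + exp(-12 * (x - 1/2)))
  beta  <- rnorm(K, 0, sd_beta)   # trial-specific main effect terms
  delta <- rnorm(K, 0, sd_delta)  # trial-specific interaction effect terms
  do.call(rbind, lapply(1:K, function(s) {
    X <- matrix(rnorm(n * 5), n, 5)   # five standard-normal covariates
    A <- rbinom(n, 1, 0.5)            # randomized treatment, pi = 0.5
    if (setup == "1a") {
      m   <- X[, 1] / 2 + rowSums(X[, 2:4]) + beta[s] + delta[s] * X[, 1]
      tau <- X[, 1] * (X[, 1] > 0) + beta[s] + delta[s] * X[, 1]
    } else {
      m   <- 0
      tau <- g(X[, 1]) * g(X[, 2]) + beta[s] + delta[s] * X[, 1]
    }
    Y <- m + (2 * A - 1) / 2 * tau + rnorm(n, 0, sqrt(0.01))
    data.frame(S = s, A = A, Y = Y, tau_true = tau,
               X1 = X[, 1], X2 = X[, 2], X3 = X[, 3], X4 = X[, 4], X5 = X[, 5])
  }))
}
sims <- gen_trials(setup = "1a")  # stacked trials, one row per individual
```

The stacked data frame `sims` plays the role of `alldat` in the earlier sketches, and the IPD meta-analysis of Section 3.2.4 could then be fit, for example, with `lme4::lmer(Y ~ X1 + X2 + X3 + X4 + A + X1:A + (1 + X1 + A + X1:A | S), data = sims)`, one way of encoding the random intercept and slopes (this formulation allows the random effects to be correlated, whereas the model above takes them to be independent).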
If the coefficients \(\beta_{s}\) and \(\delta_{s}\) differ across \(s\) (i.e., trial membership), then trial is making an impact in the moderation. From this information, the components simulated are listed as follows (a code sketch of this data generation appears at the end of this section):
1. For each simulation, the number of trials was \(K=10\).
2. Each trial had a sample size of 500 individuals.
3. Within each trial, we simulated five continuous covariates per person, \(\mathbf{X}_{i}\sim N(0,I_{5})\), with each following a standard normal distribution.
4. Each person was then assigned a treatment status, \(0\) or \(1\), according to a propensity score of \(\pi_{i}=0.5\) within each trial.
5. Each person was also assigned an error term for their outcome function, so \(\epsilon_{i}\sim N(0,0.01)\).
6. We then sampled trial-specific main effect and interaction effect terms. Each of the \(K=10\) studies was assigned a main effect term according to \(\beta_{s}\sim N(0,\sigma_{\beta}^{2})\) and an interaction effect term according to \(\delta_{s}\sim N(0,\sigma_{\delta}^{2})\). The values of the standard deviations were: \((\sigma_{\beta},\sigma_{\delta})\in\{(0.5,0),(1,0),(1,0.5),(1,1),(3,1)\}\). These standard deviation pairs are defined as (Low-Low), (Med-Low), (Med-Med), (Med-High), (High-High), respectively.
7. From this information, \(m\), \(\tau\), and \(Y\) were calculated under either of the two setups described above (1a and 1b).
From the above setups, there were five standard deviation pairs for the trial effects, and two functional forms for \(m\) and \(\tau\), therefore yielding 10 total scenarios. We then included one other scenario (2) to see how the methods would perform when the functional form of the CATE itself differed across trials - a particularly challenging situation for pooling. For this scenario, we used the same form for \(Y_{i}\) as in Equation (3), and now we set \(m\) and \(\tau\) to be such that \(m\) is linear and \(\tau\) depends on study: \[m(\mathbf{x},s)=x_{1}/2+\sum_{j=2}^{4}x_{j}\] and \[\tau(\mathbf{x},s)=I(s\in\{1,2,3,4\})*g(x_{1})g(x_{2})+I(s\in\{5,6,7,8\})*x_{1}*I(x_{1}>0)+I(s\in\{9,10\})*0\] where \(g(x)\) is as previously defined. For each simulation setup, we generated 1,000 simulated datasets. Necessary packages included causalToolbox for the S-learner and X-learner (Kunzel et al., 2019), grf for the causal forest (Athey et al., 2019), rpart for the ensemble tree (Therneau et al., 2015), ranger for the ensemble forest (Wright and Ziegler, 2015), glmnet for the ensemble lasso (Friedman et al., 2017), and lme4 for the mixed effects meta-analysis (Bates, 2010). Some functions were based on those in the ifedtree package (Tan et al., 2021) but were adapted to the setting in which data could be shared across trials. For each method and each iteration, performance of the different approaches was assessed based on the mean squared error (MSE) between the true individual CATEs and the estimated individual CATEs, and these MSEs were ultimately averaged across the 1,000 repetitions.
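As a concrete illustration, the following is a minimal sketch (in Python rather than the R used for the actual study) of one simulated dataset under setup 1a and the Med-Med standard deviation pair; variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2024)
K, n, p = 10, 500, 5                         # trials, individuals per trial, covariates
sigma_beta, sigma_delta = 1.0, 0.5           # the Med-Med pair

beta = rng.normal(0.0, sigma_beta, size=K)    # trial-specific main effects
delta = rng.normal(0.0, sigma_delta, size=K)  # trial-specific interaction effects

datasets = []
for s in range(K):
    X = rng.normal(size=(n, p))                   # five standard-normal covariates
    a = rng.binomial(1, 0.5, size=n)              # randomized treatment, pi = 0.5
    eps = rng.normal(0.0, np.sqrt(0.01), size=n)  # outcome error, variance 0.01
    # Setup 1a: linear m and piecewise linear tau, both with trial effects.
    m = X[:, 0] / 2 + X[:, 1:4].sum(axis=1) + beta[s] + delta[s] * X[:, 0]
    tau = X[:, 0] * (X[:, 0] > 0) + beta[s] + delta[s] * X[:, 0]
    y = m + (2 * a - 1) / 2 * tau + eps           # observed outcome, Equation (3)
    datasets.append({"study": s, "X": X, "a": a, "y": y, "true_tau": tau})
```

The stored `true_tau` values are what each method's individual CATE estimates are compared against when computing the MSE.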
## 5 Simulation Results

The following tables and figures display the performance results across 1,000 iterations of each parameter combination/scenario. Table A2 in the Appendix shows the average and standard deviation of the MSEs across all iterations of a given scenario and approach. Figure 1 shows these average MSEs for every approach for each scenario, broken down by the standard deviations of the trial main and interaction effects. In the piecewise linear and non-linear CATE scenarios, as the trial coefficients (both main and interaction effects) increase in variability, the MSE increases, meaning the methods estimate individual CATEs more poorly. This is consistent with the idea that when trial membership is involved in the treatment effect function, the CATEs vary across trials and therefore are harder to estimate when data is pooled across studies. Notably, this increase in MSE happens much more quickly for the complete pooling approaches. In the piecewise linear scenario (1a), the most consistently effective approaches in terms of MSE are when the causal forest is used as the single-study method and when the aggregation approach is either pooling with trial indicator, ensemble forest, or ensemble lasso. The X-learner also performs relatively well in terms of MSE. Meta-analysis performs well, which is expected because the model was set up to mostly match the true functional form of the CATE in this scenario. For the non-linear scenario (1b), the ensemble lasso and meta-analysis perform notably worse, which makes sense due to the complexity of the functional form of the CATE, as it includes the product of two expit functions, and the lasso and meta-analysis assume a parametric linear relationship between covariates and outcome. The ensemble forest and pooling with trial indicator again estimate the CATEs well, with all single-study methods performing similarly. While the S-learner was not very effective with the piecewise linear CATE (1a), it was more effective with the non-linear CATE (1b).

Figure 1: Average MSE for each parameter combination across all approaches.* *The y-axis was cut off for ease of visualization; note that for the High-High SD combination, some methods were therefore missing from the graph because the MSEs were very high (complete pooling and meta-analysis). The SD of study main and study interaction coefficient pairs are as follows: Low-Low: 0.5, 0; Med-Low: 1, 0; Med-Med: 1, 0.5; Med-High: 1, 1; High-High: 3, 1. Facet labels refer to the simulation scenarios: 1a, 1b, and 2.

Figure 1 also displays the results for the variable CATE scenario (2). Here, the causal forest is clearly performing the best of the three single-study methods, while the S-learner is not performing as well. The most effective aggregation methods are again pooling with trial indicator and ensemble forests, along with ensemble trees. The least successful aggregation approach is the ensemble lasso, followed by meta-analysis and complete pooling. Figure 2 displays the average MSE across all data generation setups (i.e., piecewise linear, non-linear, and variable CATE) and all iterations. Consistent with our scenario-specific findings, the complete pooling approaches are ineffective at estimating the individual CATEs compared to the pooling with study indicator, ensemble tree, ensemble forest, ensemble lasso, and meta-analysis. Within those more effective aggregation approaches, the three single-study options perform somewhat similarly but with the S-learner doing the worst in almost all cases. The overall best approaches, in terms of average MSE, are the causal forest with pooling with trial indicator, the causal forest with an ensemble forest, and the X-learner with an ensemble forest. To more formally examine these results, we regressed the average MSE across iterations on the methods and parameter combinations, just within the piecewise linear and non-linear CATE scenarios and excluding meta-analysis.
Specifically, the regression is such that: \[MSE=\beta_{0}+\beta_{1}*singlestudy+\beta_{2}*aggregation+\beta_{3}*singlestudy*aggregation+\beta_{4}*main_{sd}+\beta_{5}*interaction_{sd}+\beta_{6}*scenario+\epsilon.\] From this regression, there were no significant differences in performance across single-study methods, but all aggregation methods performed significantly better than complete pooling. The ensemble forest had the best average MSE for the S-learner and X-learner, and pooling with trial indicator had the best average MSE for the causal forest. Finally, we also performed 500 more iterations of each scenario and parameter combination with the same methods previously described, but with honest causal forests instead of traditional "adaptive" causal forests. The resulting average MSEs are presented in the Appendix; we found similar results to the original 1,000 repetitions with adaptive causal forests, but the honest causal forests had slightly higher MSEs on average, indicating worse estimation accuracy than the adaptive causal forests. The honest causal forests had higher average MSE compared to the X-learner with each of the aggregation methods except for pooling with trial indicator (Figure A6).

Figure 2: Average MSE across all scenarios and iterations.

## 6 Application to Real Dataset

After the simulations demonstrated differences across methods in several data generation setups, we applied the various methods to an existing dataset containing multiple randomized controlled trials that compared the same two medications.

### Treatments for Major Depressive Disorder

The applied dataset used in the current paper consists of four randomized controlled trials [Mahabelshwarkar et al., 2013, Boulenger et al., 2014, Baldwin et al., 2012], each of which included three treatments: duloxetine, vortioxetine, and placebo, where duloxetine and vortioxetine are both treatments for major depressive disorder (MDD). At the time of the trials, duloxetine had been more commonly used to treat MDD and so was primarily included in the trials as an active reference, while vortioxetine was a newer treatment not yet marketed Schatzberg et al. [2014]. Each of the four trials compared at least two different dosages of vortioxetine and therefore had more participants taking vortioxetine as opposed to duloxetine or placebo. For the purposes of the current application, we removed placebo participants and lumped all dosages of vortioxetine together to investigate the potential differences between the efficacy of the active medications (duloxetine and vortioxetine), as well as identify features that might be moderating this difference. Participants in each of the four trials shared similar eligibility criteria. All four trials required patients to be between the ages of 18 and 75, to have a Major Depressive Episode (MDE) as a primary diagnosis according to the DSM-IV-TR criteria over at least three months, and to have a Montgomery-Asberg Depression Rating Scale (MADRS) Montgomery and Asberg [1979] score of at least 22 (one trial) or 26 (three trials) at both screening and baseline [Mahabelshwarkar et al., 2013, 2015, Boulenger et al., 2014, Baldwin et al., 2012]. A primary outcome in the trials is the change in MADRS score from baseline to the last observed follow-up in the study. Participants were meant to stay in the study for 8 weeks, at which point their final MADRS score was collected.
For those who did not remain in the trial for 8 weeks, a last observation carried forward imputation approach was used. This imputation approach is not the best way to account for missing data and many other options exist [Little et al., 2012], but it is used here for simplicity because this example is primarily illustrative. Predictors/effect modifiers used in the models were age, sex (female or male), smoking status (ever smoked or never smoked), weight, baseline MADRS score, baseline HAM-A (Hamilton Anxiety Rating) score Hamilton [1959], comorbidity indicators (if ever had diabetes mellitus, hypothyroidism, anxiety), and medication indicators (if they are concomitantly taking an antidepressant, antipsychotic, or thyroid medication). Since the outcome is the difference in MADRS score (MADRS at follow-up minus MADRS at baseline), a more negative outcome indicates a better result. After removing any individuals with missing treatment assignment or missing outcomes and removing individuals who were assigned to the placebo group, sample sizes were 575, 436, 418, and 418 for each of the trials. Further descriptive information about the samples in the four RCTs is reported in the Appendix (Table A3). Little missing covariate data was present in the sample; however, conditional mean imputation was performed for missing values of weight (n=1) and baseline HAM-A score (n=2). Following data preparation, we used each of the aforementioned method combinations (i.e., causal forest, S-learner, and X-learner as single-study methods paired with complete pooling, pooling with trial indicator, ensemble tree, ensemble forest, and ensemble lasso) to estimate the CATEs for every individual across the four trials. We then compared the CATE estimates across methods to see their concordance levels. Notably, it is not possible to compare the method performances with the truth, as the true CATEs are unknown in this real dataset.

### Results

All methods broadly led to the conclusion of a positive average CATE. This indicates that vortioxetine is estimated to have less of a beneficial effect on the MADRS score on average. In each of the four RCTs, both treatments were associated with a reduction in depressive symptom severity over time (shown through a reduction in MADRS score), but this reduction was smaller for the vortioxetine group than the duloxetine group. Table 1 contains the mean and standard deviation of the CATEs according to each method. Broadly, the S-learner approaches estimated lower CATEs on average than the other approaches, and there is some consistency between the aggregation approaches within each single-study method (S-learner, X-learner, and causal forest). There were especially high levels of similarity in the average CATE estimates across the causal forest methods, shown in the last column of Table 1. The variability of the CATE estimates differs depending on the approach as well; causal forest approaches had higher standard deviations than approaches that used the S-learner and X-learner. Complete pooling also yielded the highest standard deviations for CATE estimates out of all of the aggregation approaches. As a comparison point, we used a multiple linear regression to estimate an average treatment effect of 2.39 (SE = 0.5), which is similar to the averages of the CATEs according to the X-learner and causal forest approaches.
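The multiple linear regression used as a comparison point can be sketched as below; the exact covariate specification behind the reported 2.39 (SE = 0.5) estimate is not spelled out here, so the formula and column names are illustrative assumptions (statsmodels, pooled data from the four trials, `vortioxetine` coded 1 versus duloxetine coded 0).

```python
import statsmodels.formula.api as smf


def pooled_ate(pooled_df):
    """Pooled OLS of the MADRS change score on treatment and covariates; the
    coefficient on `vortioxetine` is the average treatment effect comparison."""
    fit = smf.ols(
        "madrs_change ~ vortioxetine + age + sex + smoker + weight"
        " + madrs_baseline + hama_baseline + C(study)",
        data=pooled_df,
    ).fit()
    return fit.params["vortioxetine"], fit.bse["vortioxetine"]
```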
We then focused in on the resulting model from the causal forest with pooling with trial indicator, since that approach performed the best on average in the simulations. The CATE estimates according to this approach with their associated 95% confidence intervals are displayed in Figure 3. These estimates support that the majority of individuals have a positive CATE estimate, but they also display very high levels of uncertainty, with all confidence intervals including zero. To learn more about the moderation within the CATE model, we can explore variable importance measures. In the causal forest as applied through the grf package Athey et al. (2019), variable importance is calculated as a weighted sum of the number of times the variable was used in a split at each level of the forest. Figure 4 displays the variable importance measures according to the grf package (Athey et al., 2019) for all covariates, first in separate causal forests for each study (4a), and second according to the causal forest with pooling with trial indicator (4b). From Figure 4a, there are a few variables that are consistently identified as effect moderators across studies (age, weight, baseline MADRS score, and baseline HAM-A score), and there are several that are not found to be major moderators (the comorbidity and medication indicators). However, notably there are some differences according to the separate models, indicating that the treatment effect functions are slightly different within each study. Figure 4b then displays the resulting importance measures from one aggregation model fit to all studies. Here, we can see that the same four variables (age, weight, baseline MADRS, and baseline HAM-A) are involved in a high proportion of the splits in the causal forest, and study membership is involved in some splits as well. The fact that these study indicators are not more highly involved in the partitioning of the treatment effect is a good sign, though, that there is not a very high level of heterogeneity in CATE estimates across studies. The variable importance plots do not demonstrate the direction of the moderating effect, however. We briefly investigate these directional effects through an interpretation tree (Figure 5) and through exploratory plots such as Figure A7. This interpretation tree was formed by fitting a regression tree, where the CATE estimates according to the causal forest with pooling with trial indicator were the outcomes, and the features (predictors) were every covariate in the original CATE model. The tree confirms what was shown in Figure 4 - that age, weight, baseline MADRS, and baseline HAM-A score are the strongest effect moderators. Study membership does not show up in this interpretation tree, supporting that there is low heterogeneity across trials. This is a helpful visual to see the direction of the relationships aggregated across trials, but it is exploratory and should not be interpreted in great detail. 
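The interpretation tree and variable importance summaries above come from rpart and grf in R; a rough Python equivalent of the same post-hoc idea, with scikit-learn standing in and `cate_hat` assumed to hold the CATE estimates from the causal forest with pooling with trial indicator, is sketched below. Note that scikit-learn's impurity-based importances are only analogous in spirit to grf's weighted split counts.

```python
from sklearn.tree import DecisionTreeRegressor, export_text


def interpretation_tree(X, cate_hat, feature_names, max_depth=3):
    """Regression tree with the CATE estimates as the outcome and the original
    covariates (plus study indicators, if desired) as predictors; leaf means
    summarize subgroups with different average CATEs."""
    tree = DecisionTreeRegressor(max_depth=max_depth, min_samples_leaf=50)
    tree.fit(X, cate_hat)
    print(export_text(tree, feature_names=list(feature_names)))
    return dict(zip(feature_names, tree.feature_importances_))
```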
Another similar approach for investigating the CATE function in terms of individual moderators is to fit the best linear projection of the CATE estimates using a function in the grf package (Athey et al., 2019); the resulting coefficients from this regression using doubly-robust estimates of the CATE are reported in Table A4. Broadly, these interpretations of the CATE function derived by the causal forest with pooling with trial indicator do not display high levels of heterogeneity or highlight clear groups of individuals who might differ in their treatment effect. Although the interpretation tree shows a partition of the covariate space into groups that have different average CATEs, the other plots and best linear projection do not show high levels of heterogeneity, except for slightly higher average CATE estimates for older individuals. This data application shows how to effectively apply the methods compared in simulations to a real dataset and assess potential moderation. The methods all agree broadly on the direction of the average treatment effect but have some differences in the individual CATE estimates and which variables generate effect heterogeneity.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline & S-Learner & X-Learner & **Causal Forest** \\ \hline **Complete Pooling** & 1.38 (1.6) & 2.57 (1.4) & 2.37 (2.8) \\ **Pooling with Trial Indicator** & 0.91 (1.3) & 2.52 (1.3) & 2.37 (2.7) \\ **Ensemble Tree** & 0.89 (1.3) & 2.35 (1.5) & 2.23 (2.5) \\ **Ensemble Forest** & 0.89 (1.1) & 2.36 (1.4) & 2.30 (2.2) \\ **Ensemble Lasso** & 0.89 (1.2) & 2.32 (1.4) & 2.23 (2.1) \\ \hline \end{tabular} \end{table} Table 1: Mean (SD) of CATEs from all individuals in sample according to different single-study and aggregation method combinations.* *The CATEs are individual-level estimates that indicate the difference in the estimated effect of vortioxetine versus duloxetine on the difference in MADRS score for a given patient. A positive CATE indicates that vortioxetine is estimated to have a smaller reduction of the MADRS score.

Figure 3: Point estimates and 95% confidence intervals for CATEs according to causal forest with pooling with trial indicator.

Figure 4: Variable importance measures (a) within studies, and (b) according to the causal forest with pooling with trial indicator.

Figure 5: Interpretation tree for causal forest with pooling with trial indicator.* *Circled numbers represent the average CATE estimate for individuals in that leaf.

## 7 Discussion

In this paper, we compared methods to estimate the conditional average treatment effect in a single trial and methods to extend the single-trial approaches to multiple trials. In the absence of notable cross-trial heterogeneity of treatment effects, the methods examined all performed well, but when trial membership was involved in the treatment effect function, some methods performed worse than others. Specifically, and not surprisingly, methods that ignore trial membership (complete pooling) do not effectively estimate the CATE when there is cross-trial heterogeneity. On the other hand, some methods performed well no matter the level of heterogeneity: pooling with trial indicator and ensemble forests had consistently low mean squared error despite increasing the variability of the trial membership coefficients in the treatment effect. This was especially true when the single-study method used was the causal forest (Figure 1).
When considering the three single-study approaches, the most consistently favorable method in the simulations was the causal forest, followed by the X-learner. The S-learner performed well in certain scenarios, such as scenario 1b, where the treatment effect function involved a bounded, non-linear expit function. The performances of the S-learner and X-learner in our simulations and applied example were consistent with results found previously (Kunzel et al., 2019), in that the S-learner seemed to be somewhat biased towards 0 in the applied example (Table 1) and performed worse in the simulations when the treatment effect function was complicated (variable CATE scenario and the piecewise linear and non-linear CATE with high variability). The X-learner performed well in the simulations with complex CATEs and with structural forms of the CATE, again consistent with previous work (Kunzel et al., 2019). The causal forest performed well across all scenarios. These simulation results and the results from the applied data example of MDD medications demonstrate that it is important to carefully select the single-study method for a given question, as each of the three options can provide different estimates. A good starting point would be to consider expert knowledge of how heterogeneous across studies and complicated the outcomes or treatment effect might be. These results also indicate the need for more diagnostics in future work to help researchers determine which approach to choose. The simulations also incorporated some comparisons between the non-parametric and parametric approaches. Specifically, the usage of a lasso regression as an ensemble showed how a parametric ensemble could perform compared to the ensemble tree and forest. The lasso performed very well when the treatment effect function was piecewise linear (scenario 1a) but quickly suffered in performance when the function was more non-linear (scenarios 1b and 2). Furthermore, the inclusion of a mixed effects meta-analysis demonstrated a common parametric technique used in the multiple-study setting. This model was set up to perform well when the CATE function was piecewise linear (scenario 1a), but it yielded high MSE in the non-linear and complex scenarios that it was not correctly parametrized for (scenario 1b and 2). These comparisons demonstrate that non-parametric machine learning approaches are very beneficial when the treatment effect function is complicated and non-linear, as the non-parametric methods do not require correct specification of any parameters. Although interpretability becomes more of a challenge, the non-parametric methods allow for flexible relationships and hopefully high levels of accuracy in CATE estimation. In this work, we did not explore an exhaustive list of potential single-study and aggregation methods. We attempted to select single-study methods that were common, user-friendly, and shown to be effective or potentially effective in previous literature. However, as this is an ever-growing field, future work could include other single-study methods (Wendling et al., 2018; Powers et al., 2018) to see how they compare to the ones used in this study. For example, it would be interesting to investigate the performance of the X-learner with a different base learner, such as Bayesian additive regression trees (BART) Chipman et al. (2010); Kunzel et al. (2019). 
The simulations included three scenarios and some sub-scenarios that should be possible in real data, but there of course are many other possible data generation setups that could be implemented and could show slightly different results. It is important to note that with the exception of the complete pooling and meta-analysis approaches, the resulting CATE estimates are trial-specific. Unless trial was not picked up in the aggregation methods, the majority of the methods discussed will produce trial-specific estimates of the CATE. This allows for improved accuracy of estimates but might be less helpful in real world applications. We are interested in continuing to identify ways in which researchers could aggregate across trials to develop estimates that are accurate but not trial-specific - this could be crucial for use of the resulting methods and models in practice, on data not coming from the specific trials used in the model formulation. However, the trial-specific estimates can still be useful; for example, if trials were done in separate hospitals, CATEs of future patients could be predicted using the hospital that they are being treated in, and the model that estimates their treatment effect should be more accurate after taking into consideration the data from the other hospitals.
2305.12371
Machine Translation by Projecting Text into the Same Phonetic-Orthographic Space Using a Common Encoding
The use of subword embedding has proved to be a major innovation in Neural Machine Translation (NMT). It helps NMT to learn better context vectors for Low Resource Languages (LRLs) so as to predict the target words by better modelling the morphologies of the two languages and also the morphosyntax transfer. Even so, their performance for translation in Indian language to Indian language scenario is still not as good as for resource-rich languages. One reason for this is the relative morphological richness of Indian languages, while another is that most of them fall into the extremely low resource or zero-shot categories. Since most major Indian languages use Indic or Brahmi origin scripts, the text written in them is highly phonetic in nature and phonetically similar in terms of abstract letters and their arrangements. We use these characteristics of Indian languages and their scripts to propose an approach based on common multilingual Latin-based encodings (WX notation) that take advantage of language similarity while addressing the morphological complexity issue in NMT. These multilingual Latin-based encodings in NMT, together with Byte Pair Embedding (BPE) allow us to better exploit their phonetic and orthographic as well as lexical similarities to improve the translation quality by projecting different but similar languages on the same orthographic-phonetic character space. We verify the proposed approach by demonstrating experiments on similar language pairs (Gujarati-Hindi, Marathi-Hindi, Nepali-Hindi, Maithili-Hindi, Punjabi-Hindi, and Urdu-Hindi) under low resource conditions. The proposed approach shows an improvement in a majority of cases, in one case as much as ~10 BLEU points compared to baseline techniques for similar language pairs. We also get up to ~1 BLEU points improvement on distant and zero-shot language pairs.
Amit Kumar, Shantipriya Parida, Ajay Pratap, Anil Kumar Singh
2023-05-21T06:46:33Z
http://arxiv.org/abs/2305.12371v1
Machine Translation by Projecting Text into the Same Phonetic-Orthographic Space Using a Common Encoding ###### Abstract The use of subword embedding has proved to be a major innovation in Neural Machine Translation (NMT). It helps NMT to learn better context vectors for Low Resource Languages (LRLs) so as to predict the target words by better modelling the morphologies of the two languages and also the morphosyntax transfer. Even so, their performance for translation in Indian language to Indian language scenario is still not as good as for resource-rich languages. One reason for this is the relative morphological richness of Indian languages, while another is that most of them fall into the extremely low resource or zero-shot categories. Since most major Indian languages use Indic or Brahmi origin scripts, the text written in them is highly phonetic in nature and phonetically similar in terms of abstract letters and their arrangements. We use these characteristics of Indian languages and their scripts to propose an approach based on common multilingual Latin-based encodings (WX notation) that take advantage of language similarity while addressing the morphological complexity issue in NMT. These multilingual Latin-based encodings in NMT, together with Byte Pair Embedding (BPE) allow us to better exploit their phonetic and orthographic as well as lexical similarities to improve the translation quality by projecting different but similar languages on the same orthographic-phonetic character space. We verify the proposed approach by demonstrating experiments on similar language pairs (Gujarati\(\leftrightarrow\)Hindi, Marathi\(\leftrightarrow\)Hindi, Nepali\(\leftrightarrow\)Hindi, Maithili\(\leftrightarrow\)Hindi, Punjab\(\leftrightarrow\)Hindi, and Urdu\(\leftrightarrow\)Hindi) under low resource conditions. The proposed approach shows an improvement in a majority of cases, in one case as much as \(\sim\)_10_ BLEU points compared to baseline techniques for similar language pairs. We also get up to \(\sim\)_1_ BLEU points improvement on distant and zero-shot language pairs. Saidhana Vol., No., pp.- DOI 1 (c) Indian Academy of Sciences Neural Machine Translation, Common Phonetic-Orthographic Space, Similar Languages, Byte Pair Encoding, Transformer Model ## 1 Introduction Machine Translation (MT) has an interesting history in computation and research [20] with new paradigms being introduced over decades. MT achieved a watershed moment with the introduction of numerous algorithmic, architectural and training enhancements, such as Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) [64]. SMT is a statistical-based MT paradigm, operating at the granularity of words and phrases, consisting of a translation model, a language model, and a decoder [17, 62, 66]. Further, the relatively recent success of deep neural networks has given us end-to-end variations of translation models such as recurrent NMT [21, 63], attention-based NMT, and self-attention-based Transformer [6]. There have been parallel and related developments in language models, such as Bidirectional Encoder Representations from Transformers (BERT) [22] and ALBERT [23]. Another variant of this, mBART, has provided benchmark solutions in NMT as well [4]. However, training an effective and accurate MT system still requires a large amount of parallel corpus consisting of source and target language pairs. 
When we talk about low-resource languages, the first problem is to find a fair amount of parallel corpus, sometimes even monolingual corpus, which makes it challenging to create tools and applications for extremely poor resource languages. Creating a large parallel corpus for MT for each language pair that falls into the low resource category is an expensive, time-consuming, and labor-intensive task. So, the solution to improve NMT in a low-resource context is to bootstrap the process by leveraging the morphological, structural, functional, and perhaps deep semantic features of such languages. Fortunately, for similar languages, it also is possible to exploit the similarities for better modeling of closely related languages. We need to focus on features that help the MT system better learn the close relationships between such languages. Conference on Machine Translation (WMT) has also conducted shared tasks for similar language translations from 2019 [24]. When we talk about Indian languages, most languages except Hindi come under extremely low resource categories. Even Hindi is, from some points of view either a low or medium resource language [72, 73]. India being a country with rich linguistic diversity, there is a need for MT systems across the Indian (or South Asian) languages. India is also inhabited by a vast population who speak languages belonging to three prominent families, Indo-Aryan (a subfamily of Indo-European), Dravidian, and Tibeto-Burman, but due to very long contact and interactions, they have gone through a process of 'convergence', forming India as a linguistic area [25]. Due to this long term contact, there are more similarities among these languages than we would otherwise expect. In addition, significant fractions of their vocabularies, to varying degrees, have words originating in or borrowed from Sanskrit, Persian, Arabic, Turkish and English, among other languages. For some of the major languages, and even for some of the'regional' or'minority languages' (since they were widely used for a long duration in the past for literary purposes), there are records available and there is a varying degree of well-developed tradition of at least (spoken) literary usage. However, only some languages, most of which are officially recognized, have some written tradition, particularly for non-literary prose. The rest have very little written data, or even if it is there, it is usually not in a machine-readable format. Therefore, they can be treated as extremely low or zero-resource languages. There is a need for development of MT systems for such languages, and the similarity between these languages helps in developing such MT systems. In this article, we propose an approach based on leveraging the features of similar languages by simply, programmatically1, converting them into an intermediate Latin-based multilingual notation. The notation that we use here is the commonly used WX-notation [26], which is often used in NLP tools and systems for Indian languages developed in India. This notation (like many other similar notations) can project all the Indic or Brahmi origin scripts [40], which have -- in many cases -- different Unicode blocks, into a common character space. Our intuition, is that this should help in capturing phonological, orthographic, and, to some extent, morphosyntactic similarities that will help a neural network-based model in better multilingual learning and translation across this languages [38, 39, 67]. 
We do this by using this WX-converted text to learn byte pair encoding-based embeddings. The effect of this is that the similar but different languages are projected onto the same orthographic-phonetic space [41], and hence also into the same common morphological and lexical space, allowing better modeling of multilingual relationships in the context of India as a linguistic area. Footnote 1: Using encoding converters, such as [https://pypi.org/project/wxconv/](https://pypi.org/project/wxconv/) In addition, using WX has another benefit, even for a single script such as Devanagari. Brahmi-derived scripts have different symbols for dependent vowels (called _mataas_) which modify a consonant and independent vowels (written as _aksharas_) which are pronounced as syllables. WX uses the same symbols for these two variants of the same vowel, while Unicode uses different codes and the scripts themselves use different graphical symbols. After conversion to WX, we apply some of the state-of-the-art NMT techniques to build our MT systems. These NMT systems, such as the Transformer, should better learn the relationships between languages. We select six pairs of similar languages: Gujarati (GU)\(\leftrightarrow\)Hindi (HI), Marathi (MR)\(\leftrightarrow\)Hindi (HI), Nepali (NE)\(\leftrightarrow\)Hindi (HI), Maithili (MAI)\(\leftrightarrow\)Hindi (HI), Punjabi (PA)\(\leftrightarrow\)Hindi (HI), and Urdu (UR)\(\leftrightarrow\)Hindi (HI). Table 1 contains some of the language features that help in figuring out how the selected languages are similar to Hindi. For example, Hindi, Gujarati, Marathi, Nepali, Maithili, Punjabi, and Urdu belong to the Indo-Aryan language family, and all the selected languages except Punjabi and Urdu share a common Devanagari script. The word order of all the selected languages is mostly _Subject + Object + Verb_. Apart from this, all these languages share lexical similarities with Hindi in terms of common words derived from Sanskrit and other languages as mentioned earlier. Also, these languages have phonological similarities with Hindi. We also note that though Urdu and Hindi are linguistically almost the same language, yet due to the great divergence in their vocabularies in their written form, they have only a relatively small overlap in their corpus-based vocabularies, albeit this overlap consists mainly of core words which form a major component of the linguistic identity of a language.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline **Languages** & **Family** & **Script** & **Word Order** & **Ergative** & **Place** \\ \hline Hindi & Indo-Aryan & Devanagari & SOV & Yes & Mainly North India \\ Gujarati & Indo-Aryan & Gujarati & SOV & No & Mainly Gujarat \\ Marathi & Indo-Aryan & Balbodh version of Devanagari & SOV & No & Mainly Maharashtra \\ Nepali & Indo-Aryan & Devanagari & SOV & Yes & Mainly Nepal \\ Maithili & Indo-Aryan & Devanagari & SOV & No & Mainly Bihar and parts of Nepal \\ Punjabi & Indo-Aryan & Gurumukhi & SOV & No & Mainly Punjab \\ Urdu & Indo-Aryan & Variant of Perso-Arabic & SOV & No & Mainly North India \\ \hline \end{tabular} \end{table} Table 1: Some details about the languages used in our experiments

This paper is the first part of a series of three papers exploring and then extending the idea of using a common phonetic-orthographic space for better NMT in the Indian context [68, 69]. The contributions of this paper are summarized as follows: 1. Propose a WX-based machine translation approach that leverages orthographic and phonological similarities between pairs of Indian languages. 2.
Proposed approach achieves an improvement of _+0.01_ to _+10_ BLEU points compared to baseline state-of-the-art techniques for similar language pairs in most cases. We also get _+1_ BLEU points improvement on distant and zero-shot language pairs. The rest of the paper is organized as follows. Section 2 discusses closely related works. Section 3 describes some background and the NMT models that we extend or compare with. Section 4 describes the proposed approach in more detail. Section 5 discusses corpus statistics and experimental settings used to conduct the experiments. Results and ablation studies are reported in Sections 6 and 7, respectively. Finally, the paper is summarized in Section 8 and includes some directions for future work. ## 2 Related Works This section briefly describes some of the related work (Table 2) on language similarity, morphological richness, statistical and neural models, and language pairs used as discussed below. Although there had been work in the past, the recent sharper focus on machine translation for similar languages is also due to the shared tasks on this topic organized as part of the WMT conferences from 2019 to 2021. In [46], authors demonstrated that pre-training could help even when the language used for fine-tuning is absent during pre-training. In [47], authors experimented with attention-based recurrent neural network architecture (seq2seq) on HI\(\leftrightarrow\)MR and explored the use of different linguistic features like part-of-speech and morphological features, along with back translation for HI\(\rightarrow\)MR and MR\(\rightarrow\)HI machine translation. In [48], authors ensembled two Transformer models to try to allow the NMT system to learn the nuances of translation for low-resource language pairs by taking advantage of the fact that the source and target languages are written using the same script. In [49], authors' work relied on NMT with attention mechanism for the similar language translation in the WMT19 shared task in the context of NE\(\leftrightarrow\)HI language pair. In [50], the authors conducted a series of experiments to address the challenges of translation between similar languages. Out of which, the authors developed one phrase-based SMT system and one NMT system using byte-pair embedding for the HI\(\leftrightarrow\)MR pair. In [51], authors used a Transformer-based NMT with _sentencepiece_ for subword embedding on HI\(\leftrightarrow\)MR language pair [61]. In [52], authors used the Transformer-NMT for multilingual model training and evaluated the result on the HI\(\leftrightarrow\)MR pair. In [53], authors focused on incorporating monolingual data into NMT models with a back-translation approach. In [70], authors introduced NLP resources for 11 major Indian languages from two major language families. These resources include: large-scale sentence-level monolingual corpora, pre-trained word embeddings, pre-trained language models, and multiple NLU evaluation datasets. In [71], authors presented IndicBART, a multilingual, sequence-to-sequence pre-trained model focusing on 11 Indic languages and English. IndicBART utilized the orthographic similarity between Indic scripts to improve transfer learning between similar Indic languages. ### Shortcomings of existing works In most of the existing work on MT for related languages (e.g., [51], [52], [53]), authors have discussed improving the NMT models using extra monolingual corpora in addition to bi-lingual data. 
However, the proposed approach improves translation quality using only bilingual corpora with the help of WX-transliteration. The proposed approach reduces language complexity by transliterating the text to roman script and helps the NMT models to better learn the context information by exploiting language similarities. In this way, where applicable, it can complement the approaches which use extra monolingual data. ## 3 Background This section provides some background on the recent most successful machine translation techniques. From vanilla NMT to more robust and advanced BART, a denoising autoencoder for pre-training sequence-to-sequence models, remarkable advances in NMT techniques have been made in a relatively short time. ### Nmt Many of the NMT techniques use an encoder-decoder architecture based on neural networks that performs translation between language pairs. Numerous enhancements, toolkits, and open frameworks are available to train NMT models, such as OpenNMT. OpenNMT is one of the open-source NMT frameworks [2], used to model natural language tasks such as text summarization, tagging, and text generation. This toolkit is used for model architectures, feature representations, and source modalities in NMT research. Multilingual and zero-shot NMT have also been applied for NMT to achieve state-of-the-art results on different language pairs by using a single standard NMT model for multiple languages [5]. Furthermore, the introduction of 'attention' in NMT has drastically improved the results significantly [7], as for many other problems. As shown in Figure 1, NMT is an encoder-decoder sequence-based model consisting of recurrent neural network (RNN) units. The encoder consists of RNN units (\(E_{0}\), \(E_{1}\), \(E_{2}\)) and takes as input the embedding of words from sentences and produces the context vector (**C**) as follows: \[\textbf{C}=Encoder(\textbf{X_{1}},\textbf{X_{2}},\textbf{X_{3}},...,\textbf{X_{ a}}) \tag{1}\] where, {**X1**, **X2**, **X3**,..., **Xa**} is the input source sequence. The decoder consists of RNN units (\(D_{0}\), \(D_{1}\), \(D_{2}\), \(D_{3}\)) and it decodes these context vectors into target sentences with an \(<\)END\(>\) (end of a sentence) symbol as follows: \[Decoder(\textbf{C},\textbf{Y_{1}},\textbf{Y_{2}},\textbf{Y_{3}},...,\textbf{Y_ {n}})=\textbf{Y^{\prime}_{1}},\textbf{Y^{\prime}_{2}},\textbf{Y^{\prime}_{3}},...,\textbf{Y^{\prime}_{m}} \tag{2}\] where, {**Y1**, **Y2**, **Y3**,..., **Yn**} and {**Y1**, **Y2**, **Y3**,..., **Ym**} are target and predicted sequences, respectively. ### Transformer-based NMT The Transformer can be characterized by its breakthrough in combining five innovations elegantly in a single architecture. The first is the attention mechanism [6]. It maps a query and a set of key-value pairs to an output. A compatibility function of the query with the corresponding key computes the weights. The second extends the first by using multi-head self-attention. The third is the use of positional encoding in terms of relative positions, which allows it to learn temporal relationships and dependencies. The fourth is the use of masking, which has proved to be immensely effective in many other later models. The fifth is the use of residual connections. Together, the elegant combination of these innovations not only allows the model to learn much better models, but also obviates the need for recurrent units in the architecture, which in turn allows a great degree of parallelism during training the models. 
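For reference, a minimal numpy sketch of the single-head scaled dot-product attention of Equation (3) is given below; the Transformer's multi-head attention adds learned projections of Q, K, and V around this core, together with masking on the decoder side.

```python
import numpy as np


def scaled_dot_product_attention(Q, K, V):
    """Equation (3): softmax(Q K^T / sqrt(d_k)) V, with a row-wise softmax."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V


# Toy example: 4 query positions attending over 6 key/value positions, d_k = 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)   # shape (4, 8)
```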
In other words, the Transformer model not only learns much better models, but does so in much less time during the training phase. Moreover, the problem of overfitting is also much less with the Transformer-based models. There are numerous state-of-the-art results reported for machine translation systems using a Transformer. Currey and Heafield [8] incorporated syntax into the Transformer using a mixed encoder model and multi-task machine translation. Multi-head attention is one key feature of self-attention. Fixing the attention heads on the encoder side of the Transformer increases BLEU scores by up to 3 points in low-resource scenarios [9]. The most common attention functions are additive attention and dot product attention. Transformer generates the scaled dot-product attention as follows [6]: \[\textbf{attn_{i}}=softmax\left(\frac{\textbf{Q_{i}}\textbf{K_{i}}^{T}}{\sqrt{d _{k}}}\right)\textbf{V_{i}} \tag{3}\] where, **Q\({}_{i}\)**, **K\({}_{i}\)**, **V\({}_{i}\)** and \(d_{k}\) are query, key, value and the dimension of the key, respectively. ### Bart BART is a denoising autoencoder for pretraining sequence-to-sequence models [10]. It uses a standard Transformer-based NMT architecture to generalize BERT, GPT, and many other recent pre-training schemes. BART uses the standard Transformer architecture, except it modifies ReLU activation functions to GeLUs. Its mBART variation is a sequence-to-sequence denoising autoencoder pre-trained on monolingual corpora in multiple languages using the BART objective [4]. ### Back-translation Back-translation is a method to prepare synthetic parallel corpus from a monolingual corpus for NMT [11]. In low-resource settings, back-translation can be a very effective method. Iterative back-translation is a further improvement [13]. It iterates over two back-translation systems multiple times. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Paper** & **Similar Language** & \begin{tabular}{c} **Reducting Morphological** \\ **Complexity** \\ \end{tabular} & **Statistical** & **Neural** & **WX** & **Language Pair** \\ \hline [46] & ✓ & ✗ & ✓ & ✓ & ✗ & Hi+MH, ES+PT \\ \hline [47] & ✓ & ✗ & ✗ & ✓ & ✗ & Hi+MH \\ \hline [48] & ✓ & ✗ & ✗ & ✓ & ✗ & Hi+MH \\ \hline [49] & ✓ & ✗ & ✗ & ✓ & ✗ & Hi+MH \\ \hline [50] & ✓ & ✗ & ✓ & ✓ & ✗ & Hi+MH \\ \hline [51] & ✓ & ✗ & ✗ & ✓ & ✗ & Hi+MH \\ \hline [52] & ✓ & ✗ & ✗ & ✓ & ✗ & Hi+MH \\ \hline [53] & ✓ & ✗ & ✗ & ✓ & ✗ & Hi+MH \\ \hline [70] & ✓ & ✗ & ✗ & ✓ & ✗ & Hi+MH \\ \hline [71] & ✓ & ✗ & ✗ & ✓ & ✗ & 11 Hole languages and English \\ \hline Proposed approach & ✓ & ✓ & ✗ & ✓ & ✓ & [QU,Sh,NE,M,AL,PA,UR]+III \\ \hline \end{tabular} Note: Hi: Hindi, MR: Marathi, ES: Spanish, PT: Portuguese, NE: Nepali, CS: Cseh, PL-Polish, GU: Gujarati, MA: Mathili, PA: Punjbi, UR: Unta \end{table} Table 2: Comparison of some existing work. ✓ and ✗ represent presence and absence of a particular feature, respectively. ### Similar languages Similar languages refer to a group of languages that share common ancestry or extensive contact for an extended period, or both, with each other, leading them to exhibit structural and linguistic similarities even across language families. Examples of languages that share common ancestors are Indo-Aryan languages, Romance languages, and Slavic languages. Languages in contact for a long period lead to the convergence of linguistic features even if languages do not belong to common ancestors. Prolonged contact among languages could lead to the formation of linguistic areas or _sprachbunds_. 
Examples of such linguistic areas are the Indian sub-continent [25], the Balkan [42], and Standard Average European [43] linguistic areas. Similarities between languages depend on various factors. Some of the factors are lexical similarity, structural correspondence, and morphological isomorphism. Lexical similarity means that the languages share many words with similar forms (spelling/pronunciation) and meaning, e.g. Sunday is written as (ravirAra) in Hindi and (ravirAra) in Bhojpuri (both are proximate and related Indo-Aryan languages). These lexically similar words could be cognates, lateral borrowings, or loan words from other languages. Structural correspondence means, for example, that languages have the same basic word order, viz. SOV (Subject-Object-Verb) or SVO (Subject-Verb-Object). Morphological isomorphism refers to the one-to-one correspondence between inflectional affixes. While content words are borrowed or inherited across similar languages, function words are generally not lexically similar across languages. However, function words in related languages (whether suffixes or free words) tend to have a one-to-one correspondence to varying degrees and for various linguistic functions.

### Transformer-based NMT + Back-translation

Guzman et al. [3], in their work, first trained a Transformer on Nepali-English and Sinhala-English language pairs in both directions, and then they used the trained model to translate monolingual target language corpora to source languages. Finally, the source language sentence corpus was merged with the generated source language sentences and was given as input to the Transformer for training and producing the translation.

## 4 Proposed Approach

To tackle the morphological richness related problems in NMT training for Indian languages and to be able to work with very little resources, we propose a simple but effective approach for translating low-resource languages that are similar in features and behaviour. The proposed approach consists of three modules: Text Encoder, Model Trainer, and Text Decoder (Figure 2), as discussed in the following sections.

Figure 1: Vanilla NMT.

### Text Encoder

The proposed model first encodes the source and target corpora of parallel languages into an intermediate representation, the WX-notation2 [1]. The primary reason behind encoding the source and target language corpora into WX-notation is to encode different languages with the same or different scripts into a common representation by projecting them onto a common phonetic-orthographic character space so that BPE can be linguistically better informed. WX-notation is a transliteration scheme for representing Indian languages in ASCII format, and as described earlier, it has many advantages as an intermediate representation, even compared to using Devanagari or any other single Brahmi-based script. It implicitly helps the Transformer encoder to better model cognates, loan words, and morphologically similar words between the languages, as well as other kinds of similarities, for better translation. Footnote 2: [https://pypi.org/project/wxconv/](https://pypi.org/project/wxconv/), [https://github.com/irshadbhat/indic-wx-converter](https://github.com/irshadbhat/indic-wx-converter)

### Model Training

The intermediate representation of the source language text is passed to the Transformer encoder. The Transformer encoder-decoder model learns the relationship between languages. We have used the SentencePiece3 library for tokenization of the text.
SentencePiece is used as a pre-processing task for the WX-encoded source-target text in the concerned language pair. SentencePiece is a language-independent sub-word tokenizer and detokenizer designed for Neural-based text processing, including neural machine translation. It implements two subword segmentation algorithms, Byte-Pair Encoding (BPE) and unigram language model, with direct training from raw sentences [33, 34]. Therefore, it already indirectly, to some extent, provides cognates, loan words, and morphologically similar words to the Transformer, and our prior conversion to WX allows it to do so better. It may be noted that the approach is generalizable to other multilingual transliteration notations, perhaps even to IPA4\({}^{,}\)5, which is almost truly phonetic notation for written text. Footnote 3: [https://github.com/google/sentencepiece](https://github.com/google/sentencepiece) Footnote 4: [https://en.wikipedia.org/wiki/International_Phonetic](https://en.wikipedia.org/wiki/International_Phonetic)\(\backslash\)Alphabet\(\backslash\)chart Footnote 5: [https://www.internationalphoneticassociation.org/](https://www.internationalphoneticassociation.org/) ### Text Decoder After convergence of the training algorithm, the WX-encoded generated target sentences are decoded back to the plain text format to evaluate the model. ## 5 Corpus and Experimental Settings In this section, we discuss the corpus statistics and experimental settings we used for our experiment. ### Corpus description We evaluate the proposed model in an extremely low-resource scenario on the mutually similar languages which we selected for our experiments. These are Hindi (HI), Gujarati (GU), Marathi (MR), Nepali (NE), Maithili (MAI), Punjabi (PA), Urdu (UR), Bhojpuri (BHO), Maghi (MAG), Malayalam (ML), Tamil (TA) and Telugu (TE). We perform experiments on the following language pairs involving Hindi: GU\(\leftrightarrow\)HI, NE\(\leftrightarrow\)HI, MR\(\leftrightarrow\)HI, MAI\(\leftrightarrow\)HI, PA\(\leftrightarrow\)HI, and UR\(\leftrightarrow\)HI. Parallel corpora of GU\(\leftrightarrow\)HI, ML\(\leftrightarrow\)HI, TA\(\leftrightarrow\)HI, and TE\(\leftrightarrow\)HI for training, testing, and validation are downloaded from CVIT-PIB [14]. MR\(\leftrightarrow\)HI parallel corpus is collected from WMT [2020 shared tasks6]. NE\(\leftrightarrow\)HI language pair corpus is made of those collected from WMT 2019 shared tasks 7, Opus 8, and TDIL 9 repositories. We use a monolingual corpus of Gujarati, Hindi, and Marathi for similarity computation in section 5.1 from the PM India dataset described in [15]. The rest of the monolingual corpora are collected from the Opus collection for similarity computation in section 5.1 [29]. We use SentencePiece [45] to pre-process the source and target sentences. Footnote 6: [http://www.statant.org/wmt20/similar.html](http://www.statant.org/wmt20/similar.html) Footnote 8: [https://opus.nlpl.eu/](https://opus.nlpl.eu/) Footnote 9: [http://www.tdi-dc.in/index.php?lang=en](http://www.tdi-dc.in/index.php?lang=en) Figure 2: Proposed architecture. We use 5K merge operations to learn BPE with the SentencePiece model and restrict the source and target vocabularies to at most 5K tokens. There are some places where code-switching occurs in the employed dataset. The WX-transliteration tool ignores code-switched data and keeps it in the datasets as it is. 
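A minimal sketch of the pre-processing pipeline described above (Devanagari text projected to WX, then a 5K-vocabulary BPE model learned with SentencePiece) is shown below. The `WXC` converter interface is an assumption based on the wxconv package referenced in the footnotes (an additional language argument may be needed for non-Devanagari scripts), and the file paths are placeholders.

```python
import sentencepiece as spm
from wxconv import WXC  # assumed interface of the wxconv package cited above

# Step 1: project the training text into the common WX (Latin) space.
to_wx = WXC(order="utf2wx")
with open("train.hi", encoding="utf-8") as fin, open("train.hi.wx", "w", encoding="utf-8") as fout:
    for line in fin:
        fout.write(to_wx.convert(line))

# Step 2: learn a BPE model with a 5K-token vocabulary on the WX-encoded text.
spm.SentencePieceTrainer.train(
    input="train.hi.wx", model_prefix="bpe_hi_wx",
    vocab_size=5000, model_type="bpe", character_coverage=1.0)

# Step 3: segment sentences into subwords before binarizing them for fairseq.
sp = spm.SentencePieceProcessor(model_file="bpe_hi_wx.model")
pieces = sp.encode("WX-encoded sentence goes here", out_type=str)

# After decoding, generated WX output is mapped back with WXC(order="wx2utf").
```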
### Training details #### 5.2.1 Proposed approach We use the WX-notation tool 10 for transliterating the text and the fairseq 11[18] toolkit, which is a sequence modelling toolkit, to train the Transformer. We use five encoder and decoder layers. The encoder and decoder embedding dimensions are set to 512. Feed-forward encoding and decoding embedding dimensions are set to 2048. The number of an encoder and decoder attention heads is set to 2. The dropout, the attention dropout, and the ReLU dropout are set to 0.4, 0.2, and 0.2, respectively. The weight decay is set at 0.0001, and the label smoothing is set to 0.2. We use the Adam optimizer, with \(\beta_{1}\) and \(\beta_{2}\) set to 0.9 and 0.98. The learning rate schedule is inverse square root, with an initial learning rate of 1e-3 and a minimum learning rate of 1e-9. The maximum number of tokens used is set to 4000. The maximum number of epochs for training is set to 100. We use a beam size equal to 5 for generating data using the test set. Footnote 10: [https://pypi.org/project/wxconv/](https://pypi.org/project/wxconv/) Footnote 11: [https://github.com/facebookresearch/fairseq](https://github.com/facebookresearch/fairseq) #### 5.2.2 Guzman et al. [3] In Guzman et al. [3], authors have demonstrated the experiments on extremely low resource languages using Transformer. Our proposed approach is based on the Transformer described in Guzman et al. [3] with the addition of two extra modules, Text Encoder and Text Decoder. We use the Transformer model described in Guzman et al. [3] as a reasonably high baseline to compare the proposed approach without the intermediate representation of the WX-notation for Indian languages. The projection to WX could be used for any other NMT approach as well that uses a subword embedding. #### 5.2.3 Smt We use Moses 12, an open-source toolkit to train SMT [54]. For obtaining the phrase/word alignments from parallel corpora, we use _GIZA++_[55]. A 5-gram KenLM language model is used for training [56]. The parameters are tuned on the validation set using _MERT_ and tested with a test set [57]. Footnote 12: [http://www2.statmt.org/moses/](http://www2.statmt.org/moses/) ## 6 Results and Analysis We compare the proposed approach with the Moses-based SMT and the Transformer-based NMT model [3], where the latter is used as the baseline for NMT. We use six evaluation metrics, BLEU 13[12], LEBLEU [58], WupLeBleu [59], TER [31], WER, and chrF2 [30] for better comparison of the proposed approach. We see from Tables 4 and 5 that the proposed approach improves upon the baseline for most of the pairs. Footnote 13: [https://github.com/mjpost/sacrebleu](https://github.com/mjpost/sacrebleu) BLEU score, although a simple metric based on comparison of _n_-grams, is a standard metric accepted by _NLP_ researchers to obtain the accuracy of predicted translated outputs compared to the human-translated reference sentences. This is because it has been observed that the value of the BLEU score correlates well with human-judged quality of translations. 
The formula for the BLEU score is as follows [12]: \[BLEU=min\left(1,\frac{output\_length}{reference\_length}\right)\left(\prod_{i= 1}^{4}precision_{i}\right), \tag{4}\] \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Lang-Pairs** & **Train** & **Validation** & **Test** & **Domain** \\ \hline GU\(\leftrightarrow\)HI & 15784 & 1000 & 1973 & \multirow{2}{*}{WMT 2019 corpus, Agriculture, Entertainment, Bible} \\ NE\(\leftrightarrow\)HI & 136991 & 3000 & 3000 & & \\ MR\(\leftrightarrow\)HI & 43274 & 1000 & 1411 & News, PM India, Indic WordNet \\ PA\(\leftrightarrow\)HI & 225576 & 7199 & 7200 & GNOME, KDE4, Ubuntu, wikimedia, TED2020 \\ MAI\(\leftrightarrow\)HI & 93136 & 2972 & 2973 & GNOME, KDE4, wikimedia, Ubuntu \\ UR\(\leftrightarrow\)HI & 108176 & 3452 & 3453 & Tanzil, GNOME, KDE4, wikimedia, Ubuntu \\ ML\(\leftrightarrow\)HI & 17333 & 500 & 500 & PM India \\ TA\(\leftrightarrow\)HI & 43538 & 500 & 500 & PM India \\ TE\(\leftrightarrow\)HI & 2584 & 500 & 500 & PM India \\ BHO\(\leftrightarrow\)HI & 0 & 500 & 500 & Movie subtitles, Literature, News \\ MAG\(\leftrightarrow\)HI & 0 & 500 & 500 & Movie subtitles, Literature, News \\ \hline \end{tabular} Note: HI: Hindi, MR: Marathi, NE: Nepali, GU: Gujarati, MAI: Maithili, PA: Punjabi, UR: Urdu, ML: Malayalam, TA: Tamil, TE: Telgu, BHO: Bhopuri, MAG: Magali \end{table} Table 3: Corpus Statistics showing the number of training, validation, and test sentences for each domain where the _output_length_ and the _reference_length_ are the lengths of the predicted sentences and the reference sentences, respectively. We also perform a comparison between SMT without WX-transliteration and SMT with it. These two sets of results are also compared with the proposed approach as shown in Table 6. In the case of SMT also we can easily note that the performance improves in most cases by using WX as the intermediate notation, even though SMT is not using subword embeddings. We also present some basic analysis of the scores as shown in Tables 4 and 5. We use corpus-based language relatedness and complexity measures for further analysis for this purpose in the next section. ### Similarity between languages Since there are no definitive methods to judge the similarity between two languages, we use the following techniques to compute the similarity between the languages: #### 6.1.1 Snsglmscore We use character-level _n_-gram language models based SSNGLMScore to measure the relatedness between languages [28, 32]. SSNGLMScore is computed as follows: \[S_{\mathit{all}}=\sum_{\mathit{all}}^{\mathit{m}}p_{\mathit{all}}(w_{\mathit{s }}|w_{\mathit{t}}^{\mathit{m}-1}), \tag{5}\] where \(S\) stands for Scaled Sum of _n_-gram language model scores. \[\mathit{MS}_{\mathit{all}}=\frac{S_{\mathit{all}}-\min(S_{\mathit{SLTL}})}{ \max(S_{\mathit{SLTL}})-\min(S_{\mathit{SLTL}})}, \tag{6}\] where, _sl_ and _tl_ represent the source language and the target language, respectively. Moreover, _sl_\(\in\) SL(Gujarati, Marathi, Naithili, Nepali, Urdu, Punjabi, Hindi, Malayalam, Tamil, Telugu, Bhojpuri, Magahi) and \(m\) is the total number of sentences in the target language _tl_\(\in\) TL(Gujarati, Marathi, Naithili, Nepali, Urdu, Punjabi, Hindi, Malayalam, Tamil, Telugu, Bhojpuri, Magahi). 
We train the language model using a 6-gram character-level KenLM model on the source monolingual corpus \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline **Languages(xx)** & \multicolumn{2}{c|}{**BLEU**} & \multicolumn{2}{c|}{**chrF2**} & \multicolumn{2}{c|}{**TER**} \\ \hline \multicolumn{7}{|c|}{**XX\(\rightarrow\)HI**} \\ \cline{2-7} & **Guzmán et.al[3]** & **Proposed** & **Guzmán et.al[3]** & **Proposed** & **Guzmán et.al[3]** & **Proposed** \\ \cline{2-7} GU & 33.14 & **33.15** & **58** & 57 & **0.541** & 0.548 \\ NE & 30.51 & **41.97** & 46 & **49** & 0.658 & **0.652** \\ MR & 16.87 & **22.37** & 43 & **44** & **0.707** & 0.709 \\ PA & 78.56 & **81.05** & 82 & **82** & 0.220 & **0.216** \\ UR & 28.74 & **30.08** & 45 & **45** & 0.688 & **0.657** \\ MAI & 79.49 & **81.80** & **82** & 81 & **0.242** & 0.251 \\ \hline \multicolumn{7}{|c|}{**HI\(\rightarrow\)XX**} \\ \cline{2-7} & **Guzmán et.al[3]** & **Proposed** & **Guzmán et.al[3]** & **Proposed** & **Guzmán et.al[3]** & **Proposed** \\ \cline{2-7} GU & 25.47 & **25.82** & 56 & **56** & **0.616** & 0.619 \\ NE & 32.89 & **43.52** & 50 & **51** & **0.630** & 0.637 \\ MR & 14.05 & **14.76** & 41 & **44** & 0.789 & **0.762** \\ PA & 80.01 & **81.87** & 83 & **84** & 0.206 & **0.203** \\ UR & 22.74 & **24.35** & 46 & **47** & 0.597 & **0.596** \\ MAI & **86.58** & 83.52 & **89** & 86 & **0.148** & 0.168 \\ \hline \end{tabular} \end{table} Table 4: Experiment results (BLEU, chrF2, and TER scores). \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline **Languages(xx)** & \multicolumn{2}{c|}{**LEBLEU**} & \multicolumn{2}{c|}{**WpLeBLEU**} & \multicolumn{2}{c|}{**WER**} \\ \cline{2-7} & \multicolumn{2}{c|}{**XX\(\rightarrow\)HI**} \\ \cline{2-7} & **Guzmán et.al[3]** & **Proposed** & **Guzmán et.al[3]** & **Proposed** & **Guzmán et.al[3]** & **Proposed** \\ \cline{2-7} GU & **0.663** & 0.657 & **0.663** & 0.657 & 66.77 & **66.29** \\ NE & 0.543 & **0.547** & 0.543 & **0.547** & **66.99** & 67.71 \\ MR & 0.495 & **0.541** & 0.495 & **0.541** & **72.78** & 73.36 \\ PA & 0.853 & **0.853** & 0.853 & **0.853** & 22.29 & **21.83** \\ UR & 0.564 & **0.566** & 0.564 & **0.566** & 68.34 & **67.20** \\ MAI & **0.865** & 0.851 & **0.865** & 0.851 & **24.34** & 25.23 \\ \hline \multicolumn{7}{|c|}{**HI\(\rightarrow\)XX**} \\ \cline{2-7} & **Guzmán et.al[3]** & **Proposed** & **Guzmán et.al[3]** & **Proposed** & **Guzmán et.al[3]** & **Proposed** \\ \cline{2-7} GU & 0.622 & **0.623** & 0.622 & **0.623** & **73.11** & 73.33 \\ NE & **0.547** & 0.519 & **0.547** & 0.519 & **63.41** & 65.31 \\ MR & **0.485** & 0.454 & **0.485** & 0.454 & 80.10 & **77.46** \\ PA & 0.858 & **0.865** & 0.858 & **0.865** & 20.88 & **20.57** \\ UR & 0.619 & **0.629** & 0.619 & **0.629** & 62.35 & **62.27** \\ MAI & **0.916** & 0.908 & **0.916** & 0.908 & **14.83** & 16.89 \\ \hline \end{tabular} \end{table} Table 5: LEBLEU, WupLeBleu and WER scores. (_sf_). Each language model is tested on target language (_tf_), and the scores are reported. Table 7 lists the cross-lingual similarity scores of Hindi, Gujarati, Marathi, Nepali, Maithili, Punjabi, Malayalam, Tamil, Telugu, Bhojupuri, Magali, and Urdu with each other. Based on SSNGLMScore, Bhojupuri, Maithili and Magahi are the closest to Hindi, which matches linguistic knowledge about them, whereas Urdu seems to as far from Hindi as Malayalam and more than Telugu. 
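To illustrate how such scores could be computed, a rough sketch using the kenlm Python bindings is given below. The model path and the toy WX-encoded corpora are hypothetical stand-ins for the 6-gram character-level models and monolingual corpora described above.

```python
import kenlm

def corpus_lm_score(model, sentences):
    # Sum of character-level LM log-scores over a target corpus, as in Eq. (5).
    total = 0.0
    for sent in sentences:
        # Map spaces to "_" so word boundaries stay visible to the character model.
        chars = " ".join(sent.replace(" ", "_"))
        total += model.score(chars, bos=True, eos=True)
    return total

# Hypothetical: a 6-gram character-level KenLM model trained on Hindi monolingual text.
hindi_char_lm = kenlm.Model("hi_char6.binary")

target_corpora = {  # toy WX-encoded sentences, purely illustrative
    "NE": ["rAma Gara jAwA hE", "vaha pustaka paDZawA hE"],
    "MAI": ["rAma Gara jAiwa aCi"],
    "UR": ["vaha bAjZAra jA rahA hE"],
}

raw = {tl: corpus_lm_score(hindi_char_lm, sents) for tl, sents in target_corpora.items()}

# Min-max scaling across target languages, as in Eq. (6).
lo, hi = min(raw.values()), max(raw.values())
print({tl: (s - lo) / (hi - lo) for tl, s in raw.items()})
```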
The reasons Urdu is far from Hindi is partly that Urdu is written in a different kind of script from Hindi which does not have a straightforward mapping to WX, but mainly because, though grammatically almost identical, the two use very different vocabularies in written and formal forms. Maithili is also the second official language of Nepal and is also highly similar to Nepali, perhaps due to prolonged close contact. What is more surprising is that the similarity between Urdu and Nepali is relatively high, whereas that between Urdu and Hindi is among the lowest. This could be because of the nature of the corpus. Going through Tables 4 and 5, we find that there is an improvement in every metric except WER and TER in a majority of cases when we apply the proposed method on the translation direction from Maithili, Gujarati, Marathi, Nepali, Punjabi, and Urdu to Hindi. This observation allows us to assert that the proposed approach improves performance for translation between similar languages. Thus, even though the similarity measure we used mixes different kinds of similarities, it is suitable for our purposes because our method is based on sub-word and multilingual modelling. We also see a gain of +1.34 BLEU points on Hindi to Urdu despite Urdu being far away from the rest of the language pairs in terms of the similarity score we used. There is a considerable improvement of +11.46 BLEU points on HI\(\rightarrow\)NE and +10.63 BLEU points on NE\(\rightarrow\)HI language pairs. #### 6.1.2 char-BLEU, TER and chrF2 To better understand the slight fall in BLEU points despite the similarity for MAI \(\rightarrow\) HI and large increment in the case of NE\(\leftrightarrow\)HI (where Nepali and Maithili are known to be close), we also compute similarity by applying char-BLEU [44], chrF2, and TER on a training dataset of all language pairs. The reason behind using char-BLEU and chrF2 for similarity is that since they are character-based metrics, there is a greater chance of covering the morphological aspects. Before calculating the char-BLEU, the TER, and the chrF2 evaluation metrics, data must be in the same script to evaluate the score. So, we convert the corpus from UTF-8 to WX-notation. Table 8 contains the char-BLEU score of language pairs, whereas Table 9 contains the TER and chrF2 scores of each language pair. We see Table 8 and 9 and find out that HI and MAI are still more similar compared to other pairs. We can only hypothesize the reason being that this is due to the nature of the data that we have used. ### Analysis on language complexity #### 6.2.1 Morphological complexity Since Indian languages are morphologically rich, machine translation systems based on word tokens have difficulty with them. Therefore, we also tried to relate the results obtained with estimates of such complexity obtained from character-level entropy. It is reasonable to assume that the greater the character-level entropy, the more morphologically complex a language is likely to be. Character-level entropyWe used Character-level word entropy to estimate morphological redundancy, following Bharati et al.[74] and Bentz and Alikaniotis 2016 [35]. A "word" is defined in our experiments as a space-separated token, i.e., a string of alphanumeric Unicode characters delimited by white spaces. The average information content of character types for words is then calculated in terms of Shannon entropy [36]: \[H(T)=-\sum_{i=1}^{V}p(c_{i})\log_{2}(p(c_{i})) \tag{7}\] where \(V\) is number of characters (\(c_{i}\)) in a word. 
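As a concrete illustration, the character-level entropy of a corpus can be estimated with a few lines of Python. The toy lines below are hypothetical, and the exact aggregation (per-word versus corpus-wide character counts) should be matched to the authors' setup.

```python
import math
from collections import Counter

def character_entropy(lines):
    # Shannon entropy (Eq. 7) of the character unigram distribution over word tokens.
    counts = Counter()
    for line in lines:
        for word in line.split():   # a "word" is a whitespace-separated token
            counts.update(word)     # count each character occurrence inside the word
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy WX-encoded lines standing in for a monolingual corpus (illustrative only).
sample = ["merA nAma rAma hE", "vaha Gara jA rahA hE"]
print(f"character-level entropy = {character_entropy(sample):.4f}")
```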
Table 10 lists the word (unigram) entropy of languages at character level, which indirectly represents languages' lexical richness, i.e., how complex - in terms of characters they are made up of - word forms are. Since we compute the unigram entropy based on characters, we can say that lexical richness also indicates morphological complexity, both derivational and inflectional. Based on the corpus-based word entropy values, it appears that Hindi is more morphologically complex than the other six languages. However, this may be more of derivational complexity rather than inflectional \begin{table} \begin{tabular}{|l|c c c|} \hline **Languages(xx)** & \multicolumn{3}{c|}{**BLEU**} \\ \hline & \multicolumn{3}{c|}{**XX\(\rightarrow\)HI**} \\ \cline{2-4} & **SMT** & **SMT + WX** & **Proposed** \\ GU & **43.49** & 30.69 & 33.15 \\ NE & 40.14 & **53.21** & 41.97 \\ MR & **7.41** & 1.46 & 22.37 \\ PA & 68.34 & **71.22** & 81.05 \\ UR & 19.21 & **21.84** & 30.08 \\ MAI & 79.56 & **81.46** & 81.80 \\ \hline & \multicolumn{3}{c|}{**HI\(\rightarrow\)XX**} \\ \hline & **SMT** & **SMT + WX** & **Proposed** \\ GU & **39.20** & 25.89 & 25.82 \\ NE & 40.21 & **54.84** & 43.52 \\ MR & **7.36** & 1.48 & 14.76 \\ PA & 67.21 & **70.64** & 81.87 \\ UR & 18.24 & **18.41** & 24.35 \\ MAI & 79.12 & **83.06** & 83.82 \\ \hline \end{tabular} \end{table} Table 6: BLEU score-based comparison of SMT, SMT + WX and the proposed approaches. complexity, as Hindi is relatively simpler in terms of inflectional morphology. The high derivational complexity of Hindi is because it is the official language of India and is more standardized than most other Indian languages. It, therefore, has borrowed and coined a large number of complicated words and technical terms, whether from Persian or Sanskrit or English. This adds a great deal to the derivational complexity of written formal Hindi, compared to commonly spoken Hindi. At least, this is our hypothesis based on the similarity and complexity results. We also find that our approach shows a considerable improvement of about more than 10 BLEU points in both directions for the Hindi-Nepali language pair, i.e., NE\(\rightarrow\)HI and HI\(\rightarrow\)NE. Such improvement may be attributed to the effect caused by projecting to a common multilingual orthographic-phonetic notation, that is, WX. This probably helps the Transformer learn the context between languages better with the help of a sentence piece tokenizer. In Tables 11, 12 and 13, we present the values of word entropy and redundancy at character level. These tables show that the entropy increases when converting to WX and redundancy decreases. This is evidence of the fact that the project to a common orthrographic and phonetic space causes the entropy to increase and redundancy to decrease, thus allowing more compact representations to be learnt from the data after conversion to WX in our case. #### 6.2.2 Syntactic complexity Perplexity Perplexity (_PP_) of a language can be seen as a weighted average of the reciprocal of its branching factor [28]. Branching factor is the number of possible words that can succeed any given word based on the context. Therefore, perplexity - as a kind of the mean branching factor - is a mean representative of the possible succeeding words given a word. Thus, it can be seen as a rough measure of the syntactic complexity. If the model is a good enough representation of the true distribution for the language, then the _PP_ value will actually indicate syntactic complexity. 
To estimate distances of other languages from Hindi using perplexity, we trained the perplexity model on the Hindi corpus and tested it on the corpora of other languages. \[\textit{PP}(C)=\sqrt[n]{\frac{1}{\textit{P}(S_{1},S_{2},S_{3},...,S_{n})}} \tag{8}\] where corpus \(C\) contains \(n\) sentences with \(W\) words. Table 14 and 15 contain the asymmetric and symmetric perplexity -- average of the two translation directions -- values between the concerned language pairs and indicate their distances from Hindi based on character-level language model. Pairs having higher perplexity scores means the languages are more distant. We see language pairs Urdu and Hindi have more perplexity scores. This is mostly because these two languages, though almost identical in spoken form and in terms of core syntax and core vocabulary, use very different extended vocabularies for written and formal purposes, besides using very different writing systems. Standard written Urdu uses Persian, Arabic, and Turkish words heavily, whether adapted phonologically or not. Given the small amounts of data, it is not surprising that the values of perplexity are different in the two translation directions. Similarly, standard and written Hindi uses words much more heavily derived or borrowed or even coined \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|} \hline **Model** & **BHO** & **CU** & **HI** & **MAG** & **MAI** & **ML** & **MR** & **NE** & **PA** & **TA** & **TE** & **UR** \\ \hline **BHO** & - & 0.5659 & 0.6725 & 0.6997 & 0.7235 & 0.4090 & 0.5687 & 0.4979 & 0.4580 & 0.3233 & 0.5057 & 0.4237 \\ \hline **GU** & - & - & 0.5483 & 0.5642 & 0.6449 & 0.3727 & 0.5411 & 0.3868 & 0.3408 & 0.2531 & 0.4578 & 0.3787 \\ \hline **HI** & - & - & - & 0.6331 & 0.6598 & 0.3536 & 0.5717 & 0.4181 & 0.4046 & 0.2564 & 0.4567 & 0.3670 \\ \hline **MAG** & - & - & - & - & 0.7762 & 0.4414 & 0.5724 & 0.5671 & 0.4827 & 0.3736 & 0.5248 & 0.5245 \\ \hline **MAI** & - & - & - & - & - & 0.5833 & 0.6496 & 0.6968 & 0.5734 & 0.5453 & 0.6435 & 0.7040 \\ \hline **ML** & - & - & - & - & - & 0.3736 & 0.3388 & 0.1968 & 0.3792 & 0.4507 & 0.2759 \\ \hline **MR** & - & - & - & - & - & - & - & 0.4023 & 0.3496 & 0.2637 & 0.4771 & 0.3498 \\ \hline **NE** & - & - & - & - & - & - & - & 0.2661 & 0.2784 & 0.3985 & 0.4354 \\ \hline **PA** & - & - & - & - & - & - & - & - & 0.1449 & 0.2718 & 0.2938 \\ \hline **TA** & - & - & - & - & - & - & - & - & - & - & 0.2972 & 0.2641 \\ \hline **TE** & - & - & - & - & - & - & - & - & - & - & 0.3493 \\ \hline **UR** & - & - & - & - & - & - & - & - & - & - & - & - \\ \hline \end{tabular} \end{table} Table 7: Similarity between languages using SSNGLMScore \begin{table} \begin{tabular}{|l|c|} \hline **Language** & **char-BLEU** \\ \hline Gujarati\(\leftrightarrow\)Hindi & 47.29 \\ Marathi\(\leftrightarrow\)Hindi & 35.05 \\ Nepali\(\leftrightarrow\)Hindi & 40.53 \\ Maithili\(\leftrightarrow\)Hindi & 66.70 \\ Punjab\(\leftrightarrow\)Hindi & 37.17 \\ Urdu\(\leftrightarrow\)Hindi & 8.61 \\ \hline \end{tabular} Note: Applying char-BLEU score on the training data of both the languages of the pair \end{table} Table 8: char-BLEU score on the training data from Sanskrit. Despite higher perplexity between these two languages, our approach gives a _+2_ increment in the BLEU score, probably because the common core syntax and core vocabulary manifest themselves in every phrase or sentence and thus have higher probabilistic weight. They are, in fact, completely mutually intelligible in the spoken forms and partly in the written form. 
There are also a lot of Indians who can comfortably read and understand both these languages, even in their standard, written, and literary forms. The use of WX perhaps allows the models to exploit these core similarities better.
## 7 Ablation Study This section discusses ablation studies conducted with the proposed method on distant and zero-shot language pairs and on back-translation.
### Analysis of the proposed approach on more distant language pairs To see whether and to what extent our approach generalizes to more distant language pairs, we also analyze the performance of the proposed approach on ML\(\leftrightarrow\)HI, TA\(\leftrightarrow\)HI, and TE\(\leftrightarrow\)HI. Malayalam, Tamil, and Telugu belong to the Dravidian family, whereas Hindi is from the Indo-Aryan family. We note that translating between these three Dravidian languages and Hindi still leads to improvement, considering both chrF2 and BLEU scores. The results are shown in Table 16.
### Unsupervised settings We also demonstrate the proposed approach under unsupervised scenarios on the zero-shot language pairs Bhojpuri-Hindi and Magahi-Hindi, for which no parallel training corpora are available. The validation datasets for the zero-shot experiments are collected from the LoResMT 2020 shared tasks14. To train the model, we use the NE\(\leftrightarrow\)HI language pair and apply language transfer to evaluate the model on the zero-shot validation datasets. The reason for using the NE\(\leftrightarrow\)HI pair to train the model for the unsupervised experiments on Bhojpuri-Hindi and Magahi-Hindi is the high similarity of NE\(\leftrightarrow\)HI with both of these zero-shot pairs, based on [65]. The results are shown in Table 17, demonstrating an improvement in the unsupervised settings as well. Footnote 14: [https://sites.google.com/view/loresmt](https://sites.google.com/view/loresmt)
### Back-translation Finally, we report results on using the approach along with back-translation (BT), which has been shown to benefit machine translation for very low resource languages. We selected the Gujarati-Hindi language pair for performing back-translation with the proposed approach. With back-translation as well, the proposed approach shows an improvement of _+0.97_ BLEU points on HI\(\rightarrow\)GU and _+1.36_ BLEU points on GU\(\rightarrow\)HI, as shown in Table 18.
## 8 Conclusion and Future Scope In this work, we have proposed a simple but effective MT approach that encodes the source and target scripts into an intermediate representation, WX-notation, which helps the models to be learnt in a common phonetic and orthographic space. This language projection reduces surface-level complexity and allows the neural network to better model the relationships between languages, providing improved translations. Further, we have investigated these results by estimating the similarities and complexities of language pairs and individual languages to verify that our results are consistent and agree with intuitively known facts about the closeness or distance between various language pairs. Moreover, this approach works well under unsupervised settings and performs reasonably for some distant language pairs.
The proposed approach improves upon the baseline approaches by _0.01_ to _11.46_ BLEU points.
\begin{table} \begin{tabular}{|l|r|r|r|} \hline **Languages** & **Character Entropy** & **Character Entropy\({}^{*}\)** & Difference \\ \hline Gujarati & 5.0368 & 3.7454 & 1.2914 \\ Marathi & 5.020 & 3.6846 & 1.3374 \\ Nepali & 4.6722 & 3.5770 & 1.0952 \\ Maithili & 5.1119 & 3.3912 & 1.1997 \\ Punjabi & 5.0834 & 3.7892 & 1.2902 \\ Urdu & 4.8821 & 4.1198 & 0.7623 \\ Hindi & 5.2195 & 3.7974 & 1.4221 \\ \hline \end{tabular} \({}^{*}\) After applying WX-notation \end{table} Table 10: Character-based entropy of languages with and without applying WX-notation
\begin{table} \begin{tabular}{|l|r|r|r|r|r|r|} \hline **Languages** & \(GU\to HI\) & \(MR\to HI\) & \(NE\to HI\) & \(MAI\to HI\) & \(PA\to HI\) & \(UR\to HI\) \\ \hline TER & 1.066 & 1.300 & 1.052 & 0.610 & 0.988 & 1.093 \\ chrF2 & 38 & 29 & 34 & 65 & 32 & 12 \\ \hline **Languages** & \(HI\to GU\) & \(HI\to MR\) & \(HI\to NE\) & \(HI\to MAI\) & \(HI\to PA\) & \(HI\to UR\) \\ \hline TER & 0.884 & 0.940 & 0.887 & 0.555 & 0.906 & 1.044 \\ chrF2 & 39 & 29 & 36 & 62 & 30 & 10 \\ \hline \end{tabular} Note: TER and chrF2 scores are computed on the training data of both the languages of a pair \end{table} Table 9: TER and chrF2 scores on the training data
The proposed approach has some limitations and boundary conditions. First, it requires a common transliteration script, which may not be available for all morphologically rich languages. Second, it is currently only applicable to Indian languages. Third, we can see from Table 16 that performance on distant language pairs falls short of expectations. In the future, we plan to extend this approach in the ways described below: * **Multilingual NMT system:** Since the proposed approach transforms all the Indian language scripts into a common notation called WX, this conversion encourages the subword embeddings to behave like character embeddings. It may therefore be more beneficial to implement this approach in multilingual system(s) covering all Indian languages. * **BART, MBART, and other representations:** We tried MBART-based translation of Gujarati to Hindi and Hindi to Gujarati, and the results were worse than those of a vanilla Transformer. So, we plan to extend the proposed approach to more representations such as BART, MBART, and other state-of-the-art representation techniques for deep learning. * **Dravidian languages and the rest of the Indo-Aryan language family:** We also plan to extend the proposed approach to the Dravidian language family and the rest of the Indo-Aryan languages.
2306.04043
An Analysis of Reader Engagement in Literary Fiction through Eye Tracking and Linguistic Features
Capturing readers' engagement in fiction is a challenging but important aspect of narrative understanding. In this study, we collected 23 readers' reactions to 2 short stories through eye tracking, sentence-level annotations, and an overall engagement scale survey. We analyzed the significance of various qualities of the text in predicting how engaging a reader is likely to find it. As enjoyment of fiction is highly contextual, we also investigated individual differences in our data. Furthering our understanding of what captivates readers in fiction will help better inform models used in creative narrative generation and collaborative writing tools.
Rose Neis, Karin de Langis, Zae Myung Kim, Dongyeop Kang
2023-06-06T22:14:59Z
http://arxiv.org/abs/2306.04043v1
# An Analysis of Reader Engagement in Literary Fiction ###### Abstract Capturing readers' engagement in fiction is a challenging but important aspect of narrative understanding. In this study, we collected 23 readers' reactions to 2 short stories through eye tracking, sentence-level annotations, and an overall engagement scale survey. We analyzed the significance of various qualities of the text in predicting how engaging a reader is likely to find it. As enjoyment of fiction is highly contextual, we also investigated individual differences in our data. Furthering our understanding of what captivates readers in fiction will help better inform models used in creative narrative generation and collaborative writing tools. The interactive demo is available here1. Footnote 1: [https://bookdown.org/bishop_pilot/acldemo2/ACLDemo.html](https://bookdown.org/bishop_pilot/acldemo2/ACLDemo.html) ## 1 Introduction The question of reader engagement in fiction has been studied in the psychology field for decades, with some of the foundational theoretical work from Gerrig (1993) on Transportation Theory paving the way for more recent theoretical frameworks and experimental setups, notably the work by Melanie C. Green (2004) and Busselle and Bilandzic (2009). However, as Jacobs (2015) emphasized in his article on the neurocognitive science of literary reading, the samples normally collected are small and not enough to compensate for individual differences in reading patterns due to reader context and other situational factors. In order to help close the experimental gap, one contribution of this study is to provide a data set of reader reactions to natural stories, which Jacobs refers to as "hot" experimental research. This data, along with the extraction of linguistic features, allows us to test theories around reader engagement and discover which textual qualities have the most impact. In our study, we have the following research questions: * one being immersed and the other more reflective. We also looked at whether linguistic features of the text related to a more affective reading mode led to higher dwell times as Jacobs predicts. * **RQ2: How much is engagement dependent on reader context vs. linguistic features?** In order to address this question, we evaluated how well the features we extracted could predict whether a sentence was highlighted by readers. * **RQ3: Are dwell time patterns consistent across readers?** We scaled dwell times per participant and evaluated the pattern over the story to see if dwell times increased and decreased in the same areas of the story for different readers. With respect to RQ1, our findings indicated that negatively-valenced, concrete sentences had higher dwell times. No relationship was found between the highlights and dwell times. This may be due to the fact that the highlighting data is sparse. For RQ2, we found that features such as valence, sentiment, and emotion were significant across readers, although the reader context accounted for much of the variance in highlighting annotations. Regarding RQ3, there was a high amount of variance between readers for dwell time. However, once dwell times were individually scaled, we could see some consistency in their patterns, particularly when looking only at highly engaged readers. For future studies, a modified highlighting exercise in which participants must select a category for each sentence -- including none -- could result in less sparse annotation data. 
A more complete an notation of the story text would allow us to explore the connection between dwell time and different modes of engagement. As new methods are created for representing complex features of stories, such as character relationships and story tension, data sets like ours can be used to find more meaningful relationships between the story text and how engaging it is. ## 2 Related Work In his model for the neurocognitive poetics of literary reading, Jacobs (2015) proposed two modes of reading: one fast track -- "immersion" and one slow -- "aesthetic trajectory". The former is proposed to be brought on by things like familiarity, suspense, sympathy, and vicarious hope; whereas the latter is a more reflective and connected mode brought on by aesthetic appreciation, more complex emotion, and unfamiliar situations. We used this framework to inform what variables we expected to have an impact on dwell time. Busselle and Bilandzic (2009) conducted a series of studies to narrow down the salient aspects of reader engagement and created a general media engagement scale. The aspects they defined are narrative understanding, attentional focus, emotional engagement, and narrative presence, and the scale they created include questions related to those aspects. We adapted this scale for written narrative to gauge overall interest in the stories used in our study. In addition, in order to obtain more granular information, we used these aspects to design an annotation task that would provide sentence-level feedback. Using visualizations and linear mixed effect models, we explored textual features that had an impact on engagement and dwell time across readers. There have been several other eye tracking as well as fMRI studies in the area of reader engagement (a few are shown in Table 1). One 13-participant study showed that words in enactive passages had on average longer fixation durations and dwell times (Magyari et al., 2020). Based on survey responses, the authors hypothesized that in the enactive texts, the ease of imagery contributes to greater involvement in imagination and results in an overall slower reading speed. Hsu et al. (2015) conducted an fMRI study and found valence and arousal scores as good predictors of overall emotional experience of the reader. ## 3 Methods Participant study designThe study asked 31 English speakers (17 female, 11 male, 3 other, average age: 26) to read two short stories by Anton Chekhov2 while their eyes were tracked, and then answer an engagement scale survey: Footnote 2: “Expensive Lessons” and “Schoolmistress” * I was curious about what would happen next. (+) * The story affected me emotionally. (+) * While reading my body was in the room, but my mind was inside the world created by the story. (+) * At times while reading, I wanted to know what the writer's intentions were. (+) * While reading, when a main character succeeded, I felt happy, and when they suffered in some way, I felt sad. (+) * The characters were alive in my imagination. (+) \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline & Ours & Kunze et al. & Magyari & Hsu et al. & Maslej et al. \\ & & (2015) & et al. 
(2020) & (2015) & (2019) \\ \hline \multicolumn{5}{l}{**Data gathered**} \\ \hline Eye tracking & x & x & x & & \\ \hline Saccade angle & & x & x & \\ \hline fMRI & & & & x & \\ \hline Engagement survey & x & x & x & & x \\ \hline Engagement annotation & x & & & x & \\ \hline \multicolumn{5}{l}{**Textual features extracted**} \\ \hline Emotional arc & x & & & & \\ \hline Lexical categories & x & & & x & x \\ \hline Description category & & & x & & \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison between our study and other similar experiments. * I found my mind wandering while reading the story. (-) * I could vividly imagine the scenes in the story. (+) * At points, I had a hard time making sense of what was going on in the story (-) After reading through both stories, they completed a highlighting exercise where they highlighted areas according to the following categories: * _Present_: Able to vividly picture the scene in the story * _Confused_ * _Curious_: Curious about what will happen next * _Connected_: Connected to the character; able to identify with them or feel their emotions * _Other_: Enjoyed it for a different reason Eye-tracking dataDue to calibration issues, 8 samples were discarded, leaving 23 (13 female, 8 male, 2 other, average age: 28, std.: 10). See Table 4 for more details on the participants. The eye tracking results were drift corrected and interest area reports were exported using words as interest areas. Outliers for dwell time were removed using the inner quartile range method (1.7% of the data). The data was aggregated to the sentence level and dwell time values were normalized by sentence character count. To handle missing data, null values for the eye tracking features were filled with the average of the 5 nearest sentences (5.7% of all sentences read across participants). Dwell times were then scaled individually per participant using min-max scaling. This allowed each participant's dwell time patterns to be preserved when scaling. Figure 2: Highlights and features. and then computing the mean and difference between minimum and maximum scores. To obtain lemmas, we used the BookNLP 4 code package. All feature scores used in our models are scaled to \([0,1]\). Footnote 4: BookNLP As these sentence-level features can have high variability on their own, we performed low-pass filtering by Fourier transformation on sliding windows of ten sentences. As a result, we were able to filter out extreme features and smoothly track the patterns of features that persist over a longer context. LimitationsThere are a few issues with the data that should be mentioned. Since the participants were asked to read two stories in a row, it is best to make sure there is a balance in which story is read first. However, due to poor tracking of reading order, our data ended up with a skew towards one story (Expensive Lessons: 16, Schoolmistress: 7), which may affect level of attention for the second story. In addition, the stories did not receive high scores on average in the engagement survey. On a scale from 0-4, Expensive Lessons got an average of 2.09 and Schoolmistress averaged 1.92. Ideally, stories used for such studies should be more widely popular in order to make engagement more likely. Perhaps in part due to the low average score, the highlighting data is sparse, making it difficult to find relationships between dwell time and engagement categories. 
Finally, although efforts were made to recruit participants from the larger community, a majority of the participants were University students and staff, with a minority from outside the University community. As seen in Table 4, this resulted in a skew towards younger, college-educated participants. Observations from this study may not generalize well to other groups. ## 4 Results Other studies have shown that valence and arousal play an important role in predicting interest in a story (Maslej et al., 2019; Hsu et al., 2015) and Jacobs (2015) emphasized the importance of affective processes in his framework. In order to determine the importance of these values for our data, we used linear mixed model analysis. Using lme4 (Bates et al., 2015) and lmerTest (Kuznetsova et al., 2017), we fit predictions of the proportion of the sentence highlighted and dwell time, with random effects of participant (n=23) and story (n=2). Variables were tested for collinearity using the variance inflation factor (VIF) method outlined by Zuur et al. (2010), and no variables exceeded the recommended threshold of 3. Observations and fixed effects are on a \([0,1]\) scale. See Appendix B for exact model definitions. ### Predicting engagement highlights We fit a model for predicting the proportion of a sentence highlighted by a reader in order to see how significant the textual features were across readers to address RQ2. Table 2 shows major results in predicting annotated highlights with different linguistic and discourse features. Our results support a significance of valence mean (p=0.01), similar to Hsu et al. (2015). Unlike in other studies, we found that arousal mean had no significance (p=0.686). However, similar to Hsu et al. (2015), valence-span -- the difference between valence max and valence min (p<0.001) and arousal-span -- the difference between arousal max and arousal min (p<0.001) were significant. The positive slope for both (0.1) suggests that the reader was more engaged in sentences with a higher range of valence and arousal. Of the emotion categories (i.e. anger, disgust, fear, joy, neutral, sadness, surprise), surprise was found to be a significant effect (p=0.001) with a positive slope (0.08). Other features that had an impact were negative sentiment score (p<0.001) and character count (p<0.001). The positive slope for negative sentiment (0.09) partially align with \begin{table} \begin{tabular}{l r r r r} \hline & Slope & \(Pr(>|t|)\) & Sig. & VIF \\ \hline (Intercept) & -0.05 & 0.49 & & \\ char. ct. & 0.16 & \textless{} 0.001 & \(***\) & 2.19 \\ word freq. & 0.07 & 0.08 &. & 1.23 \\ positive & 0.03 & 0.09 &. & 1.58 \\ negative & 0.09 & \textless{} 0.001 & \(***\) & 1.73 \\ concrete & 0.02 & 0.15 & & 1.24 \\ valence & 0.11 & 0.011 & \(*\) & 1.39 \\ arousal & -0.02 & 0.68 & & 1.11 \\ val.-span & 0.11 & \textless{} 0.001 & \(***\) & 2.75 \\ ar.-span & 0.11 & \textless{} 0.001 & \(***\) & 2.61 \\ surprise & 0.08 & 0.001 & \(**\) & 1.11 \\ disgust & 0.03 & 0.059 &. & 1.15 \\ \hline \end{tabular} \end{table} Table 2: Fixed Effects: predicting highlights the Maskje et al. (2019) study, where negative emotion predicted higher story ratings, although unlike their findings, there was no relationship between concreteness and engagement. When including random effects that model individual participants, the model explains 23% of the variance; without these effects the explained variance drops to 3.7%. 
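For readers working in Python, a rough analogue of this model can be sketched with statsmodels' MixedLM. The data frame below is a hypothetical toy stand-in for the sentence-level observations, the coefficients used to simulate it are arbitrary, and only the participant random intercept is modelled (the story random effect of the original lme4 specification is omitted for brevity).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a toy sentence-level data frame (all names and values are hypothetical).
rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "participant": np.repeat([f"p{i}" for i in range(6)], n // 6),
    "valence": rng.uniform(0, 1, n),
    "arousal_span": rng.uniform(0, 1, n),
    "negative": rng.uniform(0, 1, n),
    "char_count": rng.uniform(0, 1, n),
})
df["highlight_prop"] = (
    0.11 * df["valence"] + 0.11 * df["arousal_span"]
    + 0.09 * df["negative"] + 0.16 * df["char_count"]
    + rng.normal(0, 0.05, n)
).clip(0, 1)

# Linear mixed model with a random intercept per participant.
model = smf.mixedlm(
    "highlight_prop ~ valence + arousal_span + negative + char_count",
    data=df,
    groups=df["participant"],
)
print(model.fit().summary())
```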
So, with respect to RQ2, the reader context is important in elucidating the relationships of the fixed effects with engagement. Since the proportion is bounded between 0 and 1, the model residuals are not normally distributed. We therefore also fit a generalized mixed model with a binomial distribution, with the observed outcome a binary variable representing whether or not the sentence had any highlighting. Table 5 shows largely the same results, except that word frequency and positive sentiment are not significant when predicting the binary outcome. ### Predicting eye movement dwell time To address RQ1, we fit a model that predicted dwell time (Table 3). In our findings, valence mean was significant (p<0.001) with a negative slope (-0.06) and arousal mean was not (p=0.349). Valence-span (p=0.0029) and arousal-span (p<0.001) were found to be significant. The negative relationship between valence mean and dwell time supports part of Jacobs' proposed framework, which states that passages that engage our emotions, particularly negative valence, would likely result in higher dwell times. There was no relationship between highlights and dwell time, however, so we were not able to confirm whether the different categories of engagement correlated with different modes of reading. There was also a positive relationship between concreteness and dwell time (p<0.001, slope=0.01). According to the prevailing theory in neuroscience, "words referring to easily perceptible entities coactivate the brain regions involved in the perception of those entities" (Brysbaert et al., 2014). This observation may indicate that this leads to longer processing times. So indirectly our observation has some overlap with the findings of Maslej et al. (2019), where enactive passages had higher dwell times, although the linguistic features of their study differed. To evaluate how consistent dwell time patterns were across readers (RQ3), we examined the dwell time graphs of participants to see if there was a similar pattern. We noticed an especially striking similarity in patterns amongst readers who were highly engaged (see Figure 3). Although removing word-level outliers for dwell time improved the skewness of the data, it is still heavily skewed to the left. This resulted in residuals with a fat tail and therefore not perfectly normal. A log transformation improved the normality of the data, but it resulted in less normal residuals. This may impact the reliability of the above results. ## 5 Conclusion By collecting reader feedback and eye tracking data on literary fiction, we were able to support findings of other studies that emphasized the importance of affective language for reader immersion. Although we found no direct relationship between dwell times and highlighted text, the dwell time model and the highlight model shared some predictors, such as valence and arousal. One possibility to explore for future studies would be to look at whether this overlap is related to two different modes of engagement -- one that leads to higher dwell times and one that leads to lower dwell times. However, as mentioned, this exploration would require a more complete annotation. This could be achieved by selecting more engaging stories and modifying the highlighting exercise to require readers to annotate each sentence with a category or select none. Further analysis on our data set could be done by extracting more complex features. 
This would expand the analysis beyond the lexical level and would allow us to find more interesting relationships. \begin{table} \begin{tabular}{l r r r r} \hline \hline & Slope & \(Pr(>|t|)\) & Sig. & VIF \\ \hline (Intercept) & 0.10 & \textless{} 0.001 & \(***\) & \\ word freq. & 0.18 & \textless{} 0.001 & \(***\) & 1.19 \\ positive & 0.01 & 0.045 & \(*\) & 1.57 \\ negative & 0.01 & 0.1 & & 1.68 \\ concrete & 0.01 & 0.0002 & \(***\) & 1.21 \\ valence & -0.06 & \textless{} 0.001 & \(***\) & 1.37 \\ arousal & -0.01 & 0.34 & & 1.09 \\ val.-span & -0.02 & 0.0029 & \(**\) & 2.40 \\ ar.-span & -0.06 & \textless{} 0.001 & \(***\) & 2.24 \\ surprise & -0.03 & \textless{} 0.001 & \(***\) & 1.07 \\ \hline \hline \end{tabular} \end{table} Table 3: Fixed Effects: predicting dwell time
2308.03280
Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing
Recently, Neural Radiance Fields (NeRF) has exhibited significant success in novel view synthesis, surface reconstruction, etc. However, since no physical reflection is considered in its rendering pipeline, NeRF mistakes the reflection in the mirror as a separate virtual scene, leading to the inaccurate reconstruction of the mirror and multi-view inconsistent reflections in the mirror. In this paper, we present a novel neural rendering framework, named Mirror-NeRF, which is able to learn accurate geometry and reflection of the mirror and support various scene manipulation applications with mirrors, such as adding new objects or mirrors into the scene and synthesizing the reflections of these new objects in mirrors, controlling mirror roughness, etc. To achieve this goal, we propose a unified radiance field by introducing the reflection probability and tracing rays following the light transport model of Whitted Ray Tracing, and also develop several techniques to facilitate the learning process. Experiments and comparisons on both synthetic and real datasets demonstrate the superiority of our method. The code and supplementary material are available on the project webpage: https://zju3dv.github.io/Mirror-NeRF/.
Junyi Zeng, Chong Bao, Rui Chen, Zilong Dong, Guofeng Zhang, Hujun Bao, Zhaopeng Cui
2023-08-07T03:48:07Z
http://arxiv.org/abs/2308.03280v1
# Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing ###### Abstract. Recently, Neural Radiance Fields (NeRF) has exhibited significant success in novel view synthesis, surface reconstruction, _etc._ However, since no physical reflection is considered in its rendering pipeline, NeRF mistakes the reflection in the mirror as a separate virtual scene, leading to the inaccurate reconstruction of the mirror and multi-view inconsistent reflections in the mirror. In this paper, we present a novel neural rendering framework, named Mirror-NeRF, which is able to learn accurate geometry and reflection of the mirror and support various scene manipulation applications with mirrors, such as adding new objects or mirrors into the scene and synthesizing the reflections of these new objects in mirrors, controlling mirror roughness, etc. To achieve this goal, we propose a unified radiance field by introducing the reflection probability and tracing rays following the light transport model of Whitted Ray Tracing, and also develop several techniques to facilitate the learning process. Experiments and comparisons on both synthetic and real datasets demonstrate the superiority of our method. The code and supplementary material are available on the project webpage: [https://zju3dv.github.io/Mirror-NeRF/](https://zju3dv.github.io/Mirror-NeRF/). CCS Concepts: **Computing methodologies \(\rightarrow\) Computer vision; Rendering.** ACM Reference Format: Junyi Zeng, Chong Bao, Rui Chen, Zilong Dong, Guofeng Zhang, Hujun Bao, and Zhaopeng Cui. 2023. Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing. In _Proceedings of the 31st ACM International Conference on Multimedia (MM '23), October 29-November 3, 2023, Ottawa, ON, Canada._ 10 pages. [https://doi.org/10.1145/3581783.3611857](https://doi.org/10.1145/3581783.3611857)
## 1. Introduction 3D scene reconstruction and rendering is a long-standing problem in the fields of computer vision and graphics with broad applications in VR and AR. Although significant progress has been made over decades, it is still very challenging to reconstruct and re-render the scenes with mirrors, which exist ubiquitously in the real world. The "appearance" of the mirror is not multi-view consistent and changes considerably with the observer's perspective due to the physical reflection phenomenon where the light will be entirely reflected along the symmetric direction at the mirror. Recently, Neural Radiance Fields (NeRF) [(16)] has exhibited significant success in novel view synthesis and surface reconstruction due to its capability of modeling view-dependent appearance changes. However, since the physical reflection is not considered in its rendering pipeline, NeRF mistakes the reflection in the mirror as a separate virtual scene, leading to the inaccurate reconstruction of the geometry of the mirror, as illustrated in Fig. 2.
The rendered "appearance" of the mirror also suffers from multi-view inconsistency. Several techniques [(22; 26; 50)] decompose the object material and illuminations to model the reflection effect at the surface, but they all assume the surfaces with certain diffuse reflection to recover object surface first and then model the specular component. Thus they struggle to handle the mirrors with pure specular reflection due to the incorrect surface estimation of mirrors. NeRFReN [(9)] models reflection by separating the reflected and transmitted parts of a scene as two radiance fields and improves the rendering quality for the scenes with mirrors, while it still fails to model the physical specular reflection process. Thus, it cannot render the reflection that is not observed in the training views as shown in Fig.2, and cannot synthesize new reflections of the objects or mirrors that are newly placed in the scene. In this paper, we propose a novel neural rendering framework, named Mirror-NeRF, to accomplish high-fidelity novel view synthesis in the scene with mirrors and support multiple scene manipulation applications. For clarity, we term the ray as the inverse of light. The rays emitted from the camera are termed as camera rays and rays reflected at the surface are termed as reflected rays. Exhaustively conducting ray tracing in a room-scale environment is prohibitively expensive. With the goal of achieving physically-accurate rendering of reflections in the mirror, we draw inspiration from Whitted Ray Tracing [(37)] where the ray is reflected at the mirror-like surface and terminates at a diffuse surface. Specifically speaking, we first define the probability that the ray is reflected when hitting a spatial point as the reflection probability. The reflection probability is parameterized as a continuous function in the spatial space by a Multi-Layer Perceptron (MLP). Then we trace the ray emitted from the camera. The physical reflection will take place when the ray hits the surface with a high reflection probability. We accumulate the density and radiance of the ray by the volume rendering technique and synthesize the image by blending the color of camera rays and reflected rays based on the reflection probability. Instead of taking the specular reflection as separate neural fields, our neural fields are unified, which is more reasonable to synthesize new physically sound reflection from novel viewpoints. As shown in Fig. 1, our representation further supports various types of scene manipulations, _e.g._, adding new objects or mirrors into the scene and synthesizing the reflections of these new objects in mirrors, controlling the roughness of mirrors and reflection substitution. However, learning both geometry- and reflection-accurate mirror with the proposed new representation is not trivial. First, the reflection at a surface point is related to the surface normal. The analytical surface normal from the gradient of volume density has significant noise since the density cannot concentrate precisely on the surface. Thus, we exploit an MLP to parameterize a smooth distribution of surface normal. Second, the reconstruction of mirror surface is ambiguous and challenging, since the "appearance" of mirror is from other objects and not consistent from different viewpoints. 
Based on the fact that mirrors in real world usually have planar surfaces, we leverage both plane consistency and forward-facing normal constraints in a joint optimization manner to guarantee the smoothness of the mirror geometry and reduce the ambiguity of the reflection. Moreover, a progressive training strategy is proposed to stabilize the geometry optimization of the mirror. Our contributions can be summarized as follows. **1)** We propose a novel neural rendering framework, named Mirror-NeRF, that resolves the challenge of novel view synthesis in the scene with mirrors. Different from NeRF [(16)] and NeRFReN [(9)] that tend to learn a separate virtual world in the mirror, Mirror-NeRF can correctly render the reflection in the mirror in a unified radiance field by introducing the reflection probability and tracing the rays following the light transport model of Whitted Ray Tracing [(37)]. The physically-inspired rendering pipeline facilitates high-fidelity novel view synthesis with accurate geometry and reflection of the mirror. **2)** To learn both accurate geometry and reflection of the mirror, we leverage several techniques, including a surface normal parametrization to acquire smooth distribution of surface normal, the plane consistency and forward-facing normal constraints with joint optimization to ensure the accurate geometry of the mirror, and a progressive training strategy to maintain the stability of training. **3)** The proposed Mirror-NeRF enables a series of new scene manipulation applications with mirrors as shown in Fig. 1, such as object placement, mirror roughness control, reflection substitution, _etc_. Extensive experiments on real and synthetic datasets demonstrate that Mirror-NeRF can achieve photo-realistic novel view synthesis. A large number of scene manipulation cases show the physical correctness and flexibility of the proposed method. Figure 2. Comparison of the novel views synthesized by different methods. NeRF [(16)] mistakes the reflection in the mirror as a separate virtual scene, leading to inaccurate depth of the mirror. NeRFReN [(9)] uses two radiance fields to learn the color inside and outside the mirror separately. They synthesize the reflection in the mirror by interpolating the memorized reflection and cannot infer the reflection unobserved in the training views, _e.g._, the missing ceiling. Instead, we successfully synthesize new reflections in the mirror with the accurate depth of the mirror due to our ray tracing pipeline. ## 2. Related Work ### Neural Rendering The goal of neural rendering is to synthesize photorealistic images or videos by computing the light transport in a 3D scene. Lots of works (Lots et al., 2016; Li et al., 2017; Li et al., 2018) have been proposed to push the envelope of rendering quality in this field. One of the most notable approaches is NeRF (Lots et al., 2016), which models the radiance field of a scene using the MLP. By training on a set of posed images, NeRF learns to infer the radiance and density of each sampled point and accumulates them along the ray with volume rendering techniques to render the color. This enables NeRF to generate photorealistic images of the scene from a novel viewpoint. 
Several extensions and improvements have been proposed to apply NeRF to more challenging problems, such as scene reconstruction (Beng et al., 2016; Li et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018), generalization (Li et al., 2018; Li et al., 2018), novel view extrapolation (Li et al., 2018; Li et al., 2018), scene manipulation (Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018), SLAM (Li et al., 2018; Li et al., 2018), segmentation (Li et al., 2018; Li et al., 2018), human body (Li et al., 2018; Li et al., 2018) and so on. Furthermore, some NeRF-variants provide various applications, such as supersampling (Li et al., 2018) and controllable depth-of-field rendering (Li et al., 2018). However, these NeRF-variants struggle to model mirror reflection since they assume that all lights in the scene are reflected at Lambertian surfaces. ### Neural Rendering With Reflection Plenty of works (Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018) have been working on making NeRF understand physical reflection. PhySG (Li et al., 2018) simplifies light transport by modeling the environment illumination and material properties as mixtures of spherical Gaussians and integrating the incoming light over the hemisphere of the surface. InvRender (Li et al., 2018) extends PhySG to model the indirect light by using another mixture of spherical Gaussians to cache the light that bounces off from other surfaces. These approaches assume that surfaces are diffuse with a simple BRDF and environment lighting is far away from the scene. For a room with the mirror, they cannot handle the complex reflection and material diversity in the scene. As for NeRF, it will treat the reflection in mirrors as real geometry, which reconstructs the inaccurate depth of the mirror. RefNeRF (Li et al., 2018) decomposes the light as diffuse and specular components and learns the reflection using a radiance field conditioned by the reflected view direction. NeRFReN (Li et al., 2018) employs two radiance fields to learn the color inside and outside the mirror and depth constraints to recover the depth of the mirror. However, these methods generate mirror reflection from new viewpoints by interpolating the previously learned reflections, and are limited in accurately inferring reflections that were not observed during training and synthesizing reflections for newly added objects or mirrors in the scene. By introducing the physical ray tracing into the neural rendering pipeline, our method can correctly render the reflection in the mirror and support multiple scene manipulation applications. ## 3. Mirror-Nerf We introduce Mirror-NeRF, a physically inspired neural rendering framework that supports photo-realistic novel view synthesis of scenes with mirrors and reconstructs the accurate geometry and reflection of mirrors. As illustrated in Fig. 3, we leverage unified neural fields to learn the volume density, normal, reflection probability and radiance inside and outside the mirror (Sec. 3.1). With the intention of generating physically-accurate reflections in the mirror, we employ the light transport model in Whitted Ray Tracing (Wy et al., 2019) and trace the volume rendered ray in the scene (Sec. 3.2). Besides, some regularization constraints for the mirror surface (Sec. 3.3) and a progressive training strategy (Sec. 
3.4) are proposed to improve the reconstruction quality of the mirror and stabilize the training. ### Unified Neural Fields We design several neural fields to learn the properties of the scene, which are unified for parts inside and outside the mirror (Fig. 3). #### 3.1.1. Geometry and Color Following the implicit representation in NeRF (Lots et al., 2016), we use a geometry MLP \(\mathcal{F}_{geo}\) to encode the geometry feature \(f_{geo}\) at an arbitrary spatial location \(\mathbf{x}\). The volume density field is presented by a volume density MLP \(\mathcal{F}_{\sigma}\) which takes \(f_{geo}\) as input, and the radiance field is presented by a radiance MLP \(\mathcal{F}_{\text{c}}\) which takes \(f_{geo}\) and view direction \(\mathbf{d}\) as input: \[\begin{split}& f_{geo}=\mathcal{F}_{geo}(y_{x}(\mathbf{x})),\\ &\sigma=\mathcal{F}_{\sigma}(f_{geo}),\\ &\mathbf{c}=\mathcal{F}_{\text{c}}(f_{geo},y_{d}(\mathbf{d})), \end{split} \tag{1}\] where \(y_{x}(\cdot)\) and \(y_{d}(\cdot)\) are respectively the positional encoding function of spatial position and view direction. \(\sigma\) and \(\mathbf{c}\) are volume Figure 3. Framework. We trace the rays physically in the scene and learn a unified radiance field of the scene with the mirror. The neural field takes as input spatial location \(\mathbf{x}\), view direction \(\mathbf{d}\), and outputs the volume density \(\hat{\sigma}\), radiance \(\hat{\mathbf{c}}\), surface normal \(\hat{\mathbf{n}}\) and reflection probability \(\hat{m}\). The final color is blended by the color of the camera ray and the reflected ray based on the reflection probability. density and radiance respectively. To render an image from a specific viewpoint, we follow the volume rendering techniques in NeRF. The volume-rendered color \(\hat{C}\) of a ray \(\mathbf{r}\) is calculated by accumulating the volume densities \(\sigma_{i}\) and radiances \(\mathbf{c}_{i}\) of sampled points \(x_{i}\) along the ray: \[\hat{C}(\mathbf{r})=\sum_{i=1}^{N}T_{i}\alpha_{i}\mathbf{c}_{i}\] \[T_{i}=\exp\left(-\sum_{j=1}^{i-1}\sigma_{j}\delta_{j}\right),\] \[\alpha_{i}=1-\exp\left(-\sigma_{i}\delta_{i}\right), \tag{2}\] where \(N\) is the number of sampled points on the ray \(\mathbf{r}\), and \(\delta_{i}\) is the sampling distance between adjacent points along the ray. #### 3.1.2. Smooth Surface Normal Prior works (Zhu et al., 2017; Zhang et al., 2018) have analyzed the acquisition of surface normal in NeRF that the negative gradient of volume density _w.r.t._\(\mathbf{x}\) can give a differentiable approximation of the true normal: \[\mathbf{n}=-\frac{\nabla\sigma(\mathbf{x})}{||\nabla\sigma(\mathbf{x})||}. \tag{3}\] However, such parametrization tends to produce an unsmooth surface normal distribution since the volume density cannot concentrate precisely on the surface. The noise in the surface normal will severely hamper tracing the correct direction of the reflected rays at the mirror. To obtain a smooth distribution of surface normal, we utilize an MLP \(\mathcal{F}_{n}\) that takes \(f_{geo}\) as input and predicts the smoothed surface normal \(\hat{\mathbf{n}}\): \[\hat{\mathbf{n}}=\mathcal{F}_{n}(f_{geo}). \tag{4}\] We supervise the optimization of \(\mathcal{F}_{n}\) by the analytical surface normal \(\mathbf{n}\): \[\mathcal{L}_{n}=||\hat{\mathbf{n}}-\mathbf{n}||_{2}^{2}. \tag{5}\] To compute the surface normal at the intersection point of a ray \(\mathbf{r}\) and the surface, we follow the Eq. 
(2) by: \[\hat{\mathbf{N}}(\mathbf{r})=\sum_{i=1}^{N}T_{i}\alpha_{i}\hat{\mathbf{n}}_{i}. \tag{6}\] #### 3.1.3. Reflection Probability To model the reflection and perform the Whitted-style ray tracing described in Sec. 3.2, we also utilize an MLP \(\mathcal{F}_{m}\) to predict the probability \(m\) that rays will be reflected at a spatial point: \[m=\mathcal{F}_{m}(f_{geo}), \tag{7}\] where \(m\) ranges in \([0,1]\). To determine the reflection probability \(\hat{M}\) of a ray \(\mathbf{r}\) hitting the solid surface, we perform the volume rendering like Eq. (2): \[\hat{M}(\mathbf{r})=\sum_{i=1}^{N}T_{i}\alpha_{i}m_{i}. \tag{8}\] ### Whitted-Style Ray Tracing NeRF (Kang et al., 2018) does not take into account the physical reflection in the rendering pipeline. When applied to the scene with the mirror, NeRF cannot reconstruct the geometry of the mirror and treats the reflection in the mirror as a separate virtual scene. To handle the reflection at the mirror, we draw inspiration from Whitted Ray Tracing (Wanderson, 2018) where the ray is reflected at the mirror-like surface and terminates at the diffuse surface. As shown in Fig. 4, when a ray is reflected, we first compute the location \(\hat{\mathbf{X}}\) of the intersection point of the ray \(\mathbf{r}\) and the surface by: \[\hat{\mathbf{X}}(\mathbf{r})=\mathbf{o}(\mathbf{r})+\hat{D}(\mathbf{r}) \mathbf{d}(\mathbf{r}),\] \[\hat{D}(\mathbf{r})=\sum_{i=1}^{N}T_{i}\alpha_{i}t_{i}, \tag{9}\] where \(\hat{D}\), \(\mathbf{o}\) and \(\mathbf{d}\) are the expected termination depth, origin and direction of the ray \(\mathbf{r}\) respectively. \(T_{i}\) and \(\alpha_{i}\) are the same as Eq. (2). To trace the reflected ray \(\mathbf{r}_{ref}\) of a ray \(\mathbf{r}\), we set \(\hat{\mathbf{X}}(\mathbf{r})\) as its origin, and compute its direction by: \[\mathbf{d}(\mathbf{r}_{ref})=\mathbf{d}(\mathbf{r})-2\left(\hat{\mathbf{N}}(\mathbf{r}) \cdot\mathbf{d}(\mathbf{r})\right)\hat{\mathbf{N}}(\mathbf{r}). \tag{10}\] Here all direction vectors are normalized. Then, we use the volume rendering technique to compute the color of the ray \(\mathbf{r}\) and its reflected ray \(\mathbf{r}_{ref}\). The radiances of the sampled points on \(\mathbf{r}\) and \(\mathbf{r}_{ref}\) are attained by querying the same neural radiance field. Since the density-based representation always induces a "foggy" geometry, the reflected ray may terminate unexpectedly near the origin as illustrated in Fig. 4(c). To solve the problem, we start sampling points on the reflected ray at a distance from the origin as shown in Fig. 4(a). We blend the color of the ray \(\mathbf{r}\) and its reflected ray \(\mathbf{r}_{ref}\) according to the volume-rendered reflection probability of the ray \(\hat{M}(\mathbf{r})\) as: \[\hat{C}^{p}(\mathbf{r})=\hat{C}(\mathbf{r})\left(1-\hat{M}(\mathbf{r})\right)+\hat{C}^{p} (\mathbf{r}_{ref})\hat{M}(\mathbf{r}). \tag{11}\] Note that \(\hat{C}^{p}\) is defined recursively, and the recursion terminates when \(\hat{M}\) is zero or the specified maximum recursion depth is reached. For each pixel, we generate a ray from the camera and trace it in the scene. The set of these camera rays is denoted as \(R_{cam}\). The Figure 4. Our strategy for sampling points on rays is shown in (a). We sample points on both the camera ray and the reflected ray. For the reflected ray, we forward a distance from the origin to start sampling points to avoid the reflected ray terminating unexpectedly near the origin due to the “foggy” geometry. 
The effectiveness of this design is demonstrated by the comparison of (b) and (c), where mirror reflection is corrupted without the forward sampling strategy. The bottom right images in (b) and (c) show the reflected depth of the mirror. The pixel color is rendered by Eq. (11) with \(\mathbf{r}\in R_{cam}\). We supervise the rendered pixel color by the ground truth pixel color \(C^{I}\) with a photometric loss: \[\mathcal{L}_{c}=\sum_{\mathbf{r}\in R_{cam}}||\hat{C}^{p}(\mathbf{r})-C^{I}(\mathbf{r})||_{2}^{2}. \tag{12}\] To guide the optimization of the reflection probability \(\hat{M}\), we calculate the binary cross entropy loss between the rendered reflection probability \(\hat{M}\) and the mirror reflection mask \(M\): \[\mathcal{L}_{m}=\sum_{\mathbf{r}\in R_{cam}}-\left(M(\mathbf{r})\log\hat{M}(\mathbf{r})+(1-M(\mathbf{r}))\log\left(1-\hat{M}(\mathbf{r})\right)\right), \tag{13}\] where \(M\) is obtained by using off-the-shelf segmentation tools like (Krizhevsky et al., 2017) on the ground-truth images. ### Regularization We design a novel rendering pipeline based on Whitted Ray Tracing for the mirror, but naive training without regularization tends to converge unstably at the mirror, where the "appearance" of the mirror is blurred. We find that a bumpy mirror surface greatly degrades the quality of reflection due to under-constrained density at the mirror. Thus, we introduce several regularization terms into our optimization process. #### 3.3.1. Plane Consistency Constraint Mirrors in the real world typically have planar surfaces. To make full use of this property, we apply the plane consistency constraint proposed by (Bahdan et al., 2017) to the surface of the mirror. Specifically, we randomly sample four points \(A,B,C,D\) on the surface of the mirror and enforce the normal vector of the plane \(ABC\) to be perpendicular to the vector \(\overrightarrow{AD}\): \[\mathcal{L}_{pc}=\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}|\overrightarrow{A_{i}B_{i}}\times\overrightarrow{A_{i}C_{i}}\cdot\overrightarrow{A_{i}D_{i}}|, \tag{14}\] where \(N_{p}\) denotes the number of 4-point sets randomly selected from the planes. #### 3.3.2. Forward-facing Normal Constraint With regard to the reflection equation Eq. (10), we find that it still holds when the surface normal is rotated by 180 degrees and points to the inside of the surface. This ambiguity can lead to incorrect depth for the mirror. To tackle this issue, we follow (Srivastava et al., 2017) to enforce that the surface normal \(\hat{\mathbf{n}}\) of sampled points makes an obtuse angle with the direction \(\mathbf{d}\) of the camera ray \(\mathbf{r}\), _i.e._, the surface normal should be forward-facing to the camera: \[\mathcal{L}_{nreg}=\max(0,\hat{\mathbf{n}}\cdot\mathbf{d}(\mathbf{r}))^{2}. \tag{15}\] #### 3.3.3. Joint Optimization In practice, we jointly optimize all networks with the aforementioned losses. In other words, each loss eventually has an impact on the volume density field and radiance field: \[\mathcal{L}=\lambda_{c}\mathcal{L}_{c}+\lambda_{m}\mathcal{L}_{m}+\lambda_{pc}\mathcal{L}_{pc}+\lambda_{n}\mathcal{L}_{n}+\lambda_{nreg}\mathcal{L}_{nreg}, \tag{16}\] where \(\lambda\) is the coefficient of each loss term. Joint optimization brings three main advantages.
First, the surface normal loss \(\mathcal{L}_{n}\) not only influences \(\mathcal{F}_{n}\) but also encourages \(\mathcal{F}_{geo}\) to produce a smooth feature distribution, which makes the volume density concentrate uniformly on the surface and strengthens the flatness of the surface. Second, the reflection probability loss \(\mathcal{L}_{m}\) promotes the volume density field to reach a peak at the mirror, thereby producing an unbiased depth for the mirror. Both of these losses regulate \(\mathcal{F}_{geo}\) through \(f_{geo}\). Third, in spite of the plane and normal constraints, any tiny error in the surface normal is amplified during the reflection. Through joint optimization, these errors are iteratively refined, since the photometric loss \(\mathcal{L}_{c}\) implicitly adjusts the surface normal \(\hat{\mathbf{N}}\) toward the desired direction through the differentiable reflection equation. ### Progressive Training Strategy In the early stage of training, the neural field is unstable and easily falls into a local optimum. We summarize the degeneration into two cases: 1) The reflection in the mirror might be learned as a separate scene with inaccurate depth, just like NeRF, when the color converges faster than the geometry. 2) The color may get stuck in a local optimum and become blurry if strong geometric regularization is enabled at the beginning. To make training stable, we progressively train the image area inside and outside the mirror and schedule the coefficients of the losses at different stages of training. In the initial stage, we enable \(\lambda_{c}\) and disable the remaining coefficients to maintain the stability of the neural field and avoid the geometry of the mirror being ruined. Furthermore, we replace \(\mathcal{L}_{c}\) with the masked photometric loss \(\mathcal{L}_{cm}\): \[\mathcal{L}_{cm}=\sum_{\mathbf{r}\in R_{cam}\cap\overline{R_{M}}}||\hat{C}^{p}(\mathbf{r})-C^{I}(\mathbf{r})||_{2}^{2}+\sum_{\mathbf{r}\in R_{cam}\cap R_{M}}||\hat{C}^{p}(\mathbf{r})-K||_{2}^{2}, \tag{17}\] where \(R_{M}\) is the set of rays hitting the mirror-like surface and \(\overline{R_{M}}\) is the complementary set of \(R_{M}\). \(K\) is a constant vector, for which we use \((0,0,0)\) in our experiments. The use of \(K\) for the image region inside the mirror is to learn an initial rough shape of the mirror without learning its reflection, which will be discussed in Sec. 4.3.2. \(\mathcal{L}_{cm}\) is used until the last stage. After a few epochs, we activate \(\lambda_{m}\), \(\lambda_{pc}\), \(\lambda_{n}\), and \(\lambda_{nreg}\) to regularize the location and geometry of the mirror. After this stage, the accurate depth of the mirror is expected to have been learned by the neural fields. At last, we use \(\mathcal{L}_{c}\) instead of \(\mathcal{L}_{cm}\) to jointly optimize the reflection part and refine the geometry of the mirror. ## 4. Experiments ### Datasets To the best of our knowledge, there is no room-level dataset containing mirrors publicly available for the task of novel view synthesis. Therefore, we create 5 synthetic datasets and capture 4 real datasets with mirrors. Each synthetic dataset is an indoor room downloaded from BlendSwap (Bendl et al., 2017), including living room, meeting room, washroom, bedroom, and office. Real datasets are captured in real indoor scenes using an iPad Pro, including clothing store, lounge, market and discussion room.
In each dataset, images are captured 360 degrees around the scene. We split the images into training and test sets to perform the quantitative and qualitative comparison. We use the off-the-shelf segmentation tool (Krizhevsky et al., 2017) to segment the mirror reflection mask in the image. ### Comparisons We compare our method with NeRF (Kipipour et al., 2017) and the state-of-the-art neural rendering methods dealing with reflection, _i.e._, RefNeRF (Kipipour et al., 2017) and NeRFReN (Kipipour et al., 2018). The same mirror masks are provided for our method and NeRFReN. We perform the quantitative comparisons of novel view synthesis on the metrics PSNR, SSIM (Wang et al., 2017), and LPIPS (Wang et al., 2017). As demonstrated in Tab. 1, on the regular test viewpoints, our method outperforms the SOTA methods handling reflection (_i.e._, Ref-NeRF and NeRFReN) on both synthetic and real datasets, and is comparable with NeRF. Note that NeRF does not reconstruct physically sound geometry of the mirror and just interpolates the memorized reflection when performing novel view synthesis, while our method recovers the correct depth of the mirror and, thanks to the physical ray-tracing pipeline, enables synthesizing reflections unobserved in training views as well as multiple applications. Since the above test viewpoints are close to the distribution of training viewpoints, NeRF can generate visually reasonable reflection by interpolating the reflection of nearby views. To compare the correctness of modeling reflection, we capture a set of more challenging test images with more reflections unobserved in the training views. We quantitatively compare the reflection in the mirror, as shown in Tab. 2. Our method surpasses all the compared methods since we can faithfully synthesize the reflection by tracing the reflected ray in the scene. Please refer to the supplementary material for more details. Qualitative comparisons on the synthetic and real datasets are shown in Fig. 5. NeRF models the scene as a volume of particles that block and emit light (Kipour et al., 2017), and conditions the view-dependent reflection on the view direction input. This assumption works well for Lambertian surfaces but fails to resolve the reflection in the mirror. The multi-view inconsistent reflection in the mirror misleads NeRF into learning a separate virtual scene in the mirror, _e.g._, the inaccurate depth results shown in Fig. 5, since NeRF does not consider the physical reflection in the rendering pipeline. Figure 5. Qualitative comparison of novel view synthesis on synthetic and real scenes with mirrors. Despite Ref-NeRF's attempt to reproduce reflections by reparameterizing the radiance field using the reflected ray direction and surface materials, it encounters the same limitation as NeRF in reconstructing the mirror's geometry. NeRFReN takes two neural radiance fields to model the scene inside and outside the mirror respectively and can produce smooth depth for the mirror. However, the above methods synthesize the reflection by interpolating the memorized reflection. The common drawback of these methods is that they cannot synthesize reflections unobserved in the training set from new viewpoints, _e.g._, the missing statue in the mirror of the living room, the vanishing ceiling in the mirror of the washroom, and the broken cabinet in the mirror of the discussion room in Fig. 5.
With our neural rendering framework based on physical ray tracing, we can synthesize the reflection of any objects in the scene from arbitrary viewpoints. Moreover, NeRF, Ref-NeRF, and NeRFReN struggle to produce the reflection of the objects whose reflection has high-frequency variations in color, _e.g._, the distorted hanging picture in the mirror of the meeting room, the blurry curtain in the mirror of the office and the lounge, and the "fogged" clothes in the mirror of the clothing store in Fig.5. By contrast, our method renders detailed reflections of objects by tracing the reflected rays. Compared to NeRFReN, our method can also recover smoother depth of the mirror, _e.g._, the depth of the mirror from NeRFReN is damaged by the reflection of distant light on the office while our method recovers the mirror depth accurately. ### Ablation Studies We qualitatively and quantitatively analyze our model design and training schemes on the synthetic bedroom in this section, as shown in Fig. 6 and Tab. 3. For more ablation studies, please refer to the supplementary material. #### 4.3.1. Smooth Surface Normal Parametrization We first inspect the effectiveness of our surface normal parametrization (Sec.3.1) by using the analytical surface normal from Eq. (3) to calculate the direction of the reflected ray. As depicted in Fig. 6(f) and Tab. 3, the reflection in the mirror is collapsed due to the inevitable noise in the analytical surface normal of the mirror. Instead, our parametrization provides a smooth surface normal with less noise to guide the optimization of the reflection in the mirror. #### 4.3.2. Masked Photometric Loss \(\mathcal{L}_{cm}\) Without the usage of \(\mathcal{L}_{cm}\) in the early stage (Sec. 3.4), the depth of the mirror is incorrectly recovered as depicted in Fig. 6(b). The reason for this is that color supervision inside the mirror may lead to the optimization of mirror geometry getting stuck in a local optimum during the initial stages while the mirror geometry has not yet converged. #### 4.3.3. Regularization We then analyze the efficacy of each regularization term (Sec.3.3) by turning it off during training. As demonstrated in Fig. 6(c) and Tab. 3, without plane consistency constraint, the discontinuities occur in the depth of the mirror which decreases the image quality. A similar effect happens for the forward-facing normal constraint as shown in Fig. 6 (e). This normal regularization can improve the image quality by correctly orienting the surface normal to the room. Without the joint optimization strategy, the reflection in the mirror is blurred due to the imprecise geometry of the mirror as shown in Fig. 6 (d). When all regularization terms are enabled, we successfully learn the precise reflection in the mirror with the highest image quality. ### Applications Due to the physical modeling of the mirror reflection, the proposed Mirror-NeRF supports various new scene manipulation applications with mirrors as shown in Fig. 7. 
\begin{table} \begin{tabular}{l|c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Synthetic Datasets} & \multicolumn{3}{c}{Real Datasets} \\ \cline{2-7} & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline NeRF & 28.501 & 0.903 & 0.066 & 25.399 & 0.788 & 0.209 \\ Ref-NeRF & 28.703 & 0.905 & 0.079 & 24.544 & 0.730 & 0.294 \\ NeRFReN & 28.483 & 0.902 & 0.080 & 23.191 & 0.686 & 0.367 \\ Ours & 29.243 & 0.907 & 0.077 & 25.173 & 0.785 & 0.205 \\ \hline \hline \end{tabular} \end{table} Table 1. Quantitative comparison of novel views at regular test viewpoints on synthetic and real scenes with mirrors. The best is marked in red and the second is marked in orange. \begin{table} \begin{tabular}{l|c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Synthetic Datasets} & \multicolumn{3}{c}{Real Datasets} \\ \cline{2-5} & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline NeRF & 23.326 & 0.964 & 0.027 & 19.749 & 0.886 & 0.117 \\ Ref-NeRF & 22.828 & 0.964 & 0.028 & 20.188 & 0.897 & 0.122 \\ NeRFReN & 23.542 & 0.966 & 0.030 & 19.174 & 0.871 & 0.148 \\ Ours & 25.677 & 0.975 & 0.021 & 22.705 & 0.912 & 0.085 \\ \hline \hline \end{tabular} \end{table} Table 2. Quantitative comparison of reflections inside the mirror from challenging novel viewpoints out of the training set distribution on synthetic and real scenes. \begin{table} \begin{tabular}{l|c c c} \hline \hline \multicolumn{1}{c|}{Settings} & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline w/o Surface Normal Param. & 20.464 & 0.720 & 0.349 \\ w/o \(\mathcal{L}_{cm}\) & 28.331 & 0.878 & 0.103 \\ w/o Plane Consistency & 30.687 & 0.916 & 0.058 \\ w/o Forward. Normal Reg. & 31.108 & 0.923 & 0.052 \\ w/o Joint Optimization & 27.691 & 0.875 & 0.106 \\ Full Model & **32.422** & **0.933** & **0.047** \\ \hline \hline \end{tabular} \end{table} Table 3. We quantitatively analyze our model design and training schemes on the synthetic bedroom. Figure 6. Ablation studies. We qualitatively analyze our model design and training schemes. The top right and bottom right images in each subfigure show the depth and normal map respectively. #### 4.4.1. Placing New Mirrors By tracing the reflected rays at the mirror recursively, it is feasible for our method to integrate new mirrors into the original scene. As shown in Fig. 7(a), we enable the synthesis of novel views involving inter-reflection between the newly placed mirror and the original mirror, _e.g._, the endless reflection of the room in the new and original mirrors in the first two rows, and the new reflection of the ground in the last row. #### 4.4.2. Reflecting Newly Placed Objects We support the composition of multiple neural radiance fields and synthesize new reflections of the composite scenes in the mirror. Specifically, for each traced ray, we detect occlusion by comparing the volume-rendered depth from the radiance fields that have a collision with the ray. The ray will hit the surface with the minimum depth, and terminate or be reflected at the surface. Here we show the composite results of dynamic radiance field D-NeRF (Grover et al., 2017) with the scene modeled by our method in Fig. 7(b). The reflection of objects from D-NeRF is precisely synthesized in the mirror. This application might be of great use in VR and AR. 
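To make the compositing step concrete, the following sketch (illustrative only, not the authors' released implementation) shows how the occlusion test and the recursive reflection of Eqs. (10)-(11) could be combined for a single ray; the `RenderResult` container and the per-field call signature are hypothetical stand-ins for the volume-rendered quantities of Secs. 3.1-3.2.

```python
"""Illustrative sketch: compositing several radiance fields along one traced ray.

Assumptions (not from the paper): each field is a callable returning a RenderResult
holding the volume-rendered color, expected depth (Eq. 9), normal (Eq. 6) and
reflection probability (Eq. 8) of the queried ray.
"""
from dataclasses import dataclass
import numpy as np

@dataclass
class RenderResult:
    color: np.ndarray        # volume-rendered radiance of the ray
    depth: float             # expected termination depth, Eq. (9)
    normal: np.ndarray       # volume-rendered surface normal, Eq. (6)
    reflection_prob: float   # volume-rendered reflection probability, Eq. (8)

def composite_ray(origin, direction, fields, max_bounces=2, eps=1e-3):
    # Occlusion test: the ray hits whichever field reports the smallest depth.
    hit = min((f(origin, direction) for f in fields), key=lambda r: r.depth)
    color = hit.color
    if hit.reflection_prob > 0.0 and max_bounces > 0:
        x = origin + hit.depth * direction                                    # intersection point
        d_ref = direction - 2.0 * np.dot(hit.normal, direction) * hit.normal  # reflected direction, Eq. (10)
        # Start the reflected ray slightly away from the surface (forward sampling strategy).
        reflected = composite_ray(x + eps * d_ref, d_ref, fields, max_bounces - 1, eps)
        # Blend the two colors with the reflection probability, Eq. (11).
        color = (1.0 - hit.reflection_prob) * color + hit.reflection_prob * reflected
    return color
```

In practice each field would be queried with batched rays on the GPU; the scalar recursion above is only meant to expose the control flow of the occlusion test and the reflection blending.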
Please refer to the supplementary video for the vivid dynamic composite results. #### 4.4.3. Reflection Substitution In the film and gaming industries, artists may desire to create some magical visual effects, for example, substituting the reflections in the mirror with a different scene. Since we learn the precise geometry of the mirror, it can be easily implemented by transforming the reflected rays at the mirror into another scene and rendering the results of the reflected ray. As shown in Fig. 7(c), we can synthesize the photo-realistic view of the new scene in the mirror with multi-view consistency. Note that in consequence of tracing reflected rays in the new scene, the appearance in the mirror is flipped compared to the new scene. #### 4.4.4. Controlling the Roughness of Mirrors According to the microfacet theory (Sandhi et al., 2017), the reason why a surface looks rough is that it consists of a multitude of microfacets facing various directions. We support modifying the roughness of the mirror by simulating the microfacet theory. Specifically, we trace the camera ray multiple times following Eq.10 with different random noises added on the surface normal and average the volume-rendered colors to get the final color of this ray. The roughness of the mirror is controlled by the magnitude of noise and the number of tracing times. With this design, we can generate reasonable reflections with different roughness as shown in Fig. 7(d). ## 5. Conclusion We have proposed a novel neural rendering framework following Whitted Ray Tracing, which synthesizes photo-realistic novel views in the scene with the mirror and learns the accurate geometry and reflection of the mirror. Besides, we support various scene manipulation applications with mirrors. As a limitation, our method does not explicitly estimate the location of the light source in the room, which prevents us from relighting the room. The refraction is also not modeled in our framework since we focus on mirrors currently, and it is naturally compatible with our ray tracing pipeline and considered as future work. #### Acknowledgments This work was partially supported by the NSFC (No. 62102356) and Ant Group. Figure 7. Applications on synthetic and real scenes with mirrors.
2305.02162
Approximate quantum error correction, covariance symmetry, and their relation
To perform reliable quantum computation, quantum error correction is indispensable. In certain cases, continuous covariance symmetry of the physical system can make exact error correction impossible. In this work we study the approximate error correction and covariance symmetry from the information-theoretic perspective. For general encoding and noise channels, we define a quantity named infidelity to characterize the performance of the approximate quantum error correction and quantify the noncovariance of an encoding channel with respect to a general Lie group from the asymmetry measure of the corresponding Choi state. In particular, when the encoding channel is isometric, we derive a trade-off relation between infidelity and noncovariance. Furthermore, we calculate the average infidelity and noncovariance measure for a type of random code.
Hao Dai
2023-05-03T14:54:56Z
http://arxiv.org/abs/2305.02162v2
# Approximate quantum error correction, covariance symmetry and their relation ###### Abstract To perform reliable quantum computation, quantum error correction is indispensable. In certain cases, continuous covariance symmetry of the physical system can make exact error correction impossible. In this work, we study the approximate error correction and covariance symmetry from the information-theoretic perspective. For general encoding and noise channels, we define a quantity named infidelity to characterize the performance of the approximate quantum error correction and quantify the noncovariance of an encoding channel from the asymmetry measure of the corresponding Choi state. Particularly, when the encoding channel is isometric, we derive a trade-off relation between infidelity and noncovariance. Furthermore, we calculate the average infidelity and noncovariance measure for a type of random code. ## I Introduction Errors are unavoidable in quantum computing, and quantum error correction (QEC) provides a method to realize fault-tolerant quantum computation [1; 2]. The subject has been studied for decades and various error-correcting codes have been developed [3; 4]. Beyond quantum computation, QEC is closely connected with a wide range of quantum topics, such as quantum metrology [5; 6; 7] and quantum entanglement [8; 9; 10]. Symmetry is a ubiquitous property of physical systems and can put strong constraints on QEC. A no-go theorem, also known as the Eastin-Knill theorem, claims that there does not exist a local error-detecting code in a finite-dimensional system that allows a set of universal logical gates to act transversally on the physical system [11]. This theorem implies that continuous covariance symmetry and exact correction can be incompatible in certain cases [12; 13], which has motivated the exploration of the relation between covariance symmetry and approximate QEC. Several studies have focused on the performance of quantum codes that are exactly covariant but correct errors approximately [14; 15; 16; 17]. In particular, when the symmetry group is the \(U(1)\) Lie group and the corresponding generator in the physical system is a Hamiltonian, covariant codes cannot correct errors perfectly if the physical Hamiltonian satisfies the "Hamiltonian-in-Kraus-span" (HKS) condition [18; 19]. In this special case, the relation between the covariance violation and the inaccuracy of the approximate QEC has been investigated [18; 20]. In this work, we study the approximate QEC and the covariance symmetry from an information-theoretic perspective. For general encoding and noise channels in the form of Kraus representations, we evaluate the error-correcting capability of the codes via a defined quantity called infidelity, which is related to entanglement fidelity. When the infidelity is equal to \(0\), the errors caused by the noise channel can be corrected exactly. We also quantify the violation of covariance symmetry, which we term noncovariance, from the asymmetry measures of the corresponding Choi state. We specifically explore the infidelity and noncovariance measure for isometric encoding codes. Moreover, we reprove that under the HKS condition, exact correctability and covariance are incompatible. In addition, we investigate the generalized Wigner-Yanase skew information and derive a sum uncertainty relation. By virtue of the generalized skew information, we obtain a trade-off relation between infidelity and noncovariance.
Furthermore, we also calculate the average infidelity and noncovariance measure for a type of random code. The paper is organized as follows. In Section II, we review the basic concepts including QEC, Wigner-Yanase skew formation and asymmetry measures for states. In Section III and Section IV, we quantify the inaccuracy of the approximate QEC and the noncovariance, respectively. In Section V, we study the special case for the isometric encoding channel. In Section VI, we make a discussion. ## II Preliminaries In this Section, to highlight the idea of our approach, we briefly review the basic working knowledge and define some notations used in the remainder of the article. ### Quantum error correction In a QEC procedure, the logical state is encoded into a higher-dimensional physical system and redundancy is introduced to protect against errors. As a very starting point, we denote by \(L\) the logical system and by \(\mathcal{H}_{L}\) the relevant Hilbert space. The dimension of the space is assumed to be \(d_{L}\) and the state space is denoted as \(\mathcal{D}(\mathcal{H}_{L})\). Similar definitions can be defined for other systems. The encoding is a channel \(\mathcal{E}\) from the logical system \(L\) to the physical system \(S\). The subspace of system \(S\), \(\mathcal{C}=\mathcal{E}(\mathcal{D}(\mathcal{H}_{L}))\), is known as code space, and let \(P\) be the projector on the code space. After a noise channel \[\mathcal{N}_{S\to S}(\rho)=\sum_{i=1}^{n}A_{i}\rho A_{i}^{\dagger} \tag{1}\] with \(\sum_{i=1}^{n}A_{i}^{\dagger}A_{i}=\mathbf{1}_{S}\), the encoding state has been changed and we can perform a corresponding decoding channel \(\mathcal{R}\) to recover the original state. An ideal QEC procedure can recover all states in the logical system perfectly, that is, \[\mathcal{R}\circ\mathcal{N}\circ\mathcal{E}=\mathcal{I} \tag{2}\] with \(\mathcal{I}\) being the identity map on logical system \(L\). The Knill-Laflamme condition is a necessary and sufficient condition for a quantum code to achieve exact correction [21]. For a given code \(\mathcal{E}\) with the projector on the code subspace \(P\), the errors can be corrected iff \[PA_{i}^{\dagger}A_{j}P=\alpha_{ij}P \tag{3}\] holds for a corresponding non-negative Hermitian matrix (\(\alpha_{ij}\)). Notice that when the Kraus operators of a noise channel can be described by a linear span of \(\{A_{i}\}\), the errors caused by this noise can also be corrected exactly. ### Wigner-Yanase skew information and its generalization The conventional variance quantifies the total uncertainty of the observable \(H\) in the state \(\rho\) and is defined as \[V(\rho,H)=\operatorname{tr}\bigl{(}\rho H^{2}\bigr{)}-(\operatorname{tr}\rho H )^{2}. \tag{4}\] As a counterpart of it, the following quantity, also known as Wigner-Yanase skew information [22; 23; 24], \[I(\rho,H)=-\frac{1}{2}\operatorname{tr}[\sqrt{\rho},H]^{2}=\frac{1}{2}\|[ \sqrt{\rho},H]\|_{2}^{2} \tag{5}\] can quantify the quantum uncertainty of the observable \(H\) in the state \(\rho\). Here, \([X,Y]=XY-YX\) denotes Lie product and \(\|X\|_{p}=(\operatorname{tr}\bigl{(}XX^{\dagger}\bigr{)}^{p/2})^{1/p}\) denotes \(p\)-norm. For a pure state, the skew information coincides with the variance. The operator \(H\) in Eq. (5) is required to be Hermitian and we can generalize to non-Hermitian case [25]. 
For an arbitrary operator \(K\) which can be non-Hermitian, the generalized skew information is defined as \[I(\rho,K)=\frac{1}{2}\operatorname{tr}[\sqrt{\rho},K][\sqrt{\rho},K]^{\dagger }=\frac{1}{2}\|[\sqrt{\rho},K]\|_{2}^{2}. \tag{6}\] In particular, for pure state \(|\phi\rangle\), \[I(|\phi\rangle\!\langle\phi|\,,K)=\frac{1}{2}\left\langle\phi|\,KK^{\dagger}+ K^{\dagger}K\left|\phi\right\rangle-|\langle\phi|\,K\left|\phi\right\rangle |^{2}. \tag{7}\] Besides, the generalized skew information can be expressed as a sum of the original skew information, \[I(\rho,K)=I(\rho,K^{\dagger})=I(\rho,\operatorname{Re}(K))+I(\rho, \operatorname{Im}(K)), \tag{8}\] where \(\operatorname{Re}(K)=\frac{1}{2}(K+K^{\dagger})\) and \(\operatorname{Im}(K)=\frac{1}{2i}(K-K^{\dagger})\) represent the real and imaginary components, respectively. Notice that when \(K\) is Hermitian, it reduces to the original ones. The original skew information satisfies a series of uncertainty relations [26; 27; 28]. Here we give a sum uncertainty relation based on the generalized skew information. **Lemma 1**.: _Let \(K_{1},\cdots,K_{N}\) be a set of operators. For a state \(\rho\), there is_ \[\sum_{j=1}^{N}I(\rho,K_{j})\geq\frac{1}{N}I(\rho,\sum_{j=1}^{N}K_{j}). \tag{9}\] Proof.: \[\begin{split} I(\rho,\sum_{j=1}^{N}K_{j})&=\frac{1 }{2}\Biggl{\|}[\sqrt{\rho},\sum_{j=1}^{N}K_{j}]\Biggr{\|}_{2}^{2}\\ &\leq\frac{1}{2}\Big{(}\sum_{j=1}^{N}\left\|[\sqrt{\rho},K_{j}] \right\|_{2}\Big{)}^{2}\\ &\leq\frac{N}{2}\sum_{j=1}^{N}\left\|[\sqrt{\rho},K_{j}] \right\|_{2}^{2}\\ &=N\sum_{j=1}^{N}I(\rho,K_{j}),\end{split}\] (10) where the first inequality is from triangle inequality of the norm and the second inequality is from Cauchy-Schwarz inequality. ### Asymmetry measures Given a group \(\mathbf{G}\), for any group element \(g\), let \(U(g)\) be the representing unitary operator in the space \(\mathcal{H}\). If a state \(\rho\) remains unchanged under unitary transformations induced by \(\mathbf{G}\) \[U(g)\rho U^{\dagger}(g)=\rho,\quad\forall g\in\mathbf{G}, \tag{11}\] we call that the state is symmetric with respect to \(\mathbf{G}\). In quantum resource theory, the quantification of how much a state breaks this symmetry, or the measure of asymmetry, is a significant problem. Different measures of asymmetry have been proposed in the literature [29; 30; 31; 32]. For example, some commonly-used measures of asymmetry are based on skew information and von Neumann entropy. Here we mainly focus on compact Lie groups and only review the asymmetric measure given by skew information. Suppose \(H\) is a generator of the Lie group \(\mathbf{G}\), the symmetric states should commute with \(H\). Consequently, the skew information \(I(\rho,H)\) gives an asymmetry measure and possesses some desirable properties including being non-negative and being invariant under unitary transformations [29; 32]. ## III Approximate quantum error correction The exact correctability is a strong restriction to practical codes and instead, we sometimes consider approximate QEC codes. In an approximate QEC process, we need to find a proper recovery channel such that the composite operation \(\mathcal{R}\circ\mathcal{N}\circ\mathcal{E}\) is close enough to the identity map, which demonstrates that all states can be nearly recovered. To characterize the performance of the approximate QEC codes, we first recall how to quantify the "distance" between two channels. 
For two states, \(\rho\) and \(\sigma\), the "distance" is quantified by fidelity \[F(\rho,\sigma)=\left\|\sqrt{\rho},\sqrt{\sigma}\right\|_{1}=\operatorname{tr} \sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}. \tag{12}\] The fidelity and the trace distance are closely related. The two measures are qualitatively equivalent since they satisfy the following inequalities [33], \[1-\frac{1}{2}\|\rho-\sigma\|_{1}\leq F(\rho,\sigma)\leq\sqrt{1-\frac{1}{4}\| \rho-\sigma\|_{1}^{2}}. \tag{13}\] For two channels in the system \(L\), \(\Lambda\) and \(\Lambda^{\prime}\), the entanglement fidelity \[F_{e}(\Lambda,\Lambda^{\prime})=F\Big{(}(\Lambda_{L}\otimes\mathcal{I}_{R})( \left|\psi\right\rangle\!\!\left\langle\psi\right|_{LR}),(\Lambda^{\prime}_{L }\otimes\mathcal{I}_{R})(\left|\psi\right\rangle\!\!\left\langle\psi\right|_ {LR})\Big{)} \tag{14}\] measures the closeness between these two channels, where \(R\) is the reference system identical to system \(L\) and \(\left|\psi\right\rangle_{LR}=1/\sqrt{d_{L}}\sum_{k=1}^{d_{L}}\left|k\right\rangle _{L}\left|k\right\rangle_{R}\) is the maximally entangled state. As a special case, take \(\Lambda^{\prime}\) as identity map and we obtain the entanglement fidelity of the channel \(\Lambda\), \[F_{e}(\Lambda)=F_{e}(\Lambda,\mathcal{I})=\sqrt{\left\langle\psi\right|_{LR} \left(\Lambda_{L}\otimes\mathcal{I}_{R}\right)(\left|\psi\right\rangle\!\! \left\langle\psi\right|_{LR})\left|\psi\right\rangle_{LR}}. \tag{15}\] With the above entanglement fidelity, now we can characterize the performance of an encoding channel \(\mathcal{E}\) under noise \(\mathcal{N}\) by the quantity defined as \[f_{e}(\mathcal{N}\circ\mathcal{E})=\max_{\mathcal{R}}\!F_{e}(\mathcal{R} \circ\mathcal{N}\circ\mathcal{E}). \tag{16}\] When \(f_{e}(\mathcal{N}\circ\mathcal{E})=1\), we can find a channel \(\mathcal{R}\) such that all states are recovered perfectly. The maximization problem in Eq. (16) is generally difficult since the optimization is over all channels. Fortunately, we can study the problem from the view of leakage information to the environment via the method of complementary channels [34]. As shown in Figure 1, the channel \((\mathcal{N}\circ\mathcal{E})_{L\to S}\) has an isometry dilation \(V_{L\to SE}\) with environment system \(E\) such that \[\mathcal{N}\circ\mathcal{E}(\rho_{L})=\operatorname{tr}_{E}\Big{(}V_{L\to SE }\rho_{L}V_{L\to SE}^{\dagger}\Big{)}. \tag{17}\] Then the complementary channel is defined as \[\widehat{\mathcal{N}\circ\mathcal{E}}(\rho_{L})=\operatorname{tr}_{S}\Big{(} V_{L\to SE}\rho_{L}V_{L\to SE}^{\dagger}\Big{)}. \tag{18}\] The optimization problem Eq. (16) has an equivalent form [13], \[f_{e}(\mathcal{N}\circ\mathcal{E})=\max_{\left|\zeta\right\rangle}\!\!F_{e}( \widehat{\mathcal{N}\circ\mathcal{E}},\mathcal{T}_{\zeta}), \tag{19}\] where \(\mathcal{T}_{\zeta}(\cdot)=\operatorname{tr}(\cdot)\left|\zeta\right\rangle\! \!\left\langle\zeta\right|\) is a constant channel. With the method of complementary channels, we give a lower bound of the entanglement fidelity \(f_{e}\) for generalized encoding and noise channels which extends the results in Ref. [35]. **Lemma 2**.: _Suppose the channel \((\mathcal{N}\circ\mathcal{E})_{L\to S}\) has a Stinespring dilation \(V_{L\to SE}\), as shown in Figure 1. Let_ \[\left|\Psi\right\rangle_{RSE}=\left(\mathbf{1}_{R}\otimes V_{L\to SE} \right)\left|\psi\right\rangle_{RL}, \tag{20}\] _where \(\left|\psi\right\rangle_{RL}=1/\sqrt{d_{L}}\sum_{k=1}^{d_{L}}\left|k\right\rangle _{L}\left|k\right\rangle_{R}\). 
Denote \(\rho_{RE}\), \(\rho_{R}\) and \(\rho_{E}\) as the reduced states of \(\left|\Psi\right\rangle_{RSE}\) on \(RE\), \(R\), and \(E\), respectively. The quantity \(f_{e}\) satisfies the following inequality_ \[1-f_{e}(\mathcal{N}\circ\mathcal{E})\leq\frac{1}{2}\|\rho_{RE}-\rho_{R}\otimes\rho_{E}\|_{1}\leq\frac{\sqrt{d_{L}d_{E}}}{2}\|\rho_{RE}-\rho_{R}\otimes\rho_{E}\|_{2}, \tag{21}\] _where \(d_{E}\) is the dimension of the environment system \(E\). The equality holds iff \(\rho_{RE}=\rho_{R}\otimes\rho_{E}\)._ Proof.: From Eq. (19), we obtain \[f_{e}(\mathcal{N}\circ\mathcal{E})=\max_{\left|\zeta\right\rangle}F(\widehat{\mathcal{N}\circ\mathcal{E}}\otimes\mathcal{I}(\left|\psi\right\rangle\!\!\left\langle\psi\right|),\mathcal{T}_{\zeta}\otimes\mathcal{I}(\left|\psi\right\rangle\!\!\left\langle\psi\right|)) \tag{22}\] \[=\max_{\left|\zeta\right\rangle}F(\rho_{RE},\frac{\mathbf{1}_{R}}{d_{L}}\otimes\left|\zeta\right\rangle\!\!\left\langle\zeta\right|_{E})\] \[=\max_{\left|\zeta\right\rangle}F(\rho_{RE},\rho_{R}\otimes\left|\zeta\right\rangle\!\!\left\langle\zeta\right|_{E})\] \[\geq F(\rho_{RE},\rho_{R}\otimes\rho_{E})\] \[\geq 1-\frac{1}{2}\|\rho_{RE}-\rho_{R}\otimes\rho_{E}\|_{1},\] where the last inequality is from Eq. (13). Figure 1: \(V_{L\to SE}\) is a Stinespring dilation of the channel \((\mathcal{N}\circ\mathcal{E})_{L\to S}\) with an environment system \(E\). The input is a maximally entangled state of the logical system \(L\) and the reference system \(R\). The output is denoted as \(\left|\Psi\right\rangle_{RSE}\). Recall that for an operator \(A\), the 1-norm and the 2-norm satisfy the relation \[\left\|A\right\|_{1}\leq\sqrt{\text{rank}(A)}\left\|A\right\|_{2}. \tag{23}\] According to this relation and \(\text{rank}(\rho_{RE}-\rho_{R}\otimes\rho_{E})\leq d_{L}d_{E}\), we obtain the remaining inequality in Eq. (21). In general, the fidelity and the 1-norm are difficult to calculate since they require spectral decomposition. In comparison, the 2-norm is easier to calculate. After a tedious calculation of the 2-norm in Eq. (21), presented in the Appendix, we obtain a lower bound of \(f_{e}\). **Observation 1**.: _Let \(\mathcal{E}_{L\to S}\) and \(\mathcal{N}_{S\to S}\) be the encoding and noise channels, respectively. Suppose they have the specific forms_ \[\begin{split}\mathcal{E}(\rho)&=\sum_{s=1}^{m}E_{s}\rho E_{s}^{\dagger},\\ \mathcal{N}(\sigma)&=\sum_{i=1}^{n}A_{i}\sigma A_{i}^{\dagger},\end{split} \tag{24}\] _where \(\sum_{s=1}^{m}E_{s}^{\dagger}E_{s}=\mathbf{1}_{L}\) and \(\sum_{i=1}^{n}A_{i}^{\dagger}A_{i}=\mathbf{1}_{S}\). Denote \(O=\sum_{s=1}^{m}E_{s}E_{s}^{\dagger}\). The entanglement fidelity \(f_{e}\) has a lower bound,_ \[f_{e}(\mathcal{N}\circ\mathcal{E})\geq 1-\epsilon(\mathcal{N}\circ\mathcal{E}), \tag{25}\] _where_ \[\epsilon(\mathcal{N}\circ\mathcal{E})=\sqrt{\frac{mn}{4d_{L}}}\Big{(}\sum_{i,j=1}^{n}\operatorname{tr}\big{(}A_{i}^{\dagger}A_{j}OA_{j}^{\dagger}A_{i}O\big{)}-\frac{1}{d_{L}}\sum_{i,j=1}^{n}\sum_{s,t=1}^{m}\big{|}\operatorname{tr}\big{(}A_{i}^{\dagger}A_{j}E_{t}E_{s}^{\dagger}\big{)}\big{|}^{2}\Big{)}^{\frac{1}{2}} \tag{26}\] _and we call \(\epsilon\) the infidelity._ This observation gives a quantitative description of the performance of an approximate QEC. When \(\epsilon\ll 1\), the errors can be corrected approximately. The defined infidelity \(\epsilon\) also characterizes the correlation between system \(R\) and system \(E\).
As the environment becomes more correlated with the reference system which contains the encoded quantum information, more information leaks into the environment which can result in the degradation of the protected information. ## IV Covariance symmetry A channel \(\mathcal{E}\) from system \(L\) to system \(S\) is called covariant with group \(\mathbf{G}\), if for \(\forall g\in\mathbf{G}\) and \(\forall\rho\in\mathcal{D}(\mathcal{H}_{L})\), there is \[\mathcal{E}\Big{(}U_{L}(g)\rho U_{L}(g)^{\dagger}\Big{)}=U_{S}(g)\mathcal{E}( \rho)U_{S}^{\dagger}(g), \tag{27}\] where \(U_{L}(g)\) and \(U_{S}(g)\) are unitary representations of group element \(g\) on space \(\mathcal{H}_{L}\) and \(\mathcal{H}_{S}\), respectively. We can also say that the channel is symmetric with respect to \(\mathbf{G}\). The covariant channel is intimately connected with the symmetric state and the Choi representation builds this bridge. More explicitly, the covariance symmetry of a channel is equal to the group symmetry of the corresponding Choi state [36]. Next we explain this equivalence relation in detail. Recall that there exists a one-to-one correspondence between the channel and the Choi state, \[\begin{split}\Phi_{\mathcal{E}}&=(\mathcal{I}_{L} \otimes\mathcal{E}_{R\to S})(\abs{\psi}\!\!\bra{\psi}_{LR}),\\ \mathcal{E}_{L\to S}(\rho)&=d_{L}\tr_{L}\Big{(}(\rho_{L}^{T} \otimes\mathbf{1}_{S})\Phi_{\mathcal{E}}\Big{)},\end{split} \tag{28}\] where \(T\) represents the transposition. Suppose the channel \(\mathcal{E}\) is \(\mathbf{G}\)-covariant, then for \(\forall\rho\) and \(\forall g\), we can obtain \[\begin{split} 0\\ =&\frac{1}{d_{L}}\mathcal{E}(\rho)-\frac{1}{d_{L}}U_{S}^{ \dagger}(g)\mathcal{E}\Big{(}U_{L}(g)\rho U_{L}^{\dagger}(g)\Big{)}U_{S}(g)\\ =&\tr_{L}\Big{(}(\rho_{L}^{T}\otimes\mathbf{1}_{S}) \Phi_{\mathcal{E}}\Big{)}\\ -& U_{S}^{\dagger}(g)\tr_{L}\Bigg{(}\Big{(}U_{L}^{*}(g) \rho_{L}^{T}U_{L}^{T}(g)\otimes\mathbf{1}_{S}\Big{)}\Phi_{\mathcal{E}}\Bigg{)}U _{S}(g)\\ =&\tr_{L}(\rho_{L}^{T}\otimes\mathbf{1}_{S})\Big{(} \Phi_{\mathcal{E}}-(U_{L}^{T}(g)\otimes U_{S}^{\dagger}(g))\Phi_{\mathcal{E}} \big{(}U_{L}^{*}(g)\otimes U_{S}(g)\big{)}\Big{)},\end{split} \tag{29}\] where \(*\) is the conjugate operation. Therefore, \[\Phi_{\mathcal{E}}=(U_{L}^{T}(g)\otimes U_{S}^{\dagger}(g))\Phi_{\mathcal{E}} (U_{L}^{*}(g)\otimes U_{S}(g)),\forall g. \tag{30}\] This implies that the Choi state \(\Phi_{\mathcal{E}}\) is symmetric with respect to the unitary representation \(\{U_{L}^{*}(g)\otimes U_{S}(g)\}\). Hence, we can quantify the noncovariance of a channel from the asymmetry of its Choi state. Concretely, the noncovariance is defined as \[N_{\mathbf{G}}(\mathcal{E})=N_{\mathbf{G}}(\Phi_{\mathcal{E}}), \tag{31}\] where \(N_{\mathbf{G}}(\Phi_{\mathcal{E}})\) represents the asymmetry measure of \(\Phi_{\mathcal{E}}\) under the group \(\mathbf{G}\) which has been well-studied as discussed in Section II. ## V Isometric encoding In this Section, we investigate infidelity, noncovariance and their trade-off relation of a particular example. ### Infidelity of the QEC The isometric encoding channel is of the form \(\mathcal{E}(\rho)=W\rho W^{\dagger}\) with \(W^{\dagger}W=\mathbf{1}_{L}\). 
The projector onto the coding space is \(P=WW^{\dagger}\) and the Choi state is \[\begin{split}\Phi_{\mathcal{E}}&=(\mathcal{I}_{L} \otimes\mathcal{E}_{R\to S})(\ket{\psi}\!\!\bra{\psi}_{LR})\\ &=\ket{\tilde{\psi}}\!\!\bra{\tilde{\psi}}_{LS},\end{split} \tag{32}\] where \(\ket{\tilde{\psi}}_{LS}=\left(\mathbf{1}_{L}\otimes W_{R\to S}\right)\ket{ \psi}_{LR}\). The noise channel is assumed to be of the general form \(\mathcal{N}_{S\to S}(\rho)=\sum_{i=1}^{n}A_{i}\rho A_{i}^{\dagger}\). According to Observation 1, the square of infidelity is \[\epsilon^{2}(\mathcal{N}\circ\mathcal{E})=\frac{n}{4d_{L}}\sum_{i,j=1}^{n} \mathrm{tr}\Big{(}PA_{j}^{\dagger}A_{i}PA_{i}^{\dagger}A_{j}\Big{)}-\frac{1}{d _{L}}\Big{|}\mathrm{tr}\Big{(}PA_{i}^{\dagger}A_{j}\Big{)}\Big{|}^{2}. \tag{33}\] Notice that when Knill-Laflamme condition is satisfied, namely, \(P(A_{i}^{\dagger}A_{j})P=\lambda_{ij}P\) holds for \(\forall i,j\) with some constants \(\lambda_{ij}\), then infidelity \(\epsilon=0\) and perfect error correction can be realized. Denote \(K_{ij}=PA_{i}^{\dagger}A_{j}P\) and infidelity can be written in the form of the generalized skew information, \[\frac{4d_{L}\epsilon^{2}(\mathcal{N}\circ\mathcal{E})}{n}\] \[=\sum_{i,j=1}^{n}\frac{1}{2}\mathrm{tr}\Big{(}K_{ij}^{\dagger}K_{ ij}+K_{ij}K_{ij}^{\dagger}\Big{)}-\sum_{i,j=1}^{n}\frac{1}{d_{L}}|\mathrm{tr}\,K_{ ij}|^{2}\] \[=\sum_{i,j=1}^{n}\frac{1}{2}\mathrm{tr}\Big{(}W^{\dagger}(K_{ij}^ {\dagger}K_{ij}+K_{ij}K_{ij}^{\dagger})W\Big{)}\] \[-\sum_{i,j=1}^{n}\frac{1}{d_{L}}\Big{|}\mathrm{tr}\big{(}W^{ \dagger}K_{ij}W\big{)}\Big{|}^{2}\] \[=\sum_{i,j=1}^{n}\sum_{k,l=1}^{d_{L}}\frac{1}{2}\bra{k}W^{ \dagger}(K_{ij}^{\dagger}K_{ij}+K_{ij}K_{ij}^{\dagger})W\ket{l}\bra{k}l\] \[-\sum_{i,j=1}^{n}\left|\sum_{k,l=1}^{d_{L}}\bra{k}W^{\dagger}K_{ ij}W\ket{l}\bra{k}l\right|\right|^{2}\] \[=d_{L}\sum_{i,j=1}^{n}\Big{(}\frac{1}{2}\bra{\psi}\mathbf{1}_{L} \otimes W^{\dagger}(K_{ij}^{\dagger}K_{ij}+K_{ij}K_{ij}^{\dagger})\mathbf{1}_{ L}\otimes W\ket{\psi}\] \[-\big{|}\langle\psi|\left(\mathbf{1}_{L}\otimes W^{\dagger}\right) (\mathbf{1}_{L}\otimes K_{ij})(\mathbf{1}_{L}\otimes W)\ket{\psi}\big{|}^{2}\Big{)}\] \[=d_{L}\sum_{i,j=1}^{n}I\Big{(}\ket{\tilde{\psi}}\!\!\bra{\tilde{ \psi}},\mathbf{1}_{L}\otimes K_{ij}\Big{)}. \tag{34}\] ### trade-off relation between infidelity and noncovariance We consider the case for general compact Lie groups. Let the two orthonormal bases of Lie algebras corresponding to unitary representations \(\{U_{L}^{*}(g)\}\) and \(\{U_{S}(g)\}\) be \(\{H_{L}^{p}:p=1,\cdots,d_{\mathbf{G}}\}\) and \(\{H_{S}^{q}:q=1,\cdots,d_{\mathbf{G}}\}\), respectively. The set \(\{H_{L}^{p}\otimes\mathbf{1}_{S}+\mathbf{1}_{L}\otimes H_{S}^{q}:p,q=1,\cdots, d_{\mathbf{G}}\}\) constitutes an orthonormal basis of Lie algebra corresponding to \(\{U_{L}^{*}(g)\otimes U_{S}(g)\}\)[37]. The sum of skew information, \[N_{\mathbf{G}}(\rho)=\sum_{p,q=1}^{d_{\mathbf{G}}}I(\rho,H_{L}^{p}\otimes \mathbf{1}_{S}+\mathbf{1}_{L}\otimes H_{S}^{q}), \tag{35}\] quantifies the asymmetry of state \(\rho\) with respect to \(\mathbf{G}\)[32]. We can obtain the noncovariance measure of a channel \(\mathcal{E}\) from this asymmetry measure, as defined in Eq. (31). Moreover, we find the following relation by combining the Lemma 1 with the expressions of infidelity and noncovariance. 
**Observation 2**.: _For an isometric encoding channel \(\mathcal{E}\) and noise channel \(\mathcal{N}\), noncovariance with respect to a compact Lie group \(\mathbf{G}\) and infidelity satisfy the following trade-off relation:_ \[\frac{4\epsilon^{2}(\mathcal{N}\circ\mathcal{E})}{n}+N_{\mathbf{G}}(\mathcal{E}) \geq\frac{1}{n^{2}+d_{\mathbf{G}}^{2}}I\Big{(}\ket{\tilde{\psi}}\!\!\bra{ \tilde{\psi}},K\Big{)}, \tag{36}\] _where \(K=\sum_{i,j=1}^{n}\mathbf{1}_{L}\otimes K_{ij}+\sum_{p,q=1}^{d_{\mathbf{G}}}(H _{L}^{p}\otimes\mathbf{1}_{S}+\mathbf{1}_{L}\otimes H_{S}^{q})\)._ Next, we consider the special case of \(U(1)\) group. In this case, assume \(U_{L}(g)=\mathrm{e}^{-iH_{L}g}\) and \(U_{S}(g)=\mathrm{e}^{-iH_{S}g}\), where \(H_{L}\) and \(H_{S}\) are Hamiltonians. Then, \[\begin{split} U_{L}^{*}(g)\otimes U_{S}(g)&=\mathrm{ e}^{iH_{L}g}\otimes\mathrm{e}^{-iH_{S}g}\\ &=\mathrm{e}^{-i(\mathbf{1}_{L}\otimes H_{S}-H_{L}\otimes 1_{S})g}. \end{split} \tag{37}\] Thus, the corresponding generated Hamiltonian of \(U_{L}^{*}(g)\otimes U_{S}(g)\) is \(H=\mathbf{1}_{L}\otimes H_{S}-H_{L}\otimes\mathbf{1}_{S}\). The noncovariance of the isometric encoding channel can be quantified by the skew information \[N_{\mathbf{G}}(\mathcal{E})=I\Big{(}\ket{\tilde{\psi}}\!\!\bra{\tilde{\psi}},H \Big{)}. \tag{38}\] For \(U(1)\) group, the HKS condition is sufficient for the nonexistence of covariant and exact QEC codes [19]. Explicitly, if \[H_{S}\in\mathrm{span}\{A_{i}^{\dagger}A_{j}:i,j=1,\cdots,n\}, \tag{39}\] all covariant codes can not correct errors perfectly. Here we reprove this no-go result. Let \(H_{S}=\sum_{i,j=1}^{n}\alpha_{ij}A_{i}^{\dagger}A_{j}\) with \(\alpha_{ij}\in\mathbb{C}\) and suppose the isometric encoding channel \(\mathcal{E}\) is covariant and correct errors perfectly. Since \(N_{\mathbf{G}}(\mathcal{E})=0\) and \(H\) is Hermitian, there exist a constant \(\lambda\) such that \[H(\mathbf{1}_{L}\otimes W)\ket{\psi}=\lambda(\mathbf{1}_{L}\otimes W)\ket{\psi}. \tag{40}\] This implies that \[(\mathbf{1}_{L}\otimes P)H(\mathbf{1}_{L}\otimes P)(\mathbf{1}_{L}\otimes W) \ket{\psi}=\lambda(\mathbf{1}_{L}\otimes W)\ket{\psi}. \tag{41}\] Consequently, \[\begin{split} 0&=I\Big{(}\left|\tilde{\psi}\right\rangle \!\!\Big{\langle}\tilde{\psi}\Big{|}\,,(\mathbf{1}_{L}\otimes P)H(\mathbf{1}_{ L}\otimes P)\Big{)}\\ &=I\Big{(}\left|\tilde{\psi}\right\rangle\!\!\Big{\langle}\tilde{ \psi}\Big{|}\,,\sum_{i,j}\alpha_{ij}\mathbf{1}_{L}\otimes K_{ij}-H_{L}\otimes P \Big{)}.\end{split} \tag{42}\] In addition, \(\epsilon(\mathcal{N}\circ\mathcal{E})=0\) indicates that \[I\big{(}\big{|}\tilde{\psi}\big{\rangle}\!\!\Big{\langle}\tilde{\psi}\Big{|} \,,\alpha_{ij}\mathbf{1}_{L}\otimes K_{ij}\big{)}=0. \tag{43}\] Combine Eq. (42) with Eq. (43), we obtain \[\begin{split} 0&\leq(n^{2}+1)I\Big{(}\left|\tilde{\psi} \right\rangle\!\!\Big{\langle}\tilde{\psi}\Big{|}\,,H_{L}\otimes P\Big{)}\\ &\leq I\Big{(}\left|\tilde{\psi}\right\rangle\!\!\Big{\langle} \tilde{\psi}\Big{|}\,,-\sum_{i,j=1}^{n}\alpha_{ij}\mathbf{1}_{L}\otimes K_{ij} +H_{L}\otimes P\Big{)}\\ &+\sum_{i,j=1}^{n}I\Big{(}\left|\tilde{\psi}\right\rangle\!\! \Big{\langle}\tilde{\psi}\Big{|}\,,\alpha_{ij}\mathbf{1}_{L}\otimes K_{ij} \Big{)}=0.\end{split} \tag{44}\] Therefore, \[(\mathbf{1}_{L}\otimes W^{\dagger})(H_{L}\otimes P)(\mathbf{1}_{L}\otimes W) \left|\psi\right\rangle=\alpha\left|\psi\right\rangle \tag{45}\] holds for some constant \(\alpha\). 
After a direct calculation, there is \[\left\langle k\right|H_{L}\left|l\right\rangle=\alpha\delta_{kl}, \tag{46}\] or equivalently, we have \(H_{L}=\alpha\mathbf{1}_{L}\). This contradicts with the nontrivial assumption of logical Hamiltonian. ### Average infidelity and noncovariance for random codes We consider a type of random code that the encoding isometry has the following expression, \[W=U_{S}(\mathbf{1}_{L}\otimes\left|0\right\rangle_{A}), \tag{47}\] where \(A\) is an ancillary system satisfying \(\mathcal{H}_{S}=\mathcal{H}_{L}\otimes\mathcal{H}_{A}\) and \(U\) is a random unitary under Haar measure. Equivalently, the projector can be written as \[P=U_{S}(\mathbf{1}_{L}\otimes\left|0\right\rangle\!\!\left\langle 0\right|_{A})U_{S}^ {\dagger}. \tag{48}\] For this type of random code, the average infidelity satisfies \[\begin{split}&\int_{\mathbf{U}(d_{S})}\frac{4d_{L}\epsilon^{2}( \mathcal{N}\circ\mathcal{E})}{n}d\mu(U)\\ &=\frac{d_{L}^{2}-1}{d_{S}^{2}-1}\sum_{i,j=1}^{n}\Big{(}\operatorname {tr}\!\left(A_{j}^{\dagger}A_{i}A_{i}^{\dagger}A_{j}\right)-\frac{1}{d_{S}} \Big{|}\operatorname{tr}\!\left(A_{i}^{\dagger}A_{j}\right)\Big{|}^{2}\Big{)}, \end{split} \tag{49}\] where \(\mathbf{U}(d_{S})\) represents the unitary group in system \(S\) and \(\mu\) is the Haar measure. When \(\mathbf{G}\) is \(U(1)\) group, the average noncovariance is equal to \[\begin{split}&\int_{\mathbf{U}(d_{S})}N_{\mathbf{G}}(\mathcal{E})d \mu(U)\\ &=\frac{d_{L}\operatorname{tr}\!\left(H_{L}^{2}\right)-( \operatorname{tr}H_{L})^{2}}{d_{L}^{2}}\\ &+\frac{(d_{L}d_{S}^{2}-d_{S})\operatorname{tr}\!\left(H_{S}^{2} \right)-(d_{L}d_{S}-1)(\operatorname{tr}H_{S})^{2}}{d_{L}d_{S}(d_{S}^{2}-1)}. \end{split} \tag{50}\] We leave the detailed calculation in Appendix. From Eq. (49) and Eq. (50), we can see that if the dimension of the physical system \(d_{S}\) tends to infinity, the average infidelity tends \(0\) while the noncovariance tends to \(\frac{d_{L}\operatorname{tr}\!\left(H_{L}^{2}\right)-(\operatorname{tr}H_{L})^ {2}}{d_{L}^{2}}\). ## VI Conclusion and outlook In this work, we define a quantity termed infidelity to characterize the inaccuracy of an approximate QEC and also quantify noncovariance symmetry of the encoding channel with respect to a general Lie group. With these two quantities, we derive a trade-off relation between approximate QEC and noncovariance in the special case that the encoding channel is isometric. For a type of random code, we find that when the dimension of the physical system is large enough, the errors can be corrected approximately while noncovariance tends to a constant. It is interesting to design explicit nearly covariant and approximate QEC codes leveraging the quantities we defined which is left for future work. ###### Acknowledgements. This work is supported by the National Natural Science Foundation of China Grant No. 12174216. ## Appendix A Proof of Observation 1 The Stinespring isometry \(V_{L\to SE}\) of the composited channel, \[\mathcal{N}\circ\mathcal{E}(\rho)=\sum_{i,s}A_{i}E_{s}\rho E_{s}^{\dagger}A_{ i}^{\dagger},\] (A1) satisfies \[V_{L\to SE}\left|\varphi\right\rangle_{L}=\sum_{i,s}A_{i}E_{s}\left|\varphi \right\rangle_{L}\otimes\left|is\right\rangle_{E}.\] (A2) Here, \(\left\{\left|is\right\rangle_{E}\right\}\) forms an orthonormal basis of the environment system \(E\) and the dimension \(d_{E}=mn\), which is equal to the number of the Kraus operators \(\{A_{i}E_{s}\}\). 
It should be noticed that we omit the upper bound of the index in the summation sign for convenience in this appendix. The output state is \[\begin{split}\ket{\Psi}_{RSE}&=\left(\mathbf{1}_{R} \otimes V_{L\to SE}\ket{\psi}_{RL}\right.\\ &=\frac{1}{\sqrt{d_{L}}}\sum_{k}V_{L\to SE}\ket{k}_{L} \otimes\ket{k}_{R}\\ &=\frac{1}{\sqrt{d_{L}}}\sum_{k,i,s}A_{i}E_{s}\ket{k}_{L} \otimes\ket{is}_{E}\otimes\ket{k}_{R}.\end{split} \tag{10}\] The reduced state in system \(RE\) is \[\begin{split}&\rho_{RE}\\ =&\operatorname{tr}_{S}\ket{\Psi}\!\bra{\Psi}_{RSE}\\ =&\frac{1}{d_{L}}\sum_{i,j,k,l,s,t}\operatorname{tr} _{S}\left((A_{i}E_{s}\ket{k}\!\bra{l}E_{t}^{\dagger}A_{j}^{\dagger})_{S} \otimes\ket{is}\!\bra{jt}_{E}\otimes\ket{k}\!\bra{l}_{R}\right)\\ =&\frac{1}{d_{L}}\sum_{i,j,k,l,s,t}\bra{l}E_{t}^{ \dagger}A_{j}^{\dagger}A_{i}E_{s}\ket{k}\bra{i}_{E}\otimes\ket{k}\!\bra{l}_{R }.\end{split} \tag{11}\] Then the reduced state in system \(R\) is the maximally mixed state \(\mathbf{1}_{R}/d_{L}\) and \[\begin{split}\rho_{E}&=\operatorname{tr}_{R}\rho_{ RE}\\ &=\frac{1}{d_{L}}\sum_{k,i,j,s,t}\bra{k}E_{t}^{\dagger}A_{j}^{ \dagger}A_{i}E_{s}\ket{k}\ket{is}\!\bra{jt}_{E}\\ &=\frac{1}{d_{L}}\sum_{i,j,s,t}\operatorname{tr}\!\left(E_{t}^{ \dagger}A_{j}^{\dagger}A_{i}E_{s}\right)\ket{is}\!\bra{jt}_{E}.\end{split} \tag{12}\] To calculate the 2-norm in Eq. (21), we map the states in system \(RE\) to states in system \(LE\) through the following isometric channel \[\begin{split}&\Lambda(\sum_{k,l,i,j,s,t}\alpha_{klijst}\ket{is} \!\bra{jt}_{E}\otimes\ket{k}\!\bra{l}_{R})\\ &=\sum_{k,l,i,j,s,t}\alpha_{klijst}^{*}\ket{is}\!\bra{jt}_{E} \otimes\ket{k}\!\bra{l}_{L},\end{split} \tag{13}\] and then, \[\left\lVert\Lambda(\rho_{RE})-\Lambda(\rho_{R}\otimes\rho_{E})\right\rVert_{2 }=\left\lVert\rho_{RE}-\rho_{R}\otimes\rho_{E}\right\rVert_{2}. \tag{14}\] Let \[D =\Lambda(\rho_{RE})-\Lambda(\rho_{R}\otimes\rho_{E}) \tag{15}\] \[=\frac{1}{d_{L}}\sum_{i,j,s,t}\left(E_{s}^{\dagger}A_{i}^{\dagger} A_{j}E_{t}-\operatorname{tr}\!\left(E_{s}^{\dagger}A_{i}^{\dagger}A_{j}E_{t} \right)\!\frac{1_{L}}{d_{L}}\right)\otimes\ket{is}\!\bra{jt}_{E}.\] There is \[\begin{split}&\left\lVert\rho_{RE}-\rho_{R}\otimes\rho_{E} \right\rVert_{2}^{2}\\ &=\operatorname{tr}DD^{\dagger}\\ &=\frac{1}{d_{L}^{2}}\sum_{i,j,s,t}\left(\operatorname{tr}\!\left( E_{s}^{\dagger}A_{i}^{\dagger}A_{j}E_{t}E_{t}^{\dagger}A_{j}^{\dagger}A_{i}E_{s} \right)\right.\\ &-\frac{1}{d_{L}}\operatorname{tr}\!\left(E_{s}^{\dagger}A_{i}^{ \dagger}A_{j}E_{t}\right)\operatorname{tr}\!\left(E_{t}^{\dagger}A_{j}^{ \dagger}A_{i}E_{s}\right)\right)\\ &=\frac{1}{d_{L}^{2}}\sum_{i,j}\operatorname{tr}\!\left(A_{i}^{ \dagger}A_{j}OA_{j}^{\dagger}A_{i}O\right)-\frac{1}{d_{L}^{3}}\sum_{i,j,s,t} \left|\operatorname{tr}\!\left(A_{i}^{\dagger}A_{j}E_{t}E_{s}^{\dagger} \right)\right|^{2},\end{split} \tag{16}\] where \(O=\sum_{t}E_{t}E_{t}^{\dagger}\). ## Appendix B Calculation of average infidelity and noncovariance In a Hilbert space \(\mathcal{H}\) with dimension \(d\), the uniform Haar measure \(\mu\) over unitary operator group \(\mathbf{U}(d)\) remains invariant under both left and right multiplication of any unitary operator \(V\in\mathbf{U}(d)\)[38; 39; 40]. Mathematically, \[\mu(\mathcal{A})=\mu(\mathcal{A}V)=\mu(V\mathcal{A}) \tag{17}\] holds for arbitrary Borel subset \(\mathcal{A}\) and arbitrary unitary \(V\). Here we recall some integral formulae over unitary groups referring to Ref. [38] for detailed proofs. **Lemma 3**.: _For Haar measure \(\mu\), it holds that_ 1. 
\[\int_{\mathbf{U}(d)}UAU^{\dagger}d\mu(U)=\frac{\operatorname{tr}A}{d}\mathbf{1}_{d},\] (18) 2. \[\int_{\mathbf{U}(d_{A})}(U_{A}\otimes\mathbf{1}_{B})X_{AB}(U_{A}\otimes \mathbf{1}_{B})^{\dagger}d\mu(U_{A})=\frac{\mathbf{1}_{A}}{d_{A}}\otimes \operatorname{tr}_{A}X_{AB},\] (19) 3. \[\int_{\mathbf{U}(d)}(U\otimes U)A(U\otimes U)^{\dagger}d\mu(U)\] (20) \[=\Big{(}\frac{\operatorname{tr}A}{d^{2}-1}-\frac{\operatorname{tr }(AF)}{d(d^{2}-1)}\Big{)}\mathbf{1}_{d^{2}}-\Big{(}\frac{\operatorname{tr}A}{d (d^{2}-1)}-\frac{\operatorname{tr}(AF)}{d^{2}-1}\Big{)}F\] _with_ \(F\) _being the swap operator,_ 4. \[\int_{\mathbf{U}(d)}UAU^{\dagger}XUBU^{\dagger}d\mu(U)\] (21) \[=\frac{d\operatorname{tr}(AB)-\operatorname{tr}A\operatorname{tr}B }{d(d^{2}-1)}(\operatorname{tr}X)\mathbf{1}_{d}+\frac{d\operatorname{tr}A \operatorname{tr}B-\operatorname{tr}(AB)}{d(d^{2}-1)}X.\] We first calculate the average infidelity. According to the above lemma, there is \[\int_{\mathbf{U}(d_{S})}\mathrm{tr}\Big{(}PA_{j}^{\dagger}A_{i}PA_{i}^{\dagger}A_{j}\Big{)}d\mu(U)\] \[=\mathrm{tr}\int_{\mathbf{U}(d_{S})}U(\mathbf{1}_{L}\otimes|0 \rangle\!\langle 0|)U^{\dagger}A_{j}^{\dagger}A_{i}U(\mathbf{1}_{L}\otimes|0 \rangle\!\langle 0|)U^{\dagger}A_{i}^{\dagger}A_{j}d\mu(U)\] \[=\frac{d_{S}d_{L}-d_{L}^{2}}{d_{S}(d_{S}^{2}-1)}\Big{|}\mathrm{tr }\Big{(}A_{j}^{\dagger}A_{i}\Big{)}\Big{|}^{2}+\frac{d_{S}d_{L}^{2}-d_{L}}{d_{S }(d_{S}^{2}-1)}\,\mathrm{tr}\Big{(}A_{j}^{\dagger}A_{i}A_{i}^{\dagger}A_{j} \Big{)} \tag{10}\] and \[\int_{\mathbf{U}(d_{S})}\Big{|}\mathrm{tr}\Big{(}PA_{i}^{\dagger }A_{j}\Big{)}\Big{|}^{2}d\mu(U)\] \[=\mathrm{tr}\int_{\mathbf{U}(d_{S})}U(\mathbf{1}_{L}\otimes|0 \rangle\!\langle 0|)U^{\dagger}A_{i}^{\dagger}A_{j}\] \[\otimes U(\mathbf{1}_{L}\otimes|0\rangle\!\langle 0|)U^{\dagger}A_{j }^{\dagger}A_{i}d\mu(U)\] \[=\frac{d_{S}d_{L}^{2}-d_{L}}{d_{S}(d_{S}^{2}-1)}\Big{|}\mathrm{ tr}\Big{(}A_{i}^{\dagger}A_{j}\Big{)}\Big{|}^{2}-\frac{d_{L}^{2}-d_{S}d_{L}}{d_{S }(d_{S}^{2}-1)}\,\mathrm{tr}\Big{(}A_{j}^{\dagger}A_{i}A_{i}^{\dagger}A_{j} \Big{)}. \tag{11}\] Thus, we can obtain \[\int_{\mathbf{U}(d_{S})}\frac{4d_{L}\epsilon^{2}(\mathcal{N} \circ\mathcal{E})}{n}d\mu(U)\] \[=\frac{d_{L}^{2}-1}{d_{S}^{2}-1}\sum_{i,j}\Big{(}\,\mathrm{tr} \Big{(}A_{j}^{\dagger}A_{i}A_{i}^{\dagger}A_{j}\Big{)}-\frac{1}{d_{S}}\Big{|} \mathrm{tr}\Big{(}A_{i}^{\dagger}A_{j}\Big{)}\Big{|}^{2}\Big{)}. \tag{12}\] Now we calculate the average of \[N_{\mathbf{G}}(\mathcal{E})=\Big{\langle}\tilde{\psi}\Big{|}H^{2}\Big{|}\tilde{\psi}\Big{\rangle}-\Big{\langle}\tilde{\psi}\Big{|}H\Big{|}\tilde{\psi}\Big{\rangle}^{2}. \tag{13}\] The first term is equal to \[\int_{\mathbf{U}(d_{S})}\Big{\langle}\tilde{\psi}\Big{|}H^{2}\, \Big{|}\tilde{\psi}\Big{\rangle}\,d\mu(U)\] \[=\frac{1}{d_{L}}\,\mathrm{tr}\big{(}H_{L}^{2}\big{)}+\frac{1}{d_{S }}\,\mathrm{tr}\big{(}H_{S}^{2}\big{)}-\frac{2\,\mathrm{tr}\,H_{L}\,\mathrm{tr }\,H_{S}}{d_{L}d_{S}},\] where \(|\psi^{\prime}\rangle=1/\sqrt{d_{L}}\sum_{k}|k\rangle_{L}\,|k0\rangle_{S}\). The second term is equal to \[\int_{\mathbf{U}(d_{S})}\Big{\langle}\tilde{\psi}\Big{|}\,H\, \Big{|}\tilde{\psi}\Big{\rangle}^{2}\,d\mu(U)\] \[=\frac{1}{d_{L}^{2}}\int_{\mathbf{U}(d_{S})}\Big{(}-\mathrm{tr}\,H_{L}+\mathrm{tr}\big{(}U^{\dagger}H_{S}U(\mathbf{1}_{L}\otimes|0\rangle\!
\langle 0|)\big{)}\Big{)}^{2}d\mu(U)\] \[=\frac{(\mathrm{tr}\,H_{L})^{2}}{d_{L}^{2}}-\frac{2\,\mathrm{tr}\, H_{L}}{d_{L}^{2}}\,\mathrm{tr}\int_{\mathbf{U}(d_{S})}(U^{\dagger}H_{S}U)( \mathbf{1}_{L}\otimes|0\rangle\!\langle 0|)d\mu(U)\] \[+\frac{1}{d_{L}^{2}}\,\mathrm{tr}\int_{\mathbf{U}(d_{S})}(U^{ \dagger})^{\otimes 2}H_{S}^{\otimes 2}U^{\otimes 2}(\mathbf{1}_{L}\otimes|0 \rangle\!\langle 0|)^{\otimes 2}d\mu(U)\] \[=\frac{(\mathrm{tr}\,H_{L})^{2}}{d_{L}^{2}}-\frac{2\,\mathrm{tr} \,H_{L}\,\mathrm{tr}\,H_{S}}{d_{L}d_{S}}+\frac{d_{L}d_{S}-1}{d_{L}d_{S}(d_{S}^{2 }-1)}(\mathrm{tr}\,H_{S})^{2}\] \[+\frac{d_{S}-d_{L}}{d_{L}d_{S}(d_{S}^{2}-1)}\,\mathrm{tr}\big{(}H_{ S}^{2}\big{)} \tag{14}\] Thus, the average noncovariance can be expressed as \[\int_{\mathbf{U}(d_{S})}N_{\mathbf{G}}(\mathcal{E})d\mu(U)\] \[=\frac{d_{L}\,\mathrm{tr}\big{(}H_{L}^{2}\big{)}-(\mathrm{tr}\,H _{L})^{2}}{d_{L}^{2}} \tag{15}\] \[+\frac{(d_{L}d_{S}^{2}-d_{S})\,\mathrm{tr}\big{(}H_{S}^{2}\big{)}- (d_{L}d_{S}-1)(\mathrm{tr}\,H_{S})^{2}}{d_{L}d_{S}(d_{S}^{2}-1)}.\]
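For illustration, the first formula of Lemma 3 can be checked numerically by averaging over Haar-random unitaries sampled with the standard QR-based construction. The following minimal sketch is illustrative only (it is not part of the derivation above, and the function names are ours); it estimates \(\int_{\mathbf{U}(d)}UAU^{\dagger}d\mu(U)\) by Monte Carlo and compares it with \(\frac{\operatorname{tr}A}{d}\mathbf{1}_{d}\).

```python
import numpy as np

def haar_unitary(d, rng):
    # Sample a Haar-random unitary: QR-decompose a complex Ginibre matrix and
    # fix the phases of R's diagonal entries (the standard recipe).
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # scales column j of q by phases[j]

def twirl_residual(d=4, samples=20000, seed=0):
    # Monte Carlo estimate of int U A U^dag dmu(U), compared with tr(A)/d * I.
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    acc = np.zeros((d, d), dtype=complex)
    for _ in range(samples):
        u = haar_unitary(d, rng)
        acc += u @ a @ u.conj().T
    acc /= samples
    target = np.trace(a) / d * np.eye(d)
    return np.max(np.abs(acc - target))

print(twirl_residual())  # small, shrinking roughly like 1/sqrt(samples)
```

The same sampling routine can be reused to check the remaining formulae of Lemma 3 in the same way.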
2307.14179
Resolution-Aware Design of Atrous Rates for Semantic Segmentation Networks
DeepLab is a widely used deep neural network for semantic segmentation, whose success is attributed to its parallel architecture called atrous spatial pyramid pooling (ASPP). ASPP uses multiple atrous convolutions with different atrous rates to extract both local and global information. However, fixed values of atrous rates are used for the ASPP module, which restricts the size of its field of view. In principle, atrous rate should be a hyperparameter to change the field of view size according to the target task or dataset. However, the manipulation of atrous rate is not governed by any guidelines. This study proposes practical guidelines for obtaining an optimal atrous rate. First, an effective receptive field for semantic segmentation is introduced to analyze the inner behavior of segmentation networks. We observed that the use of ASPP module yielded a specific pattern in the effective receptive field, which was traced to reveal the module's underlying mechanism. Accordingly, we derive practical guidelines for obtaining the optimal atrous rate, which should be controlled based on the size of input image. Compared to other values, using the optimal atrous rate consistently improved the segmentation results across multiple datasets, including the STARE, CHASE_DB1, HRF, Cityscapes, and iSAID datasets.
Bum Jun Kim, Hyeyeon Choi, Hyeonah Jang, Sang Woo Kim
2023-07-26T13:11:48Z
http://arxiv.org/abs/2307.14179v1
# Resolution-Aware Design of Atrous Rates for Semantic Segmentation Networks ###### Abstract DeepLab is a widely used deep neural network for semantic segmentation, whose success is attributed to its parallel architecture called atrous spatial pyramid pooling (ASPP). ASPP uses multiple atrous convolutions with different atrous rates to extract both local and global information. However, fixed values of atrous rates are used for the ASPP module, which restricts the size of its field of view. In principle, atrous rate should be a hyperparameter to change the field of view size according to the target task or dataset. However, the manipulation of atrous rate is not governed by any guidelines. This study proposes practical guidelines for obtaining an optimal atrous rate. First, an effective receptive field for semantic segmentation is introduced to analyze the inner behavior of segmentation networks. We observed that the use of ASPP module yielded a specific pattern in the effective receptive field, which was traced to reveal the module's underlying mechanism. Accordingly, we derive practical guidelines for obtaining the optimal atrous rate, which should be controlled based on the size of input image. Compared to other values, using the optimal atrous rate consistently improved the segmentation results across multiple datasets, including the STARE, CHASE_DB1, HRF, Cityscapes, and iSAID datasets. ## 1 Introduction Semantic segmentation refers to the task of generating a semantic mask that classifies each pixel in an image into a specific category [1; 2; 3]. Semantic segmentation is one of the most representative tasks in the field of computer vision and is crucial for understanding scenes in indoor and outdoor environments. The recent success of deep neural networks has been incorporated into semantic segmentation using an encoder-decoder architecture, thereby enabling high performance [4; 5]. However, one challenge in semantic segmentation is detecting objects of varying sizes. Using the cascade architecture of deep neural networks leads to single-level image understanding, which complicates the detection of small and large objects within an image. To facilitate image understanding with multi-level features, modern segmentation networks employ a parallel architecture called atrous spatial pyramid pooling (ASPP) atop an encoder, which enables the extraction of both local and global information from the encoded features. Since its introduction in DeepLabV2 [6], the ASPP module has yielded successful results in semantic segmentation and has been applied to other dense prediction tasks, including instance [7] and panoptic segmentations [8; 9], monocular depth estimation [10; 11], domain adaptation [12; 13; 14; 15], and image matting [16]. The aim of ASPP module is to enlarge the field of view (FOV) of the segmentation network using atrous convolution [6], also known as dilated convolution [17; 18]. In contrast to vanilla convolution, atrous convolution uses an atrous rate, which generates an empty space between each element of the convolutional kernel (Figure 1). After applying multiple atrous convolutions with various atrous rates, the ASPP module combines their outputs, which enables it to detect objects of varied sizes and capture an understanding of the global context of the image, such as the overall layout and relationships between objects. The FOV size of the segmentation network is determined by the atrous rates of ASPP module and spatial scale of the encoded feature. 
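For illustration, the parallel architecture described above can be sketched as a minimal ASPP-style head. This is an intentionally simplified sketch rather than the DeepLab reference implementation: normalization and activation layers are omitted, and the module and argument names are only illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleASPP(nn.Module):
    """Sketch of an ASPP-style head: a 1x1 branch, three dilated 3x3 branches
    with atrous rates {r, 2r, 3r}, and a global image-pooling branch."""

    def __init__(self, in_ch: int, out_ch: int, base_rate: int = 6):
        super().__init__()
        rates = (base_rate, 2 * base_rate, 3 * base_rate)
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=1)]
            + [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates]
        )
        self.image_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1))
        self.project = nn.Conv2d(5 * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w), mode="bilinear",
                               align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

# Example: an encoder feature map at output stride 16 for a 768x768 input image.
out = SimpleASPP(in_ch=2048, out_ch=256)(torch.randn(1, 2048, 48, 48))
print(out.shape)  # torch.Size([1, 256, 48, 48])
```

The three dilated branches share the same kernel size but sample increasingly distant features, which is what enlarges the field of view of the head.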
To date, the common choice for atrous rates of the ASPP module has been \(\{6,12,18\}\) for three atrous convolutions or their doubled values when using a lower downsampling rate. However, in this study, we demonstrate that the existing rule for determining atrous rates yields a fixed FOV size, which limits the benefit that could be obtained by choosing the correct size. The FOV size should be adjusted by considering the properties of the target task or dataset to obtain more effective behavior of the ASPP module. To achieve this, the detailed mechanisms of the ASPP module are investigated. Despite their prevalence, the internal behavior of the ASPP module in deep neural networks has rarely been discussed in semantic segmentation. To this end, an effective receptive field (ERF) is introduced for semantic segmentation that visualizes a segmentation network. We discover that the ERF of a segmentation network exhibits a specific pattern owing to the architectural properties of the ASPP module. Based on this pattern, we explain the mechanism of the ASPP module. By analyzing the ASPP module, its FOV size is determined and guidelines are proposed for controlling the FOV size by choosing valid values of atrous rates. Finally, the performances of the proposed atrous rates were compared with other values on various semantic segmentation datasets. ## 2 Background ### ERF for Semantic Segmentation Networks Let \(\mathbf{I}\in\mathbb{R}^{H\times W\times C}\) be an input image for a semantic segmentation model, where \((H,W)\) is the resolution or size of the image and \(C\) represents the number of channels. The objective of semantic segmentation is to generate a semantic mask \(\hat{\mathbf{y}}\in\mathbb{R}^{H\times W\times N_{c}}\) that classifies each pixel in an image \(\mathbf{I}\) into one of the \(N_{c}\) categories. A deep neural network is used as the semantic segmentation model that outputs \(\hat{\mathbf{y}}\) from the input image \(\mathbf{I}\). Accordingly, the relationship between \(\hat{\mathbf{y}}\) and \(\mathbf{I}\) is represented by a differentiable function involving various operations including convolution, batch normalization, and ReLU. ERF has been widely used for image classification [19; 20]; however, the ERF for semantic segmentation has rarely been discussed. Therefore, we first formulate an ERF for semantic segmentation. The objective is to analyze an input pixel-level area that influences a pixel unit in the segmentation output. The difference between ERFs for image classification and semantic segmentation depends on the choice of the target unit in the output. Herein, the pixel located at the central coordinate \((C_{h},C_{w})\) is selected to examine the spatial relationship between the central output unit and input image. Figure 1: \(3\times 3\) vanilla convolution uses nine adjacent features (shown in blue on the left), whereas \(3\times 3\) atrous convolution with an atrous rate of 2 uses nine distant features with one feature in between (shown in blue on the right). The central output unit is defined as \(Y\coloneqq\sum_{k=1}^{N_{c}}\hat{y}_{C_{h},C_{w},k}\in\mathbb{R}\), which is aggregated across all categories. Because a gradient indicates the sensitivity of the pixel [21], the contribution of each pixel in \(\mathbf{I}\) to \(Y\) can be investigated using the gradient \(\frac{\partial Y}{\partial\mathbf{I}}\in\mathbb{R}^{H\times W\times C}\).
After summing the gradient over the image channels, \(\mathbf{G}\coloneqq\sum_{c=1}^{C}\frac{\partial Y}{\partial\mathbf{I}}\in \mathbb{R}^{H\times W}\) is obtained, which represents the influence of each pixel in the input image on the central output unit. However, the gradient obtained from a single image is sparse and highly dependent on the properties of the input image. To examine the general behavior of the segmentation network, \(\mathbf{G}\) is aggregated across a large image dataset \(S\). Additionally, \(\mathrm{ReLU}\) is used to filter the negative and accumulate the positive importances [22]. Therefore, the ERF \(\mathbf{R}\coloneqq\sum_{I\in S}\mathrm{ReLU}(\mathbf{G})\in\mathbb{R}^{H \times W}\) is obtained, which illustrates the general contribution of each pixel to the central output unit. Note that ERF is defined as the relationship between the input and output of the segmentation network, which includes the properties of encoder and decoder. In summary, the ERF is obtained as follows: \[\mathbf{R}=\sum_{I\in S}\mathrm{ReLU}\left(\sum_{c=1}^{C}\frac{\partial(\sum_ {k=1}^{N_{c}}\hat{y}_{C_{h},C_{w},k})}{\partial\mathbf{I}}\right). \tag{1}\] ### ASPP Module For an input image \(\mathbf{I}\in\mathbb{R}^{H\times W\times C}\), the encoder of a segmentation network produces a feature map \(\mathbf{H}\in\mathbb{R}^{(H/s)\times(W/s)\times M}\), where \(M\) represents the number of channels for the feature map and \(s\) denotes the _output stride_, indicating the downsampling ratio up to the encoder output. The output stride value is generally selected as \(s=8\) or \(s=16\). A study of DeepLabV3 [25] observed that using \(s=8\) resulted in improved accuracy with additional use of computational resources, whereas \(s=16\) provided reasonable performance and \(s\geq 32\) caused performance degradation. This is because using higher downsampling rates eliminates fine details in images, thereby decreasing the accuracy of dense prediction tasks, such as semantic segmentation. After obtaining the encoder output \(\mathbf{H}\), DeepLabV3 and its variants apply the ASPP module containing the following five branches: one \(1\times 1\) convolution, one image pooling, and three \(3\times 3\) atrous convolutions with atrous rates of \(\{r,2r,3r\}\). The results from these branches are concatenated and subsequently merged with additional convolutions to produce the ASPP output \(\mathbf{A}\in\mathbb{R}^{(H/s)\times(W/s)\times N_{c}}\). Finally, bilinear upsampling is applied to \(\mathbf{A}\) for obtaining the final dense segmentation \(\hat{\mathbf{y}}\in\mathbb{R}^{H\times W\times N_{c}}\). The three atrous rates \(\{r,2r,3r\}\) of the ASPP module are determined by the base atrous rate \(r\). To choose the base atrous rate, DeepLabV3 [25] and DeepLabV3+ [26] use the following rule: \[r=\begin{cases}6&\text{if $s=16$,}\\ 12&\text{if $s=8$.}\end{cases} \tag{2}\] This rule is widely deployed in numerous semantic segmentation libraries and codes (Listing 1). However, the two cases are later demonstrated to be equivalent because enlarging FOV by increasing atrous rates has the same effect as decreasing the downsampling rate (Section 3.2). Thus, assuming \(s=16\), DeepLabV3 and DeepLabV3+ employ an ASPP module with atrous rates of \(\{6,12,18\}\). The base atrous rate \(r=6\) originates from an earlier version.
In an ablation study of DeepLabV2 [6], ASPP-S with four branches of atrous rates \(\{2,4,8,12\}\) and ASPP-L with \(\{6,12,18,24\}\) were compared, concluding that the latter performed better on the PASCAL VOC 2012 dataset [27]. In DeepLabV3, the fourth atrous branch was removed using the three atrous rates, \(\{6,12,18\}\). Since then, the ASPP module has been widely employed with \(r=6\) as the default value in semantic segmentation. A default value of \(r=6\) can be used; however, it is not guaranteed to be optimal. In principle, the base atrous rate \(r\) should be a hyperparameter to change the FOV size of the segmentation network based on the target task or dataset. The identification of the optimal value of base atrous rate can improve the performance of segmentation networks compared to a suboptimal one. However, few studies have attempted to determine the optimal atrous rate, and the default value of \(r=6\) has been simply used without considering the specific properties of target task or dataset. References [28] and [29] exploited a neural architecture search to automatically discover an improved architecture for the ASPP module; however, these approaches do not consider dataset dependency because the search is based on a single dataset, and do not provide a logical understanding of the ASPP module. We claim that the mechanism of the ASPP module inside deep neural networks has not been thoroughly understood. Owing to this difficulty, there are currently no guidelines on exactly how much we should control the atrous rates to enlarge or shrink the FOV. Furthermore, because the default value \(r=6\) was obtained from the ablation study using a single dataset of the PASCAL VOC 2012, its validity should be verified across multiple datasets. In practical scenarios requiring a smaller FOV, using a base atrous rate other than \(r=6\) can be beneficial. Conversely, for larger image sizes, designing a larger FOV using a different base atrous rate can be advantageous. To this end, this study aims to establish practical guidelines for obtaining an optimal atrous rate. ## 3 Understanding the ASPP Module ### Analysis of ERF for Semantic Segmentation Networks To investigate the inner mechanism of ASPP module, we begin with empirical observations of the segmentation networks. To this end, the ERFs of DeepLabV3 and DeepLabV3+ were analyzed under several conditions, including the used datasets, types of backbones, and choices of output strides. Figure 2 illustrates the ERFs of DeepLabV3 using the Cityscapes dataset [30] with an input size of \(768\times 768\). Note that, for the three backbones of ResNet-{18, 50, 101} [31], the ERFs of DeepLabV3 exhibited an *-shaped pattern, which we refer to as the _star pattern_. Although the star patterns overlook the less-used areas, their broad coverage enables the segmentation network to capture the global context of an input image. Additionally, when measuring the end-to-end distance, a star pattern of the same size was observed for the two output strides \(s\in\{8,16\}\). Furthermore, the ERFs for DeepLabV3+ were obtained (Figure 3). Compared to DeepLabV3, DeepLabV3+ has a slightly different decoder that uses an additional branch to merge low-level features with the ASPP module. However, the ERFs of DeepLabV3+ still exhibited the star patterns. These observations imply that the star pattern is related to the use of the ASPP module. Now, using another dataset called ADE20K [32], the ERFs of DeepLabV3 and DeepLabV3+ were examined. 
Although cropped, the star pattern was visible in the ERFs (Figure 4). The cropping occurred because of the \(512\times 512\) size of the input images. Compared to the Cityscapes dataset, the ADE20K dataset contains images that are smaller in size. Finally, using another segmentation network called FastFCN [33], the ERFs for the Cityscapes dataset were obtained. For FastFCN, the type of head, such as ASPP or PSP [34], can be chosen. Figure 5 illustrates the ERFs of FastFCNs using ASPP and PSP heads. FastFCN with an ASPP head exhibited a star pattern, whereas that with a PSP head did not. Here, the observations from ERFs in Figures 2-5 are summarized. * **Observation 1.** When a segmentation network employed the ASPP module, its ERF exhibited the star pattern. * **Observation 2.** When we follow the existing rule of atrous rate in Eq. 2, the size of the star pattern did not expand or shrink by selecting the output stride as \(s=8\) or \(s=16\). * **Observation 3.** The absolute size of the star pattern was fixed for current segmentation networks. When small-sized images were used, the star pattern in ERF was not shrunk but rather cropped. Figure 2: ERFs of DeepLabV3 for the Cityscapes dataset in \(768\times 768\) input image. “R” indicates ResNet. We observed that all ERFs of DeepLabV3 exhibited an \(\bigstar\)-shaped pattern, which is referred to as the _star pattern_. Because printed figures may not be displayed properly, we highly encourage viewing all images electronically with zoom. See supplementary materials to review the raw image files. Figure 3: ERFs of DeepLabV3+ for the Cityscapes dataset in \(768\times 768\) input image. Figure 4: ERFs of DeepLabV3 and DeepLabV3+ for the ADE20K dataset in \(512\times 512\) input image. Note that \(\bigstar\)-shape appears like a cropped version of its larger size. Now, we analyze the underlying mechanism of the ASPP module to understand and validate these three observations. ### The Mechanism of ASPP Module The ASPP module produces output \(\mathbf{A}\) using the encoded feature \(\mathbf{H}\). Here, we analyze the formation process of \(\mathbf{A}\) from \(\mathbf{H}\). Consider the base atrous rate \(r=6\) and output stride \(s=16\). The three atrous branches apply atrous convolutions in parallel to the feature map \(\mathbf{H}\) at rates \(\{6,12,18\}\). The first \(3\times 3\) atrous convolution with an atrous rate of six uses nine features in \(\mathbf{H}\), and the distance between each feature is six in the feature unit (red boxes in Figure 6). Secondly, the \(3\times 3\) atrous convolution with an atrous rate of 12 uses one feature at the same target coordinate and eight features in \(\mathbf{H}\) at a distance of 12 on the feature unit (green boxes). Similarly, the \(3\times 3\) atrous convolution with an atrous rate of 18 uses one feature at the same target coordinate and eight features in \(\mathbf{H}\) with 18 feature distances (blue boxes). Here, each feature in the encoded feature \(\mathbf{H}\) covers a local area in the image \(\mathbf{I}\). By the composition of the three atrous convolutions, each element of \(\mathbf{A}\) uses a single feature on the same target coordinate (purple), eight features at atrous rates 6 (red), 12 (green), and 18 (blue), amounting to \(1+8+8+8=25\) features in \(\mathbf{H}\). Note that the 25 features in \(\mathbf{H}\) are regularly spaced, forming an *-shape. This is the reason why employing the ASPP module yielded a star pattern on ERF (Observation 1).
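For illustration, the counting above can be reproduced with a few lines of code; the following sketch (illustrative only, with names chosen by us) enumerates the sampling offsets, in feature units, of the three atrous branches for \(r=6\).

```python
# Sampling offsets (in feature units) contributing to one ASPP output element,
# for 3x3 atrous convolutions with rates {r, 2r, 3r}; here r = 6 and s = 16.
r = 6
offsets = {(0, 0)}
for rate in (r, 2 * r, 3 * r):
    offsets |= {(dy * rate, dx * rate)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)}

print(len(offsets))  # 25 = 1 + 8 + 8 + 8 regularly spaced positions (the *-shape)
span = max(dx for _, dx in offsets) - min(dx for _, dx in offsets)
print(span, span * 16)  # 36 feature units -> 576 pixels at output stride 16
```

The 36-feature-unit spread is exactly the \(2\cdot 3r\) center-to-center distance analyzed below.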
Additionally, \(s=16\) indicates that the feature map \(\mathbf{H}\) contains \(16\times\) downsampled information from the input image. Thus, in terms of scale, one feature unit on \(\mathbf{H}\) is equivalent to \(16\times 16\) pixels at the image level. This scale conversion allows us to analyze the 25 features in a pixel unit. For instance, the center-to-center distance between the bottom-left and bottom-right features is 36 feature units, which can be converted to \(36\cdot 16=576\) pixels. When the center-to-center distance in ERF was measured using Figure 2, approximately 573 pixels were obtained (see Footnote 1), which closely matched our expected value of 576 pixels. Footnote 1: The distance was measured using software such as XnViewMP. Now the aforementioned analysis is generalized to \((r,s)\). The three atrous convolutions with atrous rates \(\{r,2r,3r\}\) use 25 features on a regularly spaced star pattern. Furthermore, each feature unit in \(\mathbf{H}\) is equivalent to \(s\times s\) pixel units at the image level. Thus, the center-to-center distance between the bottom-left and bottom-right features is \(2\cdot 3r\) feature units, which can be converted to \(2\cdot 3r\cdot s=6rs\) pixels. When we consider the end-to-end distance between the bottom-left and bottom-right features, adding a margin \(\alpha\approx 32\) of the circular areas, \(6rs+\alpha\) pixels are obtained. Note that when following the existing rule in Eq. 2, two possible choices of \((r,s)=(6,16)\) or \((r,s)=(12,8)\) exist, resulting in the same length of \(6rs=576\). This is the reason why both output strides \(s=8\) and \(s=16\) resulted in the same size of the star pattern (Observation 2). In other words, as long as we are limited to only two choices, the absolute size of the star pattern is unaffected by other factors such as the dataset used (Observation 3). Figure 5: ERFs of FastFCN for the Cityscapes dataset in \(768\times 768\) input image. Notably, the *-shape is visible in FastFCN with an ASPP head, but not in FastFCN with a PSP head. \(b\) denotes the mini-batch size used during training. ### Guidelines for Determining the Atrous Rates of ASPP Module Based on the aforementioned analysis, we conclude that in order to enlarge the FOV of the ASPP module, we should not adhere to the existing rule of atrous rates in Eq. 2 and should consider increasing either the base atrous rate \(r\) or output stride \(s\). However, a larger output stride implies a higher downsampling rate, which is disadvantageous for semantic segmentation. Furthermore, as discussed in the study by DeepLabV3 [25], the choice of output stride can be limited by the available GPU resources. Therefore, rather than controlling the output stride, we would like to control the base atrous rate to enlarge or shrink the FOV of the ASPP module. Here, we claim that the FOV size of ASPP should match the size of the input image. Reference [25] demonstrated that if the FOV size of the ASPP module is larger than the size of input image, then the outer kernel in the \(3\times 3\) atrous convolution is applied to the zero-padded region, thereby causing an invalid kernel which degenerates into a \(1\times 1\) convolution. However, if the FOV size of ASPP module is smaller than the size of the input image, the ASPP module will not capture the global context of the image (Figure 7). The limited usage of global information in an image is disadvantageous for semantic segmentation because the performance of scene understanding is improved by using the global information [35].
Therefore, we regard the desirable size of FOV as the exact size of the input image, neither more than that nor less than that. Note that this size refers to that of the input of the segmentation network, such as the crop size during training, not the size of the full image. In summary, our objective is to equalize the end-to-end distance \(6rs+\alpha\) to the size of input image \(l\). From this condition, we derive the following guidelines for obtaining the optimal base atrous rate \(r^{*}\): \[r^{*}=\frac{l-\alpha}{6s}. \tag{3}\] For instance, when performing semantic segmentation on the PASCAL VOC 2012 dataset, a crop size of \(512\times 512\) is commonly used. For \(l=512\) and \(s=16\), the proposed guidelines state that \(r^{*}=5\), which is close to \(r=6\) from the existing rule in Eq. 2. We claim that this is the reason why the existing rule of atrous rates in Eq. 2 has worked suitably with the PASCAL VOC 2012 dataset. Figure 6: Explanation of the FOV of ASPP module using atrous rates \(\{6,12,18\}\) for an output stride \(s=16\). Furthermore, a crop size of \(512\times 512\) has been widely used for several datasets, including ADE20K, COCO Stuff [36], LoveDA [37], and REFUGE [38]. In addition, \(l=512\) was used in early studies on the Cityscapes dataset. However, recent studies using the Cityscapes dataset have preferred a larger crop size, such as \(769\times 769\), which enables the aggregation of information on a wide image [39, 40, 41, 42]. For \(l=769\) and \(s=8\), the proposed guidelines state that \(r^{*}=15.35\), which is different from \(r=12\) in the existing rule in Eq. 2. Indeed, the later section demonstrates that when using a crop size of \(769\times 769\) for the Cityscapes dataset, \(r=15\) consistently yielded the highest performance in semantic segmentation among \(r\in\{12,13,\cdots,18\}\). Conversely, studies on a few datasets have commonly used smaller crop sizes, such as \(128\times 128\), which require smaller FOV and atrous rates. In summary, we claim that the crop size of semantic segmentation can differ from \(512\times 512\); thus, a valid atrous rate should be used based on the proposed guidelines. \begin{table} \begin{tabular}{c c c|c c c} \hline \hline size \(l\) & stride \(s\) & rate \(r^{*}\) & size \(l\) & stride \(s\) & rate \(r^{*}\) \\ \hline 128 & 16 & 1.00 & 128 & 8 & 2.00 \\ 256 & 16 & 2.33 & 256 & 8 & 4.67 \\ 320 & 16 & 3.00 & 320 & 8 & 6.00 \\ 512 & 16 & 5.00 & 512 & 8 & 10.00 \\ 640 & 16 & 6.33 & 640 & 8 & 12.67 \\ 768 & 16 & 7.67 & 768 & 8 & 15.33 \\ 769 & 16 & 7.68 & 769 & 8 & 15.35 \\ 832 & 16 & 8.33 & 832 & 8 & 16.67 \\ 896 & 16 & 9.00 & 896 & 8 & 18.00 \\ 1024 & 16 & 10.33 & 1024 & 8 & 20.67 \\ \hline \hline \end{tabular} \end{table} Table 1: For the image size \(l\) and output stride \(s\), the proposed base atrous rate \(r^{*}\) is summarized. In practice, the integer closest to \(r^{*}\) is selected. Figure 7: Illustration comparing the FOV of ASPP (\(\bigstar\)-shaped pattern) with image size (dotted line). (Left) When the FOV size of the ASPP module is larger than the image size, the outer kernel becomes an invalid operation. (Right) When the FOV size of the ASPP module is smaller than the image size, the segmentation network cannot capture global information in the image. ## 4 Experiments ### Small Image Size In this section, we compare the performance of segmentation networks with the proposed atrous rate and different values across various datasets and crop sizes.
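Throughout the experiments, the proposed base atrous rate follows Eq. (3) with the margin \(\alpha\approx 32\) from Section 3.2. As a quick reference, the following small helper (an illustrative sketch with names chosen by us) reproduces several of the values listed in Table 1 and quoted in the experiments.

```python
def proposed_base_atrous_rate(crop_size: int, output_stride: int, margin: int = 32) -> float:
    # Eq. (3): r* = (l - alpha) / (6 s); the closest integer is used in practice.
    return (crop_size - margin) / (6 * output_stride)

for l, s in [(128, 16), (256, 16), (512, 16), (769, 8), (896, 8)]:
    r_star = proposed_base_atrous_rate(l, s)
    print(f"l={l:4d}, s={s:2d} -> r*={r_star:5.2f} (use r={round(r_star)})")
# l= 128, s=16 -> r*= 1.00 (use r=1)
# l= 256, s=16 -> r*= 2.33 (use r=2)
# l= 512, s=16 -> r*= 5.00 (use r=5)
# l= 769, s= 8 -> r*=15.35 (use r=15)
# l= 896, s= 8 -> r*=18.00 (use r=18)
```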
First, the performance of segmentation network using the ASPP module was examined with small crop sizes. We target the structured analysis of the retina (STARE) dataset [43; 44], which contains retinal images along with the corresponding segmentation labels on the blood vessels. Following the common practice for semantic segmentation of the STARE dataset [45], a crop size of \(128\times 128\) pixels was used, which was obtained after applying mean-std normalization and a random resize operation using a size of \(605\times 700\) pixels with a ratio range of 0.5 to 2.0. Furthermore, random flipping with a probability of 0.5 and photometric distortions, including brightness, contrast, saturation, and hue, were applied. The objective was to classify each pixel into one of the two categories and train the segmentation network using the cross-entropy loss function. Following the common practice for semantic segmentation of the STARE dataset, U-Net [46] using the ASPP module was targeted. U-Net employs four \(2\times 2\) maxpool operations, producing an output stride \(s=16\). However, existing practices for U-Net with the ASPP module use the base atrous rate \(r=12\) instead of \(r=6\) from Eq. 2 proposed in DeepLabV3 and DeepLabV3+ [47]. To investigate the validity of the proposed atrous rate \(r^{*}\) and compare it with the existing values, 12 different values of \(r\in\{12,11,\cdots,1\}\) were experimented. A training recipe from MMSegmentation[47] was employed. For training, stochastic gradient descent with momentum 0.9, weight decay \(5\times 10^{-4}\), and learning rate \(10^{-2}\) with polynomial decay with a 40K scheduler were used. The training was performed on a single GPU machine. We measured the mean intersection over union (mIoU) and reported the average of ten runs. Table 2 summarizes the results on the STARE dataset. Note that \(r\in\{11,10,\cdots,4\}\) resulted in a similar mIoU to that of the baseline of \(r=12\). In contrast, using \(r\leq 3\) significantly increased the mIoU. These observations are consistent with the analysis of this study. When using a small size of input image, the invalid kernel activates as \(r\) approaches a small value, thereby causing the ASPP module to operate effectively. The best mIoU was observed when \(r=1\), which commensurates with the proposed guidelines stating that \(r^{*}=1\) for \(l=128\) and \(s=16\). To further validate the performance differences for small crop sizes, another dataset called the child heart and health study in england database (CHASE_DB1) [48] was targeted. Similar to the STARE dataset, the CHASE_DB1 dataset contains retinal vessel images of children. Following the common \begin{table} \begin{tabular}{c|c c c c c c c c c c c c} \hline \hline \(r\) & 12 & 11 & 10 & 9 & 8 & 7 & 6 & 5 & 4 & 3 & 2\({}^{*}\) & 1 \\ \hline mIoU & 89.48 & 89.58 & 89.58 & 89.46 & 89.52 & 89.54 & 89.58 & 89.57 & 89.53 & 89.63 & **89.66** & 89.52 \\ \(\Delta\) & 0.00 & +0.10 & +0.10 & -0.02 & +0.04 & +0.06 & +0.10 & +0.09 & +0.05 & +0.15 & +0.18 & +0.04 \\ \hline \hline \end{tabular} \end{table} Table 4: Results of semantic segmentation on the HRF dataset. The mIoU (%) and its improvement \(\Delta\) compared with the baseline (\(r=12\)) are reported. 
\begin{table} \begin{tabular}{c|c c c c c c c c c c c c} \hline \hline \(r\) & 12 & 11 & 10 & 9 & 8 & 7 & 6 & 5 & 4 & 3 & 2 & 1\({}^{*}\) \\ \hline mIoU & 89.89 & 89.89 & 89.87 & 89.98 & 89.94 & 89.95 & 89.91 & 89.94 & 89.93 & 90.00 & 89.94 & **90.01** \\ \(\Delta\) & 0.00 & 0.00 & -0.02 & +0.09 & +0.05 & +0.06 & +0.02 & +0.05 & +0.04 & +0.11 & +0.05 & +0.12 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of semantic segmentation on the STARE dataset. The mIoU (%) and its improvement \(\Delta\) compared with the baseline (\(r=12\)) are reported. \({}^{*}\) indicates the base atrous rate according to the proposed guideline. \begin{table} \begin{tabular}{c|c c c c c c c c c c c c} \hline \hline \(r\) & 12 & 11 & 10 & 9 & 8 & 7 & 6 & 5 & 4 & 3 & 2 & 1\({}^{*}\) \\ \hline mIoU & 89.49 & 89.53 & 89.52 & 89.48 & 89.50 & 89.48 & 89.48 & 89.46 & 89.44 & 89.51 & 89.57 & **89.59** \\ \(\Delta\) & 0.00 & +0.04 & +0.03 & -0.01 & +0.01 & -0.01 & -0.01 & -0.03 & -0.05 & +0.02 & +0.08 & +0.10 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of semantic segmentation on the CHASE_DB1 dataset. The mIoU (%) and its improvement \(\Delta\) compared with the baseline (\(r=12\)) are reported. practice for semantic segmentation of the CHASE_DB1 dataset [45], a crop size of \(128\times 128\) pixels was used, which was obtained after applying mean-std normalization and a random resize operation with a size of \(960\times 999\) pixels in a ratio range of 0.5 to 2.0. In these experiments, the U-Net and training recipe similar to those for the STARE dataset were used. Table 3 summarizes the results obtained from the average of three runs. Similarly, significant improvements were observed using \(r\leq 2\). The best mIoU was observed at \(r=1\), corresponding to the proposed base atrous rate \(r^{*}=1\). Finally, the high-resolution fundus (HRF) dataset [49] was targeted, which contains retinal fundus images with corresponding segmentation labels. Following the common practice for semantic segmentation of the HRF dataset [50], a crop size of \(256\times 256\) pixels was used, which was obtained after applying mean-std normalization and a random resize operation using a size of \(2336\times 3504\) pixels with a ratio range of 0.5 to 2.0. Furthermore, an additional Dice loss with a coefficient of 3.0 was used. In these experiments, the U-Net and training recipe similar to those for the STARE dataset were used. Table 4 summarizes the results obtained from the average of three runs. The best mIoU was observed at \(r=2\), corresponding to the proposed guidelines stating that \(r^{*}=2.33\) for \(l=256\) and \(s=16\). ### Large Image Size Thereafter, the performance of the segmentation network with the ASPP module was experimented using larger crop sizes. The Cityscapes dataset [30] containing images of urban street scenes was targeted. A crop size of \(769\times 769\) pixels was used, which was obtained after applying mean-std normalization and a random resize operation using a size of \(2049\times 1025\) pixels with a ratio range of 0.5 to 2.0. Furthermore, random flipping with a probability of 0.5 and the photometric distortions were applied. The objective was to classify each pixel into one of the 19 categories and train the segmentation network using the cross-entropy loss function. We target DeepLabV3 that employs the ASPP module with an output stride \(s=8\) and a default value of the base atrous rate \(r=12\) from Eq. 2. 
To investigate the validity of the proposed atrous rate \(r^{*}\) and compare it with the existing value, seven different values of \(r\in\{12,13,\cdots,18\}\) were experimentally investigated. Additionally, two backbones of ResNet-{50, 101} pretrained on ImageNet [51] were examined. For training, stochastic gradient descent with momentum 0.9, weight decay \(5\times 10^{-4}\), and learning rate \(10^{-2}\) with polynomial decay with an 80K scheduler were used. The training was conducted on a \(4\times\) GPU machine, and SyncBN [52] was used for distributed training. We measured the mIoU and reported the average of three runs. Table 5 summarizes the results on the Cityscapes dataset. Overall, \(r\geq 13\) yielded improved mIoUs compared to the baseline performance of \(r=12\). A peak improvement was observed using \(r=15\) \begin{table} \begin{tabular}{c|c c c c c c c c} \hline \hline \(r\) & 12 & 13 & 14 & 15 & 16 & 17 & 18\({}^{*}\) & 19 & 20 \\ \hline mIoU & 66.28 & 66.41 & 66.91 & 66.68 & 66.51 & 66.96 & **67.03** & 66.72 & 66.86 \\ \(\Delta\) & 0.00 & +0.13 & +0.63 & +0.40 & +0.23 & +0.68 & +0.75 & +0.44 & +0.58 \\ \hline \hline \end{tabular} \end{table} Table 6: Results of semantic segmentation on the iSAID dataset. The mIoU (%) and its improvement \(\Delta\) compared with the baseline (\(r=12\)) are reported. \begin{table} \begin{tabular}{c|c|c c c c c c c} \hline \hline Model & \(r\) & 12 & 13 & 14 & 15\({}^{*}\) & 16 & 17 & 18 \\ \hline \multirow{2}{*}{DeepLabV3 with R-50} & mIoU & 79.33 & 79.73 & 79.48 & **79.93** & 79.73 & 79.62 & 79.69 \\ & \(\Delta\) & 0.00 & +0.40 & +0.15 & +0.60 & +0.40 & +0.29 & +0.36 \\ \hline \multirow{2}{*}{DeepLabV3 with R-101} & mIoU & 78.88 & 79.46 & 79.75 & **79.90** & 79.85 & 79.87 & 79.60 \\ & \(\Delta\) & 0.00 & +0.58 & +0.87 & +1.02 & +0.97 & +0.99 & +0.72 \\ \hline \hline \end{tabular} \end{table} Table 5: Results of semantic segmentation on the Cityscapes dataset. The mIoU (%) and its improvement \(\Delta\) compared with the baseline (\(r=12\)) are reported. corresponding to the proposed guidelines stating that \(r^{*}=15.35\) for \(l=769\) and \(s=8\). Although \(r\geq 16\) yielded an improved mIoU, the degree of improvement decreased as the base atrous rate increased. This phenomenon was observed for the two backbones of ResNet-{50, 101}. These observations are consistent with our analysis. The FOV size of ASPP module should be controlled to exactly match the input size, neither more than that nor less than that. To further validate the performance difference using large crop sizes, instance segmentation in aerial images dataset (iSAID) [53; 54] was targeted, which contains aerial images and the corresponding segmentation labels for object instances. Following the common practice for semantic segmentation of the iSAID dataset [55], an input size of \(896\times 896\) pixels was used, which was obtained after applying mean-std normalization and a random resize operation using a size of \(896\times 896\) pixels with a ratio range of 0.5 to 2.0. Furthermore, random flipping with a probability of 0.5 and the photometric distortions were used. The objective was to classify each pixel into one of the 16 categories and train the segmentation network using the cross-entropy loss function. DeepLabV3+ that employs ASPP module and ResNet-50 with an output stride \(s=8\) was targeted. In these experiments, the training recipe similar to that for the Cityscapes dataset was used. 
To investigate the validity of the proposed atrous rate \(r^{*}\) and compare it with the existing value, nine different values of \(r\in\{12,13,\cdots,20\}\) were examined. Table 6 summarizes the results obtained from the average of two runs. Similarly, \(r\geq 13\) yielded improved mIoUs compared to the baseline performance of \(r=12\). The best mIoU was observed at \(r=18\), corresponding to the proposed guidelines stating that \(r^{*}=18\) for \(l=896\) and \(s=8\). ## 5 Discussion ### Further Analysis of Various Segmentation Networks Here, ERFs for other segmentation networks are analyzed. **FCN and FCN-D6.** FCN [56] is a representative segmentation network. Using ERF, FCN and its variant (Figure 8) were investigated. The vanilla FCN, which employs no atrous convolution in the head, yielded an ERF with a simple 2D Gaussian pattern. However, a variant called FCN-D6 yielded an ERF with a \(5\times 5\)-shaped pattern, which was significantly different from a simple 2D Gaussian pattern. The ERF of FCN-D6 can be similarly explained using the analysis in Section 3.2. After obtaining the encoder output \(\mathbf{H}\in\mathbb{R}^{(H/s)\times(W/s)\times M}\) with \(s=16\), FCN and FCN-D6 apply their heads to produce a semantic mask. Contrary to the vanilla FCN, the variant FCN-D6 employs a modified head with two \(3\times 3\) atrous convolutions with atrous rates of \(\{6,6\}\) in a row, not in a parallel architecture. Note that applying the two \(3\times 3\) atrous convolutions with atrous rates \(\{6,6\}\) is equivalent to applying a single \(5\times 5\) atrous convolution with an atrous rate of six. Thus, the center-to-center distance between the bottom-left and bottom-right features is 24 feature units. Using an output stride of \(s=16\), the center-to-center distance was converted into \(24\cdot 16=384\) pixels. Indeed, when measuring the center-to-center distance in Figure 8, approximately 379 pixels were obtained, which closely matches the expected value. By generalizing to atrous rate \(r\) and output stride \(s\), the center-to-center distance of \(4rs\) pixels is derived for FCN-D6. Therefore, when using FCN-D6, the FOV size should be controlled by setting a valid atrous rate \(r\) depending on the target task or dataset. Figure 8: ERFs of FCN for the Cityscapes dataset in \(768\times 768\) input image. \(5\times 5\)-shaped pattern is visible in FCN-D6 but not in the vanilla FCN. **Asymmetric Pattern for Cityscapes.** The ERF of a CNN is known to be a symmetric 2D Gaussian [19; 20]. However, understanding a scene can require information from specific regions of the image, and in such cases the ERF need not be a symmetric 2D Gaussian; its shape can depend on the dataset. In particular, for images in the Cityscapes dataset, distant objects in 3D space, such as the sky and buildings, are captured in the top area of the image, whereas closer objects and the structure of the road are positioned in the bottom area, whose information matters more in understanding the global context of the image (Figure 9). Consequently, the segmentation network for the Cityscapes dataset can prefer to focus on the bottom area of the image, yielding ERFs that exhibit an asymmetric 2D Gaussian pattern. We found that this phenomenon usually occurred when segmentation networks employed global modules, which used contextual information on all positions, such as the Context Encoding Module of EncNet [52].
In other words, the use of vanilla convolution restricts the segmentation network to having an ERF of symmetric 2D Gaussian, whereas employing global operations enables the segmentation network to flexibly choose an ERF pattern. Figure 10 shows the ERFs of asymmetric 2D Gaussian for the Cityscapes dataset, obtained from several segmentation networks that employ global modules. Indeed, NonLocalNet [57] employs the non-local operation to aggregate features across all positions, and ANN [42] is an improved version of NonLocalNet with a similar global operation. DANet [58] employs dual attention modules, which incorporate features with their global context. In addition, using the LMfit library [59], we fitted each ERF to a 2D Gaussian to obtain the center coordinates \((x_{c},y_{c})\) and the standard deviations \(\sigma_{x}\) and \(\sigma_{y}\), which represent the wideness of the 2D Gaussian (Table 7). For four models, we observed that \(x_{c}<y_{c}\) and \(\sigma_{x}>\sigma_{y}\), which indicates that these segmentation networks preferred and benefited from an asymmetric 2D Gaussian with a bottom-shifted and wide ERF to focus on road areas. **Transformer Backbone.** It is worth examining the properties of segmentation networks employing a recent vision transformer backbone. Figure 11 represents the ERFs of UperNet [60] with transformer backbones. ViT-B/16 indicates the vision transformer at its base size and patch size of \(16\times 16\) pixels. Because ViT-B/16 [61] partitions an image into a sequence of patches of \(16\times 16\) size, the ERF of UperNet with ViT-B/16 backbone exhibited patterns like a patch-partitioned 2D Gaussian. UperNet with DeiT-S/16 [62] has an almost identical architecture to that of ViT with a smaller model size and yielded a similar ERF. In contrast, the ERF of UperNet with Swin [63] exhibited a smooth 2D Gaussian without the patch-partitioned pattern because Swin merges features of neighboring patches. **Transformer Models.** In addition to using vision transformers as the backbone, several segmentation networks have been proposed to elaborately incorporate the self-attention mechanism of transformers. Figure 11: ERFs for the ADE20K dataset in \(512\times 512\) input image. UperNet with a transformer backbone. “W” indicates window size. Figure 12: ERFs for the ADE20K dataset in \(512\times 512\) input image. Figure 12 illustrates the ERFs of recent segmentation networks that employ the self-attention mechanism. For SETR [64], Segmenter [65], and DPT [66], which employ ViT backbones, ERFs exhibited patch-partitioned 2D Gaussian patterns, but with different wideness due to their distinct decoder designs. Interestingly, SegFormer [67], which merges features of patches, exhibited an ERF highlighting a significantly smaller area, while achieving high performance in semantic segmentation. This observation indicates that, although using global information from a wide ERF could be advantageous for semantic segmentation, certain segmentation networks rather focused on local information with smaller ERFs, while still achieving high performance. ### Transformer with ASPP Module It is worth examining the combination of the ASPP module and a segmentation network using the self-attention mechanism. Here, SETR-Naive, which employs ViT-L/16, was targeted. Note that the patch size 16 indicates output stride \(s=16\). A crop size of \(768\times 768\) pixels was used. In these experiments, the same training recipe as that for the Cityscapes dataset in Section 4.2 was used.
To investigate the validity of the proposed atrous rate \(r^{*}\) and compare it with other values, 12 different values of \(r\in\{1,2,\cdots,12\}\) were examined. In addition, the baseline performance of SETR was measured. Table 8 summarizes the results obtained from the average of three runs. The peak improvement was observed when \(r=8\). Indeed, the proposed guidelines state that \(r^{*}=8\) for \(l=768\) and \(s=16\). Nevertheless, using other values of \(r\) can result in decreased mIoU, which means that when using the ASPP module for performance gain, a valid value of the base atrous rate should be used. ## 6 Conclusion In this study, the mechanism of the ASPP module was analyzed. We introduced and obtained ERFs of segmentation networks to empirically observe the inner behavior of the ASPP module. We found that the use of the ASPP module led to a star-shaped pattern on the ERF. Based on these observations, we explained the mechanisms of the ASPP module and quantified its FOV size. To obtain effective behavior of the ASPP module, we suggested adjusting the FOV size to exactly match the size of the input image, proposing guidelines to obtain the optimal atrous rates. The validity of the proposed guidelines was examined across different datasets and image sizes. We observed that the proposed atrous rate consistently yielded improved mIoU compared with other values of the atrous rate.
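For illustration, the ERF in Eq. (1) can be estimated directly with automatic differentiation. The sketch below is illustrative only: it assumes a differentiable model that maps an image tensor to per-pixel class logits, and the function and variable names are ours rather than those of any particular library.

```python
import torch

def effective_receptive_field(model, images):
    # Accumulate Eq. (1): ReLU of the gradient of the central output unit
    # (summed over classes) with respect to the input, over a set of images.
    erf = None
    model.eval()
    for img in images:                                    # img: (C, H, W) tensor
        x = img.unsqueeze(0).clone().requires_grad_(True)
        logits = model(x)                                 # assumed (1, num_classes, H, W)
        h, w = logits.shape[-2:]
        y = logits[0, :, h // 2, w // 2].sum()            # central output unit Y
        grad, = torch.autograd.grad(y, x)
        contrib = torch.relu(grad.sum(dim=1)).squeeze(0)  # (H, W)
        erf = contrib if erf is None else erf + contrib
    return erf
```

The accumulated map can then be visualized directly, which is how the star patterns discussed above become apparent.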
2307.03405
Syzygies of secant varieties of smooth projective curves and gonality sequences
The purpose of this paper is to prove that one can read off the gonality sequence of a smooth projective curve from syzygies of secant varieties of the curve embedded by a line bundle of sufficiently large degree. More precisely, together with Ein-Niu-Park's theorem, our main result shows that the gonality sequence of a smooth projective curve completely determines the shape of the minimal free resolutions of secant varieties of the curve of sufficiently large degree. This is a natural generalization of the gonality conjecture on syzygies of smooth projective curves established by Ein-Lazarsfeld and Rathmann to the secant varieties.
Junho Choe, Sijong Kwak, Jinhyung Park
2023-07-07T06:11:33Z
http://arxiv.org/abs/2307.03405v1
# Syzygies of secant varieties of smooth projective curves and gonality sequences ###### Abstract. The purpose of this paper is to prove that one can read off the gonality sequence of a smooth projective curve from syzygies of secant varieties of the curve embedded by a line bundle of sufficiently large degree. More precisely, together with Ein-Niu-Park's theorem, our main result shows that the gonality sequence of a smooth projective curve completely determines the shape of the minimal free resolutions of secant varieties of the curve of sufficiently large degree. This is a natural generalization of the gonality conjecture on syzygies of smooth projective curves established by Ein-Lazarsfeld and Rathmann to the secant varieties. Key words and phrases: algebraic curve, secant variety, syzygies, Koszul cohomology, gonality sequence, symmetric product of a curve 2020 Mathematics Subject Classification: 14N07, 14N05, 13D02 J. Choe was supported by a KIAS Individual Grant (MG083301) at Korea Institute for Advanced Study. J. Park was partially supported by the National Research Foundation (NRF) funded by the Korea government (MSIT) (NRF-2021R1C1C1005479 and NRF-2022M3C1C8094326). ## 1. Introduction Let \(C\) be a smooth projective curve of genus \(g\), and let \(L\) be a very ample line bundle on \(C\) giving an embedding \(C\subseteq\mathbf{P}H^{0}(C,L)=\mathbf{P}^{r}\). For an integer \(k\geq 0\), denote by \(\Sigma_{k}=\Sigma_{k}(C,L)\subseteq\mathbf{P}^{r}\) the \(k\)-th secant variety of \(C\), that is, the union of the \(k\)-planes in \(\mathbf{P}^{r}\) spanned by effective divisors of degree \(k+1\) on \(C\), and suppose that \(\deg L\geq 2g+2k+1+\ell\) for an integer \(\ell\geq 1\). By specializing Danila's theorem [7] to the curve case (see [12]), we have \[H^{0}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(m))=H^{0}(\mathbf{P}^{r},\mathscr{O}_{ \mathbf{P}^{r}}(m))\ \ \mbox{for}\ 0\leq m\leq k+1.\] This then implies that if \(0\leq p\leq\ell\), then \[K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\neq 0\iff p=0,q=0\ \mbox{or}\ 1 \leq p\leq\ell\ \mbox{and}\ q=k+1.\] Thus the shape of the first \(\ell\) steps of the minimal free resolution of \(R(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) is completely determined. Recall from [11, Theorem 1.2] that the Castelnuovo-Mumford regularity of \(\mathscr{O}_{\Sigma_{k}}\) is \(2k+2\) if \(g\geq 1\) and \(k+1\) if \(g=0\). As \(\Sigma_{k}\subseteq\mathbf{P}^{r}\) is arithmetically Cohen-Macaulay, the projective dimension of \(R(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) is \(e:=\operatorname{codim}\Sigma_{k}\). Notice that if \(\deg L=2g+2k+1+\ell\), then \(\ell=e-g\). To summarize the discussion, the Betti table of \(R(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) is displayed in Table 1. Here "-" indicates a zero entry, "\(\ast\)" indicates a nonzero entry, and "?" indicates an entry not yet determined. We remark that the syzygies of \(\Sigma_{k}\) in \(\mathbf{P}^{r}\) are not "asymptotic syzygies" considered in [9] and [23]. It is natural to study the undetermined part of the Betti table of \(R(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\). The problem is to determine vanishing and nonvanishing of \(K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) for \(e-g+1\leq p\leq e\) and \(k+1\leq q\leq 2k+2\). The case of \(g=0\) is a classical result; the case of \(g=1\) was done by Graf von Bothmer-Hulek [14] and Fisher [13], and the case of \(g=2\) was recently settled by Li [22].
For the case of \(g\geq 3\), we assume that \(\deg L\) is sufficiently large. Consider the case of \(k=0\). Green-Lazarsfeld [17, Theorem 2] proved that \[K_{p,2}(C,L)\neq 0\ \ \mbox{for}\ e-g+1\leq p\leq e.\] This was recently generalized by Taylor [27, Corollary 3.6] as \[K_{p,2k+2}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\neq 0\ \ \mbox{for}\ e-g+1\leq p\leq e.\] The gonality conjecture of Green-Lazarsfeld asserts that \[K_{p,1}(C,L)\neq 0\ \ \mbox{for}\ 1\leq p\leq\operatorname{codim}(C)-\operatorname{gon}(C)+1,\] where \[\operatorname{gon}(C):=\min\{d\mid C\ \mbox{carries a linear series}\ g_{d}^{1}\}\] is the _gonality_ of \(C\). The gonality conjecture was established by Ein-Lazarsfeld [10] and Rathmann [24]. This suggests that the geometry of \(C\) is deeply related to vanishing and nonvanishing of \(K_{p,1}(C,L)\) for large \(p\). Along this line, Lawrence Ein previously asked what kind of geometry of \(C\) is involved in the behavior of vanishing and nonvanishing of \(K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) for large \(p\). \begin{table} \begin{tabular}{c|c c c c c c c c} & \(0\) & \(1\) & \(2\) & \(\cdots\) & \(e-g-1\) & \(e-g\) & \(e-g+1\) & \(\cdots\) & \(e\) \\ \hline \(0\) & \(1\) & \(-\) & \(-\) & \(\cdots\) & \(-\) & \(-\) & \(-\) & \(\cdots\) & \(-\) \\ \(1\) & \(-\) & \(-\) & \(-\) & \(\cdots\) & \(-\) & \(-\) & \(-\) & \(\cdots\) & \(-\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & & \(\vdots\) \\ \(k\) & \(-\) & \(-\) & \(-\) & \(\cdots\) & \(-\) & \(-\) & \(-\) & \(\cdots\) & \(-\) \\ \(k+1\) & \(-\) & \(\ast\) & \(\ast\) & \(\cdots\) & \(\ast\) & \(\ast\) & \(\mathcal{?}\) & \(\cdots\) & \(\mathcal{?}\) \\ \(k+2\) & \(-\) & \(-\) & \(-\) & \(\cdots\) & \(-\) & \(-\) & \(\mathcal{?}\) & \(\cdots\) & \(\mathcal{?}\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & & \(\vdots\) \\ \(2k+2\) & \(-\) & \(-\) & \(-\) & \(\cdots\) & \(-\) & \(-\) & \(\mathcal{?}\) & \(\cdots\) & \(\mathcal{?}\) \\ \end{tabular} \end{table} Table 1. The Betti table of \(R(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\). In [4], the first and second authors proposed that one should consider the _gonality sequence_ of \(C\). For an integer \(q\geq 0\), let \[\gamma^{q}=\gamma^{q}(C):=\min\{d-q\ |\ C\ \text{carries a linear series}\ g_{d}^{q}\}.\] Then \(\gamma^{1}+1=\operatorname{gon}(C)\). The gonality sequence of \(C\) is the sequence \((\gamma^{0}+0,\gamma^{1}+1,\gamma^{2}+2,\ldots)\). The gonality sequence was previously studied by several authors in the theory of algebraic curves (see e.g., [6], [20]). The conjecture of the first and second authors in [4] predicts that \[K_{p,k+1}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\neq 0\ \Longleftrightarrow\ 1 \leq p\leq e-\gamma^{k+1}.\] However, the cases of \(K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) for \(k+2\leq q\leq 2k+1\) remained a mystery in general. In this paper, we completely resolve all the aforementioned problems at least when \(L\) is sufficiently positive: We show that the gonality sequence of \(C\) determines vanishing and non-vanishing of \(K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) for \(e-g+1\leq p\leq e\) and \(k+1\leq q\leq 2k+2\). **Theorem 1.1**.: _Let \(C\) be a smooth projective curve of genus \(g\geq 2\), and \(L\) be a very ample line bundle of sufficiently large degree on \(C\).
For an integer \(k\geq 0\), consider the \(k\)-th secant variety \(\Sigma_{k}\) of \(C\) in \(\mathbf{P}H^{0}(C,L)=\mathbf{P}^{r}\), and put \(e:=\operatorname{codim}\Sigma_{k}=r-2k-1\). For each \(k+1\leq q\leq 2k+2\), if \(e-g+1\leq p\leq e\), then we have_ \[K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\neq 0\ \Longleftrightarrow\ e-g+1\leq p\leq e- \gamma^{2k+2-q}(C).\] As we discussed before, together with [11], our main theorem completely determines the shape of the Betti table of \(R(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) in Table 1. Our approach using secant varieties gives an alternative proof of the gonality conjecture which is nothing but the case of \(k=0\) in Theorem 1.1. On the other hand, by duality, we have \[K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))=K_{e-p,2k+2-q}(\Sigma_{k}, \omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1))^{\vee}.\] The nontrivial parts covered by Theorem 1.1 are \(K_{p,q}(\Sigma_{k},\omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1))\) for \(0\leq p\leq e-1\) and \(0\leq q\leq k+1\). The Betti table of \(R(\Sigma_{k},\omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1))\) in this range - the reverse of the part marked with "\(\gamma\)" in Table 1 - is the following: Notice that \(\gamma^{q}(C)=g\) for \(q\geq g\). Thus if \(k\geq g-1\), then the last \(k-g+2\) rows of Table 2 are all vanishing; in particular, \[K_{p,k+1}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))=0\ \ \text{for}\ e-g+1\leq p\leq e;\] \[K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))=0\ \ \text{for all}\ p\ \text{and}\ k+2\leq q\leq 2k+2-g.\] Next, observe that the first \(m+1\) rows of Table 2 have the same vanishing and nonvanishing patterns as those of the Betti table of \(R(\Sigma_{m},\omega_{\Sigma_{m}};\mathscr{O}_{\Sigma_{m}}(1))\) for each \(0\leq m\leq k\). Thus the syzygies of secant varieties of \(C\) have a surprisingly uniform behavior governed by the gonality sequence of \(C\). It resembles a matryoshka doll, which repeats similar patterns over and over again, so one may say that there is a "matryoshka structure" among secant varieties of \(C\) in the sense of [4]. To prove Theorem 1.1, we utilize Bertram's construction [3] as in [11]. There is a vector bundle \(E_{k+2,L}\) on the symmetric product \(C_{k+2}\) of \(C\) such that \(\beta_{k+1}\colon\mathbf{P}(E_{k+2,L})\to\Sigma_{k+1}\) is a resolution of singularities and \(Z_{k}:=\beta_{k+1}^{-1}(\Sigma_{k})\) is an effective divisor. It is worth noting that we are working with \(\Sigma_{k+1}\) to prove Theorem 1.1 for \(\Sigma_{k}\) instead of going to \(\Sigma_{k-1}\) as in [11]. ### Vanishing Using the Du Bois-type condition \[R^{i}\beta_{k+1,*}\mathscr{O}_{\mathbf{P}(E_{k+2,L})}(-Z_{k})=\begin{cases} \mathscr{I}_{\Sigma_{k}|\Sigma_{k+1}}&\text{if $i=0$}\\ 0&\text{if $i>0$}\end{cases}\] established in [11] and proceeding by induction on \(q-k-1\), we reduce the vanishing part of Theorem 1.1 to \[H^{q^{*}+1}(C_{k+2},\bigwedge^{p^{*}+q^{*}}M_{E_{k+2,L}}\otimes S_{k+2,\omega _{C}})=0,\] where \(p^{*}:=e-p\) and \(q^{*}:=2k+2-q\). Here \(M_{E_{k+2,L}}\) is the kernel of the evaluation map \(H^{0}(C,L)\otimes\mathscr{O}_{C_{k+2}}\to E_{k+2,L}\), and \(S_{k+2,\omega_{C}}\) is a line bundle with \(q^{*}_{k+2}S_{k,L}=L^{\boxtimes k+2}\), where \(q_{k+2}\colon C^{k+2}\to C_{k+2}\) is the map given by \((x_{1},\dots,x_{k+2})\mapsto x_{1}+\dots+x_{k+2}\). 
We then show that \[H^{q^{*}+1}(C_{k+2},\bigwedge^{p^{*}+q^{*}}M_{E_{k+2,L}}\otimes S _{k+2,\omega_{C}})\] \[=H^{q^{*}+1}(C_{p^{*}+q^{*}}\times C_{k+2},(N_{p^{*}+q^{*},L} \boxtimes S_{k+2,\omega_{C}})(-D_{p^{*}+q^{*},k+2})).\] Here \(N_{p^{*}+q^{*},L}\) is a line bundle with \(q^{*}_{p^{*}+q^{*}}N_{p^{*}+q^{*},L}=L^{\boxtimes p^{*}+q^{*}}(-\Delta)\), where \(\Delta\) is the sum of all pairwise diagonal on \(C^{p^{*}+q^{*}}\). Let \(\operatorname{pr}_{1}\colon C_{p^{*}+q^{*}}\times C_{k+2}\to C_{p^{*}+q^{*}}\) be the projection map, and \(D_{p^{*}+q^{*},k+2}:=\{(\xi_{1}+x,\xi_{2}+x)\mid\xi_{1}\in C_{p^{*}+q^{*}-1}, \xi_{2}\in C_{k+1},x\in C\}\) be an effective divisor on \(C_{p^{*}+q^{*}}\times C_{k+2}\). Then it is enough to check that \[(\star)\quad\,\,\,H^{i}(C_{p^{*}+q^{*}},R^{q^{*}+1-i}\operatorname{pr}_{1,*}(N _{p^{*}+q^{*},L}\boxtimes S_{k+2,\omega_{C}})(-D_{p^{*}+q^{*},k+2}))=0\,\,\, \text{for $0\leq i\leq q^{*}+1$}.\] When \(i>0\), the cohomology vanishing \((\star)\) follows from Fujita-Serre vanishing since \(N_{p^{*}+q^{*},L}\) is sufficiently positive. When \(i=0\), the fiber of \(R^{q^{*}+1}\operatorname{pr}_{1,*}(N_{p^{*}+q^{*},L}\boxtimes S_{k+2,\omega_{ C}})(-D_{p^{*}+q^{*},k+2})\) over \(\xi\in C_{p^{*}+q^{*}}\) is \[H^{q^{*}+1}(C_{k+2},S_{k+2,\omega_{C}(-\xi)})=S^{k+1-q^{*}}H^{0}(C,\omega_{C}(- \xi))\otimes\bigwedge^{q^{*}+1}H^{1}(C,\omega_{C}(-\xi)).\] However, \(h^{1}(C,\omega_{C}(-\xi))\leq q^{*}\) thanks to the "gonality sequence condition" \(\gamma^{q^{*}}(C)\geq p^{*}+1\), so \[R^{q^{*}+1}\operatorname{pr}_{1,*}(N_{p^{*}+q^{*},L}\boxtimes S_{k+2,\omega_{ C}})(-D_{p^{*}+q^{*},k+2})=0.\] Thus \((\star)\) holds for \(i=0\). ### Nonvanishing For the nonvanishing part of Theorem 1.1, it suffices to see that the map \[H^{q-1}(\Sigma_{k},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k}}(1)}\otimes \mathscr{I}_{\Sigma_{k-1}|\Sigma_{k}}(1))\longrightarrow H^{q}(\Sigma_{k+1}, \bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+1}}(1)}\otimes\mathscr{I}_{\Sigma_ {k}|\Sigma_{k+1}}(1))\] is nonzero. Arguing as in the proof of the vanishing part, we reduce the problem to showing that the map \[(\bullet)\quad R^{q^{*}+1}\operatorname{pr}_{1,*}(N_{p^{*}+q^{*},L }\boxtimes S_{k+2,\omega_{C}})(-D_{p^{*}+q^{*},k+2})\\ \longrightarrow R^{q^{*}}\operatorname{pr}_{1,*}(N_{p^{*}+q^{*},L} \boxtimes S_{k+1,\omega_{C}})(-D_{p^{*}+q^{*},k+1})\otimes H^{1}(C,\omega_{C})\] is nonzero. By the "gonality sequence condition" \(\gamma^{q^{*}}(C)\leq p^{*}\), we can find an effective divisor \(\xi\in C_{p^{*}+q^{*}}\) with \(h^{0}(C,\omega_{C}(-\xi))=g-p^{*}\) and \(h^{1}(C,\omega_{C}(-\xi))=q^{*}+1\). The map \((\bullet)\) looks like \(\operatorname{id}_{S^{k+1-q^{*}}H^{0}(C,\omega_{C}(-\xi))}\otimes\delta\) fiberwisely over \(\xi\in C_{p^{*}+q^{*}}\), where \(\delta\) is the Koszul-like map \[\bigwedge^{q^{*}+1}H^{1}(C,\omega_{C}(-\xi))\longrightarrow\bigwedge^{q^{*}}H ^{1}(C,\omega_{C}(-\xi))\otimes H^{1}(C,\omega_{C}).\] Since \(\delta\) is clearly nonzero, it follows that the map \((\bullet)\) is nonzero. The paper is organized as follows. We begin with collecting basic relevant facts on the gonality sequence of a curve in Section 2. Section 3 provides a review of basic properties of symmetric products and secant varieties of curves. Section 4 is devoted to the proof of Theorem 1.1. Finally, in Section 5, we present some complementary results, and we also discuss some open problems. ### Acknowledgments We would like to thank Lawrence Ein and Wenbo Niu for valuable and interesting discussions. 
We are also grateful to Daniele Agostini, Marian Aprodu, Daniel Erman, Robert Lazarsfeld, Frank-Olaf Schreyer, Jessica Sidman for their interests. ## 2. Gonality Sequences In this section, we recall the definition and basic properties of the gonality sequence of a smooth projective curve \(C\) of genus \(g\geq 2\), and we show some relevant facts. **Definition 2.1**.: For any integer \(q\geq 0\), we define \[\gamma^{q}(C):=\min\{d-q\mid C\text{ carries a linear series }g_{d}^{q}\}.\] A sequence \((\gamma^{0}(C)+0,\gamma^{1}(C)+1,\gamma^{2}(C)+2,\ldots)\) is called the _gonality sequence_ of \(C\). Note that \(\gamma^{0}(C)=0\) and \(\gamma^{1}(C)+1=\operatorname{gon}(C)\) is the gonality of \(C\). The following is an easy consequence of the Riemann-Roch theorem, the Clifford theorem, and Brill-Noether theory. **Lemma 2.2** ([20, Lemmas 3.1 and 3.2]).: _We have the following:_ 1. \(\gamma^{q}(C)\leq\gamma^{q+1}(C)\) _for_ \(q\geq 0\)_._ 2. \(\min\{q,g\}\leq\gamma^{q}(C)\leq g-\lfloor g/(q+1)\rfloor\) _for_ \(q\geq 0\)_. In particular,_ \(\gamma^{g-1}(C)=g-1\) _and_ \(\gamma^{q}(C)=g\) _for_ \(q\geq g\)_._ If \(C\) is hyperelliptic, then \(\gamma^{q}(C)=q\) for \(q\leq g\). However, as was remarked in [20], it is not easy to compute the gonality sequence of a curve in general. We refer to [20] for more details. Next, we introduce a new positivity notion for a line bundle on \(C\). **Definition 2.3**.: Let \(B\) be a line bundle on \(C\). For integers \(w,p\geq 0\), we say that \(B\) is \(w\)_-weakly \(p\)-very ample_ if \[\operatorname{corank}\big{(}H^{0}(C,B)\longrightarrow H^{0}(C,B|_{\xi}) \big{)}\leq w\] for every effective divisor \(\xi\) of degree \(p+w+1\) on \(C\). Note that \(B\) is \(0\)-weakly \(p\)-very ample if and only if \(B\) is \(p\)-very ample. Recall that \(\gamma^{1}(C)\geq p+1\) (i.e., \(\operatorname{gon}(C)\geq p+2\)) if and only if \(\omega_{C}\) is \(p\)-very ample. The next proposition is a generalization of this fact. **Proposition 2.4**.: _Let \(q\geq 1\) be an integer. Then the following are equivalent:_ 1. \(\gamma^{q}(C)\geq p+1\)_._ 2. \(h^{0}(C,\mathscr{O}_{C}(\xi))=h^{1}(C,\omega_{C}(-\xi))\leq q\) _for every effective divisor_ \(\xi\) _of degree_ \(p+q\) _on_ \(C\)_._ 3. \(\omega_{C}\) _is_ \((q-1)\)_-weakly_ \(p\)_-very ample._ _In particular,_ \[\gamma^{q}(C) =\max\{p\geq 0\mid\omega_{C}\text{ is }(q-1)\text{-weakly }p\text{- very ample}\}+1\] \[=\min\{p\geq 0\mid\omega_{C}\text{ fails to be }(q-1)\text{-weakly }p\text{- very ample}\}.\] Proof.: It is clear from the definitions. **Lemma 2.5**.: _If \(\omega_{C}\) is not \(w\)-weakly \(p\)-very ample with \(0\leq p\leq g\), then there is an effective divisor \(\xi\) of degree \(p+w+1\) such that_ \[\operatorname{corank}\big{(}H^{0}(C,\omega_{C})\longrightarrow H^{0}(C, \omega_{C}|_{\xi})\big{)}=w+1,\] _i.e., \(h^{0}(C,\omega_{C}(-\xi))=g-p\) and \(h^{1}(C,\omega_{C}(-\xi))=w+2\)._ Proof.: Since \(\omega_{C}\) is not \(w\)-weakly \(p\)-very ample, there is an effective divisor \(\xi_{0}\) of degree \(p+w+1\) on \(C\) such that \[h^{1}(C,\omega_{C}(-\xi_{0}))\geq w+2.\] If \(h^{1}(C,\omega_{C}(-\xi_{0}))=w+2\), then we are done by taking \(\xi=\xi_{0}\). Suppose that \(h^{1}(C,\omega_{C}(-\xi_{0}))\geq w+3\). 
The Riemann-Roch theorem yields \[h^{0}(C,\omega_{C}(-\xi_{0}))=g-p+h^{1}(C,\omega_{C}(-\xi_{0}))-w-2\geq 1.\] It is elementary to see that if \(B\) is a line bundle on \(C\) with \(H^{0}(C,B)\neq 0\) and \(H^{1}(C,B)\neq 0\), then \[h^{0}(C,B(-x_{0}+x_{1}))=h^{0}(C,B)-1\ \text{ and }\ h^{1}(C,B(-x_{0}+x_{1}))=h^{1}(C,B)-1\] for general points \(x_{0},x_{1}\in C\). Thus we find \[h^{0}(C,\mathscr{O}_{C}(\xi_{0}+x_{0}-x_{1}))=h^{1}(C,\omega_{C}(-\xi_{0}-x_{0 }+x_{1}))=h^{1}(C,\omega_{C}(-\xi_{0}))-1\geq w+2,\] so we can choose an effective divisor \(\xi_{1}\in|\xi_{0}+x_{0}-x_{1}|\) of degree \(p+w+1\). Then \[h^{1}(C,\omega_{C}(-\xi_{1}))=h^{1}(C,\omega_{C}(-\xi_{0}))-1.\] Continuing this process, we finally reach an effective divisor \(\xi\) of degree \(p+w+1\) such that \(h^{1}(C,\omega_{C}(-\xi))=w+2\). ## 3. Symmetric Products and Secant Varieties of Curves In this section, we review basic properties of symmetric products and secant varieties of smooth projective curves, and we show some useful lemmas for the proof of Theorem 1.1. We refer to [3] and [11] for a more detailed account. Let \(C\) be a smooth projective curve of genus \(g\). For an integer \(k\geq 1\), we write the \(k\)-th symmetric product of the curve \(C\) as \(C_{k}\) and the \(k\)-th ordinary product of the curve \(C\) as \(C^{k}\). The symmetric group \(\mathfrak{S}_{k}\) naturally acts on \(C^{k}\), and \(C_{k}=C^{k}/\mathfrak{S}_{k}\). We have the quotient morphism \[q_{k}\colon C^{k}\longrightarrow C_{k},\ (x_{1},\dots,x_{k})\longmapsto x_{1}+ \dots+x_{k},\] which is a finite flat surjective morphism of degree \(k!\). For a line bundle \(L\) on \(C\), there are two line bundles \(S_{k,L}\) and \(N_{k,L}\) on \(C_{k}\) such that \[q_{k}^{*}S_{k,L}=L^{\boxtimes k}=\underbrace{L\boxtimes\dots\boxtimes L}_{k\text { times}}\ \text{ and }\ q_{k}^{*}N_{k,L}=L^{\boxtimes k}\big{(}-\sum_{1\leq i<j\leq k}\Delta_{i,j} \big{)},\] where \(\Delta_{i,j}:=\{(x_{1},\ldots,x_{k})\in C^{k}\mid x_{i}=x_{j}\}\) is a pairwise diagonal. Let \(\delta_{k}\) be a divisor on \(C_{k}\) such that \(\mathscr{O}_{C_{k}}(\delta_{k})=S_{k,\mathscr{O}_{C}}\otimes N_{k,\mathscr{O}_{ C}}^{-1}\). Then \(N_{k,L}=S_{k,L}(-\delta_{k})\) for any line bundle \(L\) on \(C\). It is well known that \[H^{0}(C_{k},S_{k,L})=S^{k}H^{0}(C,L)\ \ \text{and}\ \ H^{0}(C_{k},N_{k,L})= \bigwedge^{k}H^{0}(C,L).\] Furthermore, we have the following. **Lemma 3.1** ([1, Lemma 2.4], [11, Lemma 3.7]).: _We have_ \[H^{i}(C_{k},S_{k,L})=S^{k-i}H^{0}(C,L)\otimes\bigwedge^{i}H^{1}(C,L)\ \ \text{for}\ i\geq 0;\] \[H^{i}(C_{k},N_{k,L})=\bigwedge^{k-i}H^{0}(C,L)\otimes S^{i}H^{1}(C,L)\ \ \text{for}\ i\geq 0.\] Let \(D_{k,m}\) be the effective divisor on \(C_{k}\times C_{m}\) given by the image of the map \[C_{k-1}\times C\times C_{m-1}\longrightarrow C_{k}\times C_{m},\ (\xi_{1},x,\xi_{2}) \longmapsto(\xi_{1}+x,\xi_{2}+x),\] and \(\operatorname{pr}_{1}\colon C_{k}\times C_{m}\to C_{k},\ \operatorname{pr}_{2} \colon C_{k}\times C_{m}\to C_{m}\) be the projections. For \(\xi\in C_{k}\), we set \(C_{m,\xi}:=\operatorname{pr}_{1}^{-1}(\xi)=\{\xi\}\times C_{m}\). Then \(\mathscr{O}_{C_{m}}(C_{m,\xi}\cap D_{k,m})=S_{m,\mathscr{O}_{C}(\xi)}\). For a coherent sheaf \(\mathscr{F}\) on \(C_{m}\), we put \[M_{k}^{i}\mathscr{F}:=R^{i}\operatorname{pr}_{1,*}(\mathscr{O}_{C_{k}}\boxtimes \mathscr{F})(-D_{k,m})\ \ \text{for}\ i\geq 0.\] It is a coherent sheaf on \(C_{k}\). 
Identifying \(C_{m,\xi}=C_{m}\) for each \(\xi\in C_{k}\), we have a natural map \[\rho^{i}(\xi)\colon M_{k}^{i}\mathscr{F}\otimes\mathbf{k}(\xi)\longrightarrow H ^{i}(C_{m},\mathscr{F}\otimes S_{m,\mathscr{O}_{C}(-\xi)}).\] Suppose that \(\mathscr{F}\) is flat over \(C_{k}\). By Grauert's theorem, when \(h^{i}(C_{m},\mathscr{F}\otimes S_{m,\mathscr{O}_{C}(-\xi)})\) is constant for all \(\xi\in C_{k}\), \(M_{k}^{i}\mathscr{F}\) is a vector bundle and \(\rho^{i}(\xi)\) is an isomorphism. By the cohomology and base change, when \(\rho^{i+1}(\xi)\) is surjective for \(\xi\in C_{k}\), \(\rho^{i}(\xi)\) is an isomorphism if and only if \(M_{k}^{i+1}\mathscr{F}\) is locally free in a neighborhood of \(\xi\in C_{k}\). **Lemma 3.2** (cf. [10, Lemma 1.2]).: _For a given coherent sheaf \(\mathscr{F}\) on \(C_{m}\), if \(\deg L\) is sufficiently large, then_ \[H^{i}(C_{k},M_{k}^{j}\mathscr{F}\otimes N_{k,L})=0\ \ \text{for}\ i>0\ \text{and}\ j\geq 0.\] Proof.: As \(\deg L\gg 0\), we may write \(L=L^{\prime}\otimes\mathscr{O}_{C}(mx)\) for a point \(x\in C\) and a sufficiently large integer \(m\gg 0\) such that \(N_{k,L^{\prime}}\) is nef. Then \(N_{k,L}=N_{k,L^{\prime}}\otimes S_{k,\mathscr{O}_{C}(x)}^{m}\). Since \(S_{k,\mathscr{O}_{C}(x)}\) is ample and \(m\) is sufficiently large, the lemma follows from Fujita-Serre vanishing [21, Theorem 1.4.35]. In the above situation, we now consider the case \(m=1\). Then \(D_{k,1}\) is the image of the injective map \[C_{k-1}\times C\longrightarrow C_{k}\times C,\ (\xi,x)\longmapsto(\xi+x,x).\] Let \(\sigma_{k}:=\operatorname{pr}_{1}|_{D_{k,1}}\). Identifying \(D_{k,1}\) with \(C_{k-1}\times C\), we obtain a map \[\sigma_{k}\colon C_{k-1}\times C\longrightarrow C_{k},\ (\xi,x)\longmapsto\xi+x,\] which is a finite flat surjective morphism of degree \(k\). If we view \(C_{k}\) as the Hilbert scheme of \(k\) points on \(C\), then \(\sigma_{k}\) is the universal family. The _tautological bundle_ on \(C_{k}\) associated to \(L\) is defined to be \[E_{k,L}:=\sigma_{k,*}(\mathscr{O}_{C_{k-1}}\boxtimes L).\] It is a vector bundle of rank \(k\) on \(C_{k}\). Note that \(H^{0}(C,E_{k,L})=H^{0}(C,L)\) and \(\det E_{k,L}=N_{k,L}\). Suppose that \(L\) is \((k-1)\)-very ample. Then \(E_{k,L}\) is globally generated. Applying \(\operatorname{pr}_{1,*}\) to the short exact sequence \[\begin{CD}0@>{}>{}>(\mathscr{O}_{C_{k}}\boxtimes L)(-D_{k,1})@>{\cdot D_{k,1} }>{}>\mathscr{O}_{C_{k}}\boxtimes L@>{}>{}>\mathscr{O}_{C_{k-1}}\boxtimes L@>{}>{}>0, \end{CD}\] we get a short exact sequence \[0\xrightarrow{}M_{E_{k,L}}\xrightarrow{}H^{0}(C,L)\otimes\mathscr{O}_{C_{k}} \xrightarrow{\mathrm{ev}}E_{k,L}\xrightarrow{}0.\] Notice that \(M_{E_{k,L}}=M_{k}^{0}L\) is a vector bundle of rank \(h^{0}(C,L)-k\) on \(C_{k}\). This short exact sequence looks like \[0\xrightarrow{}H^{0}(C,L(-\xi))\xrightarrow{}H^{0}(C,L)\xrightarrow{}H^{0}(C,L |_{\xi})\xrightarrow{}0\] over \(\xi\in C_{k}\) fiberwisely. **Lemma 3.3**.: _Suppose that \(\deg L\geq 2g+k-1\). Then_ \[M_{k}^{i}N_{m,L}=\begin{cases}\bigwedge^{m}M_{E_{k,L}}&\text{if }i=0\\ 0&\text{if }i>0.\end{cases}\] _In particular, for any line bundle \(B\) on \(C_{k}\), we have_ \[H^{i}(C_{k}\times C_{m},(B\boxtimes N_{m,L})(-D_{k,m}))=H^{i}(C_{k},\bigwedge^{ m}M_{E_{k,L}}\otimes B)\ \text{ for }i\geq 0.\] Proof.: By Lemma 3.1, \[H^{i}(C_{m},N_{m,L(-\xi)})=\bigwedge^{m-i}H^{0}(C,L(-\xi))\otimes S^{i}H^{1}(C,L(-\xi)).\] for any \(\xi\in C_{k}\). Since \(\deg L(-\xi)\geq 2g-1\), it follows that \(H^{1}(C,L(-\xi))=0\). 
Thus we get \(H^{i}(C_{m},N_{m,L(-\xi)})=0\) for \(i>0\), so we obtain \(M_{k}^{i}N_{m,L}=0\) for \(i>0\). Note that \(M_{k}^{0}N_{m,L}\) is a vector bundle on \(C_{k}\) whose fiber is \(\bigwedge^{m}H^{0}(C,L(-\xi))\) over \(\xi\in C_{k}\). Applying \(\operatorname{pr}_{1,*}\) to the injective map \[(\mathscr{O}_{C_{k}}\boxtimes N_{m,L})(-D_{k,m})\xhookrightarrow\mathscr{O}_ {C_{k}}\boxtimes N_{m,L},\] we get an injective map \[M_{k}^{0}N_{m,L}\xhookrightarrow\bigwedge^{m}H^{0}(C,L)\otimes\mathscr{O}_{C _{k}},\] which looks like \[\bigwedge^{m}H^{0}(C,L(-\xi))\xhookrightarrow\bigwedge^{m}H^{0}(C,L)\] over \(\xi\in C_{k}\) fiberwisely. On the other hand, notice that \(L\) is \((k-1)\)-very ample. The injective map \[M_{E_{k,L}}\xhookrightarrow H^{0}(C,L)\otimes\mathscr{O}_{C_{k}}\] induces an injective map \[\bigwedge^{m}M_{E_{k,L}}\xhookrightarrow\bigwedge^{m}H^{0}(C,L)\otimes \mathscr{O}_{C_{k}},\] which looks like \[\bigwedge^{m}H^{0}(C,L(-\xi))\xhookrightarrow\bigwedge^{m}H^{0}(C,L)\] over \(\xi\in C_{k}\) fiberwisely. Thus we can conclude that \(M_{k}^{0}N_{m,L}=\bigwedge^{m}M_{E_{k,L}}\). Now, the second statement follows from the projection formula and the Leray spectral sequence for \(\operatorname{pr}_{1}\). From now on, as in [3] and [11], suppose that \[\deg L\geq 2g+2k+1.\] For an integer \(k\geq 0\), let \[B_{k}=B_{k}(L):=\mathbf{P}(E_{k+1,L})\] with the canonical projection \(\pi_{k}\colon B_{k}\to C_{k+1}\), and \(H_{k}\) be a tautological divisor so that \(\mathscr{O}_{B_{k}}(H_{k})=\mathscr{O}_{\mathbf{P}(E_{k+1,L})}(1)\). As \(E_{k+1,L}\) is globally generated, \(H_{k}\) is base point free. Note that \[H^{0}(B_{k},H_{k})=H^{0}(C_{k+1},E_{k+1,L})=H^{0}(C,L).\] The image of the morphism given by the complete linear system \(|H_{k}|\) is the \(k\)_-th secant variety_\(\Sigma_{k}\) of \(C\) in \(\mathbf{P}H^{0}(C,L)=\mathbf{P}^{r}\). Denote the induced map by \[\beta_{k}\colon B_{k}\longrightarrow\Sigma_{k},\] which is a resolution of singularities. By [11, Theorem 1.1], \(\Sigma_{k}\) has normal Du Bois singularities, and in particular, \[\beta_{k,*}\mathscr{O}_{B_{k}}=\mathscr{O}_{\Sigma_{k}}.\] By [12, Theorem 1.2], \(\Sigma_{k}\subseteq\mathbf{P}^{r}\) is arithmetically Cohen-Macaulay, and \(H^{2k+1}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(m))=0\) for \(m>0\). Note that \(\mathscr{O}_{B_{k}}(H_{k})=\beta^{*}\mathscr{O}_{\Sigma_{k}}(1)\). Put \(M_{H_{k}}:=\beta_{k}^{*}M_{\mathscr{O}_{\Sigma_{k}}(1)}\), which fits into a short exact sequence Set \(Z_{k-1}:=\beta_{k}^{-1}(\Sigma_{k-1})\), which is an irreducible effective divisor on \(B_{k}\). Then \(\beta_{k,*}\mathscr{O}_{B_{k}}(-Z_{k-1})=\mathscr{I}_{\Sigma_{k-1}|\Sigma_{k}}\). We have a commutative diagram Notice that \(\omega_{C_{k+1}}=N_{k+1,\omega_{C}}\). Then we have \[\omega_{B_{k}}=\mathscr{O}_{B_{k}}(-(k+1)H_{k})\otimes\pi_{k}^{*}(\omega_{C_{ k+1}}\otimes\det E_{k+1,L})=\mathscr{O}_{B_{k}}(-(k+1)H_{k})\otimes\pi_{k}^{*}S_{k+1, \omega_{C}\otimes L}(-2\delta_{k+1}).\] We will compute \(\omega_{\Sigma_{k}}\) in Proposition 3.6. On the other hand, the map \(\sigma_{k+1}\colon C_{k}\times C\to C_{k+1}\) provides a morphism \(\alpha_{k}\colon B_{k-1}\times C\to B_{k}\) birational onto its image (see [3, p. 432]). By [3, Lemma 1.1\((a)\)] (see [11, Subsection 3.2]), we have a commutative diagram (3.1) where the left vertical map is the first projection followed by \(\beta_{k-1}\). **Proposition 3.4** ([11]).: _We have the following:_ 1. 
\(\mathscr{O}_{B_{k}}(Z_{k-1})=\mathscr{O}_{B_{k}}((k+1)H_{k})\otimes\pi_{k}^{* }S_{k+1,L}(-2\delta_{k+1})^{-1}\) _and_ \(\omega_{B_{k}}(Z_{k-1})=\pi_{k}^{*}S_{k+1,\omega_{C}}\)_._ 2. \(R^{i}\beta_{k,*}\mathscr{O}_{B_{k}}(-Z_{k-1})=\begin{cases}\mathscr{I}_{\Sigma _{k-1}|\Sigma_{k}}&\text{if $i=0$}\\ 0&\text{if $i>0$}.\end{cases}\)__ 3. \(R^{i}\pi_{k,*}\big{/}^{j}M_{H_{k}}=\begin{cases}\bigwedge^{j}M_{E_{k+1,L}}& \text{if $i=0$}\\ 0&\text{if $i>0$}.\end{cases}\)__ Proof.: (1) The first assertion is [11, Proposition 3.5\((2)\)]. Note that \(\det E_{k+1,L}=S_{k+1,L}(-\delta_{k+1})\) and \(\omega_{C_{k+1}}=S_{k+1,\omega_{C}}(-\delta_{k+1})\). Since \(\omega_{B_{k}}=\mathscr{O}_{B_{k}}(-(k+1)H_{k+1})\otimes\pi_{k}^{*}S_{k+1, \omega_{C}\otimes L}(-2\delta_{k+1})\), the second assertion follows. (2) It is [11, Theorem 5.2\((2)\)]. (3) It is shown in [11, Proof of Lemma 5.1]. For reader's convenience, we give a sketch of the proof. We have a short exact sequence \[\begin{CD}0@>{}>{}>\pi_{k}^{*}M_{E_{k+1,L}}@>{}>{}>M_{H_{k}}@>{}>{}>K@>{}>{}>0, \end{CD}\] where \(K|_{\pi_{k}^{-1}(\xi)}=M_{\mathscr{O}_{\mathbf{p}^{k}}(1)}\) for \(\xi\in C_{k+1}\). By Bott vanishing, \[R^{i}\pi_{k,*}\bigwedge^{j}K=\begin{cases}\mathscr{O}_{C_{k+1}}&\text{if $i=0$ and $j=0$}\\ 0&\text{if $i>0$ or $j>0$}.\end{cases}\] Considering the filtration of \(\bigwedge^{j}M_{H_{k}}\) associated to the above short exact sequence, we obtain the assertion. **Lemma 3.5**.: _We have_ \[H^{i}(\Sigma_{k},\bigwedge^{j}M_{\mathscr{O}_{\Sigma_{k}}(1)} \otimes\mathscr{I}_{\Sigma_{k-1}|\Sigma_{k}}(1))\] \[=H^{i}(B_{k},\bigwedge^{j}M_{H_{k}}\otimes\mathscr{O}_{B_{k}}(H_{ k}-Z_{k-1}))\] \[=H^{2k+1-i}(B_{k},\bigwedge^{r-j}M_{H_{k}}\otimes\omega_{B_{k}}( Z_{k-1}))^{\vee}\] \[=H^{2k+1-i}(C_{k+1},\bigwedge^{r-j}M_{E_{k+1,L}}\otimes S_{k+1, \omega_{C}})^{\vee}\] \[=H^{2k+1-i}(C_{r-j}\times C_{k+1},(N_{r-j,L}\boxtimes S_{k+1, \omega_{C}})(-D_{r-j,k+1}))^{\vee}.\] Proof.: The first equality follows from Proposition 3.4 (2) and the projection formula. Note that \(\operatorname{rank}M_{H_{k}}=h^{0}(C,L)-1=r\) and \(\det M_{H_{k}}=\mathscr{O}_{B_{k}}(-H_{k})\). It follows that \(\bigwedge^{j}M_{H_{k}}^{\vee}=\bigwedge^{r-j}M_{H_{k}}\otimes\mathscr{O}_{B_{k }}(H_{k})\). Then the second equality follows from Serre duality. The third equality follows from Proposition 3.4 (1) and (3). The final equality follows from Lemma 3.3. Finally, we show some useful facts on the dualizing sheaf \(\omega_{\Sigma_{k}}\). The following proposition will not be used in the proof of Theorem 1.1 but will be used for some additional results. **Proposition 3.6** (Ein1).: _We have the following:_ 1. \(\beta_{k,*}\omega_{B_{k}}(Z_{k-1})=\omega_{\Sigma_{k}}\)_._ 2. _There is a short exact sequence_ \[0\xrightarrow{}\beta_{k,*}\omega_{B_{k}}\xrightarrow{}\omega_{\Sigma_{k}} \xrightarrow{}\beta_{k,*}\omega_{Z_{k-1}}\xrightarrow{}0.\] 3. \(H^{0}(\Sigma_{k},\omega_{\Sigma_{k}}(\ell))=H^{0}(C_{k+1},S^{\ell}E_{k+1,L} \otimes S_{k+1,\omega_{C}})\) _for all_ \(\ell\geq 0\)_._ 4. 
_If_ \(k\geq 2\)_, then_ \(H^{0}(\Sigma_{k},\omega_{\Sigma_{k}}(\ell))=H^{0}(\Sigma_{k-1},\beta_{k,*} \omega_{Z_{k-1}}(\ell))\) _for each_ \(0\leq\ell\leq k\)_._ Proof.: (1) By [11, Proposition 3.15], there is a log resolution \(b_{k}\colon\operatorname{bl}_{k}(B_{k})\to B_{k}\) of \((B_{k},Z_{k-1})\) constructed by Bertram in [3] such that \[b_{k}^{*}\omega_{B_{k}}(Z_{k-1})=\omega_{\operatorname{bl}_{k}(B_{k})}(E_{0}+ E_{1}+\cdots+E_{k-1}),\] where \(E_{0},E_{1},\ldots,E_{k-2}\) are \(\operatorname{bl}_{k}\)-exceptional divisors and \(E_{k-1}=\operatorname{bl}_{k,*}^{-1}Z_{k-1}\). We have \[(\beta_{k}\circ b_{k})_{*}\omega_{\operatorname{bl}_{k}(B_{k})}(E_{0}+E_{1}+ \cdots+E_{k-1})=\beta_{k,*}\omega_{B_{k}}(Z_{k-1}).\] Note that \(\beta_{k}\circ b_{k}\colon\operatorname{bl}_{k}(B_{k})\to\Sigma_{k}\) is a log resolution of \(\Sigma_{k}\). Since \(\Sigma_{k}\) has normal Cohen-Macaulay Du Bois singularities by [11, Theorems 1.1 and 1.2], it follows from [19, Theorem 1.1] that \[(\beta_{k}\circ b_{k})_{*}\omega_{\operatorname{bl}_{k}(B_{k})}(E_{0}+E_{1}+ \cdots+E_{k-1})=\omega_{\Sigma_{k}}.\] Thus \(\beta_{k,*}\omega_{B_{k}}(Z_{k-1})=\omega_{\Sigma_{k}}\). (2) We have a short exact sequence \[0\xrightarrow{}\omega_{B_{k}}\xrightarrow{}\omega_{B_{k}}(Z_{k-1}) \xrightarrow{}\omega_{Z_{k-1}}\xrightarrow{}0.\] By Grauert-Riemenschneider vanishing, \(R^{i}\beta_{k,*}\omega_{B_{k}}=0\) for \(i>0\). Applying \(\beta_{k,*}\) to the above short exact sequence, we obtain the assertion (2). (3) We have \(H^{0}(\Sigma_{k},\omega_{\Sigma_{k}}(\ell))=H^{0}(B_{k},\omega_{B_{k}}(Z_{k-1}+ \ell H_{k}))\). Recall from Proposition 3.4 (1) that \(\omega_{B_{k}}(Z_{k-1})=\pi_{k}^{*}S_{k+1,\omega_{C}}\). Thus \(H^{0}(B_{k},\omega_{B_{k}}(Z_{k-1}+\ell H_{k}))=H^{0}(C_{k+1},S^{\ell}E_{k+1,L} \otimes S_{k+1,\omega_{C}})\). (4) Since \(R^{i}\beta_{k,*}\omega_{B_{k}}=0\) for \(i>0\), we have \[H^{i}(\Sigma_{k},\beta_{k,*}\omega_{B_{k}}(\ell))=H^{i}(B_{k},\omega_{B_{k}}( \ell H_{k}))\ \ \text{for $i\geq 0$.}\] If \(k\geq 2\) and \(0\leq\ell\leq k\), we have \(H^{i}(B_{k},\omega_{B_{k}}(\ell H_{k}))=0\) for each \(i=0,1\). Thus the assertion (4) follows. ## 4. Proof of Main Theorem In this section, we prove Theorem 1.1. First, we recall the setting. Let \(C\) be a smooth projective curve of genus \(g\geq 2\), and \(L\) be a very ample line bundle on \(C\). Consider the \(k\)-th secant variety \(\Sigma_{k}\) of \(C\) in \(\mathbf{P}H^{0}(C,L)=\mathbf{P}^{r}\). Assume that \(\deg L\gg 0\). When \(k=0\) (i.e., \(\Sigma_{0}=C\)), Theorem 1.1 is the gonality conjecture established by Ein-Lazarsfeld [10] and Rathmann [24]. Thus we assume that \(k\geq 1\).2 Put \(e:=\operatorname{codim}\Sigma_{k}=r-2k-1\) and \(\gamma^{i}:=\gamma^{i}(C)\) for \(i\geq 0\). Fix an index \(k+1\leq q\leq 2k+2\). Footnote 2: By a small modification, our proof works for the case of \(k=0\). The vanishing part gives an alternative proof of the gonality conjecture. Indeed, when \(k=0\) and \(q=1\), we only need to verify (4.3b). 
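Before starting, it may help to record, purely as a reading aid restating what is implicit in Section 1 and in the argument below, how the ranges in Theorem 1.1 translate under the dual indices \(p^{*}:=e-p\) and \(q^{*}:=2k+2-q\) that appear throughout the proof:
\[e-g+1\leq p\leq e\iff 0\leq p^{*}\leq g-1,\qquad k+1\leq q\leq 2k+2\iff 0\leq q^{*}\leq k+1,\]
and, within this range, the assertion of Theorem 1.1 reads
\[K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\neq 0\iff p\leq e-\gamma^{q^{*}}\iff p^{*}\geq\gamma^{q^{*}}.\]
Accordingly, the vanishing part below treats the case \(p^{*}\leq\gamma^{q^{*}}-1\), and the nonvanishing part treats the case \(\gamma^{q^{*}}\leq p^{*}\leq g-1\).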
**Vanishing.** We show that \[K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))=H^{q-1}(\Sigma_{k},\bigwedge ^{p+q-1}M_{\mathscr{O}_{\Sigma_{k}}(1)}\otimes\mathscr{O}_{\Sigma_{k}}(1))= 0\ \ \text{for $p\geq e-\gamma^{2k+2-q}+1$.} \tag{4.1}\] Consider a short exact sequence This induces an exact sequence \[H^{q-1}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+1}}(1)} \otimes\mathscr{O}_{\Sigma_{k+1}}(1))\longrightarrow H^{q-1}(\Sigma_{k}, \bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k}}(1)}\otimes\mathscr{O}_{\Sigma_{ k}}(1))\\ \longrightarrow H^{q}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{ \Sigma_{k+1}}(1)}\otimes\mathscr{I}_{\Sigma_{k}|\Sigma_{k+1}}(1))\longrightarrow H ^{q}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+1}}(1)}\otimes \mathscr{O}_{\Sigma_{k+1}}(1)). \tag{4.2}\] It suffices to prove that \[H^{q-1}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+ 1}}(1)}\otimes\mathscr{O}_{\Sigma_{k+1}}(1))=0; \tag{4.3b}\] \[H^{q}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+1} }(1)}\otimes\mathscr{I}_{\Sigma_{k}|\Sigma_{k+1}}(1))=0. \tag{4.3a}\] First, we check (4.3b). By Lemma 3.5, \[H^{q}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+1} }(1)}\otimes\mathscr{I}_{\Sigma_{k}|\Sigma_{k+1}}(1))\] \[=H^{q^{*}+1}(C_{p^{*}+q^{*}}\times C_{k+2},(N_{p^{*}+q^{*},L} \boxtimes S_{k+2,\omega_{C}})(-D_{p^{*}+q^{*},k+2})),\] where \(p^{*}:=e-p\leq\gamma^{q^{*}}-1\) and \(0\leq q^{*}:=2k+2-q\leq k+1\). By the Leray spectral sequence for \(\operatorname{pr}_{1}\colon C_{p^{*}+q^{*}}\times C_{k+2}\to C_{p^{*}+q^{*}}\), it is enough to confirm that \[H^{i}(C_{p^{*}+q^{*}},M_{p^{*}+q^{*}}^{q^{*}+1-i}S_{k+2,\omega_{C}}\otimes N_{ p^{*}+q^{*},L})=0\ \ \text{for $0\leq i\leq q^{*}+1$.}\] When \(i>0\), this follows from Lemma 3.2. For the case \(i=0\), we apply Lemma 3.1 to see that \[H^{q^{*}+1}(C_{k+2},S_{k+2,\omega_{C}(-\xi)})=S^{k+1-q^{*}}H^{0}(C,\omega_{C}(- \xi))\otimes\bigwedge^{q^{*}+1}H^{1}(C,\omega_{C}(-\xi))\] for any \(\xi\in C_{p^{*}+q^{*}}\). Proposition 2.4 says that \(H^{1}(C,\omega_{C}(-\xi))\leq q^{*}\) since \(\gamma^{q^{*}}\geq p^{*}+1\), so we obtain \[H^{q^{*}+1}(C_{k+2},S_{k+2,\omega_{C}(-\xi)})=0.\] Thus \(M_{p^{*}+q^{*}}^{q^{*}+1}S_{k+2,\omega_{C}}=0\), and we obtain (4.3b). To finish the proof of (4.1), we proceed by induction on \(q-k-1\). If \(q=k+1\), then clearly \[H^{k}(\Sigma_{k+1},\bigwedge^{p+k}M_{\mathscr{O}_{\Sigma_{k+1}}(1)}\otimes \mathscr{O}_{\Sigma_{k+1}}(1))=K_{p,k+1}(\Sigma_{k+1},\mathscr{O}_{\Sigma_{k+1 }}(1))=0,\] i.e., (4.3a) holds. Thus (4.1) follows in this case. Suppose that \(q\geq k+2\). Lemma 2.2 (1) implies that \(e-\gamma^{2k+2-q}+1\geq(e-2)-\gamma^{2k+4-q}+1\). By induction hypothesis, \[H^{q-1}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+1}}(1)}\otimes \mathscr{O}_{\Sigma_{k}}(1))=K_{p,q}(\Sigma_{k+1},\mathscr{O}_{\Sigma_{k+1}}( 1))=0,\] i.e., (4.3a) holds. Thus (4.1) follows. ### Nonvanishing We show that \[K_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))=H^{q-1}(\Sigma_{k},\bigwedge^{ p+q-1}M_{\mathscr{O}_{\Sigma_{k}}(1)}\otimes\mathscr{O}_{\Sigma_{k}}(1))\neq 0 \ \text{ for }e-g+1\leq p\leq e-\gamma^{2k+2-q}. \tag{4.4}\] We have a commutative diagram with exact rows This gives a commutative diagram It is enough to prove that the map \(\varphi\) is nonzero. 
For this purpose, considering the commutative diagram (3.1), we regard \(\varphi\) as a map \[\varphi\colon H^{q-1}(B_{k},\bigwedge^{p+q-1}M_{H_{k}}\otimes \mathscr{O}_{B_{k}}(H_{k}-Z_{k-1}))\otimes H^{0}(C,\mathscr{O}_{C})\\ \longrightarrow H^{q}(B_{k+1},\bigwedge^{p+q-1}M_{H_{k+1}}\otimes \mathscr{O}_{B_{k+1}}(H_{k+1}-Z_{k}))\] In view of Lemma 3.5, the map \(\varphi\) is dual to the map \[\varphi^{\vee}\colon H^{q^{*}+1}(C_{p^{*}+q^{*}}\times C_{k+2},(N _{p^{*}+q^{*},L}\boxtimes S_{k+2,\omega_{C}})(-D_{p^{*}+q^{*},k+2}))\\ \longrightarrow H^{q^{*}}(C_{p^{*}+q^{*}}\times C_{k+1},(N_{p^{*}+q ^{*},L}\boxtimes S_{k+1,\omega_{C}})(-D_{p^{*}+q^{*},k+1}))\otimes H^{1}(C, \omega_{C}),\] where \(\gamma^{q^{*}}\leq p^{*}:=e-p\leq g-1\) and \(0\leq q^{*}:=2k+2-q\leq k+1\). Notice that this map is induced from an injective map \[(\text{id}_{C_{p^{*}+q^{*}}}\times\sigma_{k+2})^{*}(N_{p^{*}+q^{*},L} \boxtimes S_{k+2,\omega_{C}})(-D_{p^{*}+q^{*},k+2})\longleftrightarrow(N_{p^ {*}+q^{*},L}\boxtimes S_{k+1,\omega_{C}})(-D_{p^{*}+q^{*},k+1})\boxtimes \omega_{C}\] of line bundles on \(C_{p^{*}+q^{*}}\times C_{k+1}\times C\). Lemma 3.2 says that \[H^{i}(C_{p^{*}+q^{*}},M_{p^{*}+q^{*}}^{j}(S_{\ell,\omega_{C}})\otimes N_{p^{*} +q^{*},L})=0\ \text{ for }i>0,\ j\geq 0,\ \ell=k+1\text{ or }k+2.\] By the Leray spectral sequences for \(\mathrm{pr}_{1}\), we may think that \(\varphi^{\vee}\) is a map \[\varphi^{\vee}\colon H^{0}(C_{p^{*}+q^{*}},M^{q^{*}+1}_{p^{*}+q^{*}} S_{k+2,\omega_{C}}\otimes N_{p^{*}+q^{*},L})\\ \longrightarrow H^{0}(C_{p^{*}+q^{*}},M^{q^{*}}_{p^{*}+q^{*}}S_{k+1, \omega_{C}}\otimes N_{p^{*}+q^{*},L})\otimes H^{1}(C,\omega_{C}).\] Notice that this map is induced from a map \[\psi\colon M^{q^{*}+1}_{p^{*}+q^{*}}S_{k+2,\omega_{C}}\longrightarrow M^{q^{* }}_{p^{*}+q^{*}}S_{k+1,\omega_{C}}\otimes H^{1}(C,\omega_{C})\] of coherent sheaves on \(C_{p^{*}+q^{*}}\) tensoring by \(N_{p^{*}+q^{*},L}\). As \(N_{p^{*}+q^{*},L}\) is sufficiently positive, to prove that the map \(\varphi^{\vee}\) is nonzero, it suffices to confirm that the map \(\psi\) is nonzero. To this end, we apply Proposition 2.4 to see that \(\omega_{C}\) fails to be \((q^{*}-1)\)-weakly \(p^{*}\)-very ample since \(p^{*}\geq\gamma^{q^{*}}\). Then Lemma 2.5 gives an effective divisor \(\xi\in C_{p^{*}+q^{*}}\) on \(C\) such that \[h^{0}(C,\omega_{C}(-\xi))=g-p^{*}\geq 1\ \text{ and }\ h^{1}(C,\omega_{C}(-\xi))=q^{*}+1.\] By Lemma 3.1, \[H^{i}(C_{\ell},S_{\ell,\omega_{C}(-\xi)})=S^{\ell-i}H^{0}(C,\omega_{C}(-\xi)) \otimes\bigwedge^{i}H^{1}(C,\omega_{C}(-\xi)),\] so this cohomology vanishes when \(i\geq q^{*}+2\). By semicontinuity, \(h^{1}(C,\omega_{C}(-\xi^{\prime}))\leq q^{*}+1\) (and hence \(H^{q^{*}+2}(C_{\ell},S_{\ell,\omega_{C}(-\xi^{\prime})})=0\)) for \(\xi^{\prime}\) in a neighborhood of \(\xi\) in \(C_{p^{*}+q^{*}}\). By the cohomology and base change, \[\rho(\xi)^{q^{*}+1}\colon M^{q^{*}+1}_{p^{*}+q^{*}}S_{k+2,\omega_{C}}\otimes \mathbf{k}(\xi)\longrightarrow H^{q^{*}+1}(C_{k+2},S_{k+2,\omega_{C}(-\xi)})\] is an isomorphism. We have a commutative diagram We reduce the problem to checking that the bottom map is nonzero. To this end, note that the bottom map can be identified with the map \[\mathrm{id}_{S^{k+1-q^{*}}H^{0}(C,\omega_{C}(-\xi))}\otimes\delta \colon S^{k+1-q^{*}}H^{0}(C,\omega_{C}(-\xi))\otimes\bigwedge^{q^{*}+ 1}H^{1}(C,\omega_{C}(-\xi))\\ \longrightarrow S^{k+1-q^{*}}H^{0}(C,\omega_{C}(-\xi))\otimes \bigwedge^{q^{*}}H^{1}(C,\omega_{C}(-\xi))\otimes H^{1}(C,\omega_{C}),\] where \(\delta\) is a Koszul-like map. 
For a surjective map \[\eta\colon H^{1}(C,\omega_{C}(-\xi))\stackrel{{\cdot\xi}}{{ \longrightarrow}}H^{1}(C,\omega_{C}),\] let \(s_{1},\ldots,s_{q^{*}+1}\) be a basis of \(H^{1}(C,\omega_{C}(-\xi))\) with \(\eta(s_{1})=\cdots=\eta(s_{q^{*}})=0\) but \(\eta(s_{q^{*}+1})\neq 0\). Then \[\delta(s_{1}\wedge\cdots\wedge s_{q^{*}}\wedge s_{q^{*}+1})=(-1)^{q^{*}}s_{1} \wedge\cdots\wedge s_{q^{*}}\otimes\eta(s_{q^{*}+1})\neq 0.\] Thus the bottom map \(\mathrm{id}_{S^{k+1-q^{*}}H^{0}(C,\omega_{C}(-\xi))}\otimes\delta\) in the above commutative diagram is nonzero. Therefore, the map \(\varphi^{\vee}\) (and hence \(\varphi\)) is nonzero, so (4.4) follows. ## 5. Complements and Questions In this section, we present some additonal results and problems. We keep using the notations in the previous section. Let \(C\) be a smooth projective curve of genus \(g\geq 2\), and \(L\) be a line bundle on \(C\) with \(\deg L\geq 2g+2k+1\). We denote by \(\Sigma_{k}\) the \(k\)-th secant variety of \(C\) in \(\mathbf{P}^{2k+1+e}=\mathbf{P}H^{0}(C,L)\). Consider the case that \(k=0\). Recall from [24, Theorem 1.1] that if \(H^{1}(C,L\otimes\omega_{C}^{-1})=0\), then \[K_{p,1}(C,L)\neq 0\iff 1\leq p\leq e-\operatorname{gon}(C)+1.\] Recall from [16, Theorem (4.a.1)], [17, Theorem 2] that if \(H^{0}(C,L\otimes\omega_{C}^{-1})\neq 0\), then \[K_{p,2}(C,L)\neq 0\iff e-g+1\leq p\leq e.\] Thus Theorem 1.1 holds for \(k=0\) as soon as \(\deg L\geq 4g-3\). **Problem 5.1**.: Find an effective bound for \(\deg L\) such that the conclusion of Theorem 1.1 holds. We do not attempt to make a conjecture for what the best bound for \(\deg L\) should be, but we expect that it would be linear in \(g\). Here we give answers for some partial cases. ### Effective Nonvanishing for \(q=k+1\) Recall from Lemma 2.2 (2) that \(\gamma^{k+1}(C)=g\) for \(k\geq g-1\). If \(k\geq g-1\) and \(\deg L\geq 2g+2k+1\), then [11, Theorem 1.2] implies that \[K_{p,k+1}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\neq 0\ \ \text{for}\ 1\leq p\leq e-\gamma^{k+1}(C).\] Thus we assume that \(k\leq g-2\). On the other hand, Sidman-Vermeire [25, Theorem 1.2] proved that if \(L=L_{1}\otimes L_{2}\), where \(L_{1},L_{2}\) are line bundles on \(C\) with \(s+1:=h^{0}(C,L_{1})\geq k+2\) and \(t+1:=h^{0}(C,L_{2})\geq k+2\), then \[K_{p,k+1}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\neq 0\ \ \text{for}\ 1\leq p\leq s+t-2k-1.\] This yields the following effective nonvanishing statement: **Proposition 5.2**.: _Assume that \(k\leq g-2\) and \(\deg L\geq 2g+\gamma^{k+1}(C)+k\). Then_ \[K_{p,k+1}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\neq 0\ \ \text{for}\ 1\leq p\leq e-\gamma^{k+1}(C).\] Proof.: By Lemma 2.2 (2), \(\gamma^{k+1}(C)\geq k+1\), so \(\deg L\geq 2g+2k+1\). We write \(\deg L=2g+\gamma^{k+1}(C)+k+\ell\) for some integer \(\ell\geq 0\). Then \(e=g+\gamma^{k+1}(C)+\ell-k-1\). Lemma 2.5 gives a line bundle \(L_{1}\) on \(C\) with \(\deg L_{1}=\gamma^{k+1}(C)+k+1\) and \(s+1:=h^{0}(C,L_{1})=k+2\). Let \(L_{2}:=L\otimes L_{1}^{-1}\) so that \(L=L_{1}\otimes L_{2}\). Then \(\deg L_{2}=2g-1+\ell\) and \(t+1:=h^{0}(C,L_{2})=g+\ell\). Note that \[s+t-2k-1=g+\ell-k-1=e-\gamma^{k+1}(C).\] Thus the proposition follows from [25, Theorem 1.2]. _Remark 5.3_.: Assume that \(k\leq g-2\). By Lemma 2.2 (2), \(\gamma^{k+1}(C)\leq g-1\). Then Proposition 5.2 holds when \(\deg L\geq 4g-3\). ### Effective Nonvanishing for \(q=2k+2\) Assume that \(\deg L\geq 2g+2k+1\). 
By duality, we have \[K_{p,2k+2}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))=K_{e-p,0}(\Sigma_{k}, \omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1))^{\vee}.\] Note that if \(K_{g-1,0}(\Sigma_{k},\omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1))\neq 0\), then \(K_{p,2k+2}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\neq 0\) for \(e-g+1\leq p\leq e\). We need to find an effective bound on \(\deg L\) for \[K_{g-1,0}(\Sigma_{k},\omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1))=H^{0}( \Sigma_{k},\bigwedge^{g-1}M_{\mathscr{O}_{\Sigma_{k}}(1)}\otimes\omega_{\Sigma _{k}})\neq 0.\] Notice that \(K_{g-1,0}(\Sigma_{k},\omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1))\) is the kernel of the Koszul differential \[\delta\colon\bigwedge^{g-1}H^{0}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\otimes H ^{0}(\Sigma_{k},\omega_{\Sigma_{k}})\longrightarrow\bigwedge^{g-2}H^{0}( \Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\otimes H^{0}(\Sigma_{k},\omega_{ \Sigma_{k}}(1)).\] In view of Proposition 3.6, \(\delta\) can be identified with the map \[\delta\colon\bigwedge^{g-1}H^{0}(C,L)\otimes S^{k+1}H^{0}(C,\omega_{C}) \longrightarrow\bigwedge^{g-2}H^{0}(C,L)\otimes H^{0}(C,L\otimes\omega_{C}) \otimes S^{k}H^{0}(C,\omega_{C})\] given by \[\delta(s_{1}\wedge\cdots\wedge s_{g-1}\otimes f)=\sum_{i=1}^{g-1}\sum_{j=1}^{ g}(-1)^{i-1}s_{1}\wedge\cdots\wedge\widehat{s_{i}}\wedge\cdots\wedge s_{g-1} \otimes s_{i}x_{j}\otimes\frac{\partial f}{\partial x_{j}},\] where \(x_{1},\ldots,x_{g}\) is a basis of \(H^{0}(C,\omega_{C})\). The following gives an answer to a question of Sidman-Vermeire in [26, p.164]. **Proposition 5.4**.: _We have the following:_ 1. _If_ \(k\) _is even, then there is an injective map_ \[S^{g-1}H^{0}(C,L\otimes\omega_{C}^{-k-1})\longleftrightarrow K_{g-1,0}(\Sigma _{k},\omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1)).\] 2. _If_ \(k\) _is odd, then there is an injective map_ \[\bigwedge^{g-1}H^{0}(C,L\otimes\omega_{C}^{-k-1})\longleftrightarrow K_{g-1,0}( \Sigma_{k},\omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1)).\] _In particular, if_ \[h^{0}(C,L\otimes\omega_{C}^{-k-1})\geq\begin{cases}1&\text{when $k$ is even}\\ g-1&\text{when $k$ is odd},\end{cases}\] _then_ \[K_{p,2k+2}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\neq 0\ \text{ for $e-g+1\leq p\leq e$}.\] Proof.: First, we recall some notations from multilinear algebra. Let \(V\) be a vector space over \(\mathbf{k}\), and \[T^{m}V:=\underbrace{V\otimes\cdots\otimes V}_{m\text{ times}}\ \text{ for any integer $m\geq 0$}.\] Since \(\operatorname{char}(\mathbf{k})=0\), there are natural splitting injective \(\mathbf{k}\)-linear maps \[\operatorname{alt} \colon\bigwedge^{m}V\longleftrightarrow T^{m}V,\ v_{1}\wedge \cdots\wedge v_{m}\longmapsto\sum_{\sigma\in\mathfrak{S}_{m}}\operatorname{ sign}(\sigma)v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(m)};\] \[\operatorname{sym} \colon S^{m}V\longleftrightarrow T^{m}V,\ v_{1}\cdots v_{m} \longmapsto\sum_{\sigma\in\mathfrak{S}_{m}}v_{\sigma(1)}\otimes\cdots \otimes v_{\sigma(m)}.\] Put \(\operatorname{Alt}^{m}V:=\operatorname{alt}(\bigwedge^{m}V)\) and \(\operatorname{Sym}^{m}V:=\operatorname{sym}(S^{m}V)\). 
Now, write \(L_{0}:=L\otimes\omega_{C}^{-k-1}\), and let \[R:=\bigoplus_{i,j\geq 0}H^{0}(C,L_{0}^{i}\otimes\omega_{C}^{j}).\] where the bottom map \(d\) is defined by \[d(s_{1}\otimes\cdots\otimes s_{g-2}\otimes s_{g-1}\otimes f)=\sum_{i=1}^{g}s_{1} \otimes\cdots\otimes s_{g-2}\otimes s_{g-1}x_{i}\otimes\frac{\partial f}{ \partial x_{i}}.\] Notice that there is a canonical ring structure on \(T^{g-1}R\otimes S^{*}H^{0}(C,\omega_{C})\) and the operator \(d\) on \(T^{g-1}R\otimes S^{*}H^{0}(C,\omega_{C})\) satisfies the chain rule. Consider the alternating tensor \[\operatorname{alt}(x_{1}\otimes\cdots\otimes x_{g})\in\operatorname{Alt}^{g} H^{0}(C,\omega_{C})\subseteq T^{g-1}R\otimes S^{*}H^{0}(C,\omega_{C}).\] Suppose that \(k\) is even. We may assume that \(H^{0}(C,L\otimes\omega_{C}^{-k-1})=H^{0}(C,L_{0})\neq 0\). Let \(\alpha_{0}\in S^{g-1}H^{0}(C,L_{0})\) be any nonzero element, and \[\alpha:=(\operatorname{sym}(\alpha_{0})\otimes 1)(\operatorname{alt}(x_{1} \otimes\cdots\otimes x_{g}))^{k+1}\in T^{g-1}R\otimes S^{*}H^{0}(C,\omega_{C}).\] On the factor \(T^{g-1}R\), the element \(\operatorname{sym}(\alpha_{0})\otimes 1\) is symmetric, and the element \(\operatorname{alt}(x_{1}\otimes\cdots\otimes x_{g})\) is alternating. Thus \(\alpha\) is alternating, that is, \(\alpha\in\operatorname{Alt}^{g-1}H^{0}(C,L)\otimes S^{k+1}H^{0}(C,\omega_{C})\). On the other hand, by the chain rule, \(d\alpha=0\) since \(d(\operatorname{sym}(\alpha_{0})\otimes 1)=0\) and \(d(\operatorname{alt}(x_{1}\otimes\cdots\otimes x_{g}))=0\). As \(T^{g-1}R\otimes S^{*}H^{0}(C,\omega_{C})\) is an integral domain, \(\alpha\) is a nonzero element. We have shown that there is an element \(\alpha^{\prime}\in\bigwedge^{g-1}H^{0}(C,L)\otimes S^{k+1}H^{0}(C,\omega_{C})\) such that \(\delta(\alpha^{\prime})=0\). By sending \(\alpha_{0}\) to \(\alpha^{\prime}\), we obtain the injective map in (1). Suppose that \(k\) is odd. Replacing \(\operatorname{sym}(\alpha_{0})\) with \(\operatorname{alt}(\alpha_{0})\) in the definition of \(\alpha\), we obtain the injective map in (2). If \(C\) is a hyperelliptic curve, then there is a morphism \(\tau\colon C\to\mathbf{P}^{1}\) of degree two such that \(\tau^{*}\mathscr{O}_{\mathbf{P}^{1}}(g-1)=\omega_{C}\). Let \(P:=\tau^{*}\mathscr{O}_{\mathbf{P}^{1}}(1)\) so that \(\omega_{C}=P^{g-1}\). In this case, we can improve the previous proposition as follows. **Proposition 5.5**.: _Assume that \(C\) is a hyperelliptic curve. If \(H^{0}(C,L\otimes P^{-g-k+1})\neq 0\), then_ \[K_{p,0}(\Sigma_{k},\omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1))\neq 0\ \ \text{for}\ 0\leq p\leq g-1.\] Proof.: Let \(A:=\mathscr{O}_{\mathbf{P}^{1}}(g-1)\) and \(B:=\mathscr{O}_{\mathbf{P}^{1}}(g+k-1)\). Then \(H^{0}(C,L\otimes\tau^{*}B^{-1})\neq 0\). We have a commutative diagram It suffices to show that the upper horizontal map \(\delta^{\prime}\) is not injective. To this end, notice that \(\delta^{\prime}\) can be identified with the map \[\delta^{\prime}\colon\bigwedge^{g-1}H^{0}(\mathbf{P}^{1},B)\otimes H^{0}( \mathbf{P}^{k+1},S_{k+1,A})\longrightarrow\bigwedge^{g-2}H^{0}(\mathbf{P}^{1},B)\otimes H^{0}(\mathbf{P}^{k+1},E_{k+1,B}\otimes S_{k+1,A})\] by regarding \(\mathbf{P}^{k+1}=(\mathbf{P}^{1})_{k+1}\). 
Thus we obtain \[\ker(\delta^{\prime})=H^{0}(\mathbf{P}^{k+1},\bigwedge^{g-1}M_{E_{k+1,B}} \otimes S_{k+1,A}).\] Since \[M_{E_{k+1,B}}=H^{0}(\mathbf{P}^{1},\mathscr{O}_{\mathbf{P}^{1}}(g-2))\otimes \mathscr{O}_{\mathbf{P}^{k+1}}(-1)\ \ \text{and}\ \ S_{k+1,A}=\mathscr{O}_{\mathbf{P}^{k+1}}(g-1),\] it follows that \(\ker(\delta^{\prime})=H^{0}(\mathbf{P}^{k+1},\mathscr{O}_{\mathbf{P}^{k+1}})\neq 0\). **P _Remark 5.6_.: The last part of Proposition 5.4 holds as soon as \[\deg L\geq\begin{cases}g+(k+1)(2g-2)&\text{when $k$ is even}\\ 2g-2+(k+1)(2g-2)&\text{when $k$ is odd}.\end{cases}\] When \(C\) is hyperelliptic, Proposition 5.5 holds as soon as \(\deg L\geq 3g+2k-2\). _Example 5.7_.: Suppose that \(C\) is nonhyperelliptic and \(L:=\omega_{C}(D)\), where \(D\) is a general divisor of degree \(g-1\) so that \(h^{0}(C,\mathscr{O}_{C}(D))=h^{1}(C,\mathscr{O}_{C}(D))=0\). Then \(\deg L=3g-3\), and \(K_{g-1,0}(C,\omega_{C};L)=0\) by [17, Theorem 2]. For an integer \(1\leq k\leq(g-4)/2\), we have \(\deg L\geq 2g+2k+1\). Consider the commutative diagram where \(m\colon S^{k+1}H^{0}(C,\omega_{C})\to H^{0}(C,\omega_{C})\otimes S^{k}H^{0}(C, \omega_{C})\) is given by \(m(f)=\sum_{j=1}^{g}x_{j}\otimes\partial f/\partial x_{j}\). As \(K_{g-1,0}(C,\omega_{C};L)\) is the kernel of the Koszul differential \[\delta\colon\bigwedge^{g-1}H^{0}(C,L)\otimes H^{0}(C,\omega_{C})\longrightarrow \bigwedge^{g-2}H^{0}(C,L)\otimes H^{0}(C,L\otimes\omega_{C}),\] we see that \(K_{g-1,0}(\Sigma_{k},\omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1))\subseteq K _{g-1,0}(C,\omega_{C};L)\otimes S^{k}H^{0}(C,\omega_{C})\). Thus we obtain \(K_{g-1,0}(\Sigma_{k},\omega_{\Sigma_{k}};\mathscr{O}_{\Sigma_{k}}(1))=0\) in this case. **Effective Vanishing for \(q=2k+1\).** Let \(c:=\gamma^{1}(C)=\operatorname{gon}(C)-1\). Then \(\omega_{C}\) is \((c-1)\)-very ample. For any \(1\leq p\leq c\), as \(h^{0}(C,\omega_{C}(-\xi))=g-p\) for all \(\xi\in C_{p}\), we see that \(M_{E_{p,\omega_{C}}}=\operatorname{pr}_{1,*}(\mathscr{O}_{C_{p}}\boxtimes \omega_{C})(-D_{p,1})\) is a vector bundle on \(C_{p}\). First, we prove the following vanishing result: **Proposition 5.8** (cf. [10, Proposition 2.1]).: _Assume that \(\deg L\geq(c^{2}+kc+k+1)(g-1)+1\). Then_ \[H^{i}(C_{p},S^{k}M_{E_{p,\omega_{C}}}\otimes N_{p,L})=0\ \ \text{for $i>0$ and $1\leq p\leq c$}.\] Proof.: Let \(V\subseteq H^{0}(C,\omega_{C})\) be a general subspace of dimension \(2p\) so that the evaluation map \(\operatorname{ev}\colon V\otimes C_{p}\to E_{p,\omega_{C}}\) is surjective, and \(M_{V}\) be the kernel of the evaluation map. Then \(M_{V}\) is a vector bundle of rank \(p\) on \(C_{p}\). We have a short exact sequence By considering the filtration of \(S^{k}M_{E_{p,\omega_{C}}}\) associated to this short exact sequence, we reduce the problem to proving that \[H^{i}(C_{p},S^{j}M_{V}\otimes N_{p,L})=0\ \ \text{for $i>0$ and $0\leq j\leq k$}. \tag{5.1}\] Notice that \(M_{V}\otimes N_{p,\omega_{C}}\) is globally generated and \(A_{j}:=N_{p,L}\otimes N_{p,\omega_{C}}^{-(p+j)}\) is ample for \(0\leq j\leq k\) (see [10, Proof of Proposition 2.1]). Then \[S^{j}M_{V}\otimes N_{p,L}=N_{p,\omega_{C}}\otimes S^{j}(M_{V}\otimes N_{p, \omega_{C}})\otimes\det(M_{V}\otimes N_{p,\omega_{C}})\otimes A_{j}.\] As \(\omega_{C_{p}}=N_{p,\omega_{C}}\), the required cohomology vanishing (5.1) follows from Griffiths vanishing [21, Variant 7.3.2]. **Proposition 5.9**.: _Assume that \(\deg L\geq\big{(}c^{2}+(c+1)(k+1+\lfloor c/2\rfloor)+1\big{)}(g-1)+1\). 
Then_ \[K_{p,2k+1}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))=0\ \ \text{for $p\geq e-c+1$}.\] Proof.: Arguing as in the proof of Theorem 1.1, we reduce the problem to proving that \[H^{2k+1}(\Sigma_{k+i},\bigwedge^{p+2k}M_{\mathscr{O}_{\Sigma_{k+i}}(1)}\otimes \mathscr{I}_{\Sigma_{k+i-1}|\Sigma_{k+i}}(1))=0\ \ \text{for $i\geq 1$},\] which is equivalent to \[H^{2i}(C_{p^{*}}\times C_{k+1+i},(N_{p^{*},L}\boxtimes S_{k+1+i,\omega_{C}})(-D _{p^{*},k+1+i}))=0\ \ \text{for $i\geq 1$},\] where \(p^{*}:=e-p+1\leq c\), by Lemma 3.5. Using Lemma 3.1, a similar argument of the proof of Lemma 3.3 yields that \[M_{p^{*}}^{j}S_{k+1+i,\omega_{C}}=\begin{cases}S^{k+1+i}M_{E_{p^{*},\omega_{C} }}&\text{if $j=0$}\\ S^{k+i}M_{E_{p^{*},\omega_{C}}}&\text{if $j=1$}\\ 0&\text{if $j\geq 2$}.\end{cases}\] Then it is enough to show that \[H^{2i-1+j}(C_{p^{*}},S^{k+i+j}M_{E_{p^{*},\omega_{C}}}\otimes N_{p^{*},L})=0 \ \text{for $i\geq 1$ and $j=0,1$},\] but this follows from Proposition 5.8. _Remark 5.10_.: In view of [11, Theorem 4.1] and [24, Theorem 3.1], we expect that Propositions 5.8 and 5.9 hold under a much weaker assumption. _Example 5.11_.: Let \(C\) be a smooth plane quartic curve. Then the genus \(g\) of \(C\) is \(3\), and \(\gamma^{0}(C)=0,\gamma^{1}(C)=2,\gamma^{2}(C)=2\). Let \(L_{1}:=\omega_{C}^{3},L_{2}:=L_{1}(-x),L_{3}:=L_{1}(-x-y)\), where \(x,y\) are random points on \(C\). Note that \(\deg L_{1}=12,\ \deg L_{2}=11,\ \deg L_{3}=10\). A Macaulay2 [15] computation shows that the Betti tables of \(R(\Sigma_{1},\mathscr{O}_{\Sigma_{1}}(1))\) for \(L=L_{1},L_{2},L_{3}\) are the following: When \(L=L_{3}\), we see that \(K_{g-1,0}(\Sigma_{1},\omega_{\Sigma_{1}};\mathscr{O}_{\Sigma_{1}}(1))=0\). In this case, \(h^{0}(C,L\otimes\omega_{C}^{-2})=h^{0}(C,\omega_{C}(-x-y))=1<2=g-1\). This shows that the condition in Proposition 5.4 is sharp. On the other hand, notice that \(K_{1,1}(\Sigma_{1},\omega_{\Sigma_{1}};\mathscr{O}_{\Sigma_{1}}(1))\neq 0\) for \(L=L_{2},L_{3}\); in other words, the conclusion of Proposition 5.9 does not hold. However, \(K_{1,1}(C,\omega_{C};L)=0\) for \(L=L_{2},L_{3}\) by [24, Theorem 1.1] since \(\deg L\geq 9=4g-3\). We now turn to the quantitative study of the nonzero Betti numbers \[\kappa_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1)):=\dim K_{p,q}(\Sigma_{k },\mathscr{O}_{\Sigma_{k}}(1)).\] It would be exceedingly interesting to know whether there is a uniform asymptotic behavior of \(\kappa_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) as the positivity of \(L\) grows. If so, one may further ask what kind of geometry of \(C\) is related to this asymptotic behavior. For integers \(m,\ell\geq 1\), we define \[\mathcal{L}_{m}^{\ell}=\mathcal{L}_{m}^{\ell}(C):=\{\xi\in C_{m}\mid h^{1}(C,\omega_{C}(-\xi))\geq\ell\}.\] Let \(e:=\operatorname{codim}\Sigma_{k}\). **Proposition 5.12**.: _Fix an integer \(k+1\leq q\leq 2k+1\). Assume that \(L=L_{d}:=\mathscr{O}_{C}(dA+P)\) for an integer \(d\gg 0\), where \(A\) is an ample divisor on \(C\) and \(P\) is any divisor on \(C\). Then \(\kappa_{e-\gamma^{2k+2-q}(C),q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) is a polynomial in \(d\) of degree \(\dim\mathscr{L}_{2k+2-q+\gamma^{2k+2-q}(C)}^{2k+3-q}(C)\)._ Proof.: Put \(p:=e-\gamma^{2k+2-q}(C)\). 
By Theorem 1.1, \[H^{q-1}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+1 }}(1)}\otimes\mathscr{O}_{\Sigma_{k+1}}(1))=K_{p,q}(\Sigma_{k+1},\mathscr{O}_{ \Sigma_{k+1}}(1))=0;\] \[H^{q}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+1 }}(1)}\otimes\mathscr{O}_{\Sigma_{k+1}}(1))=K_{p-1,q+1}(\Sigma_{k+1},\mathscr{O }_{\Sigma_{k+1}}(1))=0.\] Then the exact sequence (4.2) shows that \[\kappa_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))=h^{q-1}(\Sigma_{k}, \bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k}}(1)}\otimes\mathscr{O}_{\Sigma_{k} }(1))=h^{q}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+1}}(1)} \otimes\mathscr{I}_{\Sigma_{k}|\Sigma_{k+1}}(1)).\] By Lemmas 3.2 and 3.5 and the Leray spectral sequence for \(\operatorname{pr}_{1}\colon C_{p^{*}+q^{*}}\times C_{k+2}\to C_{p^{*}+q^{*}}\), we have \[h^{q}(\Sigma_{k+1},\bigwedge^{p+q-1}M_{\mathscr{O}_{\Sigma_{k+1}}(1)}\otimes \mathscr{I}_{\Sigma_{k}|\Sigma_{k+1}}(1))=h^{0}(C_{p^{*}+q^{*}},M_{p^{*}+q^{*} }^{q^{*}+1}S_{k+2,\omega_{C}}\otimes N_{p^{*}+q^{*},L}),\] where \(p^{*}:=e-p\) and \(q^{*}:=2k+2-q\). Note that \(\operatorname{Supp}M_{p^{*}+q^{*}}^{q^{*}+1}S_{k+2,\omega_{C}}=\mathscr{L}_{p^ {*}+q^{*}}^{q^{*}+1}(C)\). As we may write \(N_{p^{*}+q^{*},L}=N_{p^{*}+q^{*},\mathscr{O}_{C}(P)}\otimes S_{p^{*}+q^{*}, \mathscr{O}_{C}(A)}^{d}\) and \(S_{p^{*}+q^{*},\mathscr{O}_{C}(A)}\) is ample, we see that \[h^{0}(C_{p^{*}+q^{*}},M_{p^{*}+q^{*}}^{q^{*}+1}S_{k+2,\omega_{C}}\otimes N_{p^ {*}+q^{*},L})=\chi(M_{p^{*}+q^{*}}^{q^{*}+1}S_{k+2,\omega_{C}}\otimes N_{p^{*} +q^{*},\mathscr{O}_{C}(P)}\otimes S_{p^{*}+q^{*},\mathscr{O}_{C}(A)}^{d})\] is a polynomial in \(d\) of degree \(\dim\mathscr{L}_{p^{*}+q^{*}}^{q^{*}+1}(C)\). In the situation of the above proposition, for \(e-g+1\leq p\leq e\), Ein-Lazarsfeld [10, Theorem C] proved that \(\kappa_{p,1}(C,\omega_{C};L)\) is a polynomial in \(d\) (see [29] for a higher dimensional generalization). Thus it is natural to ask the following. **Question 5.13**.: For \(e-g+1\leq p\leq e\) and \(k+1\leq q\leq 2k+2\), is \(\kappa_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) a polynomial in \(d:=\deg L\) when \(d\gg 0\)? In some cases, one can compute \(\kappa_{p,q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) exactly. For instance, \(\kappa_{e,2k+2}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))=\binom{g+k}{k+1}\) (see [11, Theorem 1.2]). In the curve case, Kemeny [18, Theorem 1.1] proved that if \(C\) is a general curve of genus \(g\geq 2k-1\) and gonality \(k=\gamma^{1}(C)+1\geq 4\) and \(L\) is a line bundle on \(C\) with \(\deg L\geq 2g+k\), then \[\kappa_{e-\gamma^{1}(C),1}(C,L)=e-\gamma^{1}(C),\] where \(e:=h^{1}(C,L)-2\) is the codimension of \(C\) in \(\mathbf{P}H^{0}(C,L)=\mathbf{P}^{r}\). This theorem can be geometrically interpreted as follows. Let \(\tau\colon C\to\mathbf{P}^{1}\) be a branched covering of degree \(k\). Then the linear spans of the fibers of \(\tau\) in \(\mathbf{P}^{r}\) sweep out a \(k\)-dimensional scroll \(S\) containing \(C\). There is a natural injective map \(\iota_{p}\colon K_{p,1}(S,\mathscr{O}_{S}(1))\to K_{p,1}(C,L)\). Kemeny's theorem says that \(\iota_{e-\gamma^{1}(C)}\) is in fact an isomorphism. Along this line, one may ask the following: **Question 5.14**.: Fix an integer \(k+1\leq q\leq 2k+1\). Under what conditions, can one compute \(\kappa_{e-\gamma^{2k+2-q}(C),q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) exactly? 
In this case, can one find some interesting geometric meaning of spanning Koszul classes of \(K_{e-\gamma^{2k+2-q}(C),q}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\)? For an integer \(k\geq 0\), suppose that \(C\) is a general curve carrying a unique \((k+1)\)-dimensional linear system \(|L_{1}|\) of degree \(\gamma^{k+1}(C)+k+1\). Then we expect that \[\kappa_{e-\gamma^{k+1}(C),k+1}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))=\binom{e -\gamma^{k+1}(C)+k}{k+1}.\] Suppose that the expectation is true. Let \(M\) be a matrix given by the multiplication map \[H^{0}(C,L_{1})\otimes H^{0}(C,L\otimes L_{1}^{-1})\longrightarrow H^{0}(C,L),\] and \(X\subseteq\mathbb{P}^{r}\) be the projective variety cut out by \((k+2)\)-minors of \(M\). Then the natural map \[K_{e\to\gamma^{k+1}(C),k+1}(X,\mathscr{O}_{X}(1))\longrightarrow K_{e\to \gamma^{k+1}(C),k+1}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\] is an isomorphism. We remark that \(R(X,\mathscr{O}_{X}(1))\) is minimally resolved by the Eagon-Northcott complex associated to \(M\). Thus \(K_{e\to\gamma^{k+1}(C),k+1}(\Sigma_{k},\mathscr{O}_{\Sigma_{k}}(1))\) is spanned by Koszul classes of the smallest rank \(e-\gamma^{k+1}(C)+k+1\) (see [4, Corollary 4.3]).
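To make the statement of Theorem 1.1 concrete in the smallest nonhyperelliptic case, consider a curve of genus \(3\) with \(\gamma^{0}=0\) and \(\gamma^{1}=\gamma^{2}=2\) (for instance a smooth plane quartic as in Example 5.11), and take \(k=1\). For \(\deg L\) sufficiently large, Theorem 1.1 then predicts, in the range \(e-2\leq p\leq e\) that it covers,
\[K_{p,2}(\Sigma_{1},\mathscr{O}_{\Sigma_{1}}(1))\neq 0\iff p=e-2,\qquad K_{p,3}(\Sigma_{1},\mathscr{O}_{\Sigma_{1}}(1))\neq 0\iff p=e-2,\qquad K_{p,4}(\Sigma_{1},\mathscr{O}_{\Sigma_{1}}(1))\neq 0\iff e-2\leq p\leq e.\]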
2308.13093
EgoBlur: Responsible Innovation in Aria
Project Aria pushes the frontiers of Egocentric AI with large-scale real-world data collection using purposely designed glasses with a privacy-first approach. To protect the privacy of bystanders being recorded by the glasses, our research protocols are designed to ensure recorded video is processed by an AI anonymization model that removes bystander faces and vehicle license plates. Detected face and license plate regions are processed with a Gaussian blur such that these personal identification information (PII) regions are obscured. This process helps to ensure that anonymized versions of the video are retained for research purposes. In Project Aria, we have developed a state-of-the-art anonymization system EgoBlur. In this paper, we present an extensive analysis of EgoBlur on challenging datasets, comparing its performance with other state-of-the-art systems from industry and academia, including a detailed Responsible AI analysis on the recently released Casual Conversations V2 dataset.
Nikhil Raina, Guruprasad Somasundaram, Kang Zheng, Sagar Miglani, Steve Saarinen, Jeff Meissner, Mark Schwesinger, Luis Pesqueira, Ishita Prasad, Edward Miller, Prince Gupta, Mingfei Yan, Richard Newcombe, Carl Ren, Omkar M Parkhi
2023-08-24T21:36:11Z
http://arxiv.org/abs/2308.13093v2
# EgoBlur: Responsible Innovation in Aria ###### Abstract Project Aria pushes the frontiers of Egocentric AI with large-scale real-world data collection using purposely designed glasses with privacy first approach. To protect the privacy of bystanders being recorded by the glasses, our research protocols are designed to ensure recorded video is processed by an AI anonymization model that removes bystander faces and vehicle license plates. Detected face and license plate regions are processed with a Gaussian blur such that these personal identification information (PII) regions are obscured. This process helps to ensure that anonymized versions of the video is retained for research purposes. In Project Aria, we have developed a state-of-the-art anonymization system 'EgoBlur'. In this paper, we present extensive analysis of EgoBlur on challenging datasets comparing its performance with other state-of-the-art systems from industry and academia including extensive Responsible AI analysis on recently released Casual Conversations V2 [10] dataset. ## I Introduction As part of our commitment to a privacy-first approach in Project Aria, we are committed to anonymizing people's faces and vehicle license plates in our recordings. Our anonymization system operates within our data ingestion platform, ensuring that human faces and license plates are obfuscated before a recording is made available for research purposes. We conducted extensive evaluations of our system on various challenging datasets across different axes of evaluation. Additionally, we performed a detailed Responsible AI analysis of our face detection model. The goal of this paper is to present the results of that analysis and compare it to a few other state-of-the-art systems. The objective of EgoBlur is to obscure human faces and vehicle license plates captured by Aria glasses. While there have been previous works on in-place face editing and replacement, which could serve as obfuscation strategies [5, 1, 9], these methods have not been extensively tested on real-world videos from a user's egocentric perspective. We opt for a simpler yet effective approach of detecting these objects (faces and license plates) using traditional object detectors and obfuscating the underlying pixels with a Gaussian blur function. This selection subsequently opens up choices in the object detection world with several research works showcasing state-of-the-art performance on challenging datasets for face detection as well as for generic object detection. We select FasterRCNN [11] as our choice of object detector system. It offers several advantages. FasterRCNN and its subsequent variants such as MaskRCNN [6] are one of the top performing methods on benchmark datasets such as MS-COCO [7]. They have been widely studied, cited, and put into production systems. They are applicable to a wide variety of objects and do not need task-specific treatment for specific classes such as facial keypoints annotations for better face detection performance. To demonstrate the effectiveness of our choice of FasterRCNN-based generic object detector, we compare its performance for the problem of face detection/anonymization with state-of-the-art RetinaFace[4] and MediaPipe[2] face detectors. We demonstrate that our choice of using a task-agnostic detector for both face and license plate detection outperforms or matches the performance of leading techniques, achieving over 90% recall on challenging benchmark datasets. 
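To make the detect-then-blur flow concrete, the following is a minimal sketch in Python. It uses an off-the-shelf torchvision FasterRCNN model purely as a stand-in detector and OpenCV for the Gaussian blur; the production EgoBlur detectors (ResNet-101-32x8 FPN models trained in Detectron2, described below) are not reproduced here, and the score threshold, blur kernel size, and file name are illustrative choices rather than values from our system.

```python
# Minimal sketch of the detect-then-blur anonymization step described above.
# NOTE: the torchvision COCO model below is only a placeholder detector; the
# actual EgoBlur face/license-plate models are not shown. Threshold and kernel
# values are illustrative assumptions, not values from the paper.
import cv2
import torch
import torchvision

def blur_regions(image_bgr, boxes, kernel=(51, 51), sigma=0):
    """Apply a Gaussian blur to each detected box so the PII pixels are obscured."""
    out = image_bgr.copy()
    h, w = out.shape[:2]
    for x1, y1, x2, y2 in boxes:
        x1, y1 = max(int(x1), 0), max(int(y1), 0)
        x2, y2 = min(int(x2), w), min(int(y2), h)
        if x2 > x1 and y2 > y1:
            out[y1:y2, x1:x2] = cv2.GaussianBlur(out[y1:y2, x1:x2], kernel, sigma)
    return out

# Generic FasterRCNN detector as a placeholder for the EgoBlur detectors.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def anonymize(image_bgr, score_threshold=0.5):
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]
    boxes = pred["boxes"][pred["scores"] > score_threshold].tolist()
    return blur_regions(image_bgr, boxes)

# Example usage (hypothetical frame path):
# anonymized = anonymize(cv2.imread("frame.png"))
```

Because the blur is applied in place only on the detected regions, the rest of the frame is untouched, which is what allows anonymized versions of recordings to be retained for research.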
Our anonymization pipeline is designed to be flexible, allowing for easy replacement of the underlying detector with improved versions of our detectors or any new detection model. In this paper, we provide details of two subsystems of EgoBlur: first, we discuss our face anonymization method, providing details on detector training and a detailed analysis of its performance compared to other state-of-the-art methods on challenging datasets. Then, we provide similar insights into the performance of our license plate anonymization method, discussing its training and analysis on our benchmarking dataset.
## II Anonymization Benchmarking In this section, we provide a comprehensive overview of the face and license plate anonymization subsystems. We begin by briefly describing our training methodology, and then follow it up with a detailed performance analysis of the underlying detectors.
### _Faces_ For training the FasterRCNN-based face detector, we adopt a weakly supervised approach [3]. We select a large corpus of images and use the publicly available RetinaFace model as a strong teacher to provide pseudo ground truth. We then feed this data through the standard ResNet-101-32x8 FPN-based FasterRCNN model using Detectron2 [12]. To improve performance, we follow a learning rate schedule based on long-term training experiments and increase the share of grayscale images during training. In the following sections, we describe the datasets used for evaluation and present a detailed analysis of our results.
#### II-A1 Benchmarking Datasets
CCV2 Dataset: The recently released Casual Conversations V2 dataset [10] provides valuable annotations for evaluating the performance of a model on various Responsible AI attributes, such as age, skin tone, gender, and country of origin. To leverage this dataset for our face detection benchmarking, we augmented it with manually annotated face bounding boxes. This allowed us to carefully evaluate the performance of our face detector on these important Responsible AI attributes provided by the dataset. Specifically, we uniformly sampled frames from the videos of CCV2 and manually annotated them with face bounding boxes, resulting in a dataset of 259,656 bounding boxes.
Aria Pilot Dataset: The Aria Pilot Dataset [8] is an open-source egocentric dataset collected using the Aria glasses. To use it for face detector benchmarking, we comprehensively annotated this dataset with manual face bounding box annotations. We created a dataset of 18,508 annotated frames with 23,242 bounding boxes. This dataset, complementary to CCV2, provides essential in-domain data specific to the use-cases typically observed in our recordings. In addition to the bounding boxes, we augmented this dataset with various attribute labels such as wearing glasses, truncated and occluded faces, dark lighting scenarios, etc., to understand the fine-grained performance of our system. To avoid annotator bias, these annotations were carried out in a multi-review process (3 annotators labeling attributes for the same bounding box), and the attribute labels were selected using majority voting. The resultant dataset provides a strong benchmark to evaluate detection performance in common scenarios observed in our recordings and provides insights for areas of further improvement.
#### II-A2 Evaluation For evaluating our detectors, we use standard object detection evaluation metrics.
We compute the intersection over union (IoU) with an overlap threshold of 0.5 and calculate the average precision (AP) and average recall (AR) using the MS-COCO API [7]. To provide a comprehensive comparison, we benchmark our detector against two publicly available face detectors: RetinaFace [4] and MediaPipe [2]. While MediaPipe is designed for low-latency applications, RetinaFace is a strong academic baseline that has demonstrated state-of-the-art results on various face detection tasks. By comparing our performance to these leading methods on the carefully annotated datasets described above, we can contextualize our results and provide a more meaningful assessment of our detector's capabilities. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{**Monk skin tone**} & **Mediapipe** & **RetinaFace** & **EgoBlur** \\ \cline{2-4} & \multicolumn{3}{c|}{**Average Recall (AR)**} \\ \hline **scale 1** & 0.987 & 0.995 & 0.998 \\ \hline **scale 2** & 0.988 & 0.998 & 0.998 \\ \hline **scale 3** & 0.99 & 0.997 & 0.997 \\ \hline **scale 4** & 0.99 & 0.998 & 0.998 \\ \hline **scale 5** & 0.991 & 0.999 & 0.999 \\ \hline **scale 6** & 0.989 & 0.997 & 0.998 \\ \hline **scale 7** & 0.99 & 0.996 & 0.997 \\ \hline **scale 8** & 0.985 & 0.996 & 0.997 \\ \hline **scale 9** & 0.98 & 0.996 & 0.997 \\ \hline **scale 10** & 0.966 & 0.997 & 0.997 \\ \hline \end{tabular} \end{table} TABLE V: Performance comparison of systems across Monk scale skin tone annotations on the CCV2 dataset. Fig. 1: Qualitative results on the CCV2 dataset. The CCV2 dataset has actors from various country, age group, gender, and skin tone buckets. Our method provides consistent results across all buckets. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{**Fitzpatrick skin tone**} & **Mediapipe** & **RetinaFace** & **EgoBlur** \\ \cline{2-4} & \multicolumn{3}{c|}{**Average Recall (AR)**} \\ \hline **type i** & 0.986 & 0.996 & 0.998 \\ \hline **type ii** & 0.989 & 0.997 & 0.998 \\ \hline **type iii** & 0.991 & 0.998 & 0.998 \\ \hline **type iv** & 0.99 & 0.998 & 0.998 \\ \hline **type v** & 0.989 & 0.998 & 0.999 \\ \hline **type vi** & 0.98 & 0.997 & 0.997 \\ \hline \end{tabular} \end{table} TABLE IV: Performance comparison of systems across Fitzpatrick skin tone annotations on the CCV2 dataset. Fig. 2: Qualitative results on the Aria Pilot dataset. The Aria Pilot dataset provides a challenging benchmark for evaluating the performance of our method. ### _License-plates_ Similar to the face detector training described above, for vehicle license plate anonymization we aimed to establish a strong baseline using the FasterRCNN architecture. Since no strong pre-existing baseline model was available to act as a teacher, we bootstrapped our data engine with training data obtained from large-scale manual annotation of images, creating a dataset of over 200K images. As with the face detector, we used these images to train a FasterRCNN-based detector with the ResNet101-32x8 backbone. Benchmarking Dataset: To benchmark performance, we collected a comprehensive test dataset using Aria devices. Our in-house data collection team acquired over 40 recordings spanning two weeks at the parking lots of our offices. These recordings were captured under varying conditions such as different times of day, viewing distances, angles, car types, and motion types.
We sampled a total of 56,561 frames from these videos and sent them through two phases of manual annotation similar to those performed on the Aria Pilot test dataset. The first phase involved labeling boxes, while the second focused on fine-grained attribute annotations. Results: To evaluate the performance of our vehicle license plate anonymization method, we used Intersection over Union (IoU) with a threshold of 0.5 and report average precision and average recall as metrics. The results are presented in Table X, demonstrating strong and consistent performance across both RGB and grayscale streams of the Aria recordings. ### _Conclusion_ We have successfully developed EgoBlur, a system for face and license plate anonymization in Aria recordings, demonstrating our commitment to preserving the privacy of individuals. Our analysis shows that the face model performs similarly to, or better than, strong baseline methods from academia and industry. The fine-grained performance of this model on Responsible AI datasets is consistent across different buckets and recording streams. Additionally, our analysis provides guidance for future improvements, particularly in anonymizing truncated faces. We also establish a strong baseline model for vehicle license plate anonymization. It is important to note that these models are only trained to locate faces and license plates in images and do not produce any additional attributes. Fine-grained annotations were provided on test data but were not used in training our models.
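As a concrete reference for the COCO-style evaluation protocol used throughout the benchmarking above, the following is a minimal sketch assuming ground truth and detections have been exported to COCO-format JSON; the file names are placeholders and not part of any released EgoBlur artifact.

```python
# Minimal sketch of the COCO-style AP/AR evaluation described in Section II,
# assuming ground truth and detections are available as COCO-format JSON files.
# File names below are placeholders (assumptions), not released artifacts.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO("face_boxes_gt.json")           # manually annotated bounding boxes
dt = gt.loadRes("detector_output.json")   # detector predictions

ev = COCOeval(gt, dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()  # the AP/AR entries at IoU=0.50 correspond to the metrics reported above
```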
2310.09281
Holographic imaging of antiferromagnetic domains with in-situ magnetic field
Lensless coherent x-ray imaging techniques have great potential for high-resolution imaging of magnetic systems with a variety of in-situ perturbations. Despite many investigations of ferromagnets, the extension of these techniques to the study of other magnetic materials, primarily antiferromagnets, is lacking. Here, we demonstrate the first (to our knowledge) study of an antiferromagnet using holographic imaging through the "holography with extended reference by autocorrelation linear differential operation" technique. Energy-dependent contrast with both linearly and circularly polarised x-rays is demonstrated. Antiferromagnetic domains and topological textures are studied in the presence of applied magnetic fields, demonstrating quasi-cyclic domain reconfiguration up to 500 mT.
Jack Harrison, Hariom Jani, Junxiong Hu, Manohar Lal, Jheng-Cyuan Lin, Horia Popescu, Jason Brown, Nicolas Jaouen, A. Ariando, Paolo G. Radaelli
2023-10-13T17:44:40Z
http://arxiv.org/abs/2310.09281v1
# Holographic imaging of antiferromagnetic domains with in-situ magnetic field ###### Abstract Lensless coherent x-ray imaging techniques have great potential for high-resolution imaging of magnetic systems with a variety of in-situ perturbations. Despite many investigations of ferromagnets, the extension of these techniques to the study of other magnetic materials, primarily antiferromagnets, is lacking. Here, we demonstrate the first (to our knowledge) study of an antiferromagnet using holographic imaging through the 'holography with extended reference by autocorrelation linear differential operation' technique. Energy-dependent contrast with both linearly and circularly polarised x-rays is demonstrated. Antiferromagnetic domains and topological textures are studied in the presence of applied magnetic fields, demonstrating quasi-cyclic domain reconfiguration up to 500 mT. ## 1 Introduction Synchrotron-based x-ray imaging has become an invaluable tool for the study of magnetic, quantum and functional materials for a wide variety of fundamental and application-based research. X-ray photoelectron emission microscopy (X-PEEM), scanning transmission x-ray microscopy (STXM) and several other related techniques have been successfully employed to study a wide range of materials with spatial resolutions down to a few tens of nm [1, 2, 3, 4]. All of these methods rely on core-level spectroscopy to produce contrast, but differ in the way the image is created. The availability of appropriate sample environments to apply _in-situ_ perturbations to the sample is of great importance to enhance the impact of this research, and much progress has been made to provide non-standard environments and stimuli [5, 6, 7]. In this respect, many x-ray based imaging techniques suffer from fundamental limitations, which restrict the complexity of the sample environment and consequently the accessible phase space. For transmission-based techniques such as STXM, the limitation is geometrical and relates to the requirement for a zone plate very close to the sample [1]. In the case of X-PEEM, large voltage differentials at the sample stage are required to extract and accelerate secondary electrons, significantly limiting the available space around the sample [8]. Moreover, the very nature of the X-PEEM technique, which is based on charged particles, makes it difficult to image samples in large applied magnetic or electric fields. As an alternative, photon-based lensless imaging is particularly appealing, since the x-ray beam is formed far away from the sample stage and no distortion is introduced when fields are applied [9]. The necessity to employ coherent x-ray beams, traditionally regarded as a limitation of these methods, is being progressively overcome by modern synchrotron designs, which provide large coherent fractions in the soft x-ray regime. A further boost to techniques based on bright coherent sources will be provided by fourth-generation storage rings based on the multi-bend achromat lattice concept [10, 11], as well as x-ray free electron lasers [12]. In this context, it is worth emphasising that full-field lensless methods do not require scanning, and are therefore ideal for time-resolved studies. X-ray Fourier transform holography (FTH) is a family of related lensless imaging techniques,
2307.15267
The global stability of the Kaluza-Klein spacetime
In this paper we show the classical global stability of the flat Kaluza-Klein spacetime, which corresponds to Minkowski spacetime in $\mathbb{R}^{1+4}$ with one direction compactified on a circle. We consider small perturbations which are allowed to vary in all directions including the compact direction. These perturbations lead to the creation of massless modes and Klein-Gordon modes. On the analytic side, this leads to a PDE system coupling wave equations to an infinite sequence of Klein-Gordon equations with different masses. The techniques we use are based purely in physical space using the vectorfield method.
Cécile Huneau, Annalaura Stingo, Zoe Wyatt
2023-07-28T02:23:52Z
http://arxiv.org/abs/2307.15267v1
# The global stability of the Kaluza-Klein spacetime ###### Abstract. In this paper we show the classical global stability of the flat Kaluza-Klein spacetime, which corresponds to Minkowski spacetime in \(\mathbb{R}^{1+4}\) with one direction compactified on a circle. We consider small perturbations which are allowed to vary in all directions including the compact direction. These perturbations lead to the creation of massless modes and Klein-Gordon modes. On the analytic side, this leads to a PDE system coupling wave equations to an infinite sequence of Klein-Gordon equations with different masses. The techniques we use are based purely in physical space using the vectorfield method. ## 1. Introduction The goal of the present article is to prove the global stability of the Kaluza-Klein spacetime for the Einstein vacuum equations \[R_{\mu\nu}[g]=0 \tag{1.1}\] where \(R_{\mu\nu}\) denotes the Ricci tensor of an unknown Lorentzian metric \(g\). The Kaluza-Klein spacetime is a solution of (1.1) on \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\) and consists of a Lorentzian metric \(\overline{g}\), given in the standard coordinates \((t,x)\in\mathbb{R}^{1+3}\), \(y\in\mathbb{S}^{1}\) by \[\overline{g}=-(dt)^{2}+\sum_{\boldsymbol{i}=1}^{3}(dx^{\boldsymbol{i}})^{2}+(dy)^{2}.\] The Einstein equations in this higher dimensional setting have, as in the standard \(3+1\) setting, a well-posed initial value formulation. The data consist of a triplet \((\Sigma_{0},g_{0},K_{0})\) where \(\Sigma_{0}\) is a 4-dimensional manifold diffeomorphic to \(\mathbb{R}^{3}\times\mathbb{S}^{1}\) equipped with a Riemannian metric \(g_{0}\) and \(K_{0}\) is a symmetric two-tensor. Solving (1.1) with initial data \((\Sigma_{0},g_{0},K_{0})\) means that one looks for a 5-dimensional manifold \(\mathscr{M}\) with a Lorentzian metric \(g\) satisfying (1.1) and an embedding \(\Sigma_{0}\hookrightarrow\mathscr{M}\) such that \(g_{0}\) is the pullback of \(g\) to \(\Sigma_{0}\) and \(K_{0}\) is the second fundamental form of \(\Sigma_{0}\). The initial value problem is overdetermined and the data must satisfy the _constraint equations_1 Footnote 1: We use the Einstein summation convention over repeated indexes. Greek indexes run from \(0\) to \(4\) while Latin indexes run from \(1\) to \(4\). **Bold** Greek and Latin indexes run up to \(3\). We use the notation \(x^{0}=t\) and \(x^{4}=y\) so that \(\partial_{\mu}=\partial/\partial x^{\mu}\) for \(\mu=0,\dots,4\) denotes any derivative along the coordinate axes. \[R[g_{0}]-K_{0}^{ij}K_{0ij}+\big{(}K_{0}{}^{i}{}_{i}\big{)}^{2}=0,\quad\nabla^{j}K_{0ij}-\nabla_{i}K_{0}{}^{j}{}_{j}=0\] where \(R[g_{0}]\) is the scalar curvature of \(g_{0}\) and \(\nabla\) is the Levi-Civita connection of \(g_{0}\). These equations simply come from the vanishing of the time components of the Einstein tensor \[R_{0i}=0,\qquad R_{00}-\frac{1}{2}Rg_{00}=0.\] In PDE terminology, the _local well-posedness_ of the Einstein equations was proved in the seminal works of Choquet-Bruhat [8] and Choquet-Bruhat and Geroch [10], who show the existence and uniqueness (up to diffeomorphisms) of a maximal globally hyperbolic spacetime arising from any set of smooth initial data satisfying the constraint equations. This is a local result in the sense that it does not guarantee that the spacetime solution \((\mathscr{M},g)\) is causally geodesically complete.
We observe that their proofs, which are performed in a 4-dimensional setting, do not actually depend on the particular manifold \(\mathscr{M}\) considered (nor on its dimension, or whether or not it is compact or a product with compact factors) and therefore apply to the Kaluza-Klein setting. We also mention the recent work of the first author with Valcu [22] in which initial data for the Einstein equations on manifolds of the form \(\mathbb{R}^{1+n}\times\mathbb{T}^{m}\) are constructed. The articles mentioned above constitute the starting point to investigate and prove the global stability of the flat metric \(\overline{g}\). An informal statement of our main result is the following **Theorem 1.1**.: _Let \((\Sigma_{0},g_{0},K_{0})\) be an arbitrary set of smooth asymptotically flat initial data satisfying the constraint equations, with \(\Sigma_{0}\cong\mathbb{R}^{3}\times\mathbb{S}^{1}\),_ \[g_{0} =\begin{pmatrix}(1+\chi(r)M/r)I_{3}&0\\ 0&1\end{pmatrix}+g_{0}^{1},\quad(I_{3})_{\boldsymbol{i}\boldsymbol{j}}= \delta_{\boldsymbol{i}\boldsymbol{j}}\] \[\text{where }g_{0\,ij}^{1} =O(r^{-1-\kappa}),\ K_{0\,ij}=O(r^{-2-\kappa})\text{ as }r=|x|\to\infty,\ \kappa>0\] _and such that \(g_{0}-\delta\) and \(K_{0}\) satisfy global smallness assumptions. Then, there exists a causally geodesically complete spacetime asymptotically converging to the Kaluza-Klein spacetime._ In the above theorem, \(\chi\) is a cut-off function supported outside some ball centered at \(0\) and \(M\) is a positive constant corresponding to the ADM mass. We refer to the work of Dai [12] on the positive mass theorem for manifolds including those of Kaluza-Klein type. The global stability problem for the flat metric \(\overline{g}\) can be cast into the form of a small data global existence problem for quasilinear wave equations. The Einstein equations can be written as a system of quasilinear wave equations for the unknown metric coefficients \(g_{\alpha\beta}\) if one works with a standard gauge, called the _harmonic_ or _wave coordinate_ or _De Donder_ gauge, in which the (harmonic) coordinates \(\{x^{\alpha}\}_{\alpha=0,\ldots,4}\) are defined to be solutions of the geometric wave equation2\(\square_{g}x^{\alpha}=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}x^{\alpha}=0\), where \(\nabla\) denotes the Levi-Civita connection of \(g\). Relative to these coordinates the metric \(g\) satisfies the so-called _wave condition_ Footnote 2: \(g^{\mu\nu}\) and \(\overline{g}^{\mu\nu}\) denote respectively the coefficients of the inverse metric of \(g\) and \(\overline{g}\). Unless differently specified, we lower and raise indexes using the metric \(\overline{g}\), i.e. for any tensor \(\pi_{\alpha\beta}\) we define \(\pi^{\alpha\beta}:=\overline{g}^{\alpha\mu}\overline{g}^{\beta\nu}\pi_{\mu\nu}\). \[g^{\alpha\beta}g_{\nu\mu}\Gamma^{\nu}_{\alpha\beta}=g^{\alpha\beta}\partial_ {\beta}g_{\alpha\mu}-\frac{1}{2}g^{\alpha\beta}\partial_{\mu}g_{\alpha\beta}= 0,\quad\mu=0,\ldots,4 \tag{1.2}\] under which the wave operator \(\square_{g}\) on functions coincides with the reduced wave operator \(\tilde{\square}_{g}=g^{\mu\nu}\partial_{\mu}\partial_{\nu}\). In this gauge the equations (1.1) become \[\tilde{\square}_{g}g_{\alpha\beta}=\tilde{F}_{\alpha\beta}(g)(\partial g, \partial g)\quad\text{on }\mathbb{R}^{1+3}\times\mathbb{S}^{1} \tag{1.3}\] where \(\tilde{F}_{\alpha\beta}(u)(v,v)\) depends quadratically on \(v\). 
A straightforward computation shows that these source terms decompose into the sum of the following \[\tilde{P}(\partial_{\alpha}g,\partial_{\beta}g) :=\frac{1}{4}g^{\mu\nu}g^{\rho\sigma}\left(\partial_{\alpha}g_{\mu\nu}\partial_{\beta}g_{\rho\sigma}-2\partial_{\alpha}g_{\mu\rho}\partial_{\beta}g_{\nu\sigma}\right)\] \[\tilde{Q}_{\alpha\beta}(\partial g,\partial g) :=g^{\mu\nu}g^{\rho\sigma}\partial_{\mu}g_{\rho\alpha}\partial_{\nu}g_{\sigma\beta}-g^{\mu\nu}g^{\rho\sigma}Q_{\mu\sigma}(\partial g_{\rho\alpha},\partial g_{\nu\beta})+g^{\mu\nu}g^{\rho\sigma}Q_{\alpha\mu}(\partial g_{\nu\sigma},\partial g_{\rho\beta})\] \[\qquad+g^{\mu\nu}g^{\rho\sigma}Q_{\beta\mu}(\partial g_{\nu\sigma},\partial g_{\rho\alpha})+\frac{1}{2}g^{\mu\nu}g^{\rho\sigma}Q_{\sigma\alpha}(\partial g_{\mu\nu},\partial g_{\rho\beta})+\frac{1}{2}g^{\mu\nu}g^{\rho\sigma}Q_{\sigma\beta}(\partial g_{\mu\nu},\partial g_{\rho\alpha})\] where \(Q_{\mu\nu}\) denotes the quadratic null form3 Footnote 3: The quadratic form \(Q_{0}(\partial\phi,\partial\psi)=\overline{g}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\psi\) is also a null form. \[Q_{\mu\nu}(\partial\phi,\partial\psi)=\partial_{\mu}\phi\partial_{\nu}\psi-\partial_{\nu}\phi\partial_{\mu}\psi.\] The initial conditions \((g_{\alpha\beta}|_{t=0},\partial_{t}g_{\alpha\beta}|_{t=0})\) for (1.3) are defined from \((\Sigma_{0},g_{0},K_{0})\) as follows \[\begin{split}& g_{ij}|_{t=0}=g_{0ij},\qquad g_{00}|_{t=0}=-a^{2},\qquad g_{0i}|_{t=0}=g_{i0}|_{t=0}=0,\\ &(\partial_{t}g_{ij})|_{t=0}=-2aK_{0ij},\qquad(\partial_{t}g_{00})|_{t=0}=2a^{3}g_{0}^{ij}K_{0ij},\\ &(\partial_{t}g_{0i})_{t=0}=a^{2}g_{0}^{kl}\partial_{l}g_{0ki}-\frac{1}{2}a^{2}g_{0}^{kl}\partial_{i}g_{0kl}-a\partial_{i}a\end{split} \tag{1.4}\] where \(a^{2}:=(1-M\chi(r)r^{-1})\) denotes the lapse function, so that they are compatible with the constraint equations and satisfy the wave condition. In particular the constraint equations yield a decay for \(g_{ij}\) of the form \[g_{\boldsymbol{ij}}=\big{(}1+M\chi(r)r^{-1}\big{)}\delta_{\boldsymbol{ij}}+O(r^{-1-\kappa}),\qquad g_{44}=1+O(r^{-1-\kappa})\] The initial data for \(g_{00}\) and \(g_{0i}\) are free and we set them as in (1.4), following what was done by Lindblad and Rodnianski in their work [40], for compatibility with the wave coordinates for Schwarzschild. The condition \((\partial_{t}g_{ij})|_{t=0}=-2aK_{0ij}\) is given so that \(K_{0}\) is the second fundamental form of \(\Sigma_{0}\), i.e. \(K_{0}(X,Y)=-g|_{t=0}(\nabla_{X}\partial_{t},Y)\) for any vector fields \(X,Y\). Any solution to the Einstein equations (1.1) with smooth data \((\Sigma_{0},g_{0},K_{0})\) satisfies (1.3)-(1.4) when written in harmonic coordinates. Conversely, any solution \(g_{\alpha\beta}\) of (1.3)-(1.4) with initial data compatible with the constraint equations and satisfying the wave condition (1.2) will satisfy (1.2) for all times and hence gives rise to a solution of (1.1) with data \((\Sigma_{0},g_{0},K_{0})\) defined from (1.4). We refer to Ringstrom [49] for more details on the subject. From now on, we will then entirely focus on the formulation (1.3)-(1.4). ### State of the art There is a vast literature in general relativity concerning the stability of physical solutions to the Einstein equations. In the 4-dimensional setting, the global stability of the simplest solution, the Minkowski metric, was proved in a monumental work by Christodoulou and Klainerman [11] and later revisited in the works of Lindblad and Rodnianski [39, 40] using the harmonic gauge.
See also the results by Klainerman and Nicolo [32], Bieri and Zipser [5], Hintz and Vasy [20], Choquet-Bruhat, Chrusciel and Loizelet [9] for Einstein-Maxwell systems and by Speck [50] for Einstein equations coupled to a family of nonlinear electromagnetic field equations. Analogous global stability results have also been proved for other 4-dimensional coupled Einstein matter systems. Einstein-Klein-Gordon systems were investigated by LeFloch and Ma [37] in the case of restricted data coinciding with the Schwarzschild metric outside a compact set, and global stability was later proved by Ionescu and Pausader [25] in the case of unrestricted data. We also cite the works by Fajman, Joudioux and Smulevici [17] and Lindblad and Taylor [41] proving a global stability result for Einstein-Vlasov systems for a class of restricted data, and the result by Bigorgne, Fajman, Joudioux, Smulevici and Thaller [6] about the asymptotic stability of Minkowski spacetime with non-compactly supported massless Vlasov matter. There is also a very rich literature concerning the stability of other explicit 4-dimensional solutions to the Einstein equations, for instance the Kerr solution or solutions to the Einstein equations with positive cosmological constants, but it is not our purpose to list such references here. Higher dimensional solutions of the Einstein equations, in particular spacetimes with additional compact directions \(\mathbb{R}^{1+3}\times\mathscr{K}\), have attracted substantial attention from the theoretical physics community throughout the past century. Theories of higher dimensional gravity are in fact of great interest in supergravity and string theory as possible models for quantum gravity and are possible candidates for providing a unified description of all the fundamental forces in nature (gravity, electromagnetism, weak force and strong force). A guiding philosophy of supergravity theories is that one should be able to recover 4-dimensional physics from higher-dimensional models, hence to perform some sort of dimensional reduction by assuming the extra directions to be compact. The classical mathematical approach to the unification of general relativity with electromagnetism goes back to the works of physicists Kaluza [26] and Klein [34]. In their original works, one extra dimension is considered and five-dimensional gravity is compactified on a circle \(\mathbb{S}^{1}_{R}\) of radius \(R\) to obtain at low energies a \(1+3\) dimensional Einstein-Maxwell-Scalar field system. We will briefly discuss the reduction from the 5-dimensional to the 4-dimensional model in the next subsection. In a seminal work by Witten [59] it was proved that the Kaluza-Klein spacetime \(\overline{g}\) is unstable at the semiclassical level. However, classical global stability was conjectured to hold true and such a result was proved by the third author [60] for small perturbations that do not depend on the compact direction. The goal of this paper is to extend the result of [60] and to prove the global stability of \(\overline{g}\) for more general perturbations that can a-priori depend also on the compact direction. We mention that a result analogous to [60] for cosmological Kaluza-Klein spacetimes, where the Minkowski spacetime is replaced by the 4-dimensional Milne spacetime, has also recently been shown by Branding, Fajman and Kroncke [7].
Furthermore global existence, without a restriction to \(\mathbb{S}^{1}\)-independent data, was shown on a quasilinear system of wave equations by the first two authors in [21] and on a semilinear wave equation on a cosmological Kaluza-Klein spacetime in [56]. In the context of higher-dimensional gravity we also cite a result by Ettinger [16] on the global well-posedness of a 11-dimensional, semilinear, gauge-invariant wave equation, and a global stability result by Andersson, Blue, Yau and the third author [2] for spacetimes with a supersymmetric compactification: that is, spacetimes \((\mathscr{M},\hat{g})\) with \(\mathscr{M}=\mathbb{R}^{1+n}\times K\) and \(\hat{g}=\eta_{1+n}+k\), where \(\eta_{1+n}\) is the \((1+n)\)-dimensional Minkowski metric and \((K,k)\) is a compact Riemannian manifold that admits a spin structure and a nonzero parallel spinor. Their proof uses the assumption \(n\geq 9\) but the result is conjectured to hold true for \(n\geq 3\). ### The zero-mode truncation The Einstein equations in harmonic coordinates reduce to (1.3), which is a system of wave equations on the product space \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\). Assuming for a moment that the compactifying circle \(\mathbb{S}^{1}\) is replaced by the circle \(\mathbb{S}^{1}_{R}\) of radius \(R>0\), and by Fourier expanding the solution \(g\) of (1.3) along the periodic coordinate \[g_{\alpha\beta}(t,x,y)=\sum_{k\in\mathbb{Z}}e^{iky}g_{\alpha\beta}^{k}(t,x),\] it turns out that \[(-\partial_{t}^{2}+\Delta_{x}+\partial_{y}^{2})g_{\alpha\beta}=\sum_{k\in \mathbb{Z}}e^{iky}(-\partial_{t}^{2}+\Delta_{x}-(|k|/R)^{2})g_{\alpha\beta}^{k}\] which shows that the zero-modes \(g_{\alpha\beta}^{0}\) of the metric coefficients are massless waves while the non-zero modes \(g_{\alpha\beta}^{k}\) are massive (Klein-Gordon) waves with mass \(|k|/R\) for \(k\neq 0\). Equations (1.3) are hence equivalent to a system on \(\mathbb{R}^{1+3}\) which couples wave equations to an infinite sequence of Klein-Gordon equations with mass \(|k|/R\), \(k\in\mathbb{Z}\setminus\{0\}\). The heuristic physics argument, as explained by Pope in [48], to deal with this phenomenon is to assume the radius \(R\) to be very small (a choice that would justify why we "don't see" the additional dimensions) so that the masses \(|k|/R\) are too large to be physically observable. The non-zero modes are then neglected and the solution is truncated to the massless mode, in other words one assumes that \(g_{\alpha\beta}(t,x,y)=g_{\alpha\beta}(t,x)\) is independent of the \(y\) coordinate. Under the zero-mode truncation assumption, one can reduce the Kaluza-Klein model to a three-dimensional Einstein-Maxwell-scalar field system. As explained in [48], this is done using the following standard ansatz, in which the higher dimensional metric coefficients \(g_{\alpha\beta}\) are expressed in terms of three-dimensional fields \(\hat{g}_{\boldsymbol{\alpha}\boldsymbol{\beta}},\phi,\mathscr{A}_{\boldsymbol {\alpha}}\) by \[g_{\boldsymbol{\alpha}\boldsymbol{\beta}}=e^{2\kappa\phi}\hat{g}_{\boldsymbol {\alpha}\boldsymbol{\beta}}+e^{2\rho\phi}\mathscr{A}_{\boldsymbol{\alpha}} \mathscr{A}_{\boldsymbol{\beta}},\quad g_{\boldsymbol{\alpha}4}=e^{2\rho\phi} \mathscr{A}_{\boldsymbol{\alpha}},\quad g_{44}=e^{2\rho\phi}\] where \(\kappa=\sqrt{12}/12\) and \(\rho=-2/\sqrt{12}\). 
The Einstein vacuum equations (1.1) reduce then to the following minimally coupled \((1+3)\)-dimensional Einstein-Maxwell-Scalar field system \[R_{\boldsymbol{\alpha}\boldsymbol{\beta}}=\frac{1}{2}\partial_{ \boldsymbol{\alpha}}\phi\,\partial_{\boldsymbol{\beta}}\phi+\frac{1}{2}e^{-6 \kappa\phi}\big{(}\mathscr{F}_{\boldsymbol{\alpha}\boldsymbol{\mu}}\mathscr{F }_{\boldsymbol{\beta}}{}^{\boldsymbol{\mu}}-\frac{1}{4}\mathscr{F}_{ \boldsymbol{\mu}\boldsymbol{\nu}}\mathscr{F}^{\boldsymbol{\mu}\boldsymbol{\nu }}\hat{g}_{\boldsymbol{\alpha}\boldsymbol{\beta}}\big{)}\] \[\nabla^{\boldsymbol{\alpha}}\big{(}e^{-6\kappa\phi}\mathscr{F}_{ \boldsymbol{\alpha}\boldsymbol{\beta}}\big{)}=0\] \[\tilde{\square}_{\tilde{g}}\phi=-\frac{3}{2}\kappa e^{-6\kappa \phi}\mathscr{F}_{\boldsymbol{\mu}\boldsymbol{\nu}}\mathscr{F}^{\boldsymbol{ \mu}\boldsymbol{\nu}}\] where \(\mathscr{F}_{\boldsymbol{\alpha}\boldsymbol{\beta}}=\partial_{\boldsymbol{ \alpha}}\mathscr{A}_{\boldsymbol{\beta}}-\partial_{\boldsymbol{\beta}} \mathscr{A}_{\boldsymbol{\alpha}}\). The above reduction can be also performed in higher dimensional settings where \(\mathscr{M}=\mathbb{R}^{1+3}\times\mathbb{T}^{d}\). In the Kaluza-Klein setting, this truncation to the zero mode is consistent in the sense that a solution to the above Einstein-Maxwell-Scalar field system will be a solution to the original vacuum Einstein equations in 5 dimensions. The full global stability of the Kaluza-Klein spacetime to general perturbations, that may a-priori depend on the compact direction, involves studying solutions to a significantly more complicated PDE system than the simpler dynamics of the above Einstein-Maxwell-Scalar field system studied in [60]. This is the goal of the present article. We point out that we do not want to focus here on the dependence of the solution on the radius \(R\) and, since there is no canonical choice of the radius \(R\), we set \(R=1\). ### 4D Wave-Klein-Gordon systems The dependence of the metric coefficients \(g_{\alpha\beta}\) on the periodic coordinate \(y\) and their Fourier decomposition along this direction reveal that system (1.3) is equivalent to a system coupling wave equations to an infinite sequence of Klein-Gordon equations with different masses. The new system is also quasilinear and the coupling between the wave and Klein-Gordon components of the solution is strong. The study of systems coupling (a finite number of) wave and Klein-Gordon equations has attracted considerable interest from the mathematical community, especially in the past three decades. In terms of small data global well-posedness results in \(1+3\) spacetime dimensions we cite the initial results by Georgiev [19] and Katayama [27], followed by LeFloch and Ma [36], Wang [57, 58] and Ionescu and Pausader [24] who study such systems as a model for the full Einstein-Klein-Gordon equations, see [37] and [25]. In [36] and [57] global well-posedness is proved for compactly supported initial data and quadratic quasilinear nonlinearities that satisfy some suitable conditions, including the _null condition_ of Klainerman [31] for self-interactions between the wave components of the solution. An idea used in these works is that of employing hyperbolic coordinates in the forward light cone; this was first introduced by Klainerman [29] for Klein-Gordon equations and Tataru in the wave context [52], and later reintroduced by LeFloch and Ma in [36] under the name of _hyperboloidal foliation method_. 
In [24] global regularity and scattering is proved in the case of small smooth initial data that decay at a suitable rate at infinity and nonlinearities that do not verify the null condition but present a particular resonant structure. We also cite the work by Dong and the third author [14], who prove global well-posedness for a quadratic semilinear interaction in which there are no derivatives on the massless wave component. Other related results are [4, 13, 33, 47, 53, 54, 55]. See also [42, 43, 44, 45, 46, 51, 23, 47, 51] for results about wave-Klein-Gordon systems in lower dimensions, in particular a work by the second author [51] and a subsequent result in collaboration with Ifrim [23], which are the only ones where 2-dimensional strongly coupled quadratic wave-Klein-Gordon systems with small mildly decaying data are investigated. Advanced techniques, among which semiclassical microlocal analysis, para/pseudo-differential calculus, wave packets, modified quasilinear energies, are employed there to tackle a problem that is critical, quasilinear and very weakly dispersive. A now-standard tool used in most of the aforementioned works is the vector field method. Linear wave and Klein-Gordon equations on \(\mathbb{R}^{1+n}\) are invariant under translations, Euclidean rotations and hyperbolic rotations (linear wave equations are also scale-invariant). These symmetries provide a family of admissible vector fields (in the common terminology they are also referred to as _Killing_ vector fields of Minkowski spacetime), \[\partial_{\mu},\qquad\Omega_{\boldsymbol{i}\boldsymbol{j}}=x_{\boldsymbol{i}} \partial_{\boldsymbol{j}}-x_{\boldsymbol{j}}\partial_{\boldsymbol{i}},\qquad \Omega_{0\boldsymbol{i}}=t\partial_{\boldsymbol{i}}+x_{\boldsymbol{i}} \partial_{t}\] which commute with the linear wave and Klein-Gordon operators and are used to define higher order energy functionals which control the Sobolev regularity of the solution as well as its decay (and that of its derivatives) in space at infinity. The rotations \(\Omega_{\boldsymbol{i}\boldsymbol{j}}\) and \(\Omega_{0\boldsymbol{i}}\) are also usually referred to as _Klainerman vector fields_. In the absence of Klein-Gordon equations, that is in the case of wave equations only, one can also consider the scaling vector field \(\mathscr{S}=t\partial_{t}+x^{\boldsymbol{i}}\partial_{\boldsymbol{i}}\) (a _conformal Killing_ vector field of Minkowski) and use the control on higher order energies to derive fixed-time pointwise decay bounds for the solution via the so-called Klainerman-Sobolev inequalities (see Klainerman [30]) \[(1+|t|+|x|)^{n-1}(1+||t|-|x||)|u(t,x)|^{2}\leq C\sum_{|I|\leq(n+2)/2}\|Z^{I}u( t,\cdot)\|_{L^{2}(\mathbb{R}^{n})}^{2}. \tag{1.5}\] In the above inequality \(Z\) denotes any of the vector fields \(\partial_{\boldsymbol{\mu}},\Omega_{\boldsymbol{i}\boldsymbol{j}},\Omega_{0 \boldsymbol{i}},\mathscr{S}\) and \(Z^{I}\) is any product of \(|I|\) such vector fields. Suitable energy estimates and pointwise decay bounds are subsequently used to control the nonlinear terms in the energy inequality and are essential to close the continuity argument which is at the core of the proof of a long-time/global existence result for small data. The inequality (1.5) is, however, useless when dealing with Klein-Gordon equations. The scaling vector field does not commute well with the linear operator and one cannot generally expect to have a good control of the \(L^{2}\) norm of \(\mathscr{S}u\) when \(u\) is a Klein-Gordon solution. 
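Concretely, the obstruction comes from the standard commutation identity \([\square,\mathscr{S}]=2\square\) for the flat wave operator (a routine computation, recalled here only for clarity): if \(u\) solves the Klein-Gordon equation \((\square-m^{2})u=0\) with \(m>0\), then \[(\square-m^{2})(\mathscr{S}u)=\mathscr{S}(\square-m^{2})u+[\square,\mathscr{S}]u=2\square u=2m^{2}u,\] so \(\mathscr{S}u\) satisfies the Klein-Gordon equation only up to the source term \(2m^{2}u\), which is not a small perturbation; in the massless case \(m=0\) the right hand side vanishes, which is why \(\mathscr{S}\) is an admissible field for pure wave equations.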
Instead, if \(u\) is compactly supported inside the light cone4\(t=|x|+1\) one can define higher order energy functionals on hyperboloids \(t^{2}-|x|^{2}=s^{2}\) and exploit Klainerman-Sobolev inequalities on hyperboloids (see for instance [18]) Footnote 4: Any cone \(t=|x|+c\) with \(c>0\) would do. \[\sup_{\mathscr{H}_{s}}t^{n/2}|u(t,x)|\leq C\sum_{|I|\leq(n+2)/2}\|B^{I}u\|_{L^{2}(\mathscr{H}_{s})} \tag{1.6}\] where now \(B^{I}\) are products involving hyperbolic rotations only, to get a good pointwise control on the solution. This approach has been largely used in the case of compactly supported initial data thanks to the finite speed of propagation satisfied by both wave and Klein-Gordon equations, but it is not adapted to treat the case of initial data that only enjoys some decay at infinity. Other methods have been employed to handle such cases, based on Fourier analysis, normal forms and/or microlocal analysis: see for instance the work by Ionescu and Pausader [24] in the 1+3 dimensional setting, by the second author [51] and in collaboration with Ifrim [23] for the 1+2 dimensional case, and references therein. See also a recent work by LeFloch and Ma [35] using a foliation that merges hyperboloids with constant time slices. ### The 5D problem: main theorem and overview of the proof According to the positive mass theorem, the solution \(g_{\alpha\beta}\) of the Cauchy problem (1.3)-(1.4) must have a non-trivial tail at spacelike infinity5 which suggests to set \(g_{\alpha\beta}=\overline{g}_{\alpha\beta}+h^{0}_{\alpha\beta}+h^{1}_{\alpha\beta}\) where Footnote 5: We choose to write this tail so that \(g_{\alpha\beta}\) corresponds to the Schwarzschild metric in wave coordinates at leading order. \[h^{0}_{\boldsymbol{\alpha\beta}}=\chi\Big{(}\frac{r}{t}\Big{)}\chi(r)\frac{M}{r}\delta_{\boldsymbol{\alpha\beta}},\qquad h^{0}_{44}=0,\] \[\chi\in\mathscr{C}^{\infty}(\mathbb{R})\text{ with }\chi(s)=0\text{ for }s\leq 1/2,\ \chi(s)=1\text{ for }s\geq 3/4,\ r=|x| \tag{1.7}\] and look for \(h^{1}_{\alpha\beta}\) the solution to the following system of quasilinear wave equations \[\tilde{\square}_{g}h^{1}_{\alpha\beta}=F_{\alpha\beta}(h)(\partial h,\partial h)-\tilde{\square}_{g}h^{0}_{\alpha\beta},\qquad\text{on }\mathbb{R}^{1+3}\times\mathbb{S}^{1} \tag{1.8}\] with data \((h^{1}_{\alpha\beta},\partial_{t}h^{1}_{\alpha\beta})|_{t=2}\) being small and sufficiently decaying in space. The semilinear source term in the above right hand side decomposes into the following sum \[F_{\alpha\beta}(h)(\partial h,\partial h)=P_{\alpha\beta}(\partial h,\partial h)+\mathbf{Q}_{\alpha\beta}(\partial h,\partial h)+G_{\alpha\beta}(h)(\partial h,\partial h)\] where * \(P_{\alpha\beta}(\partial h,\partial h)\) are quadratic _weak null_ terms \[P_{\alpha\beta}(\partial h,\partial h)=\frac{1}{4}\bar{g}^{\mu\rho}\bar{g}^{\nu\sigma}\left(\partial_{\alpha}h_{\mu\rho}\partial_{\beta}h_{\nu\sigma}-2\partial_{\alpha}h_{\mu\nu}\partial_{\beta}h_{\rho\sigma}\right),\] * \(\mathbf{Q}_{\alpha\beta}(\partial h,\partial h)\) is a linear combination of the classical quadratic null forms, * \(G_{\alpha\beta}(h)(\partial h,\partial h)\) are cubic terms. More precisely, they are quadratic in \(\partial h\) with smooth coefficients depending on \(h\) so that \(G_{\alpha\beta}(0)(\partial h,\partial h)=0\).
The reduced wave operator can be written as \(\tilde{\Box}_{g}=\Box_{xy}+H^{\mu\nu}\partial_{\mu}\partial_{\nu}\), where \(\Box_{xy}=-\partial_{t}^{2}+\Delta_{x}+\partial_{y}^{2}\) is the flat wave operator and \(H^{\mu\nu}:=g^{\mu\nu}-\overline{g}^{\mu\nu}\) is the formal inverse of \(h_{\mu\nu}\) for small \(h\), i.e. \[H^{\mu\nu}=-h^{\mu\nu}+\mathscr{O}^{\mu\nu}(h^{2})=-\overline{g}^{\mu\rho}\overline{g}^{\nu\sigma}h_{\rho\sigma}+\mathscr{O}^{\mu\nu}(h^{2}). \tag{1.9}\] We can now give a more precise statement of our main result. **Theorem 1.2**.: _Let \(\kappa>0\). There exists \(N\in\mathbb{N}\) sufficiently large and \(\epsilon_{0}>0\) small such that, for any \(0<\epsilon<\epsilon_{0}\) and initial data \(g_{0},K_{0}\) solving the constraint equations and satisfying_ \[\begin{split}&\sum_{m\leq N}\sum_{i+j=m}\|(1+r)^{\frac{1}{2}+i+\kappa}\partial_{y}^{j}\nabla_{x}^{i}(g_{0}-g^{0})\|_{\dot{H}^{1}_{x,y}}+\|(1+r)^{\frac{1}{2}+i+\kappa}\partial_{y}^{j}\nabla_{x}^{i}K_{0}\|_{L^{2}_{x,y}}\leq\epsilon,\\ &\sum_{m\leq N-1}\sum_{i+j=m}\|(1+r)^{\frac{3}{2}+i+\kappa}\partial_{y}^{j}\nabla_{x}^{i}(g_{0}-g^{0})\|_{\dot{H}^{2}_{x,y}}+\|(1+r)^{\frac{3}{2}+i+\kappa}\partial_{y}^{j}\nabla_{x}^{i}K_{0}\|_{\dot{H}^{1}_{x,y}}\leq\epsilon\end{split} \tag{1.10}\] _together with the \(L^{2}\) estimate_ \[\|(1+r)^{-\frac{1}{2}+\kappa}(g_{0}-g^{0})\|_{L^{2}_{x,y}}\leq\epsilon\] _with \(r=|x|\) and \(g^{0}\) defined by_ \[g^{0}_{\boldsymbol{ij}}=(1+M\chi(r)r^{-1})\delta_{\boldsymbol{ij}},\quad g^{0}_{44}=1,\quad g^{0}_{4\boldsymbol{i}}=0,\] _there exists a unique global solution \(g_{\alpha\beta}\) to (1.3) with initial data given by (1.4). This solution obeys the Einstein equations and decomposes as \(g_{\alpha\beta}=\bar{g}_{\alpha\beta}+h^{0}_{\alpha\beta}+h^{1}_{\alpha\beta}\), with \(h^{0}_{\alpha\beta}\) defined by (1.7) and \(h^{1}_{\alpha\beta}\) satisfying the pointwise estimate_ \[|h^{1}_{\alpha\beta}|\leq\frac{C_{0}\epsilon}{(1+t+|x|)^{1-\gamma}}\] _with \(C_{0}\) a numerical constant and \(\gamma>0\) arbitrarily small but fixed._ The proof of the above result is based on a bootstrap argument, i.e. on the propagation of some suitable a-priori energy estimates and pointwise decay bounds on the solution, which is performed in two main steps: _Step 1_: deduction of higher order energy inequalities and of sharp pointwise estimates from the a-priori energy assumptions; _Step 2_: estimates of the trilinear and quartic terms appearing in the right hand side of the energy inequalities. In particular, deduction of suitable higher order \(L^{2}\) estimates of the source terms from the a-priori energy assumptions and the pointwise decay bounds. In order to run the above argument and in view of the issues discussed in the previous subsection, one needs to find a strategy to obtain (at least in the first instance) pointwise decay bounds on the solution from the a-priori assumptions, knowing that inequality (1.5) cannot be used and (1.6) is valid only in the interior of some light cone. Similar to [21], the approach we take in the present paper is to decompose the whole spacetime and study the problem separately in two regions, corresponding to the interior and exterior of a hyperboloid6 asymptotically approaching the cone \(\{t=|x|+1\}\times\mathbb{S}^{1}\). This decomposition is quite natural, in that the analysis in the exterior is totally independent of that in the interior and requires different tools. It also allows us to explain our arguments with more clarity.
Footnote 6: In this curved background, the Minkowski cone \(\{t=|x|+1\}\) is in fact only asymptotically spacelike. #### 1.4.1. Exterior region: the bootstrap assumptions The bootstrap assumptions in the exterior region are higher order weighted energy estimates on the solution \[E^{\mathrm{e},\kappa}(t,Z^{\leq N}h^{1})^{1/2}\leq 2C_{0}\epsilon t^{ \sigma},\] \[E^{\mathrm{e},1+\kappa}(t,\partial Z^{\leq N-1}h^{1})^{1/2}\leq 2C_ {0}\epsilon t^{\sigma}\] where the weighted energy functional is defined, for any \(\lambda>0\), as \[E^{e,\lambda}(t,h^{1}_{\alpha\beta})=\iint_{\{|x|\geq t-1\}\times \mathbb{S}^{1}}(2+|x|-t)^{1+2\lambda}|\nabla_{txy}h^{1}_{\alpha\beta}(t,x,y)|^ {2}dxdy\\ +\int_{2}^{t}\iint_{\{|x|\geq\tau-1\}\times\mathbb{S}^{1}}(2+|x|- t)^{2\lambda}|\overline{\nabla}h^{1}_{\alpha\beta}(\tau,x,y)|^{2}dxdyd\tau.\] In the above integrals, \(\nabla_{txy}\) denotes the spacetime gradient while \(\overline{\nabla}=(\overline{\partial}_{0},\ldots,\overline{\partial}_{4})=( \partial_{t}+\partial_{r},\not{\partial_{i}},\partial_{y})\) denotes the tangent gradient to the cones \(\{t=r+1\}\times\mathbb{S}^{1}\), with \(\not{\partial_{i}}=\partial_{i}-\frac{x_{i}}{r}\partial_{t}\) being the angular derivatives. The parameter \(\kappa\) in the above a-priori estimates is related to the asymptotic decay of the data, \(N\in\mathbb{N}\) is assumed to be sufficiently large and \(0<\sigma<\kappa\) sufficiently small. Weighted Sobolev and Hardy inequalities allow us to obtain fixed-time pointwise decay bounds on the solution from the assumptions on the weighted energies, as for any given smooth function \(U\) \[|\nabla_{txy}U(t,x,y)| \leq C(1+|x|)^{-1}(2+|x|-t)^{-\frac{1}{2}-\lambda}\sum_{|I|\leq 3 }E^{e,\lambda}(t,Z^{I}U)^{1/2},\] \[|U(t,x,y)| \leq C(1+|x|)^{-1}(2+|x|-t)^{-\lambda}\sum_{|I|\leq 2}E^{e, \lambda}(t,Z^{I}U)^{1/2}.\] They also allow us to uncover faster spacetime decay for the tangential derivatives, since their weighted \(L^{2}\)-spacetime norm is controlled by the energy, and to recover the well-known property of waves that higher order derivatives enjoy better decay in terms of the distance from the outgoing Minkowski cones, which follows from the second energy assumption above. We point out that, in the context of waves on \(\mathbb{R}^{1+3}\) where the full range of vector fields \(\Gamma\in\{\Omega_{\boldsymbol{ij}},\Omega_{0\boldsymbol{i}},\mathscr{S}\}\) is available, the latter two properties are easily derived from algebraic relations. In particular, one can use that \[|\overline{\partial}\psi|\lesssim\sum_{|I|\leq 1}\frac{|\Gamma^{I}\psi|}{1+t+|t -|x||},\qquad|\partial^{2}\psi|\lesssim\sum_{|I|\leq 1}\frac{|\partial \Gamma^{I}\psi|}{1+|t-|x||}.\] #### 1.4.2. Interior region: the bootstrap assumptions The bootstrap assumptions in the interior region are bounds on higher order energies defined on truncated hyperboloids \[\mathscr{H}_{s}=\{(t,x):t^{2}-|x|^{2}=s^{2}\text{ and }t\geq 1+\sqrt{1+|x|^{2}} \}\times\mathbb{S}^{1},\qquad s\geq 2\] which are the branches of hyperboloids contained in the interior region, and pointwise decay bounds on differentiated metric coefficients carrying only Klainerman vector field derivatives. 
We denote by \(h^{1,\flat}_{\alpha\beta}\) and \(h^{1,\natural}_{\alpha\beta}\) the zero-mode and the zero-average components of the coefficient \(h^{1}_{\alpha\beta}\), respectively: \[h^{1,\flat}_{\alpha\beta}=\fint_{\mathbb{S}^{1}}h^{1}_{\alpha\beta}dy,\quad\text{ and }\quad h^{1,\natural}_{\alpha\beta}=h^{1}_{\alpha\beta}-h^{1,\flat}_{\alpha\beta}.\] We assume that, for some large integers \(1\ll N_{1}\ll N\) and some small7\(0<\zeta<\gamma\ll\delta\), the following bounds are satisfied Footnote 7: In practice, \(\zeta,\gamma\) and \(\delta\) are going to be replaced with a hierarchy of increasing \(\zeta_{k},\gamma_{k}\) and \(\delta_{k}\), where \(k\) accounts for the number of Klainerman vector fields in the product \(Z^{I}\), so that \(\zeta_{i}\ll\gamma_{j}\ll\delta_{k}\) for any \(i,j,k\) and the algebraic relation \(\gamma_{i}+\delta_{j}<\delta_{k}\) whenever \(j<k\). \[E^{i}(s,\partial^{\leq 1}Z^{\leq N}h^{1}_{\alpha\beta})\leq C\epsilon^{2}s^{1+\zeta},\] \[E^{i}(s,Z^{\leq N}h^{1,\flat}_{\alpha\beta})\leq C\epsilon^{2}s^{\zeta},\] \[E^{i}(s,\partial^{\leq N-N_{1}}Z^{\leq N_{1}}h^{1}_{\alpha\beta})\leq C\epsilon^{2}s^{\delta}\] where \[E^{i}(s,h^{1}_{\alpha\beta}) :=\iint_{\mathscr{H}_{s}}\left|(s/t)\partial_{t}h^{1}_{\alpha\beta}\right|^{2}+|\underline{\nabla}h^{1}_{\alpha\beta}|^{2}dxdy\] \[=\iint_{\mathscr{H}_{s}}\left|(s/t)\nabla_{x}h^{1}_{\alpha\beta}\right|^{2}+\left|(1/t)\mathscr{S}h^{1}_{\alpha\beta}\right|^{2}+\sum_{1\leq i<j\leq 3}\left|(1/t)\Omega_{ij}h^{1}_{\alpha\beta}\right|^{2}+|\partial_{y}h^{1}_{\alpha\beta}|^{2}\,dxdy\] and with \(\Gamma\in\{\Omega_{\mathbf{ij}},\Omega_{\mathbf{0i}}\}\) \[|\Gamma^{\leq N_{1}}h^{1,\flat}_{\alpha\beta}(t,x)|\leq C\epsilon(1+t)^{-1+\gamma}(1+|t-|x||)^{\gamma},\] \[\|t^{\frac{3}{2}}\partial_{y}^{\leq 1}(\partial^{I}\Gamma^{J}h^{1,\flat}_{\alpha\beta})\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}+\|t^{\frac{1}{2}}s\partial_{tx}(\partial^{I}\Gamma^{J}h^{1,\flat}_{\alpha\beta})\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}\leq C\epsilon s^{\gamma},\quad|I|+|J|\leq N_{1}+1,\;|J|\leq N_{1}.\] In the above energy functional, \(\underline{\nabla}=(\underline{\partial}_{1},\ldots,\underline{\partial}_{4})\) denotes the tangent gradient (to the hyperboloids) with \(\underline{\partial}_{\mathbf{i}}=\partial_{\mathbf{i}}+(x_{i}/t)\partial_{t}\) for \(\mathbf{i}=1,2,3\) and \(\underline{\partial}_{4}=\partial_{4}\). Klainerman-Sobolev inequalities on hyperboloids permit us to deduce pointwise decay bounds for the solution, as for any given smooth function \(U\) one has \[\sup_{\mathbb{S}^{1}}|\nabla_{tx}U(t,x,y)|\leq C(1+t)^{-1}(1+|t-|x||)^{-1/2}\sum_{|I|\leq 3}E^{i}(s,Z^{I}U)^{1/2},\] \[\sup_{\mathbb{S}^{1}}|\underline{\nabla}U(t,x,y)|\leq C(1+t)^{-3/2}\sum_{|I|\leq 3}E^{i}(s,Z^{I}U)^{1/2}.\] Note that the latter inequality shows, again, that tangential derivatives enjoy better decay estimates than usual derivatives. We postpone the explanation of why we use the above hierarchy of energy assumptions to later in this section. #### 1.4.3. Estimates on inhomogeneities: null and weak-null terms Once energy bounds and pointwise decay bounds are available, one has to estimate the trilinear and quartic terms appearing in the right hand side of the energy inequalities.
These involve the source terms of the equation satisfied by the differentiated coefficients \(Z^{K}h^{1}_{\alpha\beta}\) \[\tilde{\square}_{g}Z^{K}h^{1}_{\alpha\beta}=F^{K}_{\alpha\beta}+F^{0,K}_{\alpha\beta}\] where \[F^{K}_{\alpha\beta}=Z^{K}F_{\alpha\beta}(h)(\partial h,\partial h)-[Z^{K},H^{\mu\nu}\partial_{\mu}\partial_{\nu}]h^{1}_{\alpha\beta},\qquad F^{0,K}_{\alpha\beta}=Z^{K}\tilde{\square}_{g}h^{0}_{\alpha\beta}\] are semilinear quadratic interactions. The explicit inhomogeneous terms \(F^{0,K}_{\alpha\beta}\) and the differentiated cubic terms \(Z^{K}G_{\alpha\beta}(h)(\partial h,\partial h)\) are short range perturbations of the linear equations. We do not discuss them here as they cause no issue in the analysis. The differentiated null terms \(Z^{K}\mathbf{Q}_{\alpha\beta}(\partial h,\partial h)\) are also easily controlled, thanks to the following well-known property \[|Q_{0}(\partial\psi,\partial\varphi)|+|Q_{\alpha\beta}(\partial\psi,\partial\varphi)| \lesssim|\overline{\partial}\psi||\partial\varphi|+|\partial\psi||\overline{\partial}\varphi|\] \[\lesssim|\underline{\partial}\psi||\partial\varphi|+|\partial\psi||\underline{\partial}\varphi|+(s/t)^{2}|\partial\psi||\partial\varphi|\] and the better behavior of tangential derivatives. The quadratic interactions that are more delicate to treat and require special attention are the differentiated weak null terms \(Z^{K}P_{\alpha\beta}(\partial h,\partial h)\) and the commutator terms \([Z^{K},H^{\mu\nu}\partial_{\mu}\partial_{\nu}]h^{1}_{\alpha\beta}\). The particular structure of such terms was first highlighted by Lindblad and Rodnianski [38, 39, 40] in the 4-dimensional setting and shows all its potential in the null frame \(\mathscr{U}=\{L,\underline{L},S^{1},S^{2}\}\cup\{\partial_{y}\}\), where \(L=\partial_{t}+\partial_{r}\), \(\underline{L}=\partial_{t}-\partial_{r}\) and \(S^{1},S^{2}\) are smooth vector fields tangent to the spheres \(\mathbb{S}^{2}=\{u\in\mathbb{R}^{3}:u\cdot x/|x|=0\}\). As concerns the weak null terms, one sees that if the metric tensor is expressed with respect to \(\mathscr{U}\) then \[P_{\alpha\beta}(\partial h,\partial h)\sim(\partial h_{TU})^{2}+\partial h_{LL}\,\partial h_{\underline{LL}},\qquad T\in\mathscr{T},\,U\in\mathscr{U}\] where \(\mathscr{T}=\{L,S^{1},S^{2}\}\cup\{\partial_{y}\}\) denotes the frame tangent to the flat outgoing cones. On the one hand, the choice of gauge (in particular the _wave coordinate condition_) ensures that the derivatives of \(h_{LT}\) coefficients are well behaved, as they satisfy \[|\partial h_{LT}|\lesssim|\overline{\partial}h|+\mathscr{O}(h\cdot\partial h). \tag{1.11}\] On the other hand, the metric coefficients \(h_{TU}\) solve quasilinear wave equations whose source terms are null or cubic. In the exterior region, we exploit this property to prove that the higher order weighted energies of such coefficients grow at a slower rate \(t^{C\epsilon}\), where \(\epsilon\ll\sigma\) is the size of the data. From this we infer an improved pointwise decay for \(\partial Z^{K}h_{TU}\) with \(|K|\ll N\) and the following weighted \(L^{2}\) bound for the differentiated weak null terms \[\sum_{i=0}^{1}\left\|(2+r-t)^{\frac{1}{2}+i+\kappa}\partial^{i}Z^{\leq N-i}P_{\alpha\beta}\right\|_{L^{2}}\lesssim\epsilon^{2}t^{-1+C\epsilon}+\mathscr{O}(\epsilon^{2}t^{-1-}).\] The above estimate shows that the weak null terms contribute to a slow growth of the exterior energies.
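To illustrate the gain coming from the classical null forms quoted above, it is worth recalling the elementary computation behind it (a standard calculation, independent of the specific system at hand). Writing \(\partial_{\boldsymbol{i}}=\underline{\partial}_{\boldsymbol{i}}-(x_{\boldsymbol{i}}/t)\partial_{t}\), one finds \[Q_{0\boldsymbol{i}}(\partial\psi,\partial\varphi)=\partial_{t}\psi\,\partial_{\boldsymbol{i}}\varphi-\partial_{\boldsymbol{i}}\psi\,\partial_{t}\varphi=\partial_{t}\psi\,\underline{\partial}_{\boldsymbol{i}}\varphi-\underline{\partial}_{\boldsymbol{i}}\psi\,\partial_{t}\varphi,\] since the two terms proportional to \((x_{\boldsymbol{i}}/t)\partial_{t}\psi\,\partial_{t}\varphi\) cancel each other, while the same substitution applied to \(Q_{0}\) leaves the factor \((t^{2}-r^{2})/t^{2}=(s/t)^{2}\) in front of the only term carrying two \(\partial_{t}\) derivatives. Every quadratic null form thus produces at least one hyperboloid-tangential derivative or a factor \((s/t)^{2}\), which is precisely the content of the bound displayed above.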
In the interior region, the enhanced pointwise bounds satisfied by the derivatives of \(Z^{K}h^{1}_{TU}\) for \(|K|\ll N\) are instead obtained directly from the equations they satisfy, using integration along characteristics as done in [40]. This approach is possible provided that we already have at our disposal suitable bounds on the solution in the exterior region. #### 1.4.4. Commutator terms in the exterior region The commutator terms also display an important structure when expressed with respect to the null frame. The tensor \(H^{\mu\nu}\) is decomposed as follows \[H^{\mu\nu}:=H^{0,\mu\nu}+H^{1,\mu\nu},\quad H^{0,\boldsymbol{\mu\nu}}:=-\chi\Big{(}\frac{r}{t}\Big{)}\chi(r)\frac{M}{r}\delta^{\boldsymbol{\mu\nu}},\quad H^{0,44}=0, \tag{1.12}\] where \(H^{0,\mu\nu}\) is the "Schwarzschild part" of \(H\). The estimates of \([Z^{K},H^{0,\mu\nu}\partial_{\mu}\partial_{\nu}]h^{1}_{\alpha\beta}\) are straightforward and, similar to the weak null terms discussed above, responsible for a slow growth of the exterior energy. The estimates of the commutator involving coefficients \(H^{1,\mu\nu}\) are instead obtained using the fact that, for any tensor \(\pi^{\mu\nu}\) and function \(\psi\), \[|\pi^{\mu\nu}\partial_{\mu}\partial_{\nu}\psi|\lesssim|\pi_{LL}||\partial^{2}\psi|+|\pi||\partial\overline{\partial}\psi|\] so that either the tensor coefficient is a "good" coefficient \(\pi_{LL}\) or one of the two derivatives acting on \(\psi\) is a tangential derivative. As highlighted above, in the exterior region the enhanced behaviour of second order derivatives \(\partial\overline{\partial}\) as well as of \(\partial^{2}\) is encoded in the energy assumptions. What is more, weighted Hardy type inequalities and weighted Sobolev-Hardy inequalities allow us to get a good control of the higher order weighted \(L^{2}\) norms, as well as to recover good pointwise decay bounds, of the solution with no derivatives. Suitable higher order weighted \(L^{2}\) estimates for these commutator terms in the exterior region follow then rather easily. #### 1.4.5. Commutator terms in the interior region A much more delicate analysis of the commutator terms is required in the interior region. On the one hand, the interior energy assumptions do not provide us with additional information on the second order derivatives and the interior energy functionals only give a \(\dot{H}^{1}\) type control on the differentiated solution. The classical Hardy inequality written on hyperboloids is \[\|r^{-1}U\|_{L^{2}(\mathscr{H}_{s})}\lesssim\|\underline{\partial}U\|_{L^{2}(\mathscr{H}_{s})}+\|\partial U\|_{L^{2}(\Sigma^{\mathrm{e}}_{t_{s}})}\] where \(\Sigma^{\mathrm{e}}_{t_{s}}\) is the exterior constant time slice that intersects the interior hyperboloid \(\mathscr{H}_{s}\) on the boundary between the two regions. Such an inequality provides us with a control of the \(L^{2}\) norm of the undifferentiated solution at the costly expense of an \(r^{-1}\) factor. On the other hand, no extra decay (in terms of the distance from the outgoing cones) is expected for the second order derivatives of the solution. In fact, the zero-average component of the solution \(h^{1,\natural}_{\alpha\beta}\) is a Klein-Gordon type function, in that each of its Fourier modes along the \(y\)-direction is a solution to a Klein-Gordon equation (see subsection 1.2).
As a consequence of this latter fact one only has \(|\partial^{2}h^{1,\natural}_{\alpha\beta}|+|\partial h^{1,\natural}_{\alpha\beta}|\lesssim(1+t+r)^{-3/2}\), which coupled with the above Hardy inequality gives \[\big{\|}Z^{K}h^{1,\flat}\cdot\partial^{2}h^{1,\natural}_{\alpha\beta}\big{\|}_{L^{2}(\mathscr{H}_{s})}\lesssim s^{-1/2}\big{(}\|\underline{\partial}h^{1,\flat}\|_{L^{2}(\mathscr{H}_{s})}+\|\partial h^{1,\flat}\|_{L^{2}(\Sigma^{\mathrm{e}}_{t_{s}})}\big{)}\lesssim s^{-1/2+\delta}.\] The same inequality holds if \(h^{1,\flat}\) is replaced by \((H^{1,\mu\nu})^{\flat}\). These "wave-Klein-Gordon" contributions to the commutator are the ones responsible for the \(s^{1+}\) growth of the higher order energies on \(\mathscr{H}_{s}\). They are, however, absent in the equations satisfied by the zero-modes \(Z^{K}h^{1,\flat}_{\alpha\beta}\), as for any two functions \(f,g\) one has \[(f\cdot g)^{\flat}=f^{\flat}\cdot g^{\flat}+\big{(}f^{\natural}\cdot g^{\natural}\big{)}^{\flat},\] therefore a much slower growth is expected for the higher order energies of \(h^{1,\flat}_{\alpha\beta}\). The above observation motivates the use of a hierarchy in the interior energy assumptions and the separate propagation of the higher order energy estimates for the zero-modes. To propagate the different interior energy assumptions, we then need to estimate the commutators \([Z^{K},\pi^{1,\mu\nu}\partial_{\mu}\partial_{\nu}]\phi\) separately for \(\pi=H^{1,\flat},H^{1,\natural}\) and \(\phi=h^{1,\flat}_{\alpha\beta},h^{1,\natural}_{\alpha\beta}\). The analysis is reasonably straightforward when \(\pi=H^{1,\natural}\) as we can rely on the Poincare inequality. When \(\pi=H^{1,\flat}\) the analysis is finer, as we express the metric coefficients \(H^{1,\mu\nu}\) relative to the null framework and all derivatives in terms of \(\partial_{t},\partial_{y}\) and of the tangential derivatives \(\underline{\partial}_{\boldsymbol{a}}\) to hyperboloids. Doing this, we see that \[|[Z^{K},(H^{1,\mu\nu})^{\flat}\partial_{\mu}\partial_{\nu}]\phi|\\ \lesssim|Z^{K}H^{1,\flat}_{LL}||\partial_{t}^{2}\phi|+|Z^{K}H^{1,\flat}_{4L}||\partial_{t}\partial_{y}\phi|+\frac{|t^{2}-r^{2}|}{t^{2}}|Z^{K}H^{1,\flat}||\partial_{t}^{2}\phi|+\frac{|Z^{K}H^{1,\flat}||\partial Z\phi|}{1+t+r}+\ldots\] The remarkable property of the above right hand side is that each quadratic term either contains one of the coefficients \(H^{1,\flat}_{LL}\) and \(H^{1,\flat}_{4L}\) - which are "good" as a consequence of the wave condition - or carries an extra decaying factor \(|t^{2}-r^{2}|/t^{2}\) or \((1+t+r)^{-1}\). Then suitable estimates on the \(L^{2}(\mathscr{H}_{s})\) norms of the above terms are obtained by using a Hardy inequality _a la_ Lindblad and Rodnianski with weights in \(t-r\), which allows us to better exploit the pointwise decay of our solution. We point the reader to subsection 4.6 for further details.
On the contrary, the framework arising naturally from hyperboloids \[\mathscr{F}=\Big{\{}\partial_{t},\quad\underline{\partial}_{\boldsymbol{a}}= \partial_{a}+\frac{x^{\boldsymbol{a}}}{t}\partial_{t}\Big{\}}\cup\{\partial_ {y}\},\] in which the "transversal field" (\(\partial_{t}\) in the above example) is not orthogonal to \(\mathbb{S}^{2}\times\mathbb{S}^{1}\), causes the analogue of the bad interaction \(\partial h_{\underline{LL}}\cdot\partial h_{TU}\) to appear and critically fails to give a useful expression for the weak null terms. This consideration leads us to adopt the null frame decomposition both in the exterior and the interior region and to combine it with the foliation by hyperboloids in the latter region. Indeed in this region, and when required, the metric coefficients are expressed with respect to the null frame \(\mathscr{U}\) (in order to use the enhanced behavior of \(h_{LT}\) and \(h_{TU}\) coefficients) while derivatives are written in terms of those in \(\mathscr{F}\) (in order to distinguish between the "good" tangential derivatives \(\underline{\partial}_{\boldsymbol{a}},\partial_{y}\) and the "bad" direction \(\partial_{t}\)). Note our approach is different from what was done in previous works on Einstein-Klein-Gordon systems. We finally mention that a different framework than the null one is used by Ionescu and Pausader [25], which is reminiscent of the div-curl decomposition of vector-fields in fluid models and is more compatible with the Fourier transform approach employed there. #### 1.4.7. The Einstein-Klein-Gordon equations We conclude by pointing out that our proof can be used, _mutatis mutandis_, to provide a new proof of the stability of the Minkowski solution to the Einstein-Klein-Gordon equations. To briefly illustrate this point, we recall that in a harmonic gauge the Einstein-Klein-Gordon equations read \[\tilde{\square}_{g}h_{\boldsymbol{\alpha}\boldsymbol{\beta}}=\tilde{F}_{ \boldsymbol{\alpha}\boldsymbol{\beta}}(h)(\partial h,\partial h)-2\Big{(} \partial_{\boldsymbol{\alpha}}\phi\partial_{\boldsymbol{\beta}}\phi+\frac{m^ {2}}{2}g_{\boldsymbol{\alpha}\boldsymbol{\beta}}\Big{)},\qquad\tilde{ \square}_{g}\phi=m^{2}\phi. \tag{1.13}\] These equations are posed on \(\mathbb{R}^{1+3}\), \(m>0\) is a constant parameter and \(h\) is a perturbation away from the Minkowski spacetime \(\mathbf{m}\) defined via \(g_{\boldsymbol{\alpha}\boldsymbol{\beta}}=\mathbf{m}_{\boldsymbol{\alpha} \boldsymbol{\beta}}-h_{\boldsymbol{\alpha}\boldsymbol{\beta}}\). The system (1.13) is much simpler to treat than (1.3). For example, without the \(\mathbb{S}^{1}\), the metric tensor \(h\) remains entirely wave-like and so all problematic wave-Klein-Gordon commutators no longer occur. The only Klein-Gordon field is \(\phi\) and it couples into the equation for the metric only via semilinear nonlinearities. This coupling is weak in the sense that the bootstrap assumptions for \(h_{\boldsymbol{\alpha}\boldsymbol{\beta}}\) and \(\phi\) can be propagated separately. To conclude, due to our choice of null framework, combined with the separate analysis used in the interior and exterior regions, our proof provides an alternative perspective from what was done in previous works on Einstein-Klein-Gordon systems in [25, 37]. ### Notation Below is a list of notation, some of which have already been introduced in the introduction, that we will use throughout the paper. 
Coordinates:

* \(\{x^{\alpha}\}_{\alpha=0,\ldots,4}\) with \(x^{0}=t\in\mathbb{R}\), \(x=(x^{1},x^{2},x^{3})\in\mathbb{R}^{3}\), \(x^{4}=y\in\mathbb{S}^{1}\) are the harmonic coordinates. They satisfy the geometric wave equation \(g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}x^{\alpha}=0\). We will always denote by \(r=|x|\) the radial component of \(x\);
* \(u=t+r\) and \(\underline{u}=t-r\) are the null coordinates. They are used in the exterior region.

Derivatives:

* \(\nabla_{txy}=(\partial_{0},\ldots,\partial_{4})\) denotes the spacetime gradient, with \(\partial_{\mu}=\partial/\partial x^{\mu}\). \(\nabla_{xy}\) denotes the full spatial gradient in \(\mathbb{R}^{3}\times\mathbb{S}^{1}\) while \(\nabla_{x}\) is the spatial gradient in \(\mathbb{R}^{3}\). \(\nabla_{tx}\) is the 4D spacetime gradient;
* \(\partial_{xy}\) denotes any of the derivatives \(\partial_{i}\) with \(i=1,\ldots,4\), while \(\partial_{x}\) denotes any of the derivatives \(\partial_{\boldsymbol{i}}\) with \(\boldsymbol{i}=1,2,3\). The definitions of \(\partial_{tx}\) and \(\partial_{txy}\) are similar. We will use \(\partial\) and \(\partial_{txy}\) interchangeably;
* \(\square_{xy}=-\partial_{t}^{2}+\Delta_{x}+\partial_{y}^{2}\) and \(\square_{x}=-\partial_{t}^{2}+\Delta_{x}\);
* \(\partial_{r}=(x^{i}/r)\partial_{i}\) denotes the radial derivative in \(\mathbb{R}^{3}\);
* \(\not{\partial}\) denotes any of the angular components \(\not{\partial}_{\boldsymbol{i}}=\partial_{\boldsymbol{i}}-(x_{\boldsymbol{i}}/r)\partial_{r}\) of \(\partial_{\boldsymbol{i}}\) for \(\boldsymbol{i}=1,2,3\);
* \(\partial_{u}=(1/2)(\partial_{t}+\partial_{r})\) and \(\partial_{\underline{u}}=(1/2)(\partial_{t}-\partial_{r})\) denote the null derivatives;
* \(\overline{\nabla}=(\overline{\partial}_{0},\ldots,\overline{\partial}_{4})=(\partial_{t}+\partial_{r},\not{\partial}_{\boldsymbol{i}},\partial_{y})\) denotes the tangent gradient to the cones \(\{t=r+1\}\times\mathbb{S}^{1}\). Moreover \(\overline{\nabla}_{x}=(\overline{\partial}_{0},\ldots,\overline{\partial}_{3})=(\partial_{t}+\partial_{r},\not{\partial}_{\boldsymbol{i}})\);
* \(\overline{\partial}\) denotes any of the tangent derivatives \(\overline{\partial}_{\alpha}\) in \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\), \(\overline{\partial}_{x}\) denotes any of the tangent derivatives \(\overline{\partial}_{\boldsymbol{\alpha}}\) in \(\mathbb{R}^{1+3}\);
* \(\underline{\nabla}=(\underline{\partial}_{1},\ldots,\underline{\partial}_{4})\) denotes the tangent gradient to the hyperboloids in \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\), with \(\underline{\partial}_{\boldsymbol{i}}=\partial_{\boldsymbol{i}}+(x^{i}/t)\partial_{t}\) and \(\underline{\partial}_{4}=\partial_{y}\). Moreover \(\underline{\nabla}_{x}=(\underline{\partial}_{1},\underline{\partial}_{2},\underline{\partial}_{3})\);
* \(\underline{\partial}\) denotes any of the tangent derivatives \(\underline{\partial}_{\boldsymbol{\alpha}},\underline{\partial}_{4}\) in \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\), \(\underline{\partial}_{x}\) denotes any of the tangent derivatives \(\underline{\partial}_{\boldsymbol{\alpha}}\) in \(\mathbb{R}^{1+3}\). Sometimes we will use \(\underline{\partial}_{0}=\partial_{t}\).

Products:

* Given a multi-index \(\alpha=(\alpha_{0},\alpha_{1},\ldots,\alpha_{4})\in\mathbb{N}^{5}\), its length is computed classically as \(|\alpha|=\sum_{i=0}^{4}\alpha_{i}\).
We set \(\partial^{\alpha}:=\partial_{0}^{\alpha_{0}}\partial_{1}^{\alpha_{1}}\partial_{2}^{\alpha_{2}}\partial_{3}^{\alpha_{3}}\partial_{4}^{\alpha_{4}}\) and \(\partial_{x}^{\alpha}:=\partial_{1}^{\alpha_{1}}\partial_{2}^{\alpha_{2}}\partial_{3}^{\alpha_{3}}\). The definitions of \(\partial_{xy}^{\alpha}\) and \(\partial_{tx}^{\alpha}\) are analogous;
* More generally, given a family of vector fields \(\{X_{1},\ldots,X_{n}\}\) and a multi-index \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{N}^{n}\), \(X^{\alpha}=X_{1}^{\alpha_{1}}\ldots X_{n}^{\alpha_{n}}\). With an abuse of notation we will sometimes write \(X^{k}\) (resp. \(X^{\leq k}\)) instead of \(\sum_{\alpha:|\alpha|=k}X^{\alpha}\) (resp. \(\sum_{\alpha:|\alpha|\leq k}X^{\alpha}\)).

Metrics:

* \(\overline{g}=-(dt)^{2}+\sum_{\boldsymbol{i}}(dx^{\boldsymbol{i}})^{2}+(dy)^{2}\) denotes the Kaluza-Klein metric on \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\);
* \(g\) denotes a solution of the Einstein equations (1.1) on \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\);
* \(\overline{g}^{\alpha\beta}\) and \(g^{\alpha\beta}\) denote the inverse of the metrics \(\overline{g}_{\alpha\beta}\) and \(g_{\alpha\beta}\) respectively. For any other arbitrary \(n\)-tensor \(\pi_{\alpha_{1}\ldots\alpha_{n}}\), indices are raised and lowered using \(\overline{g}\), e.g. \(\pi^{\alpha_{1}}{}_{\alpha_{2}\ldots\alpha_{n}}=\overline{g}^{\alpha_{1}\mu}\pi_{\mu\alpha_{2}\ldots\alpha_{n}}\);
* \(H^{\alpha\beta}=g^{\alpha\beta}-\overline{g}^{\alpha\beta}\) corresponds to the formal inverse of \(h_{\alpha\beta}=g_{\alpha\beta}-\overline{g}_{\alpha\beta}\). When \(h\) is sufficiently small we have \(H^{\alpha\beta}=-h^{\alpha\beta}+\mathscr{O}^{\alpha\beta}(h^{2})\).

Null Frame and Decomposition:

* \(L=\partial_{t}+\partial_{r}\) denotes the vector field tangent to the outgoing null cones in \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\). In components, \(L^{0}=1\), \(L^{i}=x^{i}/|x|\) and \(L^{4}=0\);
* \(\underline{L}=\partial_{t}-\partial_{r}\) denotes the vector field tangent to the incoming null cones in \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\).
In components, \(\underline{L}^{0}=1\), \(\underline{L}^{i}=-x^{i}/|x|\) and \(\underline{L}^{4}=0\);
* \(S^{1}\) and \(S^{2}\) denote orthogonal vector fields spanning the tangent space of the spheres \(t=const\), \(r=const\), \(y\in\mathbb{S}^{1}\);
* \(\mathscr{U}=\{L,\underline{L},S^{1},S^{2},\partial_{y}\}\) denotes the full null frame in \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\);
* \(\mathscr{T}=\{L,S^{1},S^{2},\partial_{y}\}\) denotes the tangent frame in \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\);
* \(\mathscr{L}=\{L\}\);
* For any vector field \(X\) and frame vector \(U\), \(X_{U}=X_{\alpha}U^{\alpha}\) where \(X_{\alpha}=\overline{g}_{\alpha\beta}X^{\beta}\);
* For any arbitrary vector field \(X=X^{\alpha}\partial_{\alpha}=X^{L}L+X^{\underline{L}}\underline{L}+X^{S^{1}}S^{1}+X^{S^{2}}S^{2}+X^{\partial_{y}}\partial_{y}\) where \(X^{L}=-(1/2)X_{\underline{L}}\), \(X^{\underline{L}}=-(1/2)X_{L}\), \(X^{A}=X_{A}\) for \(A=S^{1},S^{2},\partial_{y}\);
* For any \((0,2)\) tensor \(\pi\) and two vector fields \(X,Y\) \[\pi_{XY}=\pi_{\alpha\beta}X^{\alpha}Y^{\beta}.\] For any two families \(\mathscr{V},\mathscr{W}\) of vector fields, \(|\pi|_{\mathscr{V}\mathscr{W}}:=\sum_{V\in\mathscr{V},W\in\mathscr{W}}|\pi_{VW}|\);
* The metric \(\overline{g}\) has the following form relative to the null frame, note \(A,B\in\{S^{1},S^{2},\partial_{y}\}\), \[\overline{g}_{LL}=\overline{g}_{\underline{L}\,\underline{L}}=\overline{g}_{LA}=\overline{g}_{\underline{L}A}=0,\quad\overline{g}_{L\underline{L}}=\overline{g}_{\underline{L}L}=-2,\quad\overline{g}_{AB}=\delta_{AB}.\] As concerns the inverse metric, we have \[\overline{g}^{LL}=\overline{g}^{\underline{L}\,\underline{L}}=\overline{g}^{LA}=\overline{g}^{\underline{L}A}=0,\quad\overline{g}^{L\underline{L}}=\overline{g}^{\underline{L}L}=-1/2,\quad\overline{g}^{AB}=\delta^{AB}.\]

Admissible Vector Fields:

* \(\{\Gamma\}=\{\Omega_{\mathbf{ij}},\Omega_{0i}\}\) is the family of Klainerman vector fields, where \(\Omega_{\mathbf{ij}}=x_{i}\partial_{\mathbf{j}}-x_{j}\partial_{i}\), and \(\Omega_{0i}=t\partial_{i}+x_{i}\partial_{t}\);
* \(\{Z\}=\{\partial_{\mu},\Omega_{\mathbf{ij}},\Omega_{0j},\partial_{y}\}\) is the family of admissible vector fields;
* For any multi-index \(K=(I,J)\), we set \(Z^{K}=\partial^{I}\Gamma^{J}\). If \(|I|+|J|=n\) and \(|J|=k\), we say that \(K\) is a multi-index of type \((n,k)\).
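To fix ideas with the notation just introduced: for a \((0,2)\) tensor \(\pi\), the seminorm associated with the families \(\mathscr{L}=\{L\}\) and \(\mathscr{T}\) reads
\[|\pi|_{\mathscr{L}\mathscr{T}}=|\pi_{LL}|+|\pi_{LS^{1}}|+|\pi_{LS^{2}}|+|\pi_{L\partial_{y}}|,\]
while, for instance, \(Z^{K}=\partial_{t}\Omega_{01}\Omega_{\boldsymbol{23}}\) is an admissible product of vector fields corresponding to a multi-index \(K=(I,J)\) of type \((3,2)\).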
Commutators with the null frame:

* \([\Omega_{0j},\partial_{t}+\partial_{r}]=-\mathbf{\partial}_{\mathbf{j}}-\frac{x_{j}}{r}(\partial_{t}+\partial_{r}),\quad[\Omega_{0j},\mathbf{\partial}_{\mathbf{k}}]=\big{(}-\delta_{\mathbf{jk}}+\frac{x_{j}x_{k}}{r^{2}}\big{)}\big{[}(\partial_{t}+\partial_{r})+\frac{1}{r}\Omega_{0r}\big{]}\)
* \([\Omega_{\mathbf{ij}},\partial_{t}+\partial_{r}]=0\), \([\Omega_{\mathbf{ij}},\partial_{\mathbf{k}}]=-\delta_{\mathbf{ik}}\mathbf{\partial}_{\mathbf{j}}+\delta_{\mathbf{jk}}\mathbf{\partial}_{\mathbf{i}}\)
* \([\partial_{k},\partial_{t}+\partial_{r}]=\frac{\delta_{jk}}{r}\partial_{j}-\frac{x_{j}x_{k}}{r^{3}}\partial_{j}\)
* \([\Omega_{0j},\partial_{y}]=[\Omega_{\mathbf{ij}},\partial_{y}]=[\partial_{\alpha},\partial_{y}]=0\)

Commutators with the hyperbolic derivatives:

* \([\Omega_{0j},\partial_{t}]=-\partial_{j}\), \([\Omega_{0j},\underline{\partial}_{\boldsymbol{a}}]=-\frac{x_{\boldsymbol{a}}}{t}\underline{\partial}_{\boldsymbol{j}}\), \([\Omega_{0j},\underline{\partial}_{4}]=0\)
* \([\Omega_{\boldsymbol{ij}},\partial_{t}]=[\Omega_{\boldsymbol{ij}},\underline{\partial}_{4}]=0\), \([\Omega_{\boldsymbol{ij}},\underline{\partial}_{a}]=\delta_{\boldsymbol{aj}}\underline{\partial}_{i}-\delta_{\boldsymbol{i}\boldsymbol{a}}\underline{\partial}_{j}\)

Exterior Region:

* \(\widetilde{\mathscr{H}}=\{(t,x):(t-1)^{2}-r^{2}=1\}\times\mathbb{S}^{1}\) denotes the hyperboloid that separates the interior and exterior regions. It asymptotically approaches the cone \(\{t=r+1\}\times\mathbb{S}^{1}\);
* \(\mathscr{D}^{\mathrm{e}}:=\{(t,x):2\leq t\leq 1+\sqrt{1+r^{2}}\}\times\mathbb{S}^{1}\) denotes the exterior region;
* \(\mathscr{D}^{\mathrm{e}}_{T}\) denotes the portion of the exterior region in the time slab \([2,T)\);
* \(\Sigma^{\mathrm{e}}_{t}:=\{x\in\mathbb{R}^{3}:|x|\geq\sqrt{(t-1)^{2}-1}\}\times\mathbb{S}^{1}\) denotes a constant time slice in the exterior region;

Interior Region:

* \(\mathscr{D}^{\mathrm{i}}:=\{(t,x):t\geq 1+\sqrt{1+r^{2}}\}\times\mathbb{S}^{1}\) denotes the interior region;
* \(\mathscr{H}_{s}:=\{t^{2}-r^{2}=s^{2}\text{ and }t\geq r+1\}\times\mathbb{S}^{1}\) denotes a truncated hyperboloid in \(\mathbb{R}^{1+3}\times\mathbb{S}^{1}\);
* \(S_{s,r}:=\mathscr{H}_{s}\cap\{|x|=r\}\) is the two-sphere of radius \(r\) on the hyperboloid \(\mathscr{H}_{s}\);
* \(\mathscr{H}_{[s_{0},s]}:=\{(t,x,y)\in\mathscr{D}^{\mathrm{i}}:s_{0}^{2}\leq t^{2}-|x|^{2}\leq s^{2}\}\) denotes the hyperbolic slab in the interior region between \(\mathscr{H}_{s_{0}}\) and \(\mathscr{H}_{s}\) when \(s>2\);
* \(\mathscr{H}_{[s_{0},\infty)}:=\{(t,x,y)\in\mathscr{D}^{\mathrm{i}}:s_{0}^{2}\leq t^{2}-|x|^{2}\}\) is the unbounded portion of the interior region above some hyperboloid \(\mathscr{H}_{s_{0}}\).

### From the null frame to hyperbolic derivatives

Below are some useful formulas relating the null framework \(\mathscr{U}\) to the hyperbolic derivatives \(\underline{\partial}_{\boldsymbol{a}}\). We recall that \(s=\sqrt{t^{2}-r^{2}}\).
We have that \[L=\Big{(}1-\frac{r}{t}\Big{)}\partial_{t}+\frac{x^{\boldsymbol{j}}}{r}\underline{\partial}_{\boldsymbol{j}},\quad\underline{L}=\Big{(}1+\frac{r}{t}\Big{)}\partial_{t}-\frac{x^{\boldsymbol{j}}}{r}\underline{\partial}_{\boldsymbol{j}},\quad\not{\partial}_{\boldsymbol{j}}=\underline{\partial}_{\boldsymbol{j}}-\frac{x_{\boldsymbol{j}}x^{\boldsymbol{i}}}{r^{2}}\underline{\partial}_{\boldsymbol{i}} \tag{1.14}\] and \[UV=c_{UV}^{00}\partial_{t}^{2}+c_{UV}^{a0}\underline{\partial}_{a}\partial_{t}+c_{UV}^{0b}\partial_{t}\underline{\partial}_{b}+c_{UV}^{ab}\underline{\partial}_{a}\underline{\partial}_{b}+d_{UV}^{0}\partial_{t}+d_{UV}^{c}\underline{\partial}_{c},\qquad U,V\in\mathscr{U} \tag{1.15}\] where \[c_{LL}^{00}=(1-r/t)^{2},\ c_{L\underline{L}}^{00}=(1-r^{2}/t^{2}),\ c_{\underline{L}\,\underline{L}}^{00}=(1+r/t)^{2},\ c_{AU}^{00}=0\text{ for }A=\{S^{1},S^{2},\partial_{y}\}, \tag{1.16}\] \[c_{L\partial_{y}}^{04}=1-r/t,\ c_{\underline{L}\partial_{y}}^{04}=(1+r/t),\ c_{UV}^{04}=0\text{ otherwise}\] \[c_{\partial_{y}\partial_{y}}^{44}=1,\ c_{UV}^{44}=0\text{ otherwise}\] and \[|\partial^{I}\Gamma^{J}c_{L\underline{L}}^{\alpha\beta}|\lesssim_{IJ}(1+t+r)^{-|I|},\quad|\partial^{I}\Gamma^{J}d_{UV}^{\gamma}|\lesssim_{IJ}(1+t+r)^{-1-|I|},\qquad|I|\geq 0 \tag{1.17}\] For any tensor \(\pi\) we have the following relations \[4(t/s)^{2}\pi^{UV}c_{UV}^{00}=\pi_{\underline{L}\,\underline{L}}\frac{s^{2}}{(t+r)^{2}}+\pi_{LL}\frac{(t+r)^{2}}{s^{2}}+\pi_{L\underline{L}} \tag{1.18}\] and \[\begin{split}\pi^{\mu\nu}\partial_{\mu}\partial_{\nu}&=\pi^{UV}c_{UV}^{\mu\nu}\underline{\partial}_{\mu}\underline{\partial}_{\nu}+\pi^{UV}d_{UV}^{\mu}\underline{\partial}_{\mu}\\ &=\pi^{UV}\big{[}c_{UV}^{00}\partial_{t}^{2}+c_{UV}^{\boldsymbol{a}\beta}\underline{\partial}_{\boldsymbol{a}}\underline{\partial}_{\beta}+c_{UV}^{\boldsymbol{\alpha}\boldsymbol{b}}\underline{\partial}_{\boldsymbol{\alpha}}\underline{\partial}_{\boldsymbol{b}}+c_{UV}^{4\beta}\partial_{y}\underline{\partial}_{\beta}+d_{UV}^{\mu}\underline{\partial}_{\mu}\big{]}\\ \pi^{\boldsymbol{\mu}\boldsymbol{\nu}}\partial_{\boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}&=\pi^{UV}\big{[}c_{UV}^{00}\partial_{t}^{2}+c_{UV}^{\boldsymbol{\alpha}\boldsymbol{\beta}}\underline{\partial}_{\boldsymbol{a}}\underline{\partial}_{\boldsymbol{\beta}}+c_{UV}^{\boldsymbol{\alpha}\boldsymbol{b}}\underline{\partial}_{\boldsymbol{\alpha}}\underline{\partial}_{\boldsymbol{b}}+d_{UV}^{\boldsymbol{\mu}}\underline{\partial}_{\boldsymbol{\mu}}\big{]}.\end{split} \tag{1.19}\] For any smooth function \(u=u(t,x)\), we have the following inequalities \[|\partial\!\!\!/u|\lesssim\frac{1}{t}|\partial_{x}Z^{\leq 1}u|,\qquad|\overline{\partial}\partial u|\lesssim\Big{(}\frac{s}{t}\Big{)}^{4}|\partial^{2}u|+\Big{(}\frac{s}{t}\Big{)}^{2}\frac{1}{t}|\partial Z^{\leq 1}u|+\frac{1}{t}|\partial_{x}Z^{\leq 1}u|. \tag{1.20}\]

### Outline of the paper

The rest of the paper is organized in three main sections. Section 2 introduces some properties the metric coefficients inherit from the wave condition and which will be used throughout. Section 3 is devoted to performing the bootstrap argument in the exterior region and hence to proving the global existence of the solution to (1.8) there. In section 4 we perform the bootstrap argument in the interior region and conclude the proof of the main theorem. Two appendix sections follow: in section A we state and prove the exterior and interior energy inequalities, while section B contains a list of weighted Sobolev and Hardy inequalities.

## 2. The wave condition
The metric solution \(g\) to (1.1) satisfies, when written in harmonic coordinates \(\{x^{\mu}\}_{\mu}\), the _wave coordinate condition_ \[g^{\mu\nu}\Gamma^{\lambda}_{\mu\nu}=0,\quad\lambda=\overline{0,4}\] where \(\Gamma^{\lambda}_{\mu\nu}\) are the Christoffel symbols of \(g\) in the coordinates \(\{x^{\mu}\}_{\mu}\). The above equations are equivalent to each of the following \[\partial_{\mu}\big{(}g^{\mu\nu}\sqrt{|\det g|}\big{)}=0,\quad g^{\alpha\beta}\partial_{\alpha}g_{\beta\nu}=\frac{1}{2}g^{\alpha\beta}\partial_{\nu}g_{\alpha\beta},\quad\partial_{\alpha}g^{\alpha\nu}=\frac{1}{2}g_{\alpha\beta}g^{\nu\mu}\partial_{\mu}g^{\alpha\beta},\quad\nu=\overline{0,4}. \tag{2.1}\] These relations are particularly useful when written with respect to the null framework, as they allow us to recover additional information on metric coefficients \(H_{LT}\) (and hence on \(h_{LT}\)) for any \(T\in\mathscr{T}\), and to show that their derivatives have a special behavior compared to those of general coefficients \(H^{\mu\nu}\). This is the content of the following Lemmas, which are presented in a slightly different form than the ones in [40]. **Lemma 2.1**.: _Let \(g\) be a Lorentzian metric satisfying the wave coordinate condition relative to a coordinate system \(\{x^{\mu}\}_{\mu=0}^{4}\). Let \(K=(I,J)\) be any multi-index with positive length and assume that the perturbation tensor \(H^{\mu\nu}=g^{\mu\nu}-\bar{g}^{\mu\nu}\) satisfies the following_ \[|Z^{K^{\prime}}H|\lesssim C,\quad\forall\,|K^{\prime}|\leq\lfloor|K|/2\rfloor.\] _Then_ \[|\partial H|_{\mathscr{LT}}\lesssim|\overline{\partial}H|+|H||\partial H|\lesssim\Big{(}\frac{s}{t}\Big{)}^{2}|\partial H|+|\underline{\partial}_{xy}H|+|H||\partial H| \tag{2.2}\] _and in any region where \(r\gtrsim t\gtrsim 1\)_ \[\begin{split}|Z^{K}\partial H|_{\mathscr{LT}}+|\partial Z^{K}H|_{\mathscr{LT}}&\lesssim\sum_{|K^{\prime}|\leq|K|}\big{(}|\overline{\partial}Z^{K^{\prime}}H|+r^{-1}|Z^{K^{\prime}}H|\big{)}+\sum_{|K_{1}|+|K_{2}|\leq|K|}|Z^{K_{1}}H||\partial Z^{K_{2}}H|\\ &\lesssim\sum_{|K^{\prime}|\leq|K|}\Big{(}\Big{(}\frac{s}{t}\Big{)}^{2}|\partial Z^{K^{\prime}}H|+|\underline{\partial}_{xy}Z^{K^{\prime}}H|+r^{-1}|Z^{K^{\prime}}H|\Big{)}+\sum_{|K_{1}|+|K_{2}|\leq|K|}|Z^{K_{1}}H||\partial Z^{K_{2}}H|\end{split} \tag{2.3}\] _Similar estimates hold for the metric tensor \(h_{\mu\nu}=g_{\mu\nu}-\bar{g}_{\mu\nu}\)._ Proof.: We write \(g^{\mu\nu}\) in terms of the perturbation metric \(H^{\mu\nu}\). From the following equality \[g^{\mu\nu}\sqrt{|\mathrm{det}g|}=(\bar{g}^{\mu\nu}+H^{\mu\nu})\Big{(}1-\frac{1}{2}\mathrm{tr}H+\mathscr{O}(H^{2})\Big{)}\] and the wave condition (2.1) we obtain that \[\partial_{\mu}\Big{(}H^{\mu\nu}-\frac{1}{2}\bar{g}^{\mu\nu}\mathrm{tr}H+\mathscr{O}^{\mu\nu}(H^{2})\Big{)}=0,\quad\text{where }\mathscr{O}^{\mu\nu}(H^{2})=\mathscr{O}(|H|^{2}). 
\tag{2.4}\] The divergence of a vector field can be expressed relative to the null frame as follows \[\partial_{\mu}F^{\mu}=L_{\mu}\partial_{\underline{u}}F^{\mu}-\underline{L}_{ \mu}\partial_{u}F^{\mu}+A_{\mu}\partial_{A}F^{\mu},\quad A\in\{S_{1},S_{2}, \partial_{y}\} \tag{2.5}\] so setting \(\tilde{H}^{\mu\nu}:=H^{\mu\nu}-\frac{1}{2}\bar{g}^{\mu\nu}\mathrm{tr}H\) and contracting (2.4) with any \(T\in\mathscr{T}\) we deduce that \[\partial_{\underline{u}}H_{LT}=\partial_{\underline{u}}\tilde{H}_{LT}= \partial_{u}\tilde{H}_{\underline{LT}}-\partial_{A}\tilde{H}_{AT}+\mathscr{O} (H\cdot\partial H). \tag{2.6}\] The first of the above equalities follows from the fact that \(\bar{g}^{LL}=g^{LA}=0\). Relation (2.6) and the first two inequalities in (1.20) imply immediately (2.2). We now recall the commutators between any admissible vector field \(Z\) and the null frame, which can be summarized in the following formula \[[Z,\overline{\partial}_{\alpha}]=\sum_{\boldsymbol{\beta}=0}^{3}c^{ \boldsymbol{\beta}}_{Z\alpha}\overline{\partial}_{\boldsymbol{\beta}}+\sum_{ \boldsymbol{i}=1}^{3}d^{\boldsymbol{i}}_{Z\alpha}\frac{\partial_{\boldsymbol{ i}}}{r}+e_{Z\alpha}\frac{\Omega_{0r}}{r},\qquad c^{\boldsymbol{\beta}}_{Z \alpha},d^{\boldsymbol{i}}_{Z\alpha},e_{Z\alpha}=\mathscr{O}\big{(}\frac{x}{r} \big{)}\] where \(c^{\boldsymbol{\beta}}_{Z\alpha},d^{\boldsymbol{i}}_{Z\alpha},e_{Z\alpha}\) are smooth homogeneous functions of \(x\) such that \(c^{\boldsymbol{\beta}}_{\partial\alpha}=e_{\partial\alpha}=0\), \(d^{\boldsymbol{i}}_{\Gamma\alpha}=0\) and \[|\partial^{I}_{x}c^{\boldsymbol{\beta}}_{Z\alpha}|+|\partial^{I}d^{ \boldsymbol{i}}_{Z\alpha}|+|\partial^{I}e_{Z\alpha}|\lesssim r^{-|I|},\qquad| I|\geq 0.\] Using an induction argument on \(|K|\), one can show that for any sufficiently smooth function \(w\) the following inequality holds true whenever \(r\gtrsim t\) \[|[Z^{K},\overline{\partial}]w|\lesssim\sum_{|K^{\prime}|<|K|}(|\overline{ \partial}Z^{K^{\prime}}w|+r^{-1}|\partial Z^{K^{\prime}}w|)+\sum_{|K^{\prime}| \leq|K|}r^{-1}|Z^{K^{\prime}}w|. \tag{2.7}\] As concerns the commutators with the transverse vector field, we simply have \[|[Z^{K},\partial_{t}-\partial_{r}]w|\lesssim\sum_{|K^{\prime}|<|K|}|\partial Z ^{K^{\prime}}w|\lesssim\sum_{|K^{\prime}|<|K|}|\partial_{\underline{u}}Z^{K^{ \prime}}w|+|\overline{\partial}Z^{K^{\prime}}w|. \tag{2.8}\] In order to obtain (2.3), we apply \(Z^{K}\) vector fields to both sides of equality (2.6). Using (2.7) we find that \[|Z^{K}\partial_{\underline{u}}H|_{\mathscr{LT}}\lesssim\sum_{|K^{\prime}| \leq|K|}(|\overline{\partial}Z^{K^{\prime}}H|+r^{-1}|Z^{K^{\prime}}H|)+\sum_{ |K_{1}|+|K_{2}|\leq|K|}|Z^{K_{1}}H||\partial Z^{K_{2}}H|\] which, together with (2.8), yields \[|\partial_{\underline{u}}Z^{K}H|_{\mathscr{LT}}\lesssim \sum_{|K^{\prime}|\leq|K|}(|\overline{\partial}Z^{K^{\prime}}H|+r^{- 1}|Z^{K^{\prime}}H|)+\sum_{|K_{1}|+|K_{2}|\leq|K|}|Z^{K_{1}}H||\partial Z^{K_{2 }}H|\] \[+\sum_{|K^{\prime}|<|K|}|\partial_{\underline{u}}Z^{K^{\prime}}H|_ {\mathscr{LT}}.\] The conclusion of the proof of the first inequality in (2.3) then follows by induction on \(|K|\). The latter follows using also (1.20). Finally, inequalities (2.2) and (2.3) for \(h\) simply follow from the equality \(H^{\mu\nu}=-h^{\mu\nu}+\mathscr{O}(h^{2})\). Inequalities (2.2) and (2.3) hold true also for the tensor \(H^{1,\mu\nu}\) introduced in (1.12). 
**Lemma 2.2**.: _Under the same assumptions of the previous lemma, we have that_ \[\begin{split}|Z^{K}\partial H^{1}|_{\mathscr{L}\mathscr{T}}+| \partial Z^{K}H^{1}|_{\mathscr{L}\mathscr{T}}\lesssim\sum_{|K^{\prime}|\leq| K|}(|\overline{\partial}Z^{K^{\prime}}H^{1}|+r^{-1}|Z^{K^{\prime}}H^{1}|)\\ +\sum_{|K_{1}|+|K_{2}|\leq|K|}\hskip-14.226378pt|Z^{K_{1}}H^{1} ||\partial Z^{K_{2}}H^{1}|+\frac{M\chi_{0}\big{(}t/2\leq r\leq 3t/4\big{)}}{(1+t+r)^{ 2}}\end{split} \tag{2.9}\] _and_ \[\begin{split}|Z^{K}\partial H^{1}|_{\mathscr{L}\mathscr{T}}+| \partial Z^{K}H^{1}|_{\mathscr{L}\mathscr{T}}\lesssim\sum_{|K^{\prime}|\leq| K|}\Big{(}\frac{s}{t}\Big{)}^{2}|\partial Z^{K^{\prime}}H^{1}|+|\underline{ \partial}Z^{K^{\prime}}H^{1}|+r^{-1}|Z^{K^{\prime}}H^{1}|\\ +\sum_{|K_{1}|+|K_{2}|\leq|K|}\hskip-14.226378pt|Z^{K_{1}}H^{1} ||\partial Z^{K_{2}}H^{1}|+\frac{M\chi_{0}\big{(}t/2\leq r\leq 3t/4\big{)}}{(1+t+r)^{ 2}}\end{split} \tag{2.10}\] _where \(\chi_{0}\big{(}t/2\leq r\leq 3t/4\big{)}\) is a cut-off function supported for \(t/2\leq r\leq 3t/4\). Similar estimates hold true for \(h^{1}\)._ Proof.: We set \(\tilde{H}^{0,\mu\nu}:=H^{0,\mu\nu}-\frac{1}{2}\bar{g}^{\mu\nu}\mathrm{tr}(H^{0})\) and derive from the definition of \(H^{0}\) that \[\partial_{\mu}\tilde{H}^{0,\mu\nu}=2\chi^{\prime}\Big{(}\frac{r}{t}\Big{)} \chi(r)\frac{M}{t^{2}}\delta^{\nu 0}. \tag{2.11}\] We inject the above formula into (2.4) and obtain that \[Z^{K}\partial_{\mu}(H^{1,\mu\nu})=-Z^{K}\partial_{\mu}\mathscr{O}^{\mu\nu}(H^ {2})-Z^{K}\Big{(}2\chi^{\prime}\Big{(}\frac{r}{t}\Big{)}\chi(r)\frac{M}{t^{2}} \delta^{\nu 0}\Big{)}.\] Then the result of the statement follows using the same argument as in previous lemma's proof. Furthermore, from (1.12) a similar inequality can be proved for \(h^{1}_{\mu\nu}\). ## 3. The Exterior Region The goal of this section is to prove the existence in the exterior region \(\mathscr{D}^{\mathrm{e}}\) of the solution \(h^{1}_{\alpha\beta}\) of (1.8) with data satisfying the hypothesis of theorem 1.2. The proof is based on a bootstrap argument in which the a-priori assumptions are bounds on the higher order weighted energies of \(h^{1}_{\alpha\beta}\), introduced below. For any fixed \(\kappa>0\), we define the exterior weighted energy functional of \(h^{1}_{\alpha\beta}\) as \[E^{\mathrm{e},\kappa}(t,h^{1}_{\alpha\beta})=\iint_{\{|x|\geq t- 1\}\times\mathbb{S}^{1}}(2+|x|-t)^{1+2\kappa}|\nabla_{txy}h^{1}_{\alpha\beta}( t,x,y)|^{2}dxdy\\ +\int_{2}^{t}\iint_{\{|x|\geq\tau-1\}\times\mathbb{S}^{1}}(2+|x|- t)^{2\kappa}|\overline{\nabla}h^{1}_{\alpha\beta}(\tau,x,y)|^{2}dxdyd\tau\] and denote \(E^{\mathrm{e},\kappa}(t,h^{1})=\sum_{\alpha,\beta}E^{\mathrm{e},\kappa}(t,h^{1}_{ \alpha\beta})\). We fix \(N\in\mathbb{N}\) with \(N\geq 7\) and assume the existence of a positive constant \(C_{0}\) and of some small parameters \(0<\sigma<\kappa/3\ll 1\) such that the solution \(h^{1}\) of (1.8) exists in \(\mathscr{D}^{\mathrm{e}}_{T_{0}}\) and for all \(t\in[2,T_{0})\) it satisfies \[E^{\mathrm{e},\kappa}(t,Z^{\leq N}h^{1})^{1/2}\leq 2C_{0}\epsilon t ^{\sigma} \tag{3.2}\] \[E^{\mathrm{e},1+\kappa}(t,\partial Z^{\leq N}h^{1})^{1/2}\leq 2C_ {0}\epsilon t^{\sigma}. \tag{3.1}\] The result we want to prove here affirms the following **Proposition 3.1**.: _Let \(N\in\mathbb{N}\) with \(N\geq 6\) be fixed. 
There exists a constant \(C_{0}\) sufficiently large, \(0<\epsilon_{0}\ll 1\) sufficiently small and a universal positive constant \(C\) such that, for every \(0<\epsilon<\epsilon_{0}\) if \(h^{1}\) is a solution of (1.8) in the time interval \([2,T_{0})\) and satisfies the bounds (3.1)-(3.2) for all \(t\in[2,T_{0})\), then in the same interval it actually satisfies_ \[E^{\mathrm{e},\kappa}(t,Z^{\leq N}h^{1})^{1/2}\leq C_{0}\epsilon t ^{\frac{\sigma}{2}+CC_{0}\epsilon} \tag{3.4}\] \[E^{\mathrm{e},1+\kappa}(t,\partial Z^{\leq N}h^{1})^{1/2}\leq C_ {0}\epsilon t^{\frac{\sigma}{2}+CC_{0}\epsilon}. \tag{3.3}\] The time \(T_{0}\) in the statement of the above proposition is arbitrary and one can hence infer that the solution exists globally in \(\mathscr{D}^{\mathrm{e}}\). We also observe that, as a consequence of the energy assumptions (3.1)-(3.2), there exists an integrable function \(l\in L^{1}([2,T_{0}))\) such that \[\left\|(2+r-t)^{\frac{1}{2}+\kappa}\ \partial Z^{\leq N}h^{1} \right\|_{L^{2}(\Sigma^{\mathrm{e}}_{t})}\leq 2C_{0}\epsilon t^{\sigma} \tag{3.6}\] \[\left\|(2+r-t)^{\kappa}\ \overline{\partial}Z^{\leq N}h^{1} \right\|_{L^{2}(\Sigma^{\mathrm{e}}_{t})}\leq 2C_{0}\epsilon\sqrt{l(t)}t^{\sigma}\] (3.7) \[\left\|(2+r-t)^{\frac{3}{2}+\kappa}\ \partial^{2}Z^{\leq N}h^{1} \right\|_{L^{2}(\Sigma^{\mathrm{e}}_{t})}\leq 2C_{0}\epsilon t^{\sigma}\] (3.8) \[\left\|(2+r-t)^{1+\kappa}\ \overline{\partial}\partial Z^{\leq N}h^{1} \right\|_{L^{2}(\Sigma^{\mathrm{e}}_{t})}\leq 2C_{0}\epsilon\sqrt{l(t)}t^{ \sigma}. \tag{3.5}\] The first step to recover the enhanced bounds (3.3)-(3.4) is to compare the equation satisfied by the differentiated unknown \(Z^{K}h^{1}_{\alpha\beta}\) for any \(K=(I,J)\) with \(|K|\leq N+1\), with the linear inhomogeneous equation (A.1). The commutation of \(Z^{K}\) with equation (1.8) shows that \(Z^{K}h^{1}_{\alpha\beta}\) solves \[\tilde{\square}_{g}Z^{K}h^{1}_{\alpha\beta}=F^{K}_{\alpha\beta}+F^{0,K}_{ \alpha\beta},\qquad F^{0,K}_{\alpha\beta}=Z^{K}\tilde{\square}_{g}h^{0}_{ \alpha\beta} \tag{3.9}\] with source term \(F^{K}_{\alpha\beta}\) given by \[F^{K}_{\alpha\beta}=Z^{K}F_{\alpha\beta}(h)(\partial h,\partial h)-[Z^{K},H^{ \mu\nu}\partial_{\mu}\partial_{\nu}]h^{1}_{\alpha\beta}. \tag{3.10}\] The second step consists in recovering suitable pointwise decay estimates and \(L^{2}\) estimates for tensors \(h_{\alpha\beta}\) and \(H^{\alpha\beta}\) and their derivatives. Our aim is in fact to apply energy inequality (A.2) with \(\mathbf{W}=Z^{K}h^{1}_{\alpha\beta}\), \(\mathbf{F}=F^{K}_{\alpha\beta}+F^{0,K}_{\alpha\beta}\) and \(w(q)=(2+r-t)^{\frac{1}{2}+i+\kappa}\) with \(i=0,1\) depending on \(K\). Such estimates allow us, on the one hand, to justify the use of (A.2) and, on the other, to suitably estimate the different contributions to the right hand side of such energies, and hence to propagate (3.1)-(3.2). The derivation of these bounds is the content of the following subsections. ### Pointwise bounds A first set of pointwise decay bounds for the metric perturbation \(h^{1}_{\alpha\beta}\), as well as for tensor \(H^{1,\alpha\beta}\), are obtained from the a-priori energy assumptions (3.1)-(3.2) via the weighted Sobolev and Hardy embeddings stated in appendix B. As concerns the mass term, a straightforward computation using directly the expression of \(h^{0}_{\alpha\beta}\) in (1.7) shows that for all \((t,x,y)\in\mathbb{R}^{1+3}\times\mathbb{S}^{1}\) \[\sup_{\mathbb{S}^{1}}|\partial^{I}Z^{J}h^{0}|\lesssim\epsilon(1+r)^{-1-|I|}. 
\tag{3.11}\] **Proposition 3.2**.: _Let us define the weighted pointwise norm_ \[|u(t)|_{\lambda}:=\sup_{(x,y)\in\Sigma^{e}_{t}}(1+t+r)(2+r-t)^{1+\lambda}|u(t, x,y)|.\] _Assume that the solution \(h^{1}_{\alpha\beta}\) of (1.8) exists in the time interval \([2,T_{0})\) and satisfies (3.1)-(3.2) for all \(t\in[2,T_{0})\). Then the following estimates hold true in \(\mathscr{D}^{e}_{T_{0}}\)_ \[|\partial Z^{\leq N-3}h^{1}|_{\kappa}+|\partial^{2}Z^{\leq N-3}h^{1}|_{1/2+ \kappa}\lesssim C_{0}\epsilon t^{\sigma} \tag{3.12}\] \[|\overline{\partial}Z^{\leq N-3}h^{1}|_{\kappa-1/2}+|\overline{\partial} \partial Z^{\leq N-3}h^{1}|_{\kappa}\lesssim C_{0}\epsilon t^{\sigma}\sqrt{l( t)} \tag{3.13}\] \[|Z^{\leq N-2}h^{1}|_{\kappa-1}\lesssim C_{0}\epsilon t^{\sigma} \tag{3.14}\] \[|Z^{\leq N-3}h^{1,\natural}|_{\kappa-1/2}\lesssim C_{0}\epsilon t^{\sigma}. \tag{3.15}\] _Estimates (3.12)-(3.14) hold true also for tensor \(H^{1,\alpha\beta}\)._ Proof.: Estimate (3.12) (resp. (3.13)) for the second order derivatives \(\partial^{2}\) (resp. \(\overline{\partial}\partial\)) of \(Z^{\leq N-3}h^{1}\) follows from the energy bound (3.7) (resp. (3.8)) and from inequality (B.2) applied with \(\beta=1+\lambda\) and \(\lambda=1/2+\kappa\) (resp. \(\lambda=\kappa\)). Estimate (3.12) (resp. (3.13)) for the first order derivatives \(\partial\) (resp. \(\overline{\partial}\)) of \(Z^{\leq N-3}h^{1}\) follows from the energy bound (3.7) (resp. (3.8)) and inequality (B.5) applied with \(\beta=1+\kappa\) (resp. \(\beta=1/2+\kappa\)), and estimate (3.15) follows using in addition the Poincare inequality. Estimate (3.14) follows from the energy bound (3.5) and inequality (B.5) with \(\beta=\kappa\). Finally, one can show that estimates (3.12)-(3.14) hold true for \(H^{1,\alpha\beta}\) using (1.9). As a result of inequality (2.9) and the pointwise bounds we just obtained, we can show that the metric coefficients \(h^{1}_{LT}\) satisfy enhanced pointwise decay estimates compared to those in Proposition 3.2. **Proposition 3.3**.: _Under the assumptions of Proposition 3.2, we have_ \[|\partial Z^{\leq N-3}h^{1}_{LT}(t,x,y)|\lesssim C_{0}\epsilon \big{[}(1+t+r)^{-1+\sigma}\sqrt{l(t)}(2+r-t)^{-\frac{1}{2}-\kappa}+(1+t+r)^{- 2+2\sigma}(2+r-t)^{-\kappa}\big{]} \tag{3.16}\] \[|Z^{\leq N-4}h^{1}_{LT}(t,x,y)|\lesssim C_{0}\epsilon(1+t+r)^{-\frac{3}{2}+2 \sigma}(2+r-t)^{\frac{1}{2}-\kappa} \tag{3.17}\] _and_ \[\|Z^{\leq N}h^{1}_{LT}(t,r)\|_{L^{2}(\mathbb{S}^{2}\times\mathbb{S}^{1})} \lesssim\frac{C_{0}\epsilon t^{\sigma}}{t^{\kappa-\mu}(2+r-t)^{\mu}}\Big{(} \frac{\sqrt{l(t)}}{r^{1/2}}+\frac{1}{r}\Big{)}. \tag{3.18}\] _The same bounds are satisfied by \(H^{1}_{LT}\)._ Proof.: From relation (1.9) and pointwise bounds (3.11), (3.12), (3.14), it is clear that the estimates (3.16) and (3.17) for \(h^{1}_{LT}\) are also satisfied by \(H^{1}_{LT}\). Bound (3.16) follows immediately from inequality (2.9) coupled with (3.12)-(3.14). The proof of (3.17) requires more work because a naive integration of (3.16) along the integral curves of \(\partial_{t}-\partial_{r}\) does not produce the required result due to the factor \(\sqrt{l(t)}\) (it would if this was replaced by the explicit decay \(t^{-1/2}\)). Of course, the estimate is satisfied in the region where \(r\geq 2t\) simply after (3.14). 
We then restrict our attention to the portion of exterior region for which \(r<2t\) and proceed as follows: * first, we recover a better bound for \(\partial Z^{\leq N-4}h^{1}_{LT}\) than the one in (3.16), in which \(\sqrt{l(t)}\) is replaced by a decay \(t^{-\frac{1}{2}+}\). This is obtained by the integration of \(\underline{\partial}_{r}\partial Z^{\leq N-4}h^{1}_{LT}\) along hyperboloids in some dyadic time slab, where \(\underline{\partial}_{r}=\frac{x^{j}}{tr}\Omega_{0\boldsymbol{j}}\); * then, we deduce the desired estimate on \(Z^{\leq N-4}h^{1}_{LT}\) by integration of the bounds obtained in step 1 along the integral curves of \(\partial_{\underline{u}}\). _Step 1._ From the relation \(\underline{\partial}_{r}=\frac{x^{j}}{tr}\Omega_{0\boldsymbol{j}}\) and inequality (3.16), we see that in fact \[|\underline{\partial}_{r}\partial Z^{\leq N-4}h^{1}_{LT}|\lesssim( 1+t+r)^{-1}|\partial Z^{\leq N-3}h^{1}_{LT}|\\ \lesssim C_{0}\epsilon(1+t+r)^{-2+\sigma}\sqrt{l(t)}(2+r-t)^{- \frac{1}{2}-\kappa}+C_{0}\epsilon(1+t+r)^{-3+2\sigma}(2+r-t)^{-\kappa}.\] Moreover, thanks to the pointwise bound (3.12) we have that on the cone \(r=2t\) \[|\partial Z^{\leq N-4}h^{1}_{LT}(t,x,y)|\lesssim C_{0}\epsilon(1+t+r)^{-2- \kappa+\sigma}.\] We dyadically decompose the time interval \([2,T_{0})=\cup_{k=1}^{k_{0}}[2^{k},2^{k+1})\cap[2,T_{0})\) where \(k_{0}\sim\ln_{2}T_{0}\) and denote \(\mathscr{C}^{e}_{k}\) the portion of the exterior region in the time slab \([2^{k},2^{k+1})\), so that \(\mathscr{C}^{e}=\cup_{k=1}^{k_{0}}\mathscr{C}^{e}_{k}\). Inequality (3.16) and the fact that \(l\in L^{1}([2,T_{0}))\) imply the existence, for every fixed \(k\), of a time \(\tau_{k}\in[2^{k},2^{k+1})\cap[2,T_{0})\) such that \[|\partial Z^{\leq N-4}h^{1}_{LT}(\tau_{k},x,y)|\lesssim C_{0}\epsilon\tau_{k}^ {-\frac{3}{2}+\sigma}(2+r-\tau_{k})^{-\frac{1}{2}-\kappa}+C_{0}\epsilon\tau_{ k}^{-2+2\sigma}(2+r-\tau_{k})^{-\kappa}.\] For every fixed \((t,x,y)\in\mathscr{C}^{e}_{k}\), we then integrate \(\underline{\partial}_{r}\partial Z^{\leq N-4}h^{1}_{LT}\) along the integral curve \(\tau\mapsto\gamma(\tau)\) of \(\underline{\partial}_{r}\) passing through \((t,x,y)\)9 until its first intersection with \(\{\tau=\tau_{k}\}\cup\{|w|=2\tau\}\). We denote \((\tau_{k}^{*},x_{k}^{*},y)\) the point at which such an intersection occurs first and observe that \(\tau_{k}^{*}\sim 2^{k}\sim t\). We deduce that Footnote 9: These are the hyperboloids \(\{\tau^{2}-|w|^{2}=t^{2}-r^{2}\}\). If \(t=r\) they degenerate into the cone \(\{\tau-|w|=t-r\}\). \[|\partial Z^{\leq N-4}h^{1}_{LT}(t,x,y)|\leq|\partial Z^{N-4}h^{1}_{LT}( \tau_{k}^{*},x_{k}^{*},y)|+\int_{\tau_{k}^{*}}^{t}|\underline{\partial}_{r} \partial Z^{\leq N-4}h^{1}_{LT}(\gamma(\tau))|d\tau\\ \lesssim C_{0}\epsilon(1+t+r)^{-\frac{3}{2}+\sigma}(2+r-t)^{- \frac{1}{2}-\kappa}+C_{0}\epsilon(1+t+r)^{-2+2\sigma}(2+r-t)^{-\kappa}. \tag{3.19}\] _Step 2._ We now integrate (3.19) along the integral lines of \(\partial_{\underline{u}}\), up to \(\mathscr{B}=\{r=2t\}\cup\{t=2\}\). 
After (3.14), we have that \[|Z^{\leq N-2}h^{1}|_{\mathscr{B}}\lesssim C_{0}\epsilon(1+t+r)^{-2}(2+r-t).\] Therefore, from (3.19) we get that \[|Z^{\leq N-4}h^{1}_{LT}(t,x,y)|\leq|Z^{\leq N-4}h^{1}_{LT}(\lambda^{ *}_{k},x^{*}_{k},y)|+\int_{\lambda^{*}_{k}}^{t}|\partial_{\underline{u}}Z^{\leq N -4}h^{1}_{LT}(\gamma(\tau))|d\tau\] \[\lesssim C_{0}\epsilon(1+t+r)^{-2}(2+r-t)+(1+t+r)^{-\frac{3}{2}+ \sigma}\int_{\lambda^{*}_{k}}^{t}C_{0}\epsilon(2+t+r-2\tau)^{-\frac{1}{2}- \kappa}d\tau\] \[+(1+t+r)^{-2+2\sigma}\int_{\lambda^{*}_{k}}^{t}C_{0}\epsilon(2+t+ r-2\tau)^{-\kappa}d\tau\] and \[|Z^{\leq N-4}h^{1}_{LT}(t,x,y)|\lesssim C_{0}\epsilon(1+t+r)^{-\frac{3}{2}+2 \sigma}(2+r-t)^{\frac{1}{2}-\kappa}.\] As concerns the proof of (3.18), we begin by applying inequality (B.6) with \(\beta=\mu\) to \(Z^{\leq N}h^{1}_{LT}\). We get that \[(2+r-t)^{2\mu}r^{2}\|Z^{\leq N}h^{1}_{LT}\|^{2}_{L^{2}(\mathbb{S}^{2}\times \mathbb{S}^{1})}\lesssim\iint_{\Sigma^{\rm e}_{t}}(2+r-t)^{1+2\mu}(\partial Z ^{\leq N}h^{1}_{LT})^{2}dxdy.\] We decompose the above right hand side into \[\iint_{\Sigma^{\rm e}_{t}\setminus\Sigma^{\rm e}_{2t}}(2+r-t)^{1+2\mu}( \partial Z^{\leq N}h^{1}_{LT})^{2}dxdy+\iint_{\Sigma^{\rm e}_{2t}}(2+r-t)^{1+2 \mu}(\partial Z^{\leq N}h^{1}_{LT})^{2}dxdy.\] The integral over \(\Sigma^{\rm e}_{2t}\) is simply estimated using the energy bound (3.5) as follows \[\iint_{\Sigma^{\rm e}_{2t}}(2+r-t)^{1+2\mu}(\partial Z^{\leq N}h^{1}_{LT})^{2 }dxdy\lesssim t^{2(\mu-\kappa)}\iint_{\Sigma^{\rm e}_{t}}(2+r-t)^{1+2\kappa}| \partial Z^{\leq N}h^{1}|^{2}dxdy\lesssim\epsilon^{2}t^{2(\mu-\kappa+\sigma)}.\] The integral over \(\Sigma^{\rm e}_{t}\setminus\Sigma^{\rm e}_{2t}\) is estimated using (2.9) for \(h^{1}\) \[\iint_{\Sigma^{\rm e}_{t}\setminus\Sigma^{\rm e}_{2t}}(2+r-t)^{1+ 2\mu}(\partial Z^{\leq N}h^{1}_{LT})^{2}dxdy\] \[\lesssim\iint_{\Sigma^{\rm e}_{t}\setminus\Sigma^{\rm e}_{2t}}(2+ r-t)^{1+2\mu}M^{2}\chi^{2}_{0}(t/2\leq r\leq 3t/4)r^{-4}dxdy\] \[+\iint_{\Sigma^{\rm e}_{t}\setminus\Sigma^{\rm e}_{2t}}(2+r-t)^{1+ 2\mu}\Big{(}|\overline{\partial}Z^{\leq N}h^{1}|^{2}+r^{-2}|Z^{\leq N}h^{1}|^{2 }+\hskip-11.381102pt\sum_{|K_{1}|+|K_{2}|\leq N}\hskip-11.381102pt|Z^{K_{1}}h^{ 1}|^{2}|\partial Z^{K_{2}}h^{1}|^{2}\Big{)}dxdy,\] where \(\chi_{0}(t/2\leq r\leq 3t/4)\) is a smooth cut-off function supported in \(t/2\leq r\leq 3t/4\). We observe that the portion of such support contained in the exterior region is bounded. 
Therefore we get the following: - from the smallness assumption on \(M\) and the above observation \[\iint_{\Sigma^{\rm e}_{t}\setminus\Sigma^{\rm e}_{2t}}(2+r-t)^{1+2\mu}M^{2} \chi^{2}_{0}(t/2\leq r\leq 3t/4)r^{-4}dxdy\lesssim\epsilon^{2}t^{-4};\] - from the energy bound (3.6) \[\iint_{\Sigma^{\rm e}_{t}\setminus\Sigma^{\rm e}_{2t}}(2+r-t)^{1+2 \mu}|\overline{\partial}Z^{\leq N}h^{1}|^{2}dxdy\\ \lesssim t^{1+2(\mu-\kappa)}\iint_{\Sigma^{\rm e}_{t}}(2+r-t)^{2 \kappa}|\overline{\partial}Z^{\leq N}h^{1}|^{2}dxdy\lesssim\epsilon^{2}t^{1+2( \mu-\kappa+\sigma)}l(t);\] - from inequality (B.4) with \(\beta=2\kappa-1\) and the energy bound (3.5) \[\iint_{\Sigma^{\rm e}_{t}\setminus\Sigma^{\rm e}_{2t}}(2+r-t)^{1+2 \mu}r^{-2}|Z^{\leq N}h^{1}|^{2}dxdy\lesssim t^{2(\mu-\kappa)}\iint_{\Sigma^{ \rm e}_{t}\setminus\Sigma^{\rm e}_{2t}}(2+r-t)^{2\kappa-1}|Z^{\leq N}h^{1}|^{ 2}dxdy\\ \lesssim t^{2(\mu-\kappa)}\iint_{\Sigma^{\rm e}_{t}}(2+r-t)^{1+2 \kappa}|\partial Z^{\leq N}h^{1}|^{2}dxdy\lesssim\epsilon^{2}t^{2(\mu-\kappa+ \sigma)};\] - from the energy bound (3.5) and the pointwise bound (3.14) that \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |K_{1}|\leq|N/2|\end{subarray}}\iint_{\Sigma^{\rm e}_{t}\setminus\Sigma^{\rm e }_{2t}}(2+r-t)^{1+2\mu}|Z^{K_{1}}h^{1}|^{2}|\partial Z^{K_{2}}h^{1}|^{2}dxdy\\ \lesssim\epsilon^{2}t^{2\sigma}\iint_{\Sigma^{\rm e}_{t}\setminus \Sigma^{\rm e}_{2t}}(2+r-t)^{1+2(\mu-\kappa)}r^{-2}|\partial Z^{\leq N}h^{1}|^{ 2}dxdy\lesssim\epsilon^{4}t^{-2+4\sigma};\] - finally, from the decay bound (3.12), inequality (B.4) and the energy bound (3.5) \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |K_{2}|\leq|N/2|\end{subarray}}\iint_{\Sigma^{\rm e}_{t}\setminus\Sigma^{\rm e }_{2t}}(2+r-t)^{1+2\mu}|Z^{K_{1}}h^{1}|^{2}|\partial Z^{K_{2}}h^{1}|^{2}dxdy\\ \lesssim\epsilon^{2}t^{2\sigma}\iint_{\Sigma^{\rm e}_{t}\setminus \Sigma^{\rm e}_{2t}}(2+r-t)^{1+2(\mu-\kappa)}r^{-2}|Z^{\leq N}h^{1}|^{2}dxdy\\ \lesssim\epsilon^{2}t^{-2+2\sigma}\iint_{\Sigma^{\rm e}_{t}}(2+r-t )^{1+2\kappa}|\partial Z^{\leq N}h^{1}|^{2}dxdy\lesssim\epsilon^{4}t^{-2+4 \sigma}.\] Summing up, \[\iint_{\Sigma^{\rm e}_{t}}(2+r-t)^{1+2\mu}(\partial Z^{\leq N}h^{1}_{LT})^{2} dxdy\lesssim C^{2}_{0}\epsilon^{2}t^{2(\mu-\kappa+\sigma)}(1+t\,l(t))\] which concludes the proof of (3.18). ### The null and cubic terms The combination of the energy assumptions and the decay bounds obtained in proposition 3.2 yield easily the following weighted \(L^{2}(\Sigma^{\rm e}_{t})\) estimates of the differentiated null and cubic terms. **Proposition 3.4**.: _Fix \(i=0,1\). Under the a-priori energy assumptions (3.1)-(3.2) we have_ \[\|(2+r-t)^{\frac{1}{2}+i+\kappa}\partial^{i}Z^{\leq N}\mathbf{Q}_{\alpha\beta }(\partial h,\partial h)\|_{L^{2}(\Sigma^{\rm e}_{t})}\lesssim C^{2}_{0} \epsilon^{2}t^{-1+2\sigma}\sqrt{l(t)}+C^{2}_{0}\epsilon^{2}t^{-2+2\sigma} \tag{3.20}\] _and_ \[\|(2+r-t)^{\frac{1}{2}+i+\kappa}\partial^{i}Z^{\leq N}G_{\alpha\beta}(h)( \partial h,\partial h)\|_{L^{2}(\Sigma^{\rm e}_{t})}\lesssim C^{3}_{0} \epsilon^{3}t^{-2+3\sigma}. \tag{3.21}\] Proof.: We write \(h=h^{1}+h^{0}\) and inject this decomposition into \(\mathbf{Q}_{\alpha\beta}\) and \(G_{\alpha\beta}\). We prove estimates (3.20) and (3.21) for null and cubic interactions involving only \(h^{1}\)-factors. The remaining interactions, i.e. those involving at least one \(h^{0}\)-factor, can be easily treated thanks to (3.11) so we leave the details to the reader. 
It is well-known that the admissible vector fields \(Z\) preserve the null structure, in the sense that for any null form \(Q\) \[ZQ(\partial\phi,\partial\psi)=Q(\partial Z\phi,\partial\psi)+Q(\partial\phi, \partial Z\psi)+\tilde{Q}(\partial\phi,\partial\psi)\] where \(\tilde{Q}\) is also a null form. Together with the fundamental property \[|Q(\partial\phi,\partial\psi)|\lesssim|\overline{\partial}\phi||\partial \psi|+|\partial\phi||\overline{\partial}\psi|,\] it implies that for \(i=0,1\) \[|\partial^{i}Z^{\leq N}\mathbf{Q}_{\alpha\beta}(\partial h^{1},\partial h^{1 })|\lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |I_{1}|+|I_{2}|=i\end{subarray}}|\overline{\partial}\partial^{I_{1}}Z^{K_{1}} h^{1}||\partial\partial^{I_{2}}Z^{K_{2}}h^{1}|.\] We observe that at least one of the two indexes in the above summation has length smaller than \(\lfloor N/2\rfloor\). Therefore, if \(N\) is sufficiently large (e.g. \(N\geq 6\)) so that \(\lfloor N/2\rfloor\leq N-3\) we deduce the following: - from (3.13) and (3.5) - from (3.12) and (3.6) \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |K_{2}|\leq\lfloor N/2\rfloor\end{subarray}}\|(2+r-t)^{\frac{1}{2}+i+\kappa} \overline{\partial}Z^{K_{1}}h^{1}\ \partial\partial^{i}Z^{K_{2}}h^{1}\|_{L^{2}(\Sigma^{ \mathrm{e}}_{t})}\] \[\lesssim C_{0}\epsilon t^{-1+\sigma}\sqrt{l(t)}\sum_{j\leq i}\|(2 +r-t)^{\frac{(i+j)}{2}}\partial\partial^{j}Z^{\leq N}h^{1}\|_{L^{2}(\Sigma^{ \mathrm{e}}_{t})}\lesssim C_{0}^{2}\epsilon^{2}t^{-1+2\sigma}\sqrt{l(t)};\] - from (3.12) and (3.8) \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N-1\\ |K_{2}|\leq\lfloor(N-1)/2\rfloor\end{subarray}}\|(2+r-t)^{\frac{3}{2}+\kappa} \overline{\partial}\partial Z^{K_{1}}h^{1}\ \partial Z^{K_{2}}h^{1}\|_{L^{2}(\Sigma^{ \mathrm{e}}_{t})}\] \[\lesssim C_{0}\epsilon t^{-1+\sigma}\|(2+r-t)^{\frac{1}{2}} \overline{\partial}\partial Z^{\leq N-1}h^{1}\|_{L^{2}(\Sigma^{\mathrm{e}}_{t} )}\lesssim C_{0}^{2}\epsilon^{2}t^{-1+2\sigma}\sqrt{l(t)}.\] As concerns the cubic terms, we have that \[|\partial^{i}Z^{\leq N}G_{\alpha\beta}(h^{1})(\partial h^{1},\partial h^{1})| \lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|+|K_{3}|\leq N\\ |I_{1}|+|I_{2}|+|I_{3}|=i\end{subarray}}|\partial^{I_{1}}Z^{K_{1}}h^{1}||\partial \partial^{I_{2}}Z^{K_{2}}h^{1}||\partial\partial^{I_{3}}Z^{K_{3}}h^{1}|.\] From (3.12), inequality (B.4) with \(\beta=2\kappa-1\) and (3.5), we deduce \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|+|K_{3}|\leq N\\ |K_{2}|+|K_{3}|\leq|N/2|\\ |I_{2}|+|I_{3}|=i\end{subarray}}\|(2+r-t)^{\frac{1}{2}+i+\kappa}Z^{K_{1}}h^{1} \,\partial\partial^{I_{2}}Z^{K_{2}}h^{1}\,\partial\partial^{I_{3}}Z^{K_{3}}h^ {1}\|_{L^{2}(\Sigma^{\mathrm{e}}_{t})}\] \[\lesssim C_{0}^{2}\epsilon^{2}t^{-2+2\sigma}\|(2+r-t)^{-\frac{3- i}{2}-\kappa}Z^{\leq N}h^{1}\|_{L^{2}(\Sigma^{\mathrm{e}}_{t})}\] \[\lesssim C_{0}^{2}\epsilon^{2}t^{-2+2\sigma}\|(2+r-t)^{\frac{1}{ 2}+\kappa}\partial Z^{\leq N}h^{1}\|_{L^{2}(\Sigma^{\mathrm{e}}_{t})}\lesssim C _{0}^{3}\epsilon^{3}t^{-2+3\sigma}.\] The other interactions are easier to treat and their estimates are obtained similarly to what has been done above for the quadratic terms. We leave the details to the reader. 
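We also recall how the pointwise property of null forms used above can be verified on the prototype \(Q_{0}(\partial\phi,\partial\psi):=\overline{g}^{\mu\nu}\partial_{\mu}\phi\,\partial_{\nu}\psi\). Expanding the inverse flat metric relative to the null frame gives
\[Q_{0}(\partial\phi,\partial\psi)=-\frac{1}{2}(L\phi)(\underline{L}\psi)-\frac{1}{2}(\underline{L}\phi)(L\psi)+\sum_{A\in\{S^{1},S^{2},\partial_{y}\}}(A\phi)(A\psi),\]
so that each product contains at least one tangential derivative, whence \(|Q_{0}(\partial\phi,\partial\psi)|\lesssim|\overline{\partial}\phi||\partial\psi|+|\partial\phi||\overline{\partial}\psi|\).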
An immediate consequence of the pointwise bounds (3.12)-(3.14) is the following: **Proposition 3.5**.: _Under the assumptions (3.1)-(3.2) we have that_ \[|Z^{\leq N-3}\mathbf{Q}_{\alpha\beta}(\partial h^{1},\partial h^{ 1})(t)|_{\frac{1}{2}+2\kappa}\lesssim C_{0}^{2}\epsilon^{2}t^{-1+2\sigma}\sqrt{ l(t)}, \tag{3.23}\] \[|Z^{\leq N-3}G_{\alpha\beta}(h^{1})(\partial h^{1},\partial h^{1} )(t)|_{1+3\kappa}\lesssim C_{0}^{3}\epsilon^{3}t^{-2+3\sigma}. \tag{3.22}\] ### The commutator terms The goal of this section is to get suitable weighted \(L^{2}(\Sigma^{\mathrm{e}}_{t})\) estimates of the commutator terms \([Z^{K},H^{\mu\nu}\partial_{\mu}\partial_{\nu}]h^{1}_{\alpha\beta}\) for \(|K|\leq N\). Such terms have a remarkable property when written in the null frame, which was first highlighted in [40]. We present below a slightly different version of this, which involves expanding first in the null frame before evaluating the commutators. **Lemma 3.6**.: _Let \(K\) be any fixed multi-index and assume that \(\pi^{\mu\nu}\) is a tensor satisfying_ \[|Z^{K^{\prime}}\pi|\leq C,\quad\forall\ |K^{\prime}|\leq\lfloor|K|/2\rfloor.\] _Then for any smooth function \(\phi\)_ \[|[Z^{K},\pi^{\mu\nu}\partial_{\mu}\partial_{\nu}]\phi|\lesssim \sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{2}|<|K|\end{subarray}}|Z^{K_{1}}\pi|_{\mathscr{L}\mathscr{L}}|\partial^{2 }Z^{K_{2}}\phi|+|Z^{K_{1}}\pi||\overline{\partial}\partial Z^{K_{2}}\phi|\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{|K_{1}|+|K_ {2}|\leq|K|}r^{-1}|Z^{K_{1}}\pi||\partial Z^{K_{2}}\phi| \tag{3.24}\] _and_ \[\left|[Z^{K},\pi^{\mu\nu}\partial_{\mu}\partial_{\nu}]\phi- \sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{2}|<|K|\end{subarray}}\left(Z^{K_{1}}\pi_{LL}\cdot\partial_{t}^{2}Z^{K_{2} }\phi+Z^{K_{1}}\pi_{4L}\cdot\partial_{y}\partial_{t}Z^{K_{2}}\phi+Z^{K_{1}} \pi_{44}\cdot\partial_{y}^{2}Z^{K_{2}}\phi\right)\right|\] \[\lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{2}|<|K|\end{subarray}}\frac{|t^{2}-r^{2}|}{t^{2}}|Z^{K_{1}}\pi||\partial^{2 }Z^{K_{2}}\phi|+|Z^{K_{1}}\pi||\partial\underline{\partial}_{x}Z^{K_{2}}\phi|+ \sum_{|K_{1}|+|K_{2}|\leq|K|}\frac{|Z^{K_{1}}\pi||\partial Z^{K_{2}}\phi|}{1+t +r}. \tag{3.25}\] Proof.: Let \(U,V\) denote any vector field in \(\mathscr{U}\). Inequality (3.24) follows from the following decomposition \[[Z^{K},\pi^{\mu\nu}\partial_{\mu}\partial_{\nu}]\phi=[Z^{K},\pi^{UV}UV]\phi= \hskip-14.226378pt\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{2}|<|K|\end{subarray}}\hskip-14.226378pt(Z^{K_{1}}\pi^{UV})UVZ^{K_{2}}\phi+ Z^{K_{1}}\pi^{UV}\ [Z^{K_{2}},UV]\phi \tag{3.26}\] and the fact that \[|[Z^{J},TU]\phi|\lesssim\sum_{|J^{\prime}|<|J|}|\overline{\partial}\partial Z ^{J^{\prime}}\phi|+\sum_{|J^{\prime\prime}|\leq|J|}r^{-1}|\partial Z^{J^{\prime \prime}}\phi|. 
\tag{3.27}\] Using instead (1.19) we find that10 Footnote 10: We recall that \(\underline{\partial}_{0}=\partial_{t}\) \[[Z^{K},\pi^{\mu\nu}\partial_{\mu}\partial_{\nu}]\phi=[Z^{K},\pi^{\mathbf{\mu \nu}}\partial_{\mathbf{\mu}}\partial_{\mathbf{\nu}}]\phi+[Z^{K},\pi^{4\nu}\partial_{ 4}\partial_{\nu}]\phi+[Z^{K},\pi^{\nu 4}\partial_{\nu}\partial_{4}]\phi\] \[=[Z^{K},\pi^{UV}c_{UV}^{00}\partial_{t}^{2}]\phi+[Z^{K},\pi^{UV}(c_{UV}^{\mathbf{ a}\mathbf{\beta}}\underline{\partial}_{\mathbf{a}}\underline{\partial}_{\mathbf{\beta}}+c_{UV}^{ \mathbf{\alpha}\mathbf{b}}\underline{\partial}_{\mathbf{\alpha}}\underline{\partial}_{\bm {b}}+d_{UV}^{\mu}\underline{\partial}_{\mu})]\phi\] \[+[Z^{K},\pi^{4U}\partial_{y}U]\phi+[Z^{K},\pi^{4U}U\partial_{4}]\phi\] where, thanks to the fact that \(|\partial^{I}\Gamma^{J}c_{LL}^{00}|\lesssim_{I,J}(t^{2}-r^{2})/t^{2}\) and \(\partial_{\mathbf{i}}=\underline{\partial}_{\mathbf{i}}-\frac{x_{i}}{t}\partial_{t}\), and up to homogeneous zero-order coefficients, we schematically have the following equalities \[[Z^{K},\pi^{UV}c_{UV}^{00}\partial_{t}^{2}]\phi=\hskip-14.226378pt\sum_{ \begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{2}|<|K|\end{subarray}}\hskip-14.226378ptZ^{K_{1}}\pi_{LL}\cdot\partial_{t} ^{2}Z^{K_{2}}\phi+\frac{|t^{2}-r^{2}|}{t^{2}}Z^{K_{1}}\pi\cdot\partial^{2}Z^{ K_{2}}\phi+Z^{K_{1}}\pi\cdot\partial\underline{\partial}_{x}Z^{K_{2}}\phi,\] \[[Z^{K},\pi^{UV}(c_{UV}^{\mathbf{a}\mathbf{\beta}}\underline{\partial}_{\mathbf{a}} \underline{\partial}_{\mathbf{\beta}}+c_{UV}^{\mathbf{\alpha}\mathbf{b}}\underline{ \partial}_{\mathbf{\alpha}}\underline{\partial}_{\mathbf{b}}+d_{UV}^{\mu}\underline{ \partial}_{\mu})]\phi=\hskip-14.226378pt\sum_{\begin{subarray}{c}|K_{1}|+|K_{2 }|\leq|K|\\ |K_{2}|<|K|\end{subarray}}\hskip-14.226378ptZ^{K_{1}}\pi\cdot\partial\underline{ \partial}_{x}Z^{K_{2}}\phi+\frac{Z^{K_{1}}\pi\cdot\partial Z^{K_{2}}\phi}{1+t +r},\] \[[Z^{K},\pi^{4U}\partial_{y}U]\phi=\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq| K|\\ |K_{2}|<|K|\end{subarray}}Z^{K_{1}}\pi_{4L}\cdot\partial_{y}\partial_{t}Z^{K_{2}} \phi+Z^{K_{1}}\pi_{44}\cdot\partial_{y}^{2}Z^{K_{2}}\phi+Z^{K_{1}}\pi\cdot \partial_{y}\underline{\partial}_{x}Z^{K_{2}}\phi.\] **Proposition 3.7**.: _Under the energy assumptions (3.1)-(3.2) we have for \(i=0,1\)_ \[\begin{split}\left\|(2+r-t)^{\frac{1}{2}+i+\kappa}[\partial^{i}Z ^{\leq N},H^{\mu\nu}\partial_{\mu}\partial_{\nu}]h^{1}_{\alpha\beta}\right\| _{L^{2}(\Sigma^{c}_{t})}&\lesssim\epsilon t^{-1}E^{e,i+\kappa}(t, \partial^{i}Z^{\leq N-i}h^{1}_{\alpha\beta})^{1/2}\\ +&\epsilon^{2}t^{-(\kappa-\rho)+2\sigma}\big{(}t^{-1/ 2}\sqrt{l(t)}+t^{-1}\big{)}.\end{split} \tag{3.28}\] Proof.: We set \(\phi=h^{1}_{\alpha\beta}\) in (3.24) and begin by observing that, for every \(K\) with \(|K|\leq N\), the terms in the last line of the right hand side have already been estimated. 
In fact, the cubic terms satisfy (3.21) and the following bound was obtained in the proof of proposition 3.14 \[\sum_{\begin{subarray}{c}i=0,1\\ |K_{1}|+|K_{2}|\leq N\\ |I_{1}|+|I_{2}|=i\end{subarray}}\hskip-14.226378pt\left\|(2+r-t)^{\frac{1}{2}+ i+\kappa}\,r^{-1}\partial^{I_{1}}Z^{K_{1}}h\cdot\partial\partial^{I_{2}}Z^{K_{2}}h \right\|_{L^{2}(\Sigma^{c}_{t})}\lesssim\epsilon^{2}t^{-\frac{3}{2}+2\sigma}.\] Using (3.11) and (3.7), (3.8), it is straightforward to prove that for \(i=0,1\) \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |I_{1}|+|I_{2}|=i\end{subarray}}\left\|(2+r-t)^{\frac{1}{2}+i+\kappa}\partial^{ I_{1}}Z^{K_{1}}h^{0}\cdot\partial^{2}\partial^{I_{2}}Z^{K_{2}}h^{1}\right\|_{L^{2}( \Sigma^{\mathrm{e}}_{t})}\lesssim\epsilon t^{-1}E^{\mathrm{e},i+\kappa}( \partial^{i}Z^{\leq N}h^{1})^{1/2},\] hence we only focus on estimating the terms of the first line in the right hand side of (3.24) with \(h\) replaced by \(h^{1}\). We choose exponents \((p_{1},p_{2})\) such that \[(p_{1},p_{2})=\begin{cases}(2,\infty),&\text{ if }|K_{1}|=N\\ (\infty,2),&\text{ if }|K_{2}|=N-1\\ (4,4),&\text{ otherwise.}\end{cases}\] From the Sobolev's injections \(H^{2}(\mathbb{S}^{2}\times\mathbb{S}^{1})\subset L^{\infty}(\mathbb{S}^{2} \times\mathbb{S}^{1})\) and \(H^{1}(\mathbb{S}^{2}\times\mathbb{S}^{1})\subset L^{4}(\mathbb{S}^{2}\times \mathbb{S}^{1})\) \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |I_{2}|=i\end{subarray}}\left\|(2+r-t)^{\frac{1}{2}+i+\kappa}Z^{K_{1}}h^{1} \cdot\overline{\partial}\partial\partial^{I_{2}}Z^{K_{2}}h^{1}\right\|_{L^{2 }(\Sigma^{\mathrm{e}}_{t})}^{2}\\ \leq\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |I_{2}|=i\end{subarray}}\int_{r\geq t-1}(2+r-t)^{1+2i+2\kappa}\|Z^{K_{1}}h^{1} \|_{L^{p_{1}}(\mathbb{S}^{2}\times\mathbb{S}^{1})}^{2}\|\overline{\partial} \partial\partial^{I_{2}}Z^{K_{2}}h^{1}\|_{L^{p_{2}}(\mathbb{S}^{2}\times \mathbb{S}^{1})}^{2}r^{2}dr\\ \lesssim\int_{r\geq t-1}(2+r-t)^{1+2i+2\kappa}\|Z^{\leq N}h^{1} \|_{L^{2}(\mathbb{S}^{2}\times\mathbb{S}^{1})}^{2}\|\overline{\partial} \partial Z^{\leq N-1}h^{1}\|_{L^{2}(\mathbb{S}^{2}\times\mathbb{S}^{1})}^{2} r^{2}dr\] so using the inequality (B.6) with \(\beta=\kappa\) and the energy assumptions (3.5), (3.6) we get \[\lesssim\left\|(2+r-t)^{\frac{1}{2}+\kappa}\partial Z^{\leq N}h^ {1}\right\|_{L^{2}(\Sigma^{\mathrm{e}}_{t})}^{2}\int_{r\geq t-1}(2+r-t)^{1+2i} r^{-2}\|\overline{\partial}\partial Z^{\leq N-1}h^{1}\|_{L^{2}(\mathbb{S}^{2} \times\mathbb{S}^{1})}^{2}r^{2}dr\] \[\lesssim t^{-1-2\kappa}\left\|(2+r-t)^{\frac{1}{2}+\kappa} \partial Z^{\leq N}h^{1}\right\|_{L^{2}(\Sigma^{\mathrm{e}}_{t})}^{2}\left\|(2 +r-t)^{1+\kappa}\overline{\partial}\partial Z^{\leq N-1}h^{1}\right\|_{L^{2} (\Sigma^{\mathrm{e}}_{t})}^{2}\lesssim\epsilon^{4}t^{-1-2\kappa+4\sigma}l(t).\] The same Sobolev's embeddings, coupled with the decay bound (3.18) and the energy assumption (3.5), also yield for \(i=0,1\) \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |I_{2}|=i\end{subarray}}\left\|(2+r-t)^{\frac{1}{2}+i+\kappa}Z^{K_{1}}h^{1}_{ LL}\cdot\partial^{2}\partial^{I_{2}}Z^{K_{2}}h^{1}\right\|_{L^{2}(\Sigma^{\mathrm{e}}_{t })}^{2}\] \[\lesssim\int_{r\geq t-1}(2+r-t)^{1+2i+2\kappa}\|Z^{\leq N}h^{1}_ {LL}\|_{L^{2}(\mathbb{S}^{2}\times\mathbb{S}^{1})}^{2}\|\partial\partial Z^{ \leq N-1}h^{1}\|_{L^{2}(\mathbb{S}^{2}\times\mathbb{S}^{1})}^{2}r^{2}dr\] \[\lesssim\epsilon^{2}t^{-2(\kappa-\rho)+2\sigma}(t^{-1}l(t)+t^{-2} )\left\|(2+r-t)^{\frac{1}{2}+i+\kappa}\partial\partial Z^{\leq N-1}h^{1}\right\| _{L^{2}(\Sigma^{\mathrm{e}}_{t})}^{2}\] 
\[\lesssim\epsilon^{4}t^{-2(\kappa-\rho)+4\sigma}(t^{-1}l(t)+t^{-2 }).\] Finally, when \(i=1\) the remaining terms to discuss are of the form \(\partial Z^{K_{1}}h^{1}_{LL}\cdot\partial^{2}Z^{K_{2}}h^{1}\) and \(\partial Z^{K_{1}}h^{1}\cdot\overline{\partial}\partial Z^{K_{2}}h^{1}\) but those behave like null terms (the former thanks to (2.9) for \(h^{1}\)) and hence satisfy (3.20). The details are left to the reader. We also have the following pointwise estimate of the commutator terms involving a smaller number of vector fields. It will be useful in the proof of Lemma 4.12. **Lemma 3.8**.: _Under the energy assumptions (3.1)-(3.2), there exists \(\delta^{\prime}>0\) such that_ \[\big{|}[Z^{\leq N-4},H^{1,\mu\nu}\partial_{\mu}\partial_{\nu}]h^{1}_{\alpha \beta}\big{|}_{\kappa}\lesssim C_{0}^{2}\epsilon^{2}t^{-2+2\sigma}\sqrt{l(t)}+ \epsilon^{2}t^{-2-\delta^{\prime}}(2+r-t)^{-\frac{1}{2}}. \tag{3.29}\] Proof.: We use (3.24) with \(\pi=H^{1}\) and \(\phi=h^{1}_{\alpha\beta}\). Pointwise bounds (3.12) and (3.17) yield \[\sum_{|K_{1}|+|K_{2}|\leq N-4}\big{|}Z^{K_{1}}H^{1}_{LL}\cdot\partial^{2}Z^{K_{ 2}}h^{1}_{\alpha\beta}\big{|}\lesssim C_{0}^{2}\epsilon^{2}t^{-2-\delta+\sigma }(2+r-t)^{-\frac{3}{2}-\kappa}\] for some \(\delta>\sigma\), bounds (3.13) and (3.14) give \[\sum_{|K_{1}|+|K_{2}|\leq N-4}\big{|}Z^{K_{1}}H^{1}\cdot\partial\overline{ \partial}Z^{K_{2}}h^{1}_{\alpha\beta}\big{|}\lesssim C_{0}^{2}\epsilon^{2}t^{ -2+2\sigma}\sqrt{l(t)}(2+r-t)^{-1-2\kappa}\] and finally (3.12) and (3.14) imply \[\sum_{|K_{1}|+|K_{2}|\leq N-4}r^{-1}\big{|}Z^{K_{1}}H^{1}\cdot\partial Z^{K_{2 }}h^{1}_{\alpha\beta}\big{|}\lesssim C_{0}^{2}\epsilon^{2}t^{-3+2\sigma}(2+r-t )^{-1-2\kappa}.\] The result of the statement follows by setting \(\delta^{\prime}=\delta-\sigma\). ### The \(h^{1}_{tu}\) coefficients In this subsection we show that, for any \(T\in\mathscr{T}\) and \(U\in\mathscr{U}\), the coefficients \(h^{1}_{TU}\) satisfy better energy bounds than (3.1), more precisely that for any fixed \(0<\rho<\kappa\) there exists some positive constant \(C\) such that \[E^{\mathrm{e},\kappa-\rho}(t,Z^{\leq N}h^{1}_{TU})^{1/2}\lesssim C_{0} \epsilon t^{C\epsilon},\qquad t\in[2,T_{0}). \tag{3.30}\] This estimate essentially follows from the fact that no weak null terms appear among the source terms in the equation satisfied by \(h^{1}_{TU}\). This can be simply seen by applying \(T^{\alpha}U^{\beta}\) to (1.8) and then commuting with \(Z^{K}\), which shows that \(Z^{K}h^{1}_{TU}\) is solution to \[\tilde{\Box}_{g}Z^{K}h^{1}_{TU}=F^{K}_{TU}+F^{0,K}_{TU},\qquad F^{0,K}_{TU}= F^{0,K}_{\alpha\beta}T^{\alpha}U^{\beta} \tag{3.31}\] with source term \(F^{K}_{TU}\) given by \[\begin{split} F^{K}_{TU}&=-[Z^{K},H^{\mu\nu} \partial_{\mu}\partial_{\nu}]h^{1}_{TU}+Z^{K}F_{TU}(h)(\partial h,\partial h) \\ &+\sum_{|K^{\prime}|\leq|K|}C^{\boldsymbol{i}\alpha\beta}_{TU,K^ {\prime}}\not{\partial_{i}}Z^{K^{\prime}}h^{1}_{\alpha\beta}+D^{\alpha\beta }_{TU,K^{\prime}}Z^{K^{\prime}}h^{1}_{\alpha\beta}\\ &+\sum_{|K_{1}|+|K_{2}|\leq|K|}E^{\boldsymbol{i}\alpha\beta}_{TU \mu,K_{1}K_{2}}Z^{K_{1}}H^{\mu\nu}\cdot\not{\partial_{i}}Z^{K_{2}}h^{1}_{ \alpha\beta}+F^{\alpha\beta}_{TU\mu,K_{1}K_{2}}Z^{K_{1}}H^{\mu\nu}\cdot Z^{K_{ 2}}h^{1}_{\alpha\beta}\end{split} \tag{3.32}\] and smooth coefficients \(C^{i\alpha\beta}_{TU,K^{\prime}},E^{i\alpha\beta}_{TU\mu,K_{1}K_{2}}=O(r^{-1})\), \(D^{\alpha\beta}_{TU,K^{\prime}},F^{\alpha\beta}_{TU\mu,K_{1}K_{2}}=O(r^{-2})\). 
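The lower order terms in (3.32) can be traced back to derivatives falling on the frame components: schematically, since the components of \(T\) and \(U\) are smooth and homogeneous of degree zero in \(x\),
\[\tilde{\square}_{g}\big{(}T^{\alpha}U^{\beta}h^{1}_{\alpha\beta}\big{)}=T^{\alpha}U^{\beta}\,\tilde{\square}_{g}h^{1}_{\alpha\beta}+2g^{\mu\nu}\partial_{\mu}\big{(}T^{\alpha}U^{\beta}\big{)}\partial_{\nu}h^{1}_{\alpha\beta}+g^{\mu\nu}\partial_{\mu}\partial_{\nu}\big{(}T^{\alpha}U^{\beta}\big{)}h^{1}_{\alpha\beta},\]
with \(|\partial(T^{\alpha}U^{\beta})|=\mathscr{O}(r^{-1})\) and \(|\partial^{2}(T^{\alpha}U^{\beta})|=\mathscr{O}(r^{-2})\) in the exterior region; commuting further with \(Z^{K}\) preserves these decay rates, which accounts for the size of the coefficients \(C,D,E,F\) above.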
Besides the additional terms arising from the commutation of vector fields \(T\) and \(U\) with the reduced wave operator, the main difference between the source terms \(F^{K}_{\alpha\beta}\) and \(F^{K}_{TU}\) lies in the fact that the latter is a linear combination of quadratic null terms and cubic terms only, as \(P_{TU}=P_{\alpha\beta}T^{\alpha}U^{\beta}\) and \[|P_{TU}(\phi,\psi)|\lesssim|\overline{\partial}\phi||\partial\psi|+|\partial \psi||\overline{\partial}\psi|. \tag{3.33}\] We compare (3.31) with equation (A.1). Thanks to the smallness of \(H\) provided by (3.11) and (3.17), we can apply the result of proposition A.1 with \(\mathbf{W}=Z^{K}h^{1}_{TU}\), \(\mathbf{F}=F^{K}_{TU}+F^{0,K}_{TU}\), \(w(q)=(2+r-t)^{1+2(\kappa-\rho)}\) and \(t_{1}=2,t_{2}=t\). We obtain that \[E^{\mathrm{e},\kappa-\rho}(t,Z^{K}h^{1}_{TU})\] \[+\int_{\widetilde{\mathscr{R}}_{2,t}}\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! From (3.11), the Hardy inequality (B.4) with \(\beta=2\kappa-1\) and estimate (3.5), we have that \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |I_{2}|=i\end{subarray}}\|(2+r-t)^{\frac{1}{2}+i+\kappa}Z^{K_{1}}h^{1,\mu\nu} \partial_{\mu}\partial_{\nu}\partial^{I_{2}}Z^{K_{2}}h^{0}_{\alpha\beta}\|_{L ^{2}(\Sigma^{e}_{t})}\] \[\lesssim\epsilon\|(2+r-t)^{\frac{1}{2}+i+\kappa}r^{-3-i}Z^{K_{1}}h^{1,\mu\nu} \|_{L^{2}(\Sigma^{e}_{t})}\lesssim\epsilon t^{-2}\|(2+r-t)^{\frac{1}{2}+\kappa }\partial Z^{K_{1}}h^{1,\mu\nu}\|_{L^{2}(\Sigma^{e}_{t})}\lesssim\epsilon^{2}t ^{-2+\sigma}.\] The cubic terms \(\mathscr{O}^{\mu\nu}(h^{2})\partial_{\mu}\partial_{\nu}h^{0}_{\alpha\beta}\) verify similar estimates, the details are left to the reader. In order to estimate the contributions due to the curved background, we first highlight the following relations. 
**Lemma 3.10**.: _For any sufficiently smooth function \(\phi\) we have_ \[\partial^{\alpha}H_{\alpha}^{\ \sigma}\,\partial_{\sigma}\phi=- \frac{1}{2}(\partial_{\underline{u}}H_{LL}-\partial_{u}H_{\underline{L}L}+ \partial_{A}H_{AL})L\phi+(\partial^{\alpha}H_{\alpha}^{\ \ T})T\phi \tag{3.38}\] \[\partial_{t}H_{\alpha}^{\ \sigma}\,\partial_{\sigma}\phi\, \partial^{\alpha}\phi=\frac{1}{4}\partial_{t}H_{LL}(\underline{L}\phi)^{2}+ \partial_{t}H^{T\alpha}\,(\partial_{\alpha}\phi)(T\phi)\] (3.39) \[H^{\rho\sigma}\partial_{\rho}\phi\,\partial_{\sigma}\phi=\frac{1 }{4}H_{LL}(\underline{L}\phi)^{2}+H^{T\alpha}(T\phi)(\partial_{\alpha}\phi)\] (3.40) \[(-H^{0\sigma}+\omega_{\boldsymbol{j}}H^{j\sigma})\partial_{\sigma }\phi=-\frac{1}{2}H_{LL}\,\underline{L}\phi+H_{L}^{\ \ \ T}(T\phi) \tag{3.37}\] Proof.: The proof of the above equalities follows after expressing all vector fields relative to the null frame \(\mathscr{U}=\{L,\underline{L},S^{1},S^{2},\partial_{y}\}\) and observing that \[-H^{0\sigma}+\omega_{\boldsymbol{j}}H^{j\sigma}=\bar{g}_{\mu\nu}L^{\mu}H^{\nu \sigma}=H_{L}^{\ \sigma}.\] **Lemma 3.11**.: _Under the a-priori assumptions (3.1)-(3.2) we have that for \(i=0,1\) and any multi-index \(K\) with \(|K|\leq N\)_ (3.41) \[\begin{split}&\iint_{\mathscr{D}^{e}_{[2,t]}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Proof.: We start by remarking that from inequality (2.2) and bounds (3.11) to (3.14) \[|\partial H_{LL}|+|\overline{\partial}H|\lesssim|\overline{\partial}h^{1}|+| \partial h^{0}|+|h||\partial h|\lesssim C_{0}\epsilon\Big{(}\frac{t^{\sigma} \sqrt{l(t)}}{r(2+r-t)^{\kappa}}+\frac{t^{2\sigma}}{r^{2}}\Big{)},\] which together with the energy bounds (3.5) and (3.7) implies \[\|(2+r-t)^{\frac{1}{2}+i+\kappa}(|\partial H_{LL}|+|\overline{\partial}H|) \partial\partial^{i}Z^{\leq N}h^{1}\|_{L^{2}(\Sigma^{\rm e}_{t})}\lesssim C_{0} ^{2}\epsilon^{2}t^{-1+2\sigma}\sqrt{l(t)}+C_{0}^{2}\epsilon^{2}t^{-2+3\sigma}.\] From the pointwise estimates (3.11), (3.12) and energy bounds (3.6), (3.8) we also have \[\|(2+r-t)^{\frac{1}{2}+i+\kappa}\partial H\cdot\overline{\partial}\partial^{ i}Z^{\leq N}h^{1}\|_{L^{2}(\Sigma^{\rm e}_{t})}\lesssim C_{0}^{2}\epsilon^{2}t^{-1+2 \sigma}\sqrt{l(t)}.\] The Cauchy-Schwartz inequality, relation (3.37) and the above estimates yield that for \(t\in[2,T_{0})\) \[\iint_{\mathscr{D}^{\rm e}_{[2,t]}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! (3.20) and (3.21) and the smallness of \(\epsilon\) it follows that \[\|(2+r-t)^{\frac{1}{2}+\kappa-\rho}Z^{\leq N}F_{TU}\|_{L^{2}(\Sigma^{\rm e}_{t} )}\lesssim C_{0}^{2}\epsilon^{2}t^{-1+2\sigma}\sqrt{l(t)}+C_{0}^{2}\epsilon^{2} t^{-2+3\sigma}.\] The only terms that still need to be addressed are the contributions to (3.32) arising from the commutation of the null frame with the reduced wave operator. Using the energy bounds (3.6) we see that \[\|(2+r-t)^{\frac{1}{2}+\kappa-\rho}C^{\boldsymbol{i}\alpha\beta}_{TU,K^{ \prime}}\cdot\not{\partial}_{\boldsymbol{i}}Z^{\leq N}h^{1}_{\alpha\beta}\|_{L ^{2}(\Sigma^{\rm e}_{t})}\lesssim\|(2+r-t)^{\frac{1}{2}+\kappa-\rho}r^{-1} \overline{\nabla}Z^{\leq N}h^{1}_{\alpha\beta}\|_{L^{2}(\Sigma^{\rm e}_{t})}\] \[\lesssim t^{-\frac{1}{2}-\rho}\|(2+r-t)^{\kappa}\overline{\nabla}Z^{\leq N}h^ {1}_{\alpha\beta}\|_{L^{2}(\Sigma^{\rm e}_{t})}\lesssim C_{0}\epsilon t^{- \frac{1}{2}-\rho+\sigma}\sqrt{l(t)}\] while from (3.5) and the weighted Hardy inequality (B.4) with \(\beta=2\kappa-1\) we get \[\|(2+r-t)^{\frac{1}{2}+\kappa-\rho}D^{\alpha\beta}_{TU,K^{\prime }}\cdot Z^{\leq N}h^{1}_{\alpha\beta}\|_{L^{2}(\Sigma^{\rm e}_{t})}\lesssim\| (2+r-t)^{\frac{1}{2}+\kappa-\rho}r^{-2}Z^{\leq N}h^{1}_{\alpha\beta}\|_{L^{2} (\Sigma^{\rm e}_{t})}\] \[\lesssim t^{-1-\rho}\|(2+r-t)^{\kappa-\frac{1}{2}}Z^{\leq N}h^{1} _{\alpha\beta}\|_{L^{2}(\Sigma^{\rm e}_{t})}\lesssim t^{-1-\rho}\|(2+r-t)^{ \frac{1}{2}+\kappa}\partial Z^{\leq N}h^{1}_{\alpha\beta}\|_{L^{2}(\Sigma^{\rm e }_{t})}\] \[\lesssim C_{0}\epsilon t^{-1-\rho+\sigma}.\] We then recall the decomposition (1.12) of the tensor \(H\), with \(H^{0,\mu\nu}\) satisfying (3.11) and \(H^{1,\mu\nu}\) verifying the bounds (3.12)-(3.14). 
We similarly get \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |K_{1}|\leq[N/2]\end{subarray}}\|(2+r-t)^{\frac{1}{2}+\kappa-\rho}E^{ \boldsymbol{i}\alpha\beta}_{TU\mu\nu,K_{1}K_{2}}\cdot Z^{K_{1}}H^{1,\mu\nu} \cdot\not{\partial}_{\boldsymbol{i}}Z^{K_{2}}h^{1}_{\alpha\beta}\|_{L^{2}( \Sigma^{\rm e}_{t})}\] \[+\sum_{|K_{1}|+|K_{2}|\leq|K|}\|(2+r-t)^{\frac{1}{2}+\kappa-\rho}E^ {\boldsymbol{i}\alpha\beta}_{TU\mu\nu,K_{1}K_{2}}\cdot Z^{K_{1}}H^{0,\mu\nu} \cdot\not{\partial}_{\boldsymbol{i}}Z^{K_{2}}h^{1}_{\alpha\beta}\|_{L^{2}( \Sigma^{\rm e}_{t})}\] \[\qquad\lesssim C_{0}\epsilon t^{\sigma}\|(2+r-t)^{\frac{1}{2}+ \kappa-\rho}r^{-2}\overline{\nabla}Z^{\leq N}h^{1}_{\alpha\beta}\|_{L^{2}( \Sigma^{\rm e}_{t})}\lesssim C_{0}^{2}\epsilon^{2}t^{-\frac{3}{2}-\rho+2 \sigma}\sqrt{l(t)}\] and \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |K_{2}|\leq[N/2]\end{subarray}}\|(2+r-t)^{\frac{1}{2}+\kappa-\rho}E^{ \boldsymbol{i}\alpha\beta}_{TU\mu\nu,K_{1}K_{2}}\cdot Z^{K_{1}}H^{1,\mu\nu} \cdot\not{\partial}_{\boldsymbol{i}}Z^{K_{2}}h^{1}_{\alpha\beta}\|_{L^{2}( \Sigma^{\rm e}_{t})}\] \[\lesssim C_{0}\epsilon t^{\sigma}\sqrt{l(t)}\|(2+r-t)^{\frac{1}{2} -\rho}r^{-2}Z^{\leq N}H^{1,\mu\nu}\|_{L^{2}(\Sigma^{\rm e}_{t})}\] \[\lesssim C_{0}\epsilon t^{-1-\rho-\kappa+\sigma}\sqrt{l(t)}\|(2+r -t)^{\frac{1}{2}+\kappa}\partial Z^{\leq N}H^{1,\mu\nu}\|_{L^{2}(\Sigma^{\rm e }_{t})}\lesssim C_{0}^{2}\epsilon^{2}t^{-1-\rho-\kappa+2\sigma}\sqrt{l(t)}.\] Finally, \[\sum_{|K_{1}|+|K_{2}|\leq N}\|(2+r-t)^{\frac{1}{2}+\kappa-\rho}F^{\alpha\beta}_ {TU\mu\nu,K_{1}K_{2}}\cdot Z^{K_{1}}H^{\mu\nu}\cdot Z^{K_{2}}h^{1}_{\alpha \beta}\|_{L^{2}(\Sigma^{\rm e}_{t})}\] \[\lesssim C_{0}\epsilon t^{\sigma}\|(2+r-t)^{\frac{1}{2}+\kappa-\rho}r^{-3}Z^{ \leq N}h^{1}\|_{L^{2}(\Sigma^{\rm e}_{t})}\lesssim C_{0}^{2}\epsilon^{2}t^{-2- \rho+2\sigma}.\] By substituting the above estimates together with (3.28), (3.35), (3.41) and (3.42) into (3.34) and choosing \(\epsilon_{0}\ll 1\) sufficiently small so that \(C_{0}\epsilon<1\) we finally find the existence of a universal constant \(C\) such that \[E^{{\rm e},\kappa-\rho}(t,Z^{K}h^{1}_{TU})\leq CE^{{\rm e},\kappa-\rho}(2,Z^{K }h^{1}_{TU})+CC_{0}^{2}\epsilon^{2}+\int_{2}^{t}\frac{C\epsilon E^{{\rm e}, \kappa-\rho}(\tau,Z^{K}h^{1}_{TU})}{\tau}\,d\tau.\] Observe that \(E^{{\rm e},\kappa-\rho}(2,Z^{K}h^{1}_{TU})\lesssim E^{{\rm e},\kappa}(t,h^{1})\). Gronwall's inequality and the energy assumption (3.1) allow us to obtain \[E^{{\rm e},\kappa-\rho}(t,Z^{K}h^{1}_{TU})\leq C(E^{{\rm e},\kappa}(2,Z^{K}h^ {1})+C_{0}^{2}\epsilon^{2})t^{C\epsilon}\leq 2CC_{0}^{2}\epsilon^{2}t^{C\epsilon}\] and hence conclude the proof. An immediate consequence of (3.30) are the following weighted \(L^{2}\) bounds \[\left\|(2+r-t)^{\frac{1}{2}+\kappa-\rho}\;\partial Z^{\leq N}h_{TU} ^{1}\right\|_{L^{2}(\Sigma_{t}^{\mathrm{e}})}\lesssim C_{0}\epsilon t^{C\epsilon} \tag{3.44}\] \[\left\|(2+r-t)^{\kappa-\rho}\;\overline{\partial}Z^{\leq N}h_{TU} ^{1}\right\|_{L^{2}(\Sigma_{t}^{\mathrm{e}})}\lesssim C_{0}\epsilon t^{C \epsilon}\sqrt{l(t)} \tag{3.43}\] for all \(t\in[2,T_{0})\), where \(l\in L^{1}([2,T_{0}))\). The weighted Sobolev injection (B.2) with \(\beta=1+2(\kappa-\rho)\) also yields the following pointwise bound \[|\partial Z^{\leq N-3}h_{TU}^{1}|_{\kappa-\rho-\frac{1}{2}}\lesssim C_{0} \epsilon t^{C\epsilon}. 
\tag{3.45}\] ### The weak null terms The goal of this subsection is to recover suitable higher order weighted \(L^{2}(\Sigma_{t}^{\mathrm{e}})\) estimates for the quadratic weak null terms \(P_{\alpha\beta}(\partial h,\partial h)\) defined as \[P_{\alpha\beta}(\partial h,\partial h)=\frac{1}{4}\bar{g}^{\mu\rho}\bar{g}^{ \nu\sigma}\left(\partial_{\alpha}h_{\mu\rho}\partial_{\beta}h_{\nu\sigma}-2 \partial_{\alpha}h_{\mu\nu}\partial_{\beta}h_{\rho\sigma}\right).\] These estimates are based on the following remarkable property, highlighted in the works of Lindblad and Rodnianski [38, 39, 40], on Lemma 2.2 and on the bounds (3.43), (3.45) satisfied by \(h_{TU}^{1}\). **Lemma 3.13**.: _Let \(\pi,\theta\) be arbitrary 2-tensors and \(P\) be the quadratic form defined by_ \[P(\pi,\theta)=\frac{1}{4}\bar{g}^{\mu\rho}\bar{g}^{\nu\sigma}\left(\pi_{\mu \rho}\theta_{\nu\sigma}-2\pi_{\mu\nu}\theta_{\rho\sigma}\right).\] _Then_ \[|P(\pi,\theta)|\lesssim|\pi|_{\mathscr{T}\mathscr{U}}|\theta|_{\mathscr{T} \mathscr{U}}+|\pi|_{\mathscr{L}\mathscr{L}}|\theta|+|\pi||\theta|_{\mathscr{L }\mathscr{L}}.\] **Proposition 3.14**.: _Fix \(i=0,1\). There exists some constant \(C>0\) such that, under the a-priori energy assumptions (3.1)-(3.2), we have_ \[\left\|(2+r-t)^{\frac{1}{2}+i+\kappa}\partial^{i}Z^{\leq N}P_{\alpha\beta} \right\|_{L^{2}(\Sigma_{t}^{\mathrm{e}})}\lesssim C_{0}^{2}\epsilon^{2}\big{[} t^{-1+C\epsilon}+t^{-1+2\sigma}\sqrt{l(t)}+\epsilon t^{-\frac{3}{2}+2\sigma} \big{]}. \tag{3.46}\] Proof.: We write \(h=h^{1}+h^{0}\) and plug this decomposition into \(P_{\alpha\beta}(\partial h,\partial h)\). Using (3.11) and the energy bounds (3.5), (3.7) it is straightforward to prove that there exists some small \(\delta>0\) such that \[\sum_{\begin{subarray}{c}i,j=0,1\\ |K_{1}|+|K_{2}|\leq N\\ |I_{1}|+|I_{2}|=i\end{subarray}}\|(2+r-t)^{\frac{1}{2}+i+\kappa}\,\partial^{ I_{1}}Z^{K_{1}}h^{0}\cdot\partial\partial^{I_{2}}Z^{K_{2}}h^{j}\|_{L^{2}(\Sigma_{t}^{ \mathrm{e}})}\lesssim C_{0}^{2}\epsilon^{2}t^{-2+\delta}.\] Hence we focus on proving that estimate (3.46) holds true for \(P_{\alpha\beta}(\partial h^{1},\partial h^{1})\). We start by noticing that for any multi-index \(K\), \(Z^{K}P_{\alpha\beta}(\partial h^{1},\partial h^{1})\) is a linear combination of terms of the form \(P_{\mu\nu}(\partial Z^{K_{1}}h^{1},\partial Z^{K_{2}}h^{1})\) for some multi-indexes \(K_{1},K_{2}\) such that \(|K_{2}|\leq|K|\) and \(\mu,\nu=0,\ldots,4\). Applying Lemma 3.13 and Lemma 2.2 we see that for \(i=0,1\) \[\begin{split}&|\partial^{i}Z^{\leq N}P_{\alpha\beta}(\partial h^{1},\partial h^{1})|\lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |I_{1}|+|I_{2}|=i\end{subarray}}|\partial\partial^{I_{1}}Z^{K_{1}}h^{1}|_{ \mathscr{T}\mathscr{U}}|\partial\partial^{I_{2}}Z^{K_{2}}h^{1}|_{\mathscr{T} \mathscr{U}}\\ &+\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |I_{1}|+|I_{2}|=i\end{subarray}}|\overline{\partial}\partial^{I_{1}}Z^{K_{1}}h^ {1}||\partial\partial^{I_{2}}Z^{K_{2}}h^{1}|+r^{-1}|\partial^{I_{1}}Z^{K_{1}}h^ {1}||\partial\partial^{I_{2}}Z^{K_{2}}h^{1}|\\ &+\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|+|K_{3}|\leq N\\ |I_{1}|+|I_{2}|+|I_{3}|=i\end{subarray}}|\partial^{I_{1}}Z^{K_{1}}h^{1}|| \partial\partial^{I_{2}}Z^{K_{2}}h^{1}||\partial\partial^{I_{3}}Z^{K_{3}}h^{1}| \\ &+\frac{M\chi_{0}(t/2\leq r\leq 3t/4)}{(1+t+r)^{2}}\sum_{j\leq i }|\partial\partial^{j}Z^{\leq N}h^{1}|,\end{split} \tag{3.47}\] where \(\chi_{0}(t/2\leq r\leq 3t/4)\) is supported for \(t/2\leq r\leq 3t/4\). 
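Before estimating the various contributions in (3.47), we note an elementary fact about the last term: recalling that the exterior slices \(\Sigma^{\mathrm{e}}_{t}\) are contained in the region \(\{r\geq t-1\}\), as in the integrals appearing in the previous proofs, while on the support of \(\chi_{0}\) one has \(r\leq 3t/4\), we get \[t-1\leq r\leq\tfrac{3}{4}t\quad\Longrightarrow\quad t\leq 4.\]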
Since the intersection of this support with the exterior region in bounded, it is immediate to see that the weighted \(L^{2}(\Sigma_{t}^{\rm e})\) norm of the last term in the above right hand side is bounded by \(C_{0}\epsilon^{2}t^{-2}\). The cubic terms and the quadratic terms involving a tangential derivative have been estimated in proposition 3.4 and satisfy (3.21) and (3.20) respectively. The weighted \(L^{2}\) norm of the quadratic term with the extra \(r^{-1}\) factor is bounded by \(C_{0}^{2}\epsilon^{2}t^{-3/2+2\sigma}\), we leave the details to the reader. Finally, from (3.43) and (3.45) with \(\rho>0\) such that \(k>2\rho\) \[\begin{split}\sum_{\begin{subarray}{c}i=0,1\\ |K_{1}|+|K_{2}|\leq N\\ |I_{1}|+|I_{2}|=i\end{subarray}}\Big{\|}(2+r-t)^{\frac{1}{2}+i+\kappa}| \partial\partial^{I_{1}}Z^{K_{1}}h^{1}|_{\mathscr{T}\mathscr{U}}|\partial \partial^{I_{2}}Z^{K_{2}}h^{1}|_{\mathscr{T}\mathscr{U}}\Big{\|}_{L^{2}( \Sigma_{t}^{\rm e})}\\ \lesssim C_{0}\epsilon t^{-1+C\epsilon}\sum_{i=0,1}\Big{\|}(2+r-t) ^{i-\frac{1}{2}+\rho}|\partial Z^{\leq N}h^{1}|_{\mathscr{T}\mathscr{U}}\Big{\|} _{L^{2}(\Sigma_{t}^{\rm e})}\lesssim C_{0}^{2}\epsilon^{2}t^{-1+2C\epsilon}. \end{split}\] From Lemma 3.13 and bounds (3.11), (3.12), (3.16), (3.45) we also get the following pointwise estimate for the differentiated weak null terms. **Proposition 3.15**.: _There exists a constant \(C>0\) such that, under the a-priori assumptions (3.5)-(3.8), we have that_ \[\big{|}Z^{\leq N-3}P_{\alpha\beta}(\partial h,\partial h)(t)\big{|}_{\frac{1}{ 2}}\lesssim C_{0}^{2}\epsilon^{2}\big{(}t^{-1+2C\epsilon}+t^{-1+2\sigma}\sqrt{ l(t)}\big{)}. \tag{3.48}\] ### Propagation of the energy estimates We now proceed to the proof of proposition 3.1. We recall that for any multi-index \(K\), the differentiated coefficients \(Z^{K}h^{1}_{\alpha\beta}\) solve (3.9) with source term (3.10). We set \(i=1\) if \(Z^{K}=\partial Z^{K^{\prime}}\) and \(|K^{\prime}|\leq N\), \(i=0\) if simply \(|K|\leq N\). Thanks to the smallness of \(H\) provided by (3.11) and (3.17), we apply (A.2) with \(\mathbf{W}=Z^{K}h^{1}_{\alpha\beta}\), \(\mathbf{F}=F^{K}_{\alpha\beta}+F^{0,K}_{\alpha\beta}\), \(w(q)=(2+r-t)^{1+2(i+\kappa)}\) and \(\omega=x/|x|\). For every \(t\in[2,T_{0})\) we get the following energy inequality (3.49) \[E^{e,i+\kappa}(t,Z^{K}h^{1}_{\alpha\beta})\] \[+\int_{\mathscr{H}_{2,t}}(2+r-t)^{1+2(i+\kappa)}\Big{[}\Big{(}\frac {1}{2(1+r^{2})}+\chi\left(\frac{r}{t}\right)\chi(r)\frac{M}{2r}\Big{)}|\partial _{t}Z^{K}h^{1}_{\alpha\beta}|^{2}+|\underline{\nabla}Z^{K}h^{1}_{\alpha\beta}|^ {2}\Big{]}dxdy\] \[\lesssim E^{e,i+\kappa}(2,Z^{K}h^{1}_{\alpha\beta})+\iint_{ \mathscr{D}^{\rm e}_{[2,t]}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! By Gronwall's inequality we then deduce that \[E^{\mathrm{e},i+\kappa}(t,Z^{K}h^{1}_{\alpha\beta})\leq\tilde{C}\big{(}E^{ \mathrm{e},i+\kappa}(2,Z^{K}h^{1}_{\alpha\beta})+C_{0}\epsilon^{2}+C_{0}^{3} \epsilon^{3}t^{C\epsilon+\sigma}\big{)}t^{\tilde{C}\epsilon}.\] We denote the sum \(C+\tilde{C}\) simply by \(C\). Finally, we choose \(C_{0}\gg 1\) sufficiently large so that \(3\tilde{C}E^{\mathrm{e},i+\kappa}(2,Z^{K}h^{1}_{\alpha\beta})\leq(C_{0} \epsilon)^{2}\) and \(3\tilde{C}<C_{0}\), then \(\epsilon_{0}>0\) sufficiently small so that \(3\tilde{C}C_{0}^{3}\epsilon_{0}<1\) and \(2C\epsilon_{0}<\sigma\) to infer that \[E^{\mathrm{e},i+\kappa}(t,Z^{K}h^{1}_{\alpha\beta})\leq C_{0}^{2}\epsilon^{2} t^{\sigma+C\epsilon}.\] As a byproduct, we also deduce that (3.50) \[\int_{\mathscr{H}_{2,t}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \(\bullet\) for any \(s\in[s_{0},S_{0})\), any multi-index \(K=(I,J)\) of type \((N,k)\)11, it satisfies the following energy bounds Footnote 11: We recall that a multi-index \(K=(I,J)\) is said to be of type \((N,k)\) if \(|I|+|J|\leq N\) and \(|J|\leq k\) \[E^{\mathrm{i}}(s,\partial Z^{K}h^{1}_{\alpha\beta})^{\frac{1}{2}}+E^{ \mathrm{i}}(s,Z^{K}h^{1}_{\alpha\beta})^{\frac{1}{2}}\leq 2C_{1}\epsilon s^{ \frac{1}{2}+\zeta_{k}} \tag{4.4}\] \[E^{\mathrm{i}}(s,Z^{K}h^{1,\flat}_{\alpha\beta})^{\frac{1}{2}} \leq 2C_{1}\epsilon s^{\zeta_{k}} \tag{4.3}\] and for multi-indexes \(K\) of type \((N,k)\) with \(k\leq N_{1}\) \[E^{\mathrm{i}}(s,Z^{K}h^{1}_{\alpha\beta})^{\frac{1}{2}}\leq 2C_{1}\epsilon s^{ \delta_{k}} \tag{4.5}\] \(\bullet\) for any \(s\in[s_{0},S_{0})\), it satisfies the following pointwise bounds \[\|t\,\Gamma^{J}h^{1,\flat}_{\alpha\beta}\|_{L^{\infty}_{x}( \mathscr{H}_{s})}\lesssim 2C_{2}\epsilon s^{\gamma_{k}}\text{ with }|J|=k\leq N_{1}, \tag{4.7}\] \[\|t^{\frac{1}{2}}s\,\partial_{tx}(\partial^{I}\Gamma^{J}h^{1, \natural}_{\alpha\beta})\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})} +\|t^{\frac{3}{2}}\partial_{y}^{\leq 1}(\partial^{I}\Gamma^{J}h^{1, \natural}_{\alpha\beta})\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}\] \[\leq\begin{cases}2C_{2}\epsilon,&\text{ if }|I|\leq N_{1},\ |J|=0\\ 2C_{2}\epsilon s^{\gamma_{k}},&\text{ if }|I|+|J|\leq N_{1}+1,\ |J|=k\leq N_{1}. 
\end{cases} \tag{4.6}\] The result we aim to prove states the following **Proposition 4.1**.: _There exist two constants \(1\ll C_{1}\ll C_{2}\) sufficiently large, a finite and increasing sequence of parameters \(0\leq\zeta_{k},\gamma_{k},\delta_{k}\ll 1\) satisfying (4.2) and \(0<\epsilon_{0}\ll 1\) sufficiently small such that for every \(0<\epsilon<\epsilon_{0}\), if \(h^{1}_{\alpha\beta}\) is solution to (1.8) in the hyperbolic strip \(\mathscr{H}_{[s_{0},S_{0})}\) that satisfies the bounds (4.3)-(4.7) for all \(s\in[s_{0},S_{0})\) and the energy bounds (3.1)-(3.2) globally in the exterior region, then for every \(s\in[s_{0},S_{0})\) it actually satisfies the following: for multi-indexes \(K\) of type \((N,k)\)_ \[E^{\mathrm{i}}(s,\partial Z^{K}h^{1}_{\alpha\beta})^{\frac{1}{2 }}+E^{\mathrm{i}}(s,Z^{K}h^{1}_{\alpha\beta})^{\frac{1}{2}}\leq C_{1}\epsilon s ^{\frac{1}{2}+\zeta_{k}} \tag{4.9}\] \[E^{\mathrm{i}}(s,Z^{K}h^{1,\flat}_{\alpha\beta})^{\frac{1}{2}} \leq C_{1}\epsilon s^{\zeta_{k}}; \tag{4.8}\] _for multi-indexes \(K\) of type \((N,k)\) with \(k\leq N_{1}\)_ \[E^{\mathrm{i}}(s,Z^{K}h^{1}_{\alpha\beta})^{\frac{1}{2}}\leq C_{1}\epsilon s ^{\delta_{k}} \tag{4.10}\] _and finally_ \[\|t\,\Gamma^{J}h^{1,\flat}_{\alpha\beta}\|_{L^{\infty}_{x}( \mathscr{H}_{s})}\leq C_{2}\epsilon s^{\gamma_{k}}\quad\text{if }|J|=k\leq N_{1}, \tag{4.12}\] \[\|t^{\frac{1}{2}}s\,\partial_{tx}(\partial^{I}\Gamma^{J}h^{1, \natural}_{\alpha\beta})\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})} +\|t^{\frac{3}{2}}\partial_{y}^{\leq 1}(\partial^{I}\Gamma^{J}h^{1, \natural}_{\alpha\beta})\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}\] \[\leq\begin{cases}C_{2}\epsilon,&\text{ if }|I|\leq N_{1},\ |J|=0\\ C_{2}\epsilon s^{\gamma_{k}},&\text{ if }|I|+|J|\leq N_{1}+1,\ |J|=k\leq N_{1} \end{cases} \tag{4.11}\] _Remark 4.2_.: The a-priori assumptions (4.3)-(4.7) are satisfied when \(s=s_{0}\) as a consequence of the assumptions on the initial data and the local existence result for the Einstein equations. The hyperbolic time \(S_{0}\) in the above proposition is arbitrary. This implies the existence of the solution in the unbounded region \(\mathscr{H}_{[s_{0},\infty)}\), hence in the full interior region. _Remark 4.3_.: The result stated above builds upon the energy and pointwise estimates the solution has been proved to satisfy in the exterior region. This can be already seen in the energy inequality (4.16) below, where the energy flux through the separating hypersurface \(\mathscr{H}_{[s_{0},s]}\), which is controlled by the exterior energies, appears in the right hand side of the inequality. Constants \(C_{1},\gamma_{k},\delta_{k}\) in proposition 4.1 will in particular be chosen relative to \(C_{0},\sigma,\kappa\) so that \(C_{1}\gg C_{0}\), \(\sigma\ll\gamma_{k}\ll\delta_{k}\ll\kappa\) and \(\delta_{k}\ll\kappa-\sigma\) for all \(k=0,\ldots,N\). For this reason and throughout the rest of this section, we will often replace \(C_{0}\) by \(C_{1}\) in the inequalities obtained using bounds recovered in the exterior region. In order to recover the enhanced energy bounds (4.8)-(4.10), we compare the equation satisfied by \(Z^{K}h^{1}_{\alpha\beta}\) and \(Z^{K}h^{1,\flat}_{\alpha\beta}\) respectively with (A.1) and apply the energy inequality of proposition A.2. 
We recall that \(Z^{K}h^{1}_{\alpha\beta}\) satisfies the following quasilinear wave equation \[\tilde{\Box}_{g}Z^{K}h^{1}_{\alpha\beta}=F^{K}_{\alpha\beta}+F^{0,K}_{\alpha \beta} \tag{4.13}\] with source terms \[F^{0,K}_{\alpha\beta}=Z^{K}\tilde{\Box}_{g}h^{0}_{\alpha\beta},\qquad F^{K}_{ \alpha\beta}=Z^{K}F_{\alpha\beta}(h)(\partial h,\partial h)-[Z^{K},H^{\mu\nu} \partial_{\mu}\partial_{\nu}]h^{1}_{\alpha\beta} \tag{4.14}\] and that the equation of \(Z^{K}h^{1,\flat}_{\alpha\beta}\) is obtained by averaging (4.13) over \(\mathbb{S}^{1}\) \[\Box_{x}Z^{K}h^{1,\flat}_{\alpha\beta}+(H^{\boldsymbol{\mu\nu}})^{\flat}\cdot \partial_{\boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}Z^{K}h^{1,\flat}_{ \alpha\beta}+\big{(}(H^{\mu\nu})^{\natural}\cdot\partial_{\mu}\partial_{\nu}Z ^{K}h^{1,\natural}_{\alpha\beta}\big{)}^{\flat}=F^{K,\flat}_{\alpha\beta}+F^{0,K}_{\alpha\beta} \tag{4.15}\] where \(F^{K,\flat}_{\alpha\beta}=\oint_{\mathbb{S}^{1}}F^{K}_{\alpha\beta}dy\). If the tensor \(H\) satisfies suitable decay bounds in \(\mathscr{H}_{[s_{0},S_{0})}\), e.g. if for some \(\delta>0\) \[|H(t,x,y)|\lesssim\frac{\epsilon}{(1+t+r)^{\frac{3}{4}}},\quad|H^{1}_{LL}(t,x,y)|\lesssim\frac{\epsilon}{(1+t+r)^{1+\delta}}\] we derive the following two energy inequalities, which hold for any \(s\in[s_{0},S_{0})\) (4.16) \[E^{\mathrm{i}}(s,Z^{K}h^{1}_{\alpha\beta})\lesssim E^{\mathrm{i} }(s_{0},Z^{K}h^{1}_{\alpha\beta})\] \[+\int_{\mathscr{H}_{\hat{\varkappa}_{0}s}}\Big{(}\frac{1}{2(1+r^{2 })}+\chi\left(\frac{r}{t}\right)\chi(r)\frac{M}{2r}\Big{)}|\partial_{t}Z^{K}h ^{1}_{\alpha\beta}|^{2}+|\underline{\nabla}Z^{K}h^{1}_{\alpha\beta}|^{2}\,dxdy\] \[+\iint_{\mathscr{H}_{[s_{0},s]}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! The energy flux through the boundary \(\hat{\mathscr{H}}_{s_{0}s}\), which appears in the right hand side of both of the above inequalities, is suitably controlled using (3.50) with \(t=t_{s}=s^{2}/2\) \[\int_{\partial\hat{\mathscr{H}}_{s_{0}s}}\Big{(}\frac{1}{2(1+r^{2})}+\chi \left(\frac{r}{t}\right)\chi(r)\frac{M}{2r}\Big{)}|\partial_{t}Z^{K}h^{1}_{ \alpha\beta}|^{2}+|\underline{\nabla}_{xy}Z^{K}h^{1}_{\alpha\beta}|^{2}\,dxdy \lesssim C_{0}^{2}\epsilon^{2}s^{2\sigma+C\epsilon}. \tag{4.18}\] The current section is therefore mainly devoted to estimating the remaining integrals in the right hand side of (4.16) and (4.17). ### First sets of bounds Below is a list of \(L^{2}\) and \(L^{\infty}\) bounds for \(h^{1}_{\alpha\beta},h^{1,\flat}_{\alpha\beta}\) and \(h^{1,\natural}_{\alpha\beta}\), which are a straightforward consequence of the a-priori bounds. All bounds stated below hold true also for tensor coefficients \(H^{1,\mu\nu}\), after decomposition (1.12) and bound (3.11). #### 4.1.1. 
\(L^{2}_{xy}\) bounds on hyperboloids From a-priori energy assumptions (4.3)-(4.4), the Parseval identity and Poincare inequality applied to the zero-average components \(h^{1,\natural}_{\alpha\beta}\), we derive the following \(L^{2}\) bounds on \(\mathscr{H}_{s}\), for any multi-index of type \((N,k)\) and \(i=0,1\), \[\big{\|}(s/t)\partial\partial^{i}Z^{K}h^{1}_{\alpha\beta}\big{\|} _{L^{2}(\mathscr{H}_{s})}+\big{\|}\partial\partial^{i}Z^{K}h^{1}_{\alpha\beta }\big{\|}_{L^{2}(\mathscr{H}_{s})}\leq 2C_{1}\epsilon s^{\frac{1}{2}+ \zeta_{k}} \tag{4.20}\] \[\big{\|}(s/t)\partial Z^{K}h^{1,\natural}_{\alpha\beta}\big{\|} _{L^{2}(\mathscr{H}_{s})}+\big{\|}\partial Z^{K}h^{1,\natural}_{\alpha\beta} \big{\|}_{L^{2}(\mathscr{H}_{s})}\leq 2C_{1}\epsilon s^{\zeta_{k}}\] (4.21) \[\big{\|}(s/t)\partial Z^{K}h^{1,\natural}_{\alpha\beta}\big{\|} _{L^{2}(\mathscr{H}_{s})}+\big{\|}\partial Z^{K}h^{1,\natural}_{\alpha\beta} \big{\|}_{L^{2}(\mathscr{H}_{s})}+\big{\|}Z^{K}h^{1,\natural}_{\alpha\beta} \big{\|}_{L^{2}(\mathscr{H}_{s})}\leq 2C_{1}\epsilon s^{\frac{1}{2}+ \zeta_{k}} \tag{4.19}\] and \[\Big{\|}t^{-1}\mathscr{S}Z^{K}h^{1,\flat}_{\alpha\beta}\Big{\|} _{L^{2}(\mathscr{H}_{s})}+\Big{\|}t^{-1}\Gamma Z^{K}h^{1,\flat}_{\alpha\beta} \Big{\|}_{L^{2}(\mathscr{H}_{s})}\leq 2C_{1}\epsilon s^{\zeta_{k}} \tag{4.23}\] \[\Big{\|}t^{-1}\mathscr{S}Z^{K}h^{1,\natural}_{\alpha\beta}\Big{\|} _{L^{2}(\mathscr{H}_{s})}+\Big{\|}t^{-1}\Gamma Z^{K}h^{1,\natural}_{\alpha\beta }\Big{\|}_{L^{2}(\mathscr{H}_{s})}\leq 2C_{1}\epsilon s^{\frac{1}{2}+ \zeta_{k}}. \tag{4.22}\] For multi-indexes \(K\) of type \((N,k)\) with \(k\leq N_{1}\) we have \[\big{\|}(s/t)\partial Z^{K}h^{1}_{\alpha\beta}\big{\|}_{L^{2}( \mathscr{H}_{s})}+\big{\|}\partial Z^{K}h^{1}_{\alpha\beta}\big{\|}_{L^{2}( \mathscr{H}_{s})}\leq 2C_{1}\epsilon s^{\delta_{k}} \tag{4.25}\] \[\Big{\|}(s/t)\partial Z^{K}h^{1,\natural}_{\alpha\beta}\Big{\|} _{L^{2}(\mathscr{H}_{s})}+\Big{\|}\partial Z^{K}h^{1,\natural}_{\alpha\beta} \Big{\|}_{L^{2}(\mathscr{H}_{s})}+\Big{\|}Z^{K}h^{1,\natural}_{\alpha\beta} \Big{\|}_{L^{2}(\mathscr{H}_{s})}\leq 2C_{1}\epsilon s^{\delta_{k}} \tag{4.24}\] and \[\big{\|}t^{-1}\mathscr{S}Z^{K}h^{1}_{\alpha\beta}\big{\|}_{L^{2}( \mathscr{H}_{s})}+\big{\|}t^{-1}\Gamma Z^{K}h^{1}_{\alpha\beta}\big{\|}_{L^{2}( \mathscr{H}_{s})}\leq 2C_{1}\epsilon s^{\delta_{k}}. \tag{4.26}\] Moreover, provided that \(\sigma,\epsilon\ll 1\) are sufficiently small so that \(\sigma+C\epsilon\leq\zeta_{k}\) for all \(1\leq k\leq N\), from the Hardy inequality (B.7), energy assumption (4.19) and the exterior energy bound (3.3) (recall that \(t_{s}=s^{2}/2\)) we also deduce the following bound when \(|J|=k\leq N\) \[\big{\|}r^{-1}\Gamma^{J}h^{1}_{\alpha\beta}\big{\|}_{L^{2}_{xy}( \mathscr{H}_{s})} \lesssim 2C_{1}\epsilon s^{\frac{1}{2}+\zeta_{k}}+C_{0}\epsilon s^{ \sigma+C\epsilon}\lesssim 2C_{1}\epsilon s^{\frac{1}{2}+\zeta_{k}} \tag{4.28}\] \[\Big{\|}r^{-1}\Gamma^{J}h^{1,\flat}_{\alpha\beta}\Big{\|}_{L^{2} _{x}(\mathscr{H}_{s})} \lesssim 2C_{1}\epsilon s^{\zeta_{k}}+C_{0}\epsilon s^{\sigma+C \epsilon}\lesssim 2C_{1}\epsilon s^{\zeta_{k}}. \tag{4.27}\] For \(|J|=k\leq N_{1}\), we instead get from (4.24) that \[\big{\|}r^{-1}\Gamma^{J}h^{1}_{\alpha\beta}\big{\|}_{L^{2}_{xy}( \mathscr{H}_{s})}\lesssim 2C_{1}\epsilon s^{\delta_{k}}+C_{0}\epsilon s^{ \sigma+C\epsilon}\lesssim 2C_{1}\epsilon s^{\delta_{k}}. \tag{4.29}\] #### 4.1.2. 
\(L^{\infty}_{x}L^{2}_{y}\) bounds on hyperboloids These are obtained using the Poincare inequality, lemma B.5, relation \(\underline{\partial_{i}}=t^{-1}\Omega_{0i}\) and energy assumption (4.21). For multi-indexes \(K\) of type \((N-2,k)\) \[\left\|t^{\frac{3}{2}}\,\partial_{y}^{\leq 1}Z^{K}h^{1,\natural}_{\alpha \beta}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}+\left\|t^{\frac{1}{2 }}s\,\partial_{tx}Z^{K}h^{1,\natural}_{\alpha\beta}\right\|_{L^{\infty}_{x}L^{ 2}_{y}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{\frac{1}{2}+\zeta_{k+2}}; \tag{4.30}\] for multi-indices \(K\) of type \((N-3,k)\) \[\left\|t^{\frac{5}{2}}\,\partial_{y}^{\leq 1}\underline{\partial}_{x}Z^{K}h^{1, \natural}_{\alpha\beta}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}+ \left\|t^{\frac{3}{2}}s\,\partial_{tx}\underline{\partial}_{x}Z^{K}h^{1, \natural}_{\alpha\beta}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})} \lesssim C_{1}\epsilon s^{\frac{1}{2}+\zeta_{k+3}}; \tag{4.31}\] for multi-indexes \(K\) of type \((N-4,k)\) \[\left\|t^{\frac{7}{2}}\,\partial_{y}^{\leq 1}\underline{\partial}_{x}^{2}Z^{K}h^{ 1,\natural}_{\alpha\beta}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}+ \left\|t^{\frac{5}{2}}s\,\partial_{tx}\underline{\partial}_{x}^{2}Z^{K}h^{1, \natural}_{\alpha\beta}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})} \lesssim C_{1}\epsilon s^{\frac{1}{2}+\zeta_{k+4}}. \tag{4.32}\] Moreover, for multi-indexes \(K\) of type \((N-2,k)\) with \(k\leq N_{1}-2\) \[\left\|t^{\frac{3}{2}}\,\partial_{y}^{\leq 1}Z^{K}h^{1,\natural}_{\alpha \beta}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}+\left\|t^{\frac{1} {2}}s\,\partial_{tx}Z^{K}h^{1,\natural}_{\alpha\beta}\right\|_{L^{\infty}_{x} L^{2}_{y}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{\delta_{k+2}}; \tag{4.33}\] for multi-indices \(K\) of type \((N-3,k)\) with \(k\leq N_{1}-3\) \[\left\|t^{\frac{5}{2}}\,\partial_{y}^{\leq 1}\underline{\partial}_{x}Z^{K}h^{1, \natural}_{\alpha\beta}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}+ \left\|t^{\frac{3}{2}}s\,\partial_{tx}\underline{\partial}_{x}Z^{K}h^{1, \natural}_{\alpha\beta}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})} \lesssim C_{1}\epsilon s^{\delta_{k+3}}; \tag{4.34}\] for multi-indexes \(K\) of type \((N-4,k)\) with \(k\leq N_{1}-4\) \[\left\|t^{\frac{7}{2}}\,\partial_{y}^{\leq 1}\underline{\partial}_{x}^{2}Z^{K}h^{ 1,\natural}_{\alpha\beta}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}+ \left\|t^{\frac{5}{2}}s\,\partial_{tx}\underline{\partial}_{x}^{2}Z^{K}h^{1, \natural}_{\alpha\beta}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})} \lesssim C_{1}\epsilon s^{\delta_{k+4}}. \tag{4.35}\] #### 4.1.3. \(L^{\infty}_{xy}\) bounds on hyperboloids These are obtained from the energy assumptions using Poincare inequality, lemma B.5 and Sobolev embedding on \(\mathbb{S}^{1}\). 
For any multi-index \(K\) of type \((N-3,k)\) and \(i=0,1\) \[\left\|t^{\frac{1}{2}}s^{\frac{1}{2}}\,\partial\partial^{i}Z^{K}h^{1}_{\alpha \beta}\right\|_{L^{\infty}_{xy}(\mathscr{H}_{s})}+\left\|t^{\frac{3}{2}}s^{- \frac{1}{2}}\,\underline{\partial}\partial^{i}Z^{K}h^{1}_{\alpha\beta}\right\| _{L^{\infty}_{xy}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{\zeta_{k+3}} \tag{4.37}\] \[\left\|t^{\frac{1}{2}}s\,\partial Z^{K}h^{1,\natural}_{\alpha \beta}\right\|_{L^{\infty}_{xy}(\mathscr{H}_{s})}+\left\|t^{\frac{3}{2}}\, \underline{\partial}Z^{K}h^{1,\natural}_{\alpha\beta}\right\|_{L^{\infty}_{xy} (\mathscr{H}_{s})}+\left\|t^{\frac{3}{2}}Z^{K}h^{1,\natural}_{\alpha\beta} \right\|_{L^{\infty}_{xy}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{\frac{1 }{2}+\zeta_{k+3}}\] (4.38) \[\left\|t^{\frac{1}{2}}s\,\partial Z^{K}h^{1,\flat}_{\alpha\beta} \right\|_{L^{\infty}_{x}(\mathscr{H}_{s})}+\left\|t^{\frac{3}{2}}\,\underline{ \partial}Z^{K}h^{1,\flat}_{\alpha\beta}\right\|_{L^{\infty}_{x}(\mathscr{H}_{s} )}\lesssim C_{1}\epsilon s^{\zeta_{k+2}} \tag{4.36}\] and \[\|t^{\frac{1}{2}}\mathscr{S}\partial^{i}Z^{K}h^{1}_{\alpha\beta} \|_{L^{\infty}_{xy}(\mathscr{H}_{s})}+\|t^{\frac{1}{2}}\,\Gamma\partial^{i}Z^{K}h^ {1}_{\alpha\beta}\|_{L^{\infty}_{xy}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{ \frac{1}{2}+\zeta_{k+3}} \tag{4.40}\] \[\|t^{\frac{1}{2}}\mathscr{S}Z^{K}h^{1,\flat}_{\alpha\beta}\|_{L^{ \infty}_{x}(\mathscr{H}_{s})}+\|t^{\frac{1}{2}}\,\Gamma Z^{K}h^{1,\flat}_{ \alpha\beta}\|_{L^{\infty}_{x}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{\zeta_ {k+2}}. \tag{4.39}\] From the pointwise bounds (4.7) and the Sobolev embedding on \(\mathbb{S}^{1}\) we also have that \[\left\|t^{\frac{3}{2}}\,\partial^{I}\Gamma^{J}h^{1,\natural}_{\alpha\beta} \right\|_{L^{\infty}_{xy}(\mathscr{H}_{s})}\lesssim\begin{cases}C_{2}\epsilon& \text{ if }|I|\leq N_{1},\ |J|=0\\ C_{2}\epsilon s^{\gamma_{k}}&\text{ if }|I|+|J|\leq N_{1}+1,\ |J|\leq N_{1}\end{cases} \tag{4.41}\] which coupled to (4.38) gives that, for any \(|I|+|J|\leq N_{1}\leq N-3,|J|=k\geq 0\), \[\|t^{\frac{1}{2}}s\,\partial(\partial^{I}\Gamma^{J}h^{1}_{\alpha\beta})\|_{L^{ \infty}_{xy}(\mathscr{H}_{s})}+\|t^{\frac{3}{2}}\underline{\partial}( \partial^{I}\Gamma^{J}h^{1}_{\alpha\beta})\|_{L^{\infty}_{xy}(\mathscr{H}_{s})} \lesssim C_{2}\epsilon s^{\max(\zeta_{k+2},\gamma_{k})}. \tag{4.42}\] For \(|J|=k\leq N-3\), we also have the following bound on coefficients without derivatives \[\|t^{\frac{1}{2}}\,\Gamma^{J}h^{1,{\flat}}_{\alpha\beta}\|_{L^{\infty}_{xy}( \mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{\delta_{k+2}}. \tag{4.43}\] Such a bound is satisfied by \(\Gamma^{J}h^{1,{\flat}}_{\alpha\beta}\) thanks to (4.37), while for \(\Gamma^{K}h^{1,{\flat}}_{\alpha\beta}\) it is obtained by integration. More precisely, on the initial truncated hyperboloid \(\mathscr{H}_{s_{0}}\) such an estimate is obtained by integrating (4.38) along the hyperboloid itself and up to the boundary \(\partial\mathscr{H}_{s_{0}}=S_{s_{0},r_{0}}\) where \(r_{0}:=\max\{r>0:S_{s_{0},r}\subset\mathscr{H}_{s_{0}}\}=\mathscr{O}(1)\). 
In fact, for any \(r\leq r_{0}\) and \(\omega=x/|x|\) \[\big{|}\Gamma^{J}h^{1,{\flat}}_{\alpha\beta}(\sqrt{s_{0}^{2}+|x| ^{2}},r\omega)\big{|} \leq\big{|}\Gamma^{J}h^{1,{\flat}}_{\alpha\beta}(\sqrt{s_{0}^{2}+ r_{0}^{2}},r_{0}\omega)\big{|}+\int_{r}^{r_{0}}\big{|}\partial\Gamma^{J}h^{1,{ \flat}}_{\alpha\beta}(\sqrt{s_{0}^{2}+\rho^{2}},\rho\omega)\big{|}d\rho\] \[\lesssim\big{|}\Gamma^{J}h^{1,{\flat}}_{\alpha\beta}(\sqrt{s_{0} ^{2}+r_{0}^{2}},r_{0}\omega)\big{|}+\int_{r}^{\infty}C_{1}\epsilon(s_{0}^{2}+ \rho^{2})^{-\frac{3}{4}+\frac{\zeta_{k+2}}{2}}d\rho\] \[\lesssim C_{1}\epsilon(s_{0}^{2}+r^{2})^{-\frac{1}{4}+\frac{\zeta _{k+2}}{2}}\] where we estimated the first term in the above right hand side using the exterior bound (3.14). For all other points \((t,x)\in\mathscr{H}_{(s_{0},S_{0})}\), the decay bound (4.43) is instead obtained by integrating (4.38) along the rays with \(t+r\) and \(\omega\) fixed, i.e. along \[\delta:\lambda\in[r,\lambda^{*}]\mapsto\delta(\lambda)=(t+r-\lambda,\lambda\omega)\] where \(\lambda^{*}\) is the first time \(\delta(\lambda)\) intersects the lateral boundary \(\tilde{\mathscr{H}}\) (in which case \(\lambda^{*}=\frac{(t+r)(t+r-2)}{2(t+r-1)}\)) or the initial hyperboloid \(\mathscr{H}_{s_{0}}\) (in which case \(\lambda^{*}=\frac{(t+r)^{2}-s_{0}^{2}}{2(t+r)}\)). In both cases \(\lambda^{*}=O(t+r)\), so from the estimates on \(\tilde{\mathscr{H}}\) following from (3.14) or on the initial truncated hyperboloid \(\mathscr{H}_{s_{0}}\) derived above, we get \[|\Gamma^{J}h^{1,{\flat}}_{\alpha\beta}(t,x)|\lesssim|\Gamma^{J}h^{1,{\flat}}_{ \alpha\beta}(t+r-\lambda^{*},\lambda^{*}\omega)|+\int_{r}^{\lambda^{*}}|( \partial\Gamma^{J}h^{1,{\flat}}_{\alpha\beta})(\delta(\lambda))|d\lambda\] \[\lesssim(C_{0}+C_{1})\epsilon\,(1+t+r)^{-\frac{1}{2}}s^{\zeta_{k+2}}+C_{1} \epsilon(1+t+r)^{-1+\zeta_{k+2}}\int_{r}^{\lambda^{*}}(t+r-2\lambda)^{-\frac{1 }{2}+\zeta_{k+2}}d\lambda\] \[\lesssim(C_{0}+C_{1})\epsilon\,(1+t+r)^{-\frac{1}{2}}s^{\zeta_{k+2}}.\] #### 4.1.4. \(L^{\infty}_{xy}\) bounds for the good metric coefficients These refer to the enhanced bounds satisfied by the metric coefficients \(H^{1}_{LT}\) as a consequence of the wave condition, more precisely of inequality (2.10), and of the pointwise bounds obtained above. As remarked above, these bounds are also satisfied by the \(h^{1}_{LT}\) metric coefficients. **Proposition 4.4**.: _Under the assumptions of proposition 4.1, we have that for any \(s\in[s_{0},S_{0})\), any multi-index \(K\) of type \((N-3,k)\) and \(i=0,1\)_ \[\|t\,\partial\partial^{i}Z^{K}H^{1}_{LT}\|_{L^{\infty}(\mathscr{H}_{s})} \lesssim C_{1}\epsilon s^{\zeta_{k+3}} \tag{4.44}\] _and for any multi-index \(K\) of type \((N_{1},k)\)_ \[\|t^{\frac{3}{2}}\partial Z^{K}H^{1}_{LT}\|_{L^{\infty}(\mathscr{H}_{s})} \lesssim C_{1}\epsilon s^{\delta_{k+2}}. \tag{4.45}\] _Furthermore, for any multi-index \(K\) of type \((N-2,k)\)_ \[\|t^{\frac{3}{2}}\partial Z^{K}(H^{1}_{LT})^{\flat}\|_{L^{\infty}( \mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{\zeta_{k+2}} \tag{4.47}\] \[\|t^{\frac{1}{2}}(t/s)^{2}Z^{K}(H^{1}_{LT})^{\flat}\|_{L^{\infty}( \mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{\zeta_{k+2}}. \tag{4.46}\] Proof.: The proof of the above estimates is based on inequality (2.10). Estimate (4.44) (resp. (4.45)) is in fact obtained using (4.36) (resp. (4.42)) and (4.43). Estimate (4.46) is deduced similarly, after taking the zero norm of both left and right hand side of (2.10). 
We recall, in particular, that for any two integrable functions \(f\) and \(g\) defined on \(\mathbb{S}^{1}\), we have \[(fg)^{\flat}=f^{\flat}g^{\flat}+\big{(}f^{\natural}g^{\natural}\big{)}^{\flat},\qquad(fg)^{\natural}=f^{\flat}g^{\natural}+f^{\natural}g^{\flat}+(f^{ \natural}g^{\natural})^{\natural}. \tag{4.48}\] Therefore, (4.46) follows from (4.37), (4.38) and (4.43). Finally, estimate (4.47) is satisfied in the interior of the cone \(t=2r\) after (4.43). In the portion of interior region where \(t<2r\), it is instead obtained from the integration of (4.46) along the rays with \(t+r=const\) and \(\omega=const\) and up to the boundary of the interior region. From (3.17) we derive that \[|Z^{K}(H^{1}_{LT})^{\flat}(t,x)| \lesssim|Z^{K}(H^{1}_{LT})^{\flat}(\delta(\lambda^{*}))|+\int_{ r}^{\lambda^{*}}|\partial Z^{K}(H^{1}_{LT})^{\flat}(\zeta(\lambda))|d\lambda\] \[\lesssim|Z^{K}(H^{1}_{LT})^{\flat}(\delta(\lambda^{*}))|+\int_{ r}^{\lambda^{*}}C_{1}\epsilon(t+r)^{-\frac{3}{2}+\frac{\zeta_{k+2}}{2}}(t+r-2 \lambda)^{\frac{\zeta_{k+2}}{2}}d\lambda\] \[\lesssim C_{0}\epsilon(1+t+r)^{-\frac{3}{2}+2\sigma}(t-r)^{\frac {1}{2}-\kappa}+C_{1}\epsilon(t+r)^{-\frac{3}{2}+\frac{\zeta_{k+2}}{2}}(t-r)^{ 1+\frac{\zeta_{k+2}}{2}}\] \[\lesssim C_{1}\epsilon\frac{(t^{2}-r^{2})^{1+\frac{\zeta_{k+2}}{2 }}}{t^{2}}t^{-\frac{1}{2}}.\] _Remark 4.5_.: By combining together the wave gauge estimate (2.10), with the energy bounds (4.20) and (4.28) (respectively (4.19) and (4.27)) and the pointwise bounds (4.38) and (4.43) (respectively (4.42) and (4.43)), we obtain the following estimate (resp. the second) \[\big{\|}\partial Z^{K}H^{1,\flat}_{LT}\big{\|}_{L^{2}(\mathscr{H}_{s})} \lesssim C_{1}\epsilon\begin{cases}s^{2\zeta_{k}}&\quad\text{if $K$ is of type $(N,k)$},\\ s^{\frac{1}{2}+2\zeta_{k}}&\quad\text{if $K$ is of type $(N+1,k)$ with $k\leq N$}.\end{cases} \tag{4.49}\] ### The null and cubic terms The \(L^{2}\) and \(L^{\infty}\) bounds deduced in subsection 4.1 from the a-priori energy bounds, coupled with the a-priori pointwise bounds, allow us to suitably estimate the null and cubic contributions appearing in the equations for \(Z^{K}h^{1}_{\alpha\beta}\) and \(Z^{K}h^{1,\flat}_{\alpha\beta}\). Quadratic and cubic interactions involving a \(h^{0}\) factor are the simplest ones to analyze. They satisfy the following estimates, which follow from a straightforward application of the energy bounds (4.19), (4.27) and the pointwise bounds (3.11), (4.42) and (4.43). **Lemma 4.6**.: _Let \(Q=Q(\psi,\phi)\) and \(C=C(\delta)(\phi,\psi)\) denote a quadratic and a cubic form respectively. Under the a-priori assumptions (4.3)-(4.7), there exists some small constant \(0<\eta\ll 1\) depending linearly on \(\gamma_{k},\delta_{k},\zeta_{k}\), such that for \(i=0,1\)_ \[\sum_{0\leq l+m\leq 1}\|\partial^{i}Z^{\leq N}Q(\partial h^{l}, \partial h^{m})\|_{L^{2}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon^{2}s^{-3/2+\eta} \tag{4.51}\] \[\sum_{\begin{subarray}{c}0\leq l+m+n\leq 2\\ l,m,n\leq 1\end{subarray}}\|\partial^{i}Z^{\leq N}C(h^{l})(\partial h^{m}, \partial h^{n})\|_{L^{2}(\mathscr{H}_{s})}\lesssim C_{1}^{2}\epsilon^{3}s^{-2+ \eta}. 
\tag{4.50}\] **Proposition 4.7**.: _Under the a-priori assumptions (4.3)-(4.7) there exists some small constant \(0<\eta\leq 3\delta_{N}\ll 1\) depending linearly on \(\zeta_{k},\gamma_{k},\delta_{k}\), such that for \(i=0,1\)_ \[\|\partial^{i}Z^{\leq N}\mathbf{Q}_{\alpha\beta}(\partial h, \partial h)\|_{L^{2}(\mathscr{H}_{s})} \lesssim(C_{1}\epsilon)^{2}s^{-1+\eta} \tag{4.53}\] \[\|\partial^{i}Z^{\leq N}G_{\alpha\beta}(h)(\partial h,\partial h) \|_{L^{2}(\mathscr{H}_{s})} \lesssim(C_{1}\epsilon)^{3}s^{-3/2+\eta}. \tag{4.52}\] _and multi-indexes \(K\) of type \((N,k)\) with \(k\leq N_{1}\)_ \[\|Z^{K}\mathbf{Q}_{\alpha\beta}(\partial h,\partial h)\|_{L^{2}( \mathscr{H}_{s})} \lesssim(C_{1}\epsilon)^{2}s^{-3/2+\eta} \tag{4.55}\] \[\|Z^{K}G_{\alpha\beta}(h)(\partial h,\partial h)\|_{L^{2}( \mathscr{H}_{s})} \lesssim(C_{1}\epsilon)^{3}s^{-2+\eta}. \tag{4.54}\] _Moreover, for multi-indexes \(K\) of type \((N,k)\)_ \[\|Z^{K}\mathbf{Q}_{\alpha\beta}^{\flat}(\partial h,\partial h)\|_{L^{2}( \mathscr{H}_{s})} \lesssim(C_{1}\epsilon)^{2}\left(\sum_{i=1}^{4}s^{-1+\gamma_{i}+\zeta_{k-i}}+ s^{-1+\zeta_{k}}.\right) \tag{4.56}\] Proof.: Throughout the proof, \(\eta\) will denote a small positive constant that depends linearly on \(\gamma_{k}\) and \(\delta_{k}\). We do not need to keep track of the explicit value of \(\eta\), which may change from line to line. We start by decomposing each occurrence of \(h\) in \(\mathbf{Q}_{\alpha\beta}\) and \(G_{\alpha\beta}\) into \(h^{0}+h^{1}\). Owing to lemma 4.6, we only need to prove that the above estimates are satisfied for null and cubic interactions involving \(h^{1}\) factors only. We recall that the admissible vector fields \(Z\) preserve the null structure and that for any null form \(Q\) \[|Q(\partial\phi,\partial\psi)|\leq|\underline{\partial\phi}||\partial\psi|+| \partial\phi||\underline{\partial\psi}|+\frac{|t^{2}-r^{2}|}{t^{2}}|\partial \phi||\partial\psi|. \tag{4.57}\] For any \(M\in\mathbb{N}\), we then have \[|\partial^{i}Z^{\leq M}\mathbf{Q}_{\alpha\beta}(\partial h^{1},\partial h^{1} )|\lesssim\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Similarly, when \(K\) is of type \((N,k)\) with \(k\leq N_{1}\), we get from energy bound (4.24) and pointwise bound (4.42) that \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq N\\ |I_{1}|+|I_{2}|=i\end{subarray}}\left\|\partial\partial^{I_{1}}Z^{K_{1}}h^{1} \cdot\partial\partial^{I_{2}}Z^{K_{2}}h^{1}\right\|_{L^{2}_{xy}(\mathscr{H}_{s })}+\left\|(s/t)^{2}\partial\partial^{I_{1}}Z^{K_{1}}h^{1}\cdot\partial \partial^{I_{2}}Z^{K_{2}}h^{1}\right\|_{L^{2}_{xy}(\mathscr{H}_{s})}\] \[\lesssim(C_{1}\epsilon)^{2}s^{-\frac{3}{2}+\eta}.\] Estimate (4.52) can be slightly improved if we only consider the zero mode of the quadratic null interactions. 
We recall that the zero-mode of a product decomposes as in (4.48), hence \[\mathbf{Q}^{\flat}_{\alpha\beta}(\partial h^{1},\partial h^{1})=\mathbf{Q}_{ \alpha\beta}(\partial h^{1,\flat},\partial h^{1,\flat})+\left(\mathbf{Q}_{ \alpha\beta}(\partial h^{1,\natural},\partial h^{1,\natural})\right)^{\flat}.\] The pure zero-mode interactions are treated using the null structure. From the energy bound (4.20) and the pointwise bound (4.38) we derive that \[\left\|Z^{\leq N}\mathbf{Q}_{\alpha\beta}(\partial h^{1,\flat},\partial h^{1,\flat})\right\|_{L^{2}_{x}(\mathscr{H}_{s})}\lesssim\epsilon^{2}s^{-\frac{3 }{2}+\eta}.\] The null structure is instead irrelevant when estimating the quadratic interactions of pure non-zero modes. Using the Cauchy-Schwartz and Poincare inequalities and assuming \(N,N_{1}\) are such that \(\lfloor N/2\rfloor+1\leq N_{1}\), we derive from the pointwise bound (4.7), the energy bounds (4.19), (4.21) and (4.25) (recall that \(N_{1}=N-5\)) and relation (4.2) that \[\left\|\left(Z^{K}\mathbf{Q}_{\alpha\beta}(\partial h^{1,\natural},\partial h^{1,\natural})\right)^{\flat}\right\|_{L^{2}_{x}(\mathscr{H}_{s})} \lesssim\sum_{|K_{1}|+|K_{2}|\leq|K|}\left\|\partial Z^{K_{1}}h^{1,\natural}\cdot\partial Z^{K_{2}}h^{1,\natural}\right\|_{L^{1}_{y}L^{2}_{x}( \mathscr{H}_{s})}\] \[\lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{1}|\leq\lfloor|K|/2\rfloor\end{subarray}}\left\|\partial Z^{K_{1}}h^{1, \natural}\right\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}\left\|\partial Z ^{K_{2}}h^{1,\natural}\right\|_{L^{2}_{xy}(\mathscr{H}_{s})}\] \[\lesssim C_{1}C_{2}\epsilon^{2}(s^{-\frac{3}{2}+\eta}+s^{-1+\zeta _{k}}+\sum_{i=0}^{4}s^{-1+\gamma_{i}+\zeta_{k-i}}).\] As concerns the cubic terms, we have that \[|\partial^{i}Z^{\leq M}G_{\alpha\beta}(h^{1})(\partial h^{1},\partial h^{1})| \lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|+|K_{3}|\leq M\\ |I_{1}|+|I_{2}|+|I_{3}|=i\end{subarray}}|\partial^{I_{1}}Z^{K_{1}}h^{1}|| \partial\partial^{I_{2}}Z^{K_{2}}h^{1}||\partial\partial^{I_{3}}Z^{K_{3}}h^{1}|.\] When \(M=N\), we get from energy bound (4.19) and pointwise bounds (4.42), (4.43) that \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|+|K_{3}|\leq N\\ |K_{1}|+|K_{2}|\leq\lfloor N/2\rfloor\\ |I_{1}|+|I_{2}|+|I_{3}|=i\end{subarray}}\left\|\partial^{I_{1}}Z^{K_{1}}h^{1} \cdot\partial\partial^{I_{2}}Z^{K_{2}}h^{1}\cdot\partial\partial^{I_{3}}Z^{K_{ 3}}h^{1}\right\|_{L^{2}_{xy}(\mathscr{H}_{s})}\] \[\lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|+|K_{3}|\leq N\\ |K_{1}|+|K_{2}|\leq\lfloor N/2\rfloor\\ |I_{1}|+|I_{2}|+|I_{3}|=i\end{subarray}}\left\|(t/s)\partial^{I_{1}}Z^{K_{1}}h^{1 }\cdot\partial\partial^{I_{2}}Z^{K_{2}}h^{1}\right\|_{L^{\infty}_{xy}(\mathscr{H }_{s})}\left\|(s/t)\partial\partial^{I_{3}}Z^{K_{3}}h^{1}\right\|_{L^{2}_{xy}( \mathscr{H}_{s})}\] \[\lesssim(C_{1}\epsilon)^{3}s^{-\frac{3}{2}+\eta}\] and from (4.27), (4.36) and (4.42) \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|+|K_{3}|\leq N\\ |K_{2}|+|K_{3}|\leq\lfloor N/2\rfloor\\ |I_{2}|+|I_{3}|=i\end{subarray}}\left\|Z^{K_{1}}h^{1}\cdot\partial\partial^{I_ {2}}Z^{K_{2}}h^{1}\cdot\partial\partial^{I_{3}}Z^{K_{3}}h^{1}\right\|_{L^{2}_{ xy}(\mathscr{H}_{s})}\] \[\lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|+|K_{3}|\leq N\\ |K_{2}|+|K_{3}|\leq\lfloor N/2\rfloor\\ |I_{2}|+|I_{3}|=i\end{subarray}}\left\|r^{-1}Z^{K_{1}}h^{1}\right\|_{L^{2}_{ xy}(\mathscr{H}_{s})}\left\|r\partial\partial^{I_{2}}Z^{K_{2}}h^{1}\cdot \partial\partial^{I_{3}}Z^{K_{3}}h^{1}\right\|_{L^{\infty}_{xy}(\mathscr{H}_{ 
s})}\lesssim(C_{1}\epsilon)^{3}s^{-\frac{3}{2}+\eta}.\] Similarly, we deduce from (4.24), (4.42), (4.43) that when \(K\) is of type \((N,k)\) with \(k\leq N_{1}\) \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|+|K_{3}|\leq N\\ |K_{1}|+|K_{2}|\leq\lfloor N/2\rfloor\\ |I_{1}|+|I_{2}|+|I_{3}|=i\end{subarray}}\left\|\partial^{I_{1}}Z^{K_{1}}h^{1} \cdot\partial\partial^{I_{2}}Z^{K_{2}}h^{1}\cdot\partial\partial^{I_{3}}Z^{K_ {3}}h^{1}\right\|_{L^{2}_{xy}(\mathscr{H}_{s})}\lesssim(C_{1}\epsilon)^{3}s^{- 2+\eta}\] while from (4.29) and (4.42) \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|+|K_{3}|\leq N\\ |K_{2}|+|K_{3}|\leq\lfloor N/2\rfloor\\ |I_{2}|+|I_{3}|=i\end{subarray}}\left\|Z^{K_{1}}h^{1}\cdot\partial\partial^{I_ {2}}Z^{K_{2}}h^{1}\cdot\partial\partial^{I_{3}}Z^{K_{3}}h^{1}\right\|_{L^{2}_{ xy}(\mathscr{H}_{s})}\lesssim(C_{1}\epsilon)^{3}s^{-2+\eta}.\] **Lemma 4.8**.: _There exists some small constant \(0<\eta\leq 3\delta_{N}\ll 1\) depending linearly on \(\zeta_{k},\gamma_{k},\delta_{k}\) such that for multi-indexes \(K\) of type \((N_{1},k)\) we have_ \[\|t^{2}Z^{K}\mathbf{Q}_{\alpha\beta}(\partial h,\partial h)\|_{L^ {\infty}_{xy}(\mathscr{H}_{s})} \lesssim C_{2}^{2}\epsilon^{2}s^{-1+\eta} \tag{4.59}\] \[\|t^{\frac{3}{2}}Z^{K}G_{\alpha\beta}(h)(\partial h,\partial h)\| _{L^{\infty}_{xy}(\mathscr{H}_{s})} \lesssim C_{1}C_{2}^{2}\epsilon^{3}s^{-2+\eta}. \tag{4.58}\] _while for any quadratic form \(N\) we have_ \[\left\|t\,\partial Z^{K}N(\partial h,\partial h)\right\|_{L^{\infty}_{xy}( \mathscr{H}_{s})} \lesssim C_{1}^{2}\epsilon^{2}s^{-2+\eta}. \tag{4.60}\] Proof.: This is a direct consequence of bounds (4.42) and (4.43). ### Second order derivatives of the zero modes As expected for solutions to wave equations on \(\mathbb{R}^{1+3}\), the second order derivatives of the differentiated coefficients \(Z^{K}h^{1,\flat}_{\alpha\beta}\) enjoy better decay estimates compared to (4.20) and (4.42) respectively. **Lemma 4.9**.: _There exists some small constant \(0<\eta\leq 2\delta_{N}\ll 1\) depending linearly on \(\zeta_{k},\gamma_{k},\delta_{k}\) such that for any multi-index \(K\) of type \((N_{1},k)\)_ \[\left\|t^{\frac{3}{2}}(s/t)^{2}\partial_{t}^{2}Z^{K}h^{1,\flat}_{\alpha\beta} \right\|_{L^{\infty}(\mathscr{H}_{s})}\lesssim C_{2}\epsilon s^{-1+\eta}. 
\tag{4.61}\] Proof.: The flat wave operator can be expressed in terms of the hyperbolic derivatives as \[-\Box_{tx}=(s/t)^{2}\partial_{t}^{2}+2(x^{\boldsymbol{a}}/t)\underline{ \partial}_{\boldsymbol{a}}\partial_{t}-\underline{\partial}^{\boldsymbol{a} }\underline{\partial}_{\boldsymbol{a}}+\frac{r^{2}}{t^{3}}\partial_{t}+\frac{3 }{t}\partial_{t} \tag{4.62}\] and from (1.15) the curved part can be written as \[(H^{\boldsymbol{\mu}\boldsymbol{\nu}})^{\flat}\partial_{\boldsymbol{\mu}} \partial_{\boldsymbol{\nu}}=(H^{UV})^{\flat}c^{\boldsymbol{\alpha}\boldsymbol{ \beta}}_{\boldsymbol{U}\boldsymbol{\alpha}}\underline{\partial}_{\boldsymbol{ \beta}}+(H^{UV})^{\flat}d^{\boldsymbol{\mu}}_{UV}\underline{\partial}_{ \boldsymbol{\mu}}.\] Equation (4.15) for \(Z^{K}h^{1,\flat}_{\alpha\beta}\) becomes \[\begin{split}\left((s/t)^{2}+(H^{UV})^{\flat}c^{00}_{UV}\right) \partial_{t}^{2}Z^{K}h^{1,\flat}_{\alpha\beta}&=S_{1}(Z^{K}h^{1, \flat}_{\alpha\beta})+S_{2}(Z^{K}h^{1,\flat}_{\alpha\beta})\\ &+F^{K,\flat}_{\alpha\beta}+(F^{0,K}_{\alpha\beta})^{\flat}- \left((H^{\mu\nu})^{\natural}\cdot\partial_{\mu}\partial_{\nu}Z^{K}h^{1, \natural}_{\alpha\beta}\right)^{\flat}\end{split} \tag{4.63}\] where \(F^{K,\flat}_{\alpha\beta}\) is the average over \(\mathbb{S}^{1}\) of the source term in (4.14), and \(S_{1}(p),S_{2}(p)\) are defined as follows for an arbitrary two tensor \(p\) \[\begin{split} S_{1}(p):=-\big{(}2(x^{\boldsymbol{a}}/t)\underline {\partial}_{\boldsymbol{a}}\partial_{t}-\underline{\partial}^{\boldsymbol{ a}}\underline{\partial}_{\boldsymbol{a}}+\frac{r^{2}}{t^{3}}\partial_{t}+\frac{3}{t} \partial_{t}\big{)}p\\ S_{2}(p):=-\big{(}(H^{UV})^{\flat}c^{\boldsymbol{a}\boldsymbol{ \beta}}_{UV}\underline{\partial}_{\boldsymbol{a}}\underline{\partial}_{ \boldsymbol{\beta}}+(H^{UV})^{\flat}c^{\boldsymbol{a}\boldsymbol{b}}_{UV} \underline{\partial}_{\boldsymbol{\alpha}}\underline{\partial}_{\boldsymbol{ b}}+(H^{UV})^{\flat}d^{\boldsymbol{\mu}}_{UV}\underline{\partial}_{\boldsymbol{\mu}} \big{)}p.\end{split} \tag{4.64}\] We note that, if \(\epsilon\) is sufficiently small, relation (1.18) with \(\pi=(Z^{K}H^{UV})^{\flat}\), bounds (3.11), (4.43) and (4.47) yield \[\big{|}(Z^{K}H^{UV})^{\flat}c^{00}_{UV}\big{|}\lesssim C_{1}\epsilon t^{-1/2} (s/t)^{2}s^{\delta_{k+2}}\lesssim(1/2)(s/t)^{2}, \tag{4.65}\] hence it is enough to prove that the right hand side in (4.63) is bounded by \(C_{1}\epsilon t^{-3/2}s^{-1+2\zeta_{k+3}}\). This is the case for the \(S_{1},S_{2}\) terms with \(p=Z^{K}h^{1,\flat}_{\alpha\beta}\), as follows by using the pointwise bounds (4.38), (4.43) and (1.17) together with the fact that \(\underline{\partial}_{\boldsymbol{a}}=\Omega_{0\boldsymbol{a}}/t\). All quadratic and cubic terms in \(F^{K,\flat}_{\alpha\beta}\), except for the commutator terms, are estimated using (4.60) and (4.59) respectively, while from (3.36), (3.11) and (4.38) we have \[\|t^{3}(F^{0,K}_{\alpha\beta})^{\flat}\|_{L^{\infty}(\mathscr{H}_{\ast})} \lesssim\epsilon.\] From the pointwise bounds (4.7), (4.37) and relation (4.2), we have that for multi-indexes \(K\) of type \((N_{1},k)\) \[\begin{split}&\big{|}Z^{K}\big{(}(H^{\mu\nu})^{\natural}\cdot \partial_{\mu}\partial_{\nu}h^{1,\natural}_{\alpha\beta}\big{)}^{\flat}\big{|} \\ &\lesssim\sum_{|K^{\prime}|\leq|K|}\big{\|}H^{1,\natural}\big{\|} _{L^{\infty}_{x}L^{2}_{y}}\ \big{\|}\partial_{\mu}\partial_{\nu}Z^{K^{\prime}}h^{1,\natural}_{\alpha\beta }\big{\|}_{L^{\infty}_{x}L^{2}_{y}}+\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! **Lemma 4.11**.: _There exists some small constant \(0<\eta\leq 2\delta_{N}\ll 1\) depending linearly on \(\zeta_{k},\gamma_{k},\delta_{k}\) such that for any multi-index \(K\) of the type \((N-1,k)\) we have_ \[\left\|(s/t)^{2}\partial_{t}^{2}Z^{K}h^{1,\flat}_{\alpha\beta} \right\|_{L^{2}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{-1+\eta}. \tag{4.68}\] \[\left\|(s/t)^{2}\partial_{t}^{2}\partial Z^{K}h^{1,\flat}_{ \alpha\beta}\right\|_{L^{2}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{-\frac {1}{2}+\eta}. \tag{4.67}\] Proof.: This is based on (4.63) and (4.64). We only detail the proof of estimate (4.67), since (4.68) is obtained in a similar way by replacing energy bound (4.20) with (4.19) whenever it occurs. We make use of (4.63), (4.64) and (4.65) to estimate the \(S_{1},S_{2}\) terms. From \(\underline{\partial}_{\boldsymbol{a}}=(1/t)\Omega_{0\boldsymbol{a}}\), the energy bound (4.20) and the pointwise bounds (3.11), (4.43) we derive that \[\sum_{i=0}^{1}\left\|sS_{i}(Z^{K}h^{1,\flat}_{\alpha\beta})\right\|_{L^{2}( \mathscr{H}_{s})}\lesssim\|(s/t)\partial\Gamma^{\leq 1}Z^{K}h^{1,\flat}_{ \alpha\beta}\|_{L^{2}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{\zeta_{k+1}}.\] We recall that the quadratic null terms satisfy (4.56), while the cubic terms verify (4.53). In general, the zero-mode of a quadratic interaction can be estimated as follows: using (4.20) and (4.38) we find \[\sum_{|K_{1}|+|K_{2}|\leq|K|}\|\partial Z^{K_{1}}h^{1,\flat}\cdot \partial Z^{K_{2}}h^{1,\flat}\|_{L^{2}_{x}(\mathscr{H}_{s})}\lesssim\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! We observe that the above bounds are satisfied by any \(\partial Z^{K}h^{1,\sharp}_{TU}\) with \(|K|\leq N_{1}\) as a consequence of (4.41). In the interior of the cone \(t=2r\) they follow immediately from (4.38), where one has \(t^{2}-r^{2}\geq 3t^{2}/4\) and consequently \[|\partial Z^{\leq N-3}h^{1,\flat}_{a\beta}(t,x,y)|\lesssim C_{1}\epsilon t^{-3 /2+\zeta_{N}}.\] For all other points in the interior region, (4.70)-(4.71) are instead obtained by integration along characteristics, see lemma 4.13. The difference between \(h^{1}_{TU}\) and any general coefficient \(h^{1}_{\alpha\beta}\) relies on the fact that weak null terms do not appear in the equation (3.31) satisfied by the former. We only sketch the proof of the following lemma, see [40] for additional details. 
**Lemma 4.13**.: _Let \(s>s_{0}\) and \(\mathscr{D}_{s}\) be the set of points \((t,x)\) in the cone \(t/2<|x|<2t-3\) with \(t\geq 2\) such that_ \[|x|\leq\sqrt{t^{2}-s^{2}}\text{ if }(t,x)\in\mathscr{D}^{i}\quad\text{ or }\quad t\leq s^{2}/2\text{ otherwise}.\] _We denote by \(\partial_{B}\mathscr{D}_{s}\) the lateral boundary of \(\mathscr{D}_{s}\), i.e._ \[\partial_{B}\mathscr{D}_{s}:=\{(t,x):|x|=t/2\text{ and }t\leq 8/3\text{ or }|x|=2t-3\text{ and }t\leq s^{2}/2\}.\] _Let \(u\) be a solution of the wave equation \(\tilde{\Box}_{g}u=F\) on a curved 4-dimensional spacetime, where \(g=(g_{\boldsymbol{\mu\nu}})\) is a Lorentzian metric, \(g^{-1}=(g^{\boldsymbol{\mu\nu}})\) is its inverse and \(F\) is some smooth source term. Let \(m=(m_{\boldsymbol{\mu\nu}})\) denote the Minkowski metric, \(m^{-1}=(m^{\boldsymbol{\mu\nu}})\) its inverse and \(\pi^{\boldsymbol{\mu\nu}}:=g^{\boldsymbol{\mu\nu}}-m^{\boldsymbol{\mu\nu}}\). For any spacetime point \((t,x)\in\mathscr{D}_{s}\), let \((\tau,\varphi(\tau;t,x))\) be the integral curve of the vector field_ \[\partial_{t}+\frac{1-\pi_{LL}/4}{1+\pi_{LL}/4}\partial_{r}\] _passing through \((t,x)\), i.e. \(\varphi(t;t,x)=x\). Assume that_ \[|\pi^{L\underline{L}}(t,x)|<1/4\quad\text{and}\quad|\pi_{LL}(t,x)|\leq\epsilon\frac{|t-r|}{t+r},\qquad\forall(t,x)\in\mathscr{D}_{s}.\] _Then for any \((t,x)\in\mathscr{D}_{s}\) one has that_ \[t|(\partial_{t}-\partial_{r})u(t,x)|\lesssim\sup_{\partial_{B}\mathscr{D}_{s}}|(\partial_{t}-\partial_{r})(ru)|+\int_{2}^{t}|M[u,\pi](\tau)|d\tau+\int_{2}^{t}\tau|F|_{(\tau,\varphi(\tau;t,x))}d\tau \tag{4.72}\] _where_ \[M[u,\pi](\tau)=\Big{(}r|\Delta_{\mathbb{S}^{1}}u|+|\pi|_{\mathscr{L}\mathscr{T}}\big{(}r|\overline{\partial}\partial u|+|\partial u|\big{)}+|\pi|r|\overline{\partial}^{2}u|\Big{)}|_{(\tau,\varphi(\tau;t,x))}. \tag{4.73}\]

Proof.: From the hypothesis on \(\pi\), \(-2g^{L\underline{L}}=1-2\pi^{L\underline{L}}>1/2\) and \(\tilde{g}^{\alpha\beta}:=\frac{g^{\alpha\beta}}{-2g^{L\underline{L}}}\) is well-defined. From \(\tilde{\Box}_{g}u=F\) one has that \[\Box_{x}u+\theta^{\alpha\beta}\partial_{\alpha}\partial_{\beta}u=\frac{F}{-2g^{L\underline{L}}}\] where \(\theta^{\alpha\beta}:=\tilde{g}^{\alpha\beta}-m^{\alpha\beta}\) satisfies the following \[\theta_{L\underline{L}}=0,\quad\theta_{LT}=(-2g^{L\underline{L}})^{-1}\pi_{LT},\quad\overline{\mathrm{tr}}\theta=(-2g^{L\underline{L}})^{-1}(\overline{\mathrm{tr}}\theta+\pi_{L\underline{L}}),\quad|\theta|\lesssim|\pi|\] \[\Big{|}\theta^{\alpha\beta}\partial_{\alpha}\partial_{\beta}u-\frac{1}{r}\theta^{L\underline{L}}\underline{L}^{2}(ru)\Big{|}\lesssim|\pi|_{\mathscr{L}\mathscr{T}}|\overline{\partial}\partial u|+|\pi||\overline{\partial}^{2}u|+|\pi|r^{-1}|\partial u|.\] We recall that the flat wave operator can be written as follows \[\square_{x}u=\frac{1}{r}(\partial_{t}+\partial_{r})(\partial_{t}-\partial_{r})(ru)+\Delta_{\mathbb{S}^{1}}u.\] Therefore \[\Big{|}\Big{(}\partial_{t}+\frac{1-\pi_{LL}/4}{1+\pi_{LL}/4}\partial_{r}\Big{)}(\partial_{t}-\partial_{r})(ru)\Big{|}\lesssim r|\Delta_{\mathbb{S}^{1}}u|+|\pi|_{\mathscr{L}\mathscr{T}}\big{(}r|\overline{\partial}\partial u|+|\partial u|\big{)}+|\pi|r|\overline{\partial}^{2}u|+|rF|.\] Due to the smallness assumption on \(\pi_{LL}\), any integral curve \((\tau,\varphi(\tau;t,x))\) passing through a point \((t,x)\in\mathscr{D}_{s}\) must intersect the boundary \(\partial_{B}\mathscr{D}_{s}\).
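For clarity, we spell out the integration step used next (a minimal sketch; the only input is the transport vector field appearing in the statement of the lemma). Writing \(V:=\partial_{t}+\frac{1-\pi_{LL}/4}{1+\pi_{LL}/4}\partial_{r}\), the curve \(\tau\mapsto(\tau,\varphi(\tau;t,x))\) satisfies \[\frac{d}{d\tau}\varphi(\tau;t,x)=\frac{1-\pi_{LL}/4}{1+\pi_{LL}/4}\,\frac{\varphi(\tau;t,x)}{|\varphi(\tau;t,x)|},\qquad\frac{d}{d\tau}\big{[}w(\tau,\varphi(\tau;t,x))\big{]}=\big{(}Vw\big{)}(\tau,\varphi(\tau;t,x))\] for any differentiable function \(w\); applying this with \(w=(\partial_{t}-\partial_{r})(ru)\) and integrating in \(\tau\) from the first intersection with \(\partial_{B}\mathscr{D}_{s}\) produces the estimate recorded below.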
The result of the lemma finally follows from integration along the characteristic curve, from which we get \[|(\partial_{t}-\partial_{r})(ru)(t,x)|\lesssim|(\partial_{t}-\partial_{r})(ru)(t_{0},x_{0})|+\int_{t_{0}}^{t}|M[u,\pi](\tau)|+\tau|F|_{(\tau,\varphi(\tau;t,x))}d\tau\] where \((t_{0},x_{0})\) is the first point at which the intersection with the lateral boundary occurs.

Proof of Proposition 4.12.: Throughout this proof we will denote by \(\eta\) any small positive constant that linearly depends on \(\zeta_{k},\gamma_{k},\delta_{k}\). After the above observations, we only need to prove that (4.70) and (4.71) are satisfied in the exterior of the cone \(t=2r\). Such estimates are satisfied by the \((s/t)^{2}\partial_{t}\) and \(\underline{\partial}_{\boldsymbol{a}}\) derivatives as a consequence of (4.38). Moreover, since \[\partial_{t}=\frac{t-r}{t}\partial_{t}+\frac{x^{\boldsymbol{a}}}{t+r}\underline{\partial}_{\boldsymbol{a}}+\frac{r}{t+r}(\partial_{t}-\partial_{r}),\qquad\partial_{\boldsymbol{a}}=\underline{\partial}_{\boldsymbol{a}}-\frac{x_{\boldsymbol{a}}}{t}\partial_{t},\] we can reduce to proving them for the \(\partial_{t}-\partial_{r}\) derivative, which we do by applying Lemma 4.13. We remark that for a point \((t,x)\in\mathscr{D}_{s}\cap\mathscr{D}^{\rm i}\), the integral curve \((\tau,\varphi(\tau;t,x))\) may have a non-empty intersection with the exterior region, which explains why in the following we invoke some pointwise estimates obtained in section 3. The integration of (3.31) along \(\mathbb{S}^{1}\) shows that \(Z^{K}h^{1,\flat}_{TU}\) is a solution to the following equation \[\square_{x}Z^{K}h^{1,\flat}_{TU}+(H^{\boldsymbol{\mu}\boldsymbol{\nu}})^{\flat}\partial_{\boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}Z^{K}h^{1,\flat}_{TU}=F^{K,\flat}_{TU}+F^{0,K}_{TU}-\big{(}(H^{\mu\nu})^{\natural}\cdot\partial_{\mu}\partial_{\nu}Z^{K}h^{1,\natural}_{TU}\big{)}^{\flat} \tag{4.74}\] where \(F^{K,\flat}_{TU}=(F^{K}_{TU})^{\flat}\) and \(F^{K}_{TU}\) is given by (3.32). We recall that the tensor \(H^{\mu\nu}\) decomposes as in (1.12). The hypotheses of Lemma 4.13 are met thanks to the pointwise bounds (3.11), (3.17) and (4.47), therefore for all \(s>s_{0}\) and all \((t,x)\in\mathscr{D}_{s}\) \[|(\partial_{t}-\partial_{r})Z^{K}h^{1,\flat}_{TU}(t,x)|\leq t^{-1}\sup_{\partial_{B}\mathscr{D}_{s}}|(\partial_{t}-\partial_{r})(rZ^{K}h^{1,\flat}_{TU})|+t^{-1}\int_{2}^{t}|M[Z^{K}h^{1,\flat}_{TU},H^{\flat}](\tau)|d\tau\] \[+t^{-1}\int_{2}^{t}\tau\,(\text{RHS of }(4.74))\,d\tau \tag{4.75}\] where \(M[\cdot,\cdot]\) is given by the formula in (4.73).
From the interior pointwise bounds (4.38), (4.43) and the exterior pointwise bounds (3.12), (3.14) we see that \[\sup_{\partial_{B}\mathscr{D}_{s}}|(\partial_{t}-\partial_{r})(rZ^{K}h^{1, \flat}_{TU})|\lesssim C_{1}\epsilon t^{-\frac{1}{2}+\eta}.\] As concerns the contribution coming from \(M[Z^{K}h^{1,\flat}_{TU},H^{\flat}]\), we see from formula (4.73) together with the fact that \(\boldsymbol{\dot{\phi}_{j}}=\frac{x^{\sharp}}{r^{2}}\Omega_{\boldsymbol{i} \boldsymbol{j}}\) and the exterior pointwise bounds (3.12)-(3.14) that for points \((t,x)\in\mathscr{D}_{s}\cap\mathscr{D}^{\rm e}\) \[|M[Z^{K}h^{1,\flat}_{TU},H^{\flat}](t,x)|\lesssim C_{0}\epsilon t^{-1+2\sigma} \sqrt{l(t)}+C_{0}^{2}\epsilon^{2}t^{-2+2\sigma}.\] In \(\mathscr{D}_{s}\cap\mathscr{D}^{\rm i}\), we rewrite (4.73) using inequality (1.20) \[|M[Z^{K}h^{1,\flat}_{TU},H^{\flat}](t,x)|\lesssim|\underline{ \partial}Z^{\leq 1}Z^{K}h^{1,\flat}_{TU}|+|H^{\flat}|_{\mathscr{L}\mathscr{T}} \Big{(}r\Big{(}\frac{s}{t}\Big{)}^{2}|\partial^{2}Z^{K}h^{1,\flat}_{TU}|+| \partial Z^{\leq 1}Z^{K}h^{1,\flat}_{TU}|\Big{)}\] \[+|H^{\flat}|\Big{(}r\Big{(}\frac{s}{t}\Big{)}^{4}|\partial^{2}Z^ {K}h^{1,\flat}_{TU}|+\Big{(}\frac{s}{t}\Big{)}^{2}|\partial Z^{\leq 1}Z^{K}h^{1, \flat}_{TU}|+|\underline{\partial}Z^{\leq 1}Z^{K}h^{1,\flat}_{TU}|\Big{)}\] and hence deduce from pointwise bounds (3.11), (4.38), (4.38), (4.43), (4.47) that \[|M[Z^{K}h^{1,\flat}_{TU},H^{\flat}](t,x)|\lesssim C_{1}\epsilon t^{-\frac{3} {2}+\eta}.\] Overall, \(M[Z^{K}h^{1,\flat}_{TU},H^{\flat}]|_{(\tau,\varphi(\tau;t,x))}\) is an integrable function of \(\tau\). We next show that the right hand side of (4.74) multiplied by \(t\) is integrable in \(\tau\) along the characteristic curve. Concerning the contributions to \(F^{K,\flat}_{TU}\), see formula (3.32): the weak null terms do not appear in \(F^{K,\flat}_{TU}\), hence from (3.22), (3.23) and (4.58), (4.59) we have that \[|Z^{K}F^{\flat}_{TU}(t,x)|\lesssim C_{1}^{2}\epsilon^{2}t^{-\frac{5}{2}+\eta} +C_{0}^{2}\epsilon^{2}t^{-2+2\sigma}\sqrt{l(t)}.\] The terms arising from the commutation of the null frame with the wave operator are estimated, on the one hand, using (3.11), pointwise interior bounds (4.38), (4.43) and the exterior bounds (3.13), (3.14) \[\sum_{|K^{\prime}|\leq|K|}C^{\boldsymbol{i}\alpha\beta}_{TU,K^{\prime}}| \mathscr{\partial}_{\boldsymbol{i}}Z^{K^{\prime}}h^{1,\flat}_{\alpha\beta}|+ \big{|}D^{\alpha\beta}_{TU,K^{\prime}}Z^{K^{\prime}}h^{1,\flat}_{\alpha\beta} \big{|}\lesssim C_{1}\epsilon t^{-\frac{5}{2}+\eta}+C_{0}\epsilon t^{-2+\sigma} \sqrt{l(t)}.\] On the other hand, using additionally the a-priori bound (4.6) we see that \[\sum_{|K_{1}|+|K_{2}|\leq|K|}\big{|}E^{\boldsymbol{i}\alpha\beta}_{TU\mu\nu,K _{1}K_{2}}\big{(}Z^{K_{1}}H^{\mu\nu}\cdot\mathscr{\partial}_{\boldsymbol{i}}Z ^{K_{2}}h^{1}_{\alpha\beta}\big{)}^{\flat}\big{|}+\big{|}F^{\alpha\beta}_{TU \mu\nu,K_{1}K_{2}}\big{(}Z^{K_{1}}H^{\mu\nu}\cdot Z^{K_{2}}h^{1}_{\alpha\beta} \big{)}^{\flat}\big{|}\] \[\lesssim C_{1}C_{2}\epsilon^{2}t^{-\frac{5}{2}+\eta}+C_{0}^{2}\epsilon^{2}t^ {-2+2\sigma}\sqrt{l(t)}.\] From (4.66), together with (3.12) and (3.14), we have that \[\big{|}\big{(}(H^{1,\mu\nu})^{\natural}\cdot\partial_{\mu}\partial_{\nu}Z^{K} h^{1,\natural}_{TU}\big{)}^{\flat}(t,x)\big{|}+\big{|}\big{(}[Z^{K},H^{1,\mu\nu} \partial_{\mu}\partial_{\nu}]h^{1,\natural}_{TU}\big{)}^{\flat}(t,x)\big{|} \lesssim C_{1}^{2}\epsilon^{2}t^{-\frac{5}{2}+\eta}.\] Thus all together \[t|F^{K,\flat}_{TU}-[Z^{K},H^{0,\boldsymbol{\mu}\boldsymbol{\nu}} 
\partial_{\boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}]h^{1,\flat}_{TU}|+t \big{|}\big{(}(H^{1,\mu\nu})^{\natural}\cdot\partial_{\mu}\partial_{\nu}Z^{K} h^{1,\natural}_{TU}\big{)}^{\flat}(t,x)\big{|}\\ \lesssim C_{1}\epsilon t^{-\frac{3}{2}+\eta}+C_{0}\epsilon t^{-1+ 2\sigma}\sqrt{l(t)}.\] Using the structure highlighted in Lemma 3.9, one can easily show that \[|F^{0,K}_{TU}(t,x)|\lesssim\epsilon t^{-3}.\] Finally, if \(K\) is such that \(Z^{K}=\partial^{I}\) is a product of derivatives only, with \(|I|\leq N_{1}\), we derive from (3.11), (3.12) and (4.38) that \[t\big{|}[\partial^{I},(H^{0,\boldsymbol{\mu}\boldsymbol{\nu}})\partial_{ \boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}]h^{1,\flat}_{TU}\big{|}\lesssim \sum_{|I_{1}|+|I_{2}|=|I|\atop|I_{1}|\geq 1}t|\partial^{I_{1}}H^{0}|\,| \partial^{2}\partial^{I_{2}}h^{1,\flat}_{TU}|\lesssim C_{1}\epsilon^{2}t^{-2+ \delta_{k+2}}.\] This is an integrable quantity, as all others above, therefore we obtain (4.71). Bound (4.70) follows then by induction on the number \(k\) of vector fields in \(Z^{K}\), since \[|[Z^{K},(H^{0,\boldsymbol{\mu\nu}})\partial_{\boldsymbol{\mu}} \partial_{\boldsymbol{\nu}}]h_{TU}^{1,\flat}| \leq\sum_{\begin{subarray}{c}|I_{1}|+|K_{2}^{\prime}|\leq|K|\\ |I_{1}|\geq 1,|K_{2}^{\prime}|<|K|\end{subarray}}|\partial^{I_{1}}H^{0}|| \partial^{2}Z^{K_{2}^{\prime}}h_{TU}^{1,\flat}|+\sum_{\begin{subarray}{c}|K_{1 }|+|K_{2}^{\prime}|\leq|K|\\ |K_{2}^{\prime\prime}|<|K|\end{subarray}}|Z^{K_{1}}H^{0}||\partial^{2}Z^{K_{2}^ {\prime\prime}}h_{TU}^{1,\flat}|\] \[\leq\sum_{|K^{\prime}|<|K|}\frac{1}{(1+t+r)^{2}}|\partial^{2}Z^{K ^{\prime}}h_{TU}^{1,\flat}|+\sum_{|K^{\prime\prime}|<|K|}\frac{1}{(1+t+r)}| \partial^{2}Z^{K^{\prime\prime}}h_{TU}^{1,\flat}|\] where \(K^{\prime}\) is a multi-index of type \((|K|-1,k)\) and \(K^{\prime\prime}\) is of type \((|K|-1,k-1)\). ### The weak null terms In this section we prove estimates on the weak null terms. **Lemma 4.14**.: _For any multi-index \(K\) of type \((N,k)\) and \(i=0,1\), we have that_ \[\big{\|}\partial^{i}Z^{K}P_{\alpha\beta}(\partial h,\partial h)\big{\|}_{L^{2} _{xy}(\mathscr{H}_{s})}\lesssim C_{1}^{2}\epsilon^{2}s^{-\frac{1}{2}+\zeta_{k}}, \tag{4.76}\] _while for any multi-index \(K\) of type \((N,k)\) with \(k\leq N_{1}\)_ \[\big{\|}Z^{K}P_{\alpha\beta}(\partial h,\partial h)\big{\|}_{L^{2} _{xy}(\mathscr{H}_{s})}\lesssim C_{1}^{2}\epsilon^{2}s^{-1+\delta_{k}}. \tag{4.77}\] _Moreover, for \(K\) a multi-index of type \((N,k)\) we have_ \[\big{\|}Z^{K}P_{\alpha\beta}^{\flat}(\partial h,\partial h)\big{\|} _{L^{2}_{xy}(\mathscr{H}_{s})}\] \[\lesssim C_{1}\epsilon s^{-1}\sum_{K^{\prime}}E^{i}(s,Z^{K^{ \prime}}h^{1,\flat})^{1/2}+C_{1}\epsilon s^{-1+C_{\epsilon}} \sum_{K^{\prime\prime}}E^{i}(s,Z^{K^{\prime\prime}}h^{1,\flat})^{1/2}+C_{1} ^{2}\epsilon^{2}s^{-\frac{3}{2}+2\delta_{N}}\] \[+C_{1}^{2}\epsilon^{2}\delta_{k>N_{1}}\left(\sum_{i=1}^{4}s^{-1+ \gamma_{i}+\zeta_{k-i}}+s^{-1+\zeta_{k}}\right) \tag{4.78}\] _where \(K^{\prime}\) is of type \((|K|,k)\), \(K^{\prime\prime}\) of type \((|K|-1,k-1)\), and where \(\delta_{k>N_{1}}=1\) when \(k>N_{1}\), 0 otherwise._ Proof.: We start by decomposing each occurrence of \(h\) into the sum \(h^{0}+h^{1}\) and observe that the quadratic interactions involving at least one factor \(h^{0}\) verify (4.50). We hence focus on estimating the weak null terms only involving factors \(h^{1}\) and distinguish between the region inside the cone \(t=2r\) and its complement in \(\mathscr{D}^{i}\). 
In the interior of the cone \(t=2r\) (where \(s\approx t\)), the bounds of the statement do not depend on the weak null structure: consequently they are the same as the bounds for the null terms (4.52), (4.54) and (4.56). The estimates in the region \(\mathscr{D}^{i}\cap\{t<2r\}\) follow from the particular structure of the weak null terms and the wave condition. We have already seen that these two yield inequality (3.47) which, after (1.20), can be also written in the following form (4.79) \[\begin{split}&\big{|}\partial^{i}Z^{K}P_{\alpha\beta}(\partial h^{1}, \partial h^{1})\big{|}\lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K| \\ |I_{1}|+|I_{2}|=i\end{subarray}}|\partial\partial^{I_{1}}Z^{K_{1}}h^{1}|\beta \partial^{I_{2}}Z^{K_{2}}h^{1}|+|\underline{\partial}\partial^{I_{1}}Z^{K_{1} }h^{1}||\partial\partial^{I_{2}}Z^{K_{2}}h^{1}|\\ &+\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |I_{1}|+|I_{2}|=i\end{subarray}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! _for multi-indexes \(K\) of type \((N,k)\)_ \[\iint_{\mathscr{H}_{[s_{0},s]}}\big{|}\big{(}[Z^{K},H^{1,\mu\nu} \partial_{\mu}\partial_{\nu}]h^{1,\flat}_{\alpha\beta}\big{)}^{\flat}\big{|}| \partial_{t}Z^{K}h^{1,\flat}_{\alpha\beta}|dxdt\\ +\iint_{\mathscr{H}_{[s_{0},s]}}\big{|}\big{(}(H^{1,\mu\nu})^{ \natural}\cdot\partial_{\mu}\partial_{\nu}Z^{K}h^{1,\natural}_{\alpha\beta} \big{)}^{\flat}\big{|}|\partial_{t}Z^{K}h^{1,\flat}_{\alpha\beta}|dxdt\lesssim( C_{0}^{2}+C_{1}^{2})C_{2}^{2}\epsilon^{3}\Big{(}\sum_{i=1}^{4}s^{\gamma_{i}+ \zeta_{k-i}+\zeta_{k}}+s^{2\zeta_{k}}\Big{)} \tag{4.81}\] _for multi-indexes \(K\) of type \((N,k)\) with \(k\leq N_{1}\)_ \[\iint_{\mathscr{H}_{[s_{0},s]}}\big{|}[Z^{K},H^{1,\mu\nu} \partial_{\mu}\partial_{\nu}]h^{1}_{\alpha\beta}\big{|}|\partial_{t}Z^{K}h^{1,\flat}_{\alpha\beta}|dxdt\lesssim(C_{0}^{2}+C_{1}^{2})C_{2}^{2}\epsilon^{3}s ^{2\delta_{k}} \tag{4.82}\] _and for multi-indexes \(K\) of type \((N-1,k)\) with \(k\leq N_{1}\)_ \[\iint_{\mathscr{H}_{[s_{0},s]}}\big{|}\big{(}[Z^{K},H^{1,\mu\nu} \partial_{\mu}\partial_{\nu}]h^{1,\flat}_{\alpha\beta}\big{)}^{\flat}\big{|}| \partial_{t}Z^{K}h^{1,\flat}_{\alpha\beta}|dxdt\\ +\iint_{\mathscr{H}_{[s_{0},s]}}\big{|}\big{(}(H^{1,\mu\nu})^{ \natural}\cdot\partial_{\mu}\partial_{\nu}Z^{K}h^{1,\natural}_{\alpha\beta} \big{)}^{\flat}\big{|}|\partial_{t}Z^{K}h^{1,\flat}_{\alpha\beta}|dxdt\lesssim( C_{0}^{2}+C_{1}^{2})C_{2}^{2}\epsilon^{3}. \tag{4.83}\] We postpone the proof of the above proposition and first observe that, because of (4.48), we will need to estimate quadratic terms (in fact, commutators) that are either pure products of zero modes, or pure products of non-zero modes, or mixed products. 
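For orientation, this trichotomy comes from nothing more than the bilinearity of the commutator terms in \((H^{1},h^{1})\) together with the splitting into zero modes and zero-average parts; schematically,
\[
[Z^{K},H^{1,\mu\nu}\partial_{\mu}\partial_{\nu}]h^{1}_{\alpha\beta}=\underbrace{[Z^{K},(H^{1,\mu\nu})^{\flat}\partial_{\mu}\partial_{\nu}]h^{1,\flat}_{\alpha\beta}}_{\text{pure zero modes}}+\underbrace{[Z^{K},(H^{1,\mu\nu})^{\natural}\partial_{\mu}\partial_{\nu}]h^{1,\natural}_{\alpha\beta}}_{\text{pure non-zero modes}}+\underbrace{[Z^{K},(H^{1,\mu\nu})^{\flat}\partial_{\mu}\partial_{\nu}]h^{1,\natural}_{\alpha\beta}+[Z^{K},(H^{1,\mu\nu})^{\natural}\partial_{\mu}\partial_{\nu}]h^{1,\flat}_{\alpha\beta}}_{\text{mixed}}.
\]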
We proceed to the analysis of those separately, in the lemmas that follow. **Lemma 4.16**.: _There exists \(0<\eta\leq 2\delta_{N}\ll 1\), linearly depending on \(\zeta_{k},\gamma_{k},\delta_{k}\), such that for any multi-index \(K\) of type \((N,k)\) we have_ (4.84) \[\begin{split}\big{\|}[\partial^{i}Z^{K},(H^{1,\mu\boldsymbol{\nu} })^{\flat}\partial_{\boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}]h^{1,\flat} _{\alpha\beta}-\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! When \(i=1\), the \(L^{2}\) bound (4.68) gives \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{1}|\leq|\lfloor K/2\rfloor,\,|K_{2}|<|K|\end{subarray}}\big{\|}Z^{K_{1}}H_{ LL}^{1,\flat}\cdot\partial_{t}^{2}\partial Z^{K_{2}}h_{\alpha\beta}^{1,\flat} \big{\|}_{L^{2}(\mathscr{H}_{s})}\lesssim C_{1}C_{2}\epsilon^{2}s^{-1+\eta}.\] The second sum is estimated using (4.19) and (4.46) whenever \(|K_{1}|\leq N-2\), and using (4.38) and (4.49) otherwise \[\sum_{|K_{1}|+|K_{2}|\leq|K|}\hskip-14.226378pt\big{\|}\partial Z^{K_{1}}H_{ LL}^{1,\flat}\cdot\partial_{t}^{2}Z^{K_{2}}h_{\alpha\beta}^{1,\flat}\big{\|}_{L^{ 2}(\mathscr{H}_{s})}\lesssim C_{1}^{2}\epsilon s^{-1+\eta}.\] **Lemma 4.17**.: _For any fixed fixed multi-index \(K\) and any smooth function \(\phi\), we have_ \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{2}|\leq|\lfloor K/2\rfloor\end{subarray}}\iint_{\mathscr{H}_{[s_{0},s]}}|Z ^{K_{1}}H_{LL}^{1,\flat}||\partial_{t}^{2}Z^{K_{2}}h_{\alpha\beta}^{1,\flat }||\partial_{t}\phi|dxdt\lesssim(C_{0}^{2}+C_{1}^{2})C_{2}^{2}\epsilon^{3}s^{ \kappa_{\phi}} \tag{4.85}\] _with_ \[\kappa_{\phi}=\begin{cases}1,&\text{ if }\phi=Z^{K}h_{\alpha\beta}^{1}\text{ and K is of type }(N+1,k),\ k\leq N\\ 0,&\text{ if }\phi=Z^{K}h_{\alpha\beta}^{1,\flat}\text{ and K is of type }(N,k)\\ 0,&\text{ if }\phi=Z^{K}h_{\alpha\beta}^{1}\text{ and K is of type }(N,k),\ k\leq N_{1}\end{cases} \tag{4.86}\] Proof.: We restrict our attention to the case where \(Z^{K_{1}}=\Gamma^{K_{1}}\), as the ones where \(Z^{K_{1}}=\partial^{I_{1}}\Gamma^{J_{1}}\) with \(|I_{1}|\geq 1\) can be estimated as in the proof of lemma 4.16. 
For any \(K_{1}\), \(K_{2}\) in the selected range of indexes and any fixed \(\nu\) such that \(2\delta_{k}<\nu\ll 1\), we write the following \[\iint_{\mathscr{H}_{[s_{0},s]}}\hskip-14.226378pt|Z^{K_{1}}H_{ LL}^{1,\flat}||\partial_{t}^{2}Z^{K_{2}}h_{\alpha\beta}^{1,\flat}||\partial_{t} \phi|dxdt\leq\int_{s_{0}}^{s}\hskip-14.226378ptE^{\text{i}}(\tau,\phi)^{\frac{ 1}{2}}\big{\|}Z^{K_{1}}H_{LL}^{1,\flat}\cdot\partial_{t}^{2}Z^{K_{2}}h_{\alpha \beta}^{1,\flat}\big{\|}_{L^{2}(\mathscr{H}_{\tau})}d\tau\] \[\lesssim\int_{s_{0}}^{s}\epsilon\tau^{-1-\nu}E^{\text{i}}(\tau, \phi)d\tau+\frac{1}{\epsilon}\int_{s_{0}}^{s}\int_{\mathscr{H}_{\tau}}\tau^{1 +\nu}|Z^{K_{1}}H_{LL}^{1,\flat}|^{2}|\partial_{t}^{2}Z^{K_{2}}h_{\alpha\beta}^{ 1,\flat}|^{2}dxd\tau\] \[\lesssim C_{1}^{2}\epsilon^{3}s^{\kappa_{\phi}}+\frac{1}{\epsilon }\int_{s_{0}}^{t_{s}}\int_{\mathscr{C}_{t}}|Z^{K_{1}}H_{LL}^{1,\flat}|^{2}| \partial_{t}^{2}Z^{K_{2}}h_{\alpha\beta}^{1,\flat}|^{2}t^{1+\nu}dxdt\] where \(\mathscr{C}_{t}=\{x\in\mathbb{R}^{3}:\sqrt{(t^{2}-s^{2})^{+}}\leq|x|\leq\sqrt{( t-1)^{2}-1}\}\) and \(t_{s}=s^{2}/2\). The latter inequality is obtained by injecting the energy assumptions (4.3), (4.4), (4.5) in the first integral on the second line and by performing a change of coordinates in the second one. We use (4.61) and the Hardy inequality of corollary B.8 with \(\mu=1-\eta\) and \(\alpha=1-\eta-\nu\) to estimate the above integral. For any fixed \(\mu^{\prime}>0\), we get that \[\frac{1}{\epsilon}\int_{s_{0}}^{t_{s}}\int_{\mathscr{C}_{t}}|Z^{K_{1}}H_{LL}^{1,\flat}|^{2}|\partial_{t}^{2}Z^{K_{2}}h_{\alpha\beta}^{1,\flat}|^{2}t^{1+\nu}dxdt\] \[\lesssim C_{2}^{2}\epsilon\int_{s_{0}}^{t_{s}}\int_{\mathscr{C}_{t}}\frac{|Z^{K_{ 1}}H_{LL}^{1,\flat}|^{2}}{(1+t-r)^{2+(1-\eta)}}\frac{dxdt}{(1+t+r)^{1-\eta-\nu}}\] \[\lesssim C_{2}^{2}\epsilon\int_{s_{0}}^{t_{s}}\int_{\mathscr{H}_{\tau}}\frac{| \partial_{r}Z^{K_{1}}H_{LL}^{1,\flat}|^{2}}{\tau^{2(1-\eta-\nu)}}dxd\tau+C_{2}^ {2}\epsilon\int_{s_{0}}^{t_{s}}\int_{\Sigma_{t}^{\circ}}|\partial_{r}Z^{K_{1}} H_{LL}^{1,\flat}|^{2}\frac{(1+|r-t|)^{1+\mu^{\prime}}}{(1+t+r)^{1-\eta-\nu}}dxdt.\] We estimate the first integral using (4.49): \[C_{2}^{2}\epsilon\int_{s_{0}}^{t_{s}}\int_{\mathscr{H}_{\tau}}\frac{|\partial_{ \tau}Z^{K_{1}}H^{1,\flat}_{LL}|^{2}}{\tau^{2(1-\eta-\nu)}}dxd\tau\lesssim\int_{s _{0}}^{t_{s}}C_{1}^{2}C_{2}^{2}\epsilon^{3}\tau^{-2(1-\eta-\nu)+4\delta_{k_{1}} }d\tau\lesssim C_{1}^{2}C_{2}^{2}\epsilon^{3}.\] For the latter one, we pick \(2\delta_{k}\leq\nu\ll\kappa\) and \(\mu^{\prime}:=2\kappa-\eta-2\nu\) so that \(\mu^{\prime}>0\). From inequality (2.9) we get \[C_{2}^{2}\epsilon\int_{s_{0}}^{t_{s}}\int_{\Sigma_{t}^{\rm e}}| \partial_{\tau}Z^{K_{1}}H^{1,\flat}_{LL}|^{2}\frac{(1+|r-t|)^{1+\mu^{\prime}}} {(1+t+r)^{1-\eta-\nu}}dxdt\] \[\lesssim C_{2}^{2}\epsilon\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! In the case where \(N_{1}<|K_{1}|\leq N\), we estimate \(\partial^{2}Z^{K_{2}}h^{1,\natural}_{\alpha\beta}\) in \(L^{\infty}\) with (4.7) and \(\Gamma^{K_{1}}H^{1,\flat}\) in \(L^{2}\) using (4.28) \[\|\Gamma^{K_{1}}H^{1,\flat}\,\partial^{2}Z^{K_{2}}h^{1,\natural}_{\alpha\beta} \|_{L^{2}(\mathscr{H}_{s})}\lesssim\epsilon^{2}C_{1}C_{2}\left(s^{-\frac{1}{2} +\zeta_{k}}+\sum_{i=1}^{4}s^{-\frac{1}{2}+\gamma_{i}+\zeta_{k-i}}\right)\] The conclusion of the proof follows from relation (4.2) and the observation that, if \(K\) is a multi-index of type \((N,k)\) with \(k\leq N_{1}\) and \(|K_{1}|>N_{1}\), then \(\Gamma^{K_{1}}\) contains at least one usual derivative. We conclude this subsection with some estimates on commutator terms involving \(H^{1,\natural}\). **Lemma 4.19**.: _We have that_ \[\left\|Z^{K}\big{(}(H^{1,\mu\nu})^{\natural}\partial_{\mu} \partial_{\nu}\big{)}h^{1,\natural}_{\alpha\beta}\right\|_{L^{2}_{xy}( \mathscr{H}_{s})} \tag{4.89}\] Proof.: We observe that if \(K\) is a multi-index of type \((N-1,k)\) then \(\partial Z^{K}=Z^{K^{\prime}}\) with \(K^{\prime}\) of type \((N,k)\). The first bound (resp. the second) in (4.89) simply follows from (4.2), (4.7), (4.21) (resp. (4.25)) and the fact that \(\lfloor N/2\rfloor+2\leq N_{1}\). Similarly, the first bound (resp. the second) in (4.90) follows from (4.19) (resp. (4.25)) and (4.41). **Lemma 4.20**.: _We have that_ \[\left\|\big{[}Z^{K},(H^{1,\boldsymbol{\mu\nu}})^{\natural}\partial_{ \boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}\big{]}h^{1,\flat}_{\alpha\beta} \right\|_{L^{2}_{xy}(\mathscr{H}_{s})}\lesssim C_{1}^{2}\epsilon^{2}\begin{cases} s^{-1+2\delta_{N}}&\text{if $K$ of type $(N+1,k)$ with $k\leq N$}\\ s^{-\frac{3}{2}+2\delta_{N}}&\text{if $K$ of type $(N,k)$ with $k\leq N_{1}$}.\end{cases} \tag{4.91}\] Proof.: We begin by writing that \[\left|\big{[}Z^{K},(H^{1,\boldsymbol{\mu\nu}})^{\natural}\partial_{ \boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}\big{]}h^{1,\flat}_{\alpha\beta} \right|\lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{2}|<|K|\end{subarray}}|Z^{K_{1}}H^{1,\natural}||\partial^{2}Z^{K_{2}}h^{1, \flat}_{\alpha\beta}|.\] For any multi-index \(K\) of type \((N+1,k)\) with \(k\leq N\), energy bound (4.20) and pointwise bound (4.30) yield \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{1}|\leq(|K|/2),|K_{2}|<|K|\end{subarray}}\big{\|}Z^{K_{1}}H^{1,\natural} \cdot\partial^{2}Z^{K_{2}}h^{1,\flat}_{\alpha\beta}\big{\|}_{L^{2}_{xy}}\] \[\lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{1}|\leq(|K|/2),|K_{2}|<|K|\end{subarray}}\big{\|}(t/s)Z^{K_{1}}H^{1,\natural }\big{\|}_{L^{\infty}_{x}L^{2}_{y}}\big{\|}(s/t)\partial^{2}Z^{K_{2}}h^{1, \flat}_{\alpha\beta}\big{\|}_{L^{2}_{x}}\lesssim C_{1}^{2}\epsilon^{2}s^{- \frac{3}{2}+2\delta_{N}}\] while energy bound (4.19) and pointwise bound (4.42) yield \[\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{2}|\leq(|K|/2)\end{subarray}}\big{\|}Z^{K_{1}}H^{1,\natural}\cdot\partial^ {2}Z^{K_{2}}h^{1,\flat}_{\alpha\beta}\big{\|}_{L^{2}_{xy}}\] \[\lesssim\sum_{\begin{subarray}{c}|K_{1}|+|K_{2}|\leq|K|\\ |K_{2}|\leq(|K|/2)\end{subarray}}\big{\|}\partial^{2}Z^{K_{2}}h^{1,\flat}_{ \alpha\beta}\big{\|}_{L^{\infty}_{x}}\big{\|}Z^{K_{1}}H^{1,\natural}\big{\|}_ {L^{2}_{xy}}\lesssim C_{1}^{2}\epsilon^{2}s^{-1+2\delta_{N}}.\] If \(K\) is instead a multi-index of type \((N,k)\) with \(k\leq 
N_{1}\), the above estimate can be improved to \(C_{1}^{2}\epsilon^{2}s^{-\frac{3}{2}+2\delta_{N}}\) using energy bound (4.25).

Proof of Proposition 4.15.: We decompose \(H^{1,\mu\nu}\) and \(h^{1}_{\alpha\beta}\) appearing in the commutator terms into their zero mode and their zero-average component and express all commutators involving \((H^{1,\mu\nu})^{\flat}\) with respect to the null framework. From (4.84), (4.85) and the energy bounds (4.19), (4.20) and (4.24) we derive that \[\iint_{\mathscr{H}_{[s_{0},s]}}|[Z^{K},(H^{1,\boldsymbol{\mu\nu}})^{\flat}\partial_{\boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}]h^{1,\flat}_{\alpha\beta}||\partial_{t}\phi|dxdydt\lesssim(C_{1}^{3}+C_{0}^{2})\epsilon^{3}s^{\kappa_{\phi}}\] with \(\kappa_{\phi}\) given by (4.86), while from (4.89) and (4.20) \[\iint_{\mathscr{H}_{[s_{0},s]}}|Z^{K}\big{(}(H^{1,\mu\nu})^{\natural}\cdot\partial_{\mu}\partial_{\nu}h^{1,\natural}_{\alpha\beta}\big{)}^{\flat}||\partial_{t}Z^{K}h^{1,\flat}_{\alpha\beta}|dxdt\lesssim C_{1}^{3}\epsilon^{3}\Big{(}\sum_{i=0}^{4}s^{\mu(\gamma_{i}+\zeta_{k-i}+\zeta_{k})}+s^{2\mu\zeta_{k}}\Big{)}\] where \(\mu=0\) if \(K\) is of type \((N-1,k)\) with \(k\leq N_{1}\), \(\mu=1\) otherwise. These two estimates together imply (4.81) and (4.83). Additionally, for \(K\) of type \((N+1,k)\) with \(k\leq N\) we get from (4.87) and (4.19) that \[\iint_{\mathscr{H}_{[s_{0},s]}}|[Z^{K},(H^{1,\mu\nu})^{\flat}\partial_{\mu}\partial_{\nu}]h^{1,\natural}_{\alpha\beta}||\partial_{t}Z^{K}h^{1}_{\alpha\beta}|dxdydt\lesssim(C_{1}^{3}+C_{0}^{2})\epsilon^{3}s\Big{(}\sum_{i=0}^{4}s^{(\gamma_{i}+\zeta_{k-i}+\zeta_{k})}+s^{2\zeta_{k}}\Big{)}\] while for multi-indexes of type \((N,k)\) with \(k\leq N_{1}\), estimates (4.88) and (4.24) yield \[\iint_{\mathscr{H}_{[s_{0},s]}}|[Z^{K},(H^{1,\mu\nu})^{\flat}\partial_{\mu}\partial_{\nu}]h^{1,\natural}_{\alpha\beta}||\partial_{t}Z^{K}h^{1}_{\alpha\beta}|dxdydt\lesssim(C_{1}^{3}+C_{0}^{2})\epsilon^{3}s^{2\delta_{k}}.\] Finally, from (4.91) and the energy bounds (4.19), (4.24) we get \[\iint_{\mathscr{H}_{[s_{0},s]}}|[Z^{K},(H^{1,\boldsymbol{\mu\nu}})^{\natural}\partial_{\boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}]h^{1,\flat}_{\alpha\beta}||\partial_{t}Z^{K}h^{1}_{\alpha\beta}|dxdydt\lesssim C_{1}^{3}\epsilon^{3}s^{\mu}\] with \(\mu=1\) if \(K\) is of type \((N+1,k)\) with \(k\leq N\), and \(\mu=0\) if \(K\) is of type \((N,k)\) with \(k\leq N_{1}\). This concludes the proof of (4.80) and (4.82).

### Propagation of the pointwise bound (4.7)

This section is devoted to the propagation of the pointwise estimates (4.7) on the zero-average component \(h^{1,\natural}_{\alpha\beta}\) of the solution. The equation satisfied by \(Z^{K}h^{1,\natural}_{\alpha\beta}\) is obtained from the subtraction of (4.15) from (4.13): \[\square_{xy}Z^{K}h^{1,\natural}_{\alpha\beta}+(H^{\mu\nu})^{\flat}\partial_{\mu}\partial_{\nu}Z^{K}h^{1,\natural}_{\alpha\beta}=F^{K,\natural}_{\alpha\beta}-(H^{1,\mu\nu})^{\natural}\partial_{\mu}\partial_{\nu}Z^{K}h^{1,\natural}_{\alpha\beta}-\left((H^{\mu\nu})^{\natural}\partial_{\mu}\partial_{\nu}Z^{K}h^{1,\natural}_{\alpha\beta}\right)^{\natural} \tag{4.92}\] where \(F^{K,\natural}_{\alpha\beta}=F^{K}_{\alpha\beta}-F^{K,\flat}_{\alpha\beta}\) and \(F^{K}_{\alpha\beta}\) is defined in (4.14). We observe that, after (4.48), pure zero-mode interactions do not appear in the above right hand side.
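To fix ideas, we recall the decomposition along \(\mathbb{S}^{1}\) behind this observation (a minimal sketch; the normalization of the average is made explicit here only for the reader's convenience):
\[
\phi^{\flat}(t,x):=\frac{1}{|\mathbb{S}^{1}|}\int_{\mathbb{S}^{1}}\phi(t,x,y)\,dy,\qquad\phi^{\natural}:=\phi-\phi^{\flat},\qquad\int_{\mathbb{S}^{1}}\phi^{\natural}(t,x,y)\,dy=0.
\]
Since a product of zero modes is again independent of \(y\), one has \((\phi^{\flat}\psi^{\flat})^{\natural}=0\), so only mixed and purely non-zero-mode interactions can contribute to the right hand side of (4.92).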
The proof relies on the following result, which is motivated by the early work of Klainerman [29] and can be found in slightly different forms in the works of LeFloch-Ma [37], Dong-Wyatt [14] and Huneau-Stingo [21]. **Proposition 4.21**.: _Suppose that \(\phi\) is a solution of the equation_ \[\square_{xy}\phi+(H^{\mu\nu})^{\flat}\partial_{\mu}\partial_{\nu}\phi=F,\quad (t,x,y)\in\mathbb{R}^{1+3}\times\mathbb{S}^{1} \tag{4.93}\] _such that \(\int_{\mathbb{S}^{1}}\phi\,dy=0\). For each \((t,x)\) in the cone \(\{t>r\}\), let \(s=\sqrt{t^{2}-r^{2}}\) and \(Y_{tx},A_{tx},B_{tx}\) be functions defined as follows_ \[Y^{2}_{tx}(\lambda) :=\int_{\mathbb{S}^{1}}\lambda\left|\frac{3}{2}\phi_{\lambda}+( \mathscr{S}\phi)_{\lambda}\right|^{2}+\lambda^{3}|\partial_{y}\phi_{\lambda}| ^{2}dy\] \[A_{tx}(\lambda) :=\sup_{\mathbb{S}^{1}}\left|\lambda^{-1}\left(\mathscr{S}\left( (t/s)^{2}(H^{1,UV})^{\flat}c^{00}_{UV})\right)_{\lambda}\right|+\sup_{\mathbb{ S}^{1}}\left|\lambda^{-1}(\mathscr{S}(H^{1,44})^{\flat})_{\lambda}\right|\right.\] \[\quad\quad+\sup_{\mathbb{S}^{1}}\left|\lambda^{-1}(\chi(t/r) \chi^{\prime}(r)(t/s)^{2}M)_{\lambda}\right|\] \[B^{2}_{tx}(\lambda) :=\int_{\mathbb{S}^{1}}\lambda^{-1}|(R[\phi])_{\lambda}|^{2}dy,\] _where \(f_{\lambda}(t,x,y)=f(\frac{\lambda t}{s},\frac{\lambda x}{s},y)\) and \(R[\phi]:=R_{0}[\phi]+\sum_{i=1}^{2}R_{i}^{0}[\phi]+\sum_{i=1}^{3}R_{i}^{1}[ \phi]-s^{2}F\) with_ \[R_{0}[\phi] :=s^{2}\underline{\partial}^{\boldsymbol{a}}\underline{\partial} _{\boldsymbol{a}}\phi+x^{\boldsymbol{a}}x^{\boldsymbol{b}}\underline{\partial} _{\boldsymbol{a}}\underline{\partial}_{\boldsymbol{b}}\phi+\frac{3}{4}\phi+3x^ {\boldsymbol{a}}\underline{\partial}_{\boldsymbol{a}}\phi\] \[R_{1}^{0}[\phi] :=s^{2}\chi(t/r)\chi(r)\frac{M}{r}\left((x^{\boldsymbol{a}}/t) \partial_{t}\underline{\partial}_{\boldsymbol{a}}\phi+(x^{\boldsymbol{a}}/t) \underline{\partial}_{\boldsymbol{a}}\partial_{t}\phi-\underline{\partial}^{ \boldsymbol{a}}\underline{\partial}_{\boldsymbol{a}}\phi+(3/t)\partial_{t}\phi\right)\] \[R_{2}^{0}[\phi] :=\chi(t/r)\chi(r)\frac{M}{r}\frac{t^{2}+r^{2}}{s^{2}}\cdot Q[\phi]\] \[R_{1}^{1}[\phi] :=s^{2}(H^{1,UV})^{\flat}\left(c^{\boldsymbol{a}\beta}_{UV} \underline{\partial}_{\boldsymbol{a}}\underline{\partial}_{\boldsymbol{\beta}} \phi+c^{\boldsymbol{a}\boldsymbol{b}}_{UV}\underline{\partial}_{\boldsymbol{a }}\underline{\partial}_{\boldsymbol{b}}\phi+c^{\boldsymbol{4a}}_{UV} \partial_{y}\underline{\partial}_{\boldsymbol{a}}\phi+d^{\mu}_{UV}\underline{ \partial}_{\boldsymbol{\mu}}\phi\right)\] \[R_{2}^{1}[\phi] :=-(t/s)(H^{1,UV})^{\flat}c^{40}_{UV}\cdot s\left(\tfrac{3}{2} \partial_{\boldsymbol{\beta}}\phi+x^{\boldsymbol{a}}\underline{\partial}_{ \boldsymbol{\beta}}\underline{\partial}_{\boldsymbol{a}}\phi\right)\] \[R_{3}^{1}[\phi] :=-(t/s)^{2}(H^{1,UV})^{\flat}c^{00}_{UV}\cdot Q[\phi]\] _and_ \[Q[\phi]=\left(\tfrac{3}{4}+s^{2}\left(2(x^{\boldsymbol{a}}/t)\underline{ \partial}_{\boldsymbol{a}}\partial_{t}+s^{-2}x^{\boldsymbol{a}}x^{\boldsymbol{b }}\underline{\partial}_{\boldsymbol{a}}\underline{\partial}_{\boldsymbol{b}}+( r^{2}/t^{3})\partial_{t}+(3/t)\partial_{t}+3s^{-2}x^{\boldsymbol{a}} \underline{\partial}_{\boldsymbol{a}}\right)\right)\phi.\] _Then, in the hyperbolic region \(\mathscr{H}_{[s_{0},S_{0})}\), the following inequality holds_ \[s^{\frac{3}{2}}\left(\|\phi\|_{L^{2}(\mathbb{S}^{1})}+\|\partial_{y}\phi\|_{L^{ 2}(\mathbb{S}^{1})}\right)+s^{\frac{1}{2}}\|\mathscr{S}\phi\|_{L^{2}(\mathbb{S} ^{1})}\lesssim\left(Y_{tx}(s_{0})+\int_{s_{0}}^{s}B_{tx}(\lambda)d\lambda 
\right)\exp^{\int_{s_{0}}^{s}A_{tx}(\lambda)d\lambda}.\] Proof.: The wave operator in question writes in terms of the \(\partial_{t}\) and \(\underline{\partial}_{\boldsymbol{a}}\) derivatives as follows \[-\square_{xy}=(s/t)^{2}\partial_{t}^{2}+2(x^{\boldsymbol{a}}/t)\underline{ \partial}_{\boldsymbol{a}}\partial_{t}-\underline{\partial}^{\boldsymbol{a}} \underline{\partial}_{\boldsymbol{a}}+(r^{2}/t^{3})\partial_{t}+(3/t)\partial _{t}-\Delta_{y}.\] For some \(\lambda>0\) and fixed \((t,x,y)\), we define \(\omega_{txy}(\lambda):=\lambda^{3/2}\phi(\frac{\lambda t}{s},\frac{\lambda x}{ s},y)\) to be the evaluation of \(\phi\) on the hyperboloid \(\mathscr{H}_{\lambda}\) dilated by \(\lambda^{3/2}\). We compute \[\dot{\omega}_{txy} =\lambda^{1/2}\left(\tfrac{3}{2}\phi+(\mathscr{S}\phi)\right)_{\lambda}\] \[\ddot{\omega}_{txy} =\lambda^{-1/2}(P\phi)_{\lambda}:=\lambda^{-1/2}\left(\tfrac{3}{ 4}\phi+3(\mathscr{S}\phi)+(t^{2}\partial_{t}^{2}+2tx^{\boldsymbol{a}} \partial_{\boldsymbol{a}}\partial_{t}+x^{\boldsymbol{a}}x^{\boldsymbol{b}} \partial_{\boldsymbol{a}}\partial_{\boldsymbol{b}})\phi\right)_{\lambda}.\] A calculation shows that \[P\phi=s^{2}\left(-\square_{txy}+\underline{\partial}^{\boldsymbol{a}} \underline{\partial}_{\boldsymbol{a}}+\Delta_{y}\right)\phi+x^{\boldsymbol{a} }x^{\boldsymbol{b}}\underline{\partial}_{\boldsymbol{a}}\underline{\partial}_ {\boldsymbol{b}}\phi+3x^{\boldsymbol{a}}\underline{\partial}_{\boldsymbol{a}} \phi+\tfrac{3}{4}\phi. \tag{4.94}\] Using equation (4.93) we derive that \(\omega_{txy}(\lambda)\) satisfies \[\ddot{\omega}_{txy}-\Delta_{y}\omega_{txy}=\lambda^{-1/2}(s^{2}(H^{\mu\nu})^{ \flat}\partial_{\mu}\partial_{\nu}\phi)_{\lambda}+\lambda^{-1/2}R_{0}[\phi]_{ \lambda}-\lambda^{3/2}F_{\lambda}.\] For the curved part in the above expression, we expand \((H^{\mu\nu})^{\flat}=(H^{1,\mu\nu})^{\flat}+H^{0,\mu\nu}\) where \(H^{0}\) is defined in (1.12). Starting with the \(H^{0}\) piece, we compute \[H^{0,\mu\nu}\partial_{\mu}\partial_{\nu}\phi=H^{0,\boldsymbol{\mu\nu}} \partial_{\boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}\phi=-a(t,x)\left(1+(r/ t)^{2}\right)\partial_{t}^{2}\phi+s^{-2}R_{1}^{0}[\phi].\] where for simplicity we put \(a(t,x):=\chi(r/t)\chi(r)\frac{M}{r}\). Using the calculation for \(P\phi\) given in (4.94), we find \[-\lambda^{-1/2}(s^{2}a(t,x)\left(1+(r/t)^{2}\right)\partial_{t}^ {2}\phi)_{\lambda} =-\lambda^{-1/2}\left(a(t,x)(t/s)^{2}(1+r^{2}/t^{2})\cdot s^{2}(s /t)^{2}\partial_{t}^{2}\phi\right)_{\lambda}\] \[=-\left(a(t,x)(t/s)^{2}(1+r^{2}/t^{2})\right)_{\lambda}\ddot{ \omega}_{txy}+\lambda^{-1/2}R_{2}^{0}[\phi]_{\lambda}.\] For the \(H^{1,\flat}\) part, we use (1.16) and (1.19). 
We find \[(H^{1,\mu\nu})^{\flat}\partial_{\mu}\partial_{\nu}\phi=\left[(H^{1,UV})^{ \flat}c_{UV}^{00}\partial_{t}^{2}+(H^{1,UV})^{\flat}c_{UV}^{04}\partial_{t} \partial_{y}+(H^{1,44})^{\flat}\partial_{y}^{2}\right]\phi+s^{-2}R_{1}^{1}[ \phi].\] Since we can write \(\dot{\omega}_{txy}=\lambda^{1/2}\left(\tfrac{3}{2}\phi+(s^{2}/t)\partial_{t} \phi+x^{\boldsymbol{a}}\underline{\partial}_{\boldsymbol{a}}\phi\right)_{\lambda}\), we find \[\lambda^{-1/2}(s^{2}(H^{1,UV})^{\flat}c_{UV}^{04}\partial_{t} \partial_{y}\phi)_{\lambda} =\left((t/s)(H^{1,UV})^{\flat}c_{UV}^{04}\partial_{y}(\lambda^{1/ 2}\tfrac{s^{2}}{t}\partial_{t}\phi)\right)_{\lambda}\] \[=\left((t/s)(H^{1,UV})^{\flat}c_{UV}^{04}\right)_{\lambda}\partial_ {y}\dot{\omega}_{txy}+\lambda^{-1/2}R_{2}^{1}[\phi]_{\lambda}.\] In a similar way, using also the calculation for \(P\phi\) given in (4.94), we find \[\lambda^{-1/2}(s^{2}(H^{1,UV})^{\flat}c_{UV}^{00}\partial_{t}^{2} \phi)_{\lambda}=\left((t/s)^{2}(H^{1,UV})^{\flat}c_{UV}^{00}\right)_{\lambda} \ddot{\omega}_{txy}+\lambda^{-1/2}R_{3}^{1}[\phi]_{\lambda}.\] For simplicity, we henceforth write \(\omega_{txy}=\omega(\lambda)\) and suppress also the \(|_{\lambda}\) notation. Putting the above computations together, we have \[b(t,x)\ddot{\omega}-(1+(H^{1,44})^{\flat})\Delta_{y}\omega-(t/s)(H^{1,UV})^{ \flat}c_{UV}^{04}\partial_{y}\dot{\omega}=\lambda^{-1/2}R[\phi] \tag{4.95}\] where \[b(t,x):=1-(t/s)^{2}(H^{1,UV})^{\flat}c_{UV}^{00}+\chi(t/r)\chi(r)\frac{M}{r}(t /s)^{2}(1+r^{2}/t^{2}). \tag{4.96}\] We multiply (4.95) by \(\partial_{\lambda}\omega\), integrate over \(\mathbb{S}^{1}\) and integrate by parts to get: \[\int_{\mathbb{S}^{1}}\partial_{\lambda}\omega\Big{(}b\partial_{ \lambda}^{2}\omega-(1+(H^{1,44})^{\flat})\Delta_{y}\omega-(t/s)(H^{1,UV})^{ \flat}c_{UV}^{04}\partial_{y}\partial_{\lambda}\omega\Big{)}dy\] \[=\frac{d}{d\lambda}\Big{(}\frac{1}{2}\int_{\mathbb{S}^{1}}b| \partial_{\lambda}\omega|^{2}dy+(1+(H^{1,44})^{\flat})|\partial_{y}\omega|^{2 }\Big{)}-\frac{1}{2}\int_{\mathbb{S}^{1}}(\partial_{\lambda}b)|\partial_{ \lambda}\omega|^{2}dy.\] Recalling the definition of \(b\) in (4.96), we obtain \[\frac{d}{d\lambda}\Big{(}\int_{\mathbb{S}^{1}}b|\partial_{ \lambda}\omega|^{2}+\left(1+(H^{44})^{\flat}\right)|\partial_{y}\omega|^{2}dy \Big{)}\] \[=-\int_{\mathbb{S}^{1}}\partial_{\lambda}\left((t/s)^{2}(H^{1,UV })^{\flat}c_{UV}^{00}\right)|\partial_{\lambda}\omega|^{2}dy+\int_{\mathbb{S} ^{1}}\partial_{\lambda}\left(\chi(t/r)\chi(r)(t/s)^{2}(1+r^{2}/t^{2})\right) \tfrac{M}{r}|\partial_{\lambda}\omega|^{2}dy\] \[+\int_{\mathbb{S}^{1}}\chi(t/r)\chi(r)(t/s)^{2}(1+r^{2}/t^{2}) \partial_{\lambda}\left(\tfrac{M}{r}\right)|\partial_{\lambda}\omega|^{2}dy+2 \int_{\mathbb{S}^{1}}\lambda^{-1/2}R[\phi]\partial_{\lambda}\omega dy.\] We crucially can drop the third term on the RHS above using the fact that \(\chi\geq 0\), \(M>0\) and the identity \(\partial_{\lambda}(\tfrac{M}{r})_{\lambda}=-(\tfrac{M}{sr})_{\lambda}\). Note also that the cut-off function \(\chi\) is supported for \(2r>t>2\) and so in the region \(\{t\geq r+1\}\) we have \(|\chi(t/r)\chi(r)\tfrac{M}{r}(t^{2}/s^{2})|\lesssim\epsilon\). 
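For the reader's convenience, here is a quick check of the identity \(\partial_{\lambda}(\tfrac{M}{r})_{\lambda}=-(\tfrac{M}{sr})_{\lambda}\) invoked above, using only the rescaling convention \(f_{\lambda}(t,x,y)=f(\tfrac{\lambda t}{s},\tfrac{\lambda x}{s},y)\) with \((t,x)\) fixed: at the rescaled point one has \(r\mapsto\lambda r/s\) and \(s\mapsto\lambda\), hence
\[
\Big{(}\frac{M}{r}\Big{)}_{\lambda}=\frac{Ms}{\lambda r},\qquad\partial_{\lambda}\Big{(}\frac{Ms}{\lambda r}\Big{)}=-\frac{Ms}{\lambda^{2}r}=-\Big{(}\frac{M}{sr}\Big{)}_{\lambda}.
\]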
By relation (1.18) with \(\pi=H^{1,\flat}\), the estimates (1.17), (4.43), and (4.47), and the fact that \(t/s^{2}\leq 1\) in the interior of the light cone, we find \[\sup_{\mathbb{S}^{1}}|(t/s)^{2}(H^{1,UV})^{\flat}c_{UV}^{00}|+|(H^{1,44})^{ \flat}|\lesssim\varepsilon.\] All together, we obtain \[\frac{d}{d\lambda}Y_{tx}^{2}(\lambda)\lesssim A_{tx}(\lambda)Y_{tx}^{2}( \lambda)+B_{tx}(\lambda)Y_{tx}(\lambda)\] with \(A_{tx},B_{tx},Y_{tx}\) as in the statement. From Gronwall's lemma, we have \[Y_{tx}(s)\lesssim\left(Y_{tx}(s_{0})+\int_{s_{0}}^{s}B_{tx}(\lambda)d\lambda \right)\exp\left(\int_{s_{0}}^{s}A_{tx}(\lambda)d\lambda\right), \tag{4.97}\] and the conclusion follows from the Poincare inequality. **Proposition 4.22**.: _There exists a constant \(C_{2}\) sufficiently large, a finite and increasing sequence of parameters \(0<\gamma_{k},\delta_{k},\zeta_{k}\ll 1\) satisfying (4.2), and \(0<\epsilon_{0}\ll 1\) sufficiently small such that, under the assumptions of Proposition 4.1, we have (4.12)._ Proof.: Throughout the proof, \(0<\eta\leq 2\delta_{N}\ll 1\) will denote a constant that linearly depends on \(\zeta_{k},\gamma_{k},\delta_{k}\). We apply Proposition 4.21 to the variable \(\mathbf{W}=\partial^{I}\Gamma^{J}h_{\alpha\beta}^{1,\natural}\) with \(|I|+|J|\leq N_{1}+1=N-4\) and \(|J|\leq N_{1}\), governed by the PDE (4.92). _1. The \(A_{tx}(\lambda)\) term:_ this is the same for all values of \(\mathbf{W}\). Bound (4.40) gives that \[\sup_{\mathbb{S}^{1}}\lambda^{-1}|(\mathscr{S}(H^{1,44})^{\flat})_{\lambda}| \lesssim\epsilon\lambda^{-\frac{3}{2}+\delta_{2}}.\] For the other piece of \(A_{tx}(\lambda)\) we use (1.18) with \(\pi=H^{1,\flat}\), the identities \(\mathscr{S}(s)=s\) and \(\mathscr{S}(t+r)=t+r\), the pointwise bounds (4.40), (4.45), (4.47) and the fact that \(t/s^{2}\leq 1\) in the interior of the lightcone, to derive \[|\mathscr{S}\left((t/s)^{2}(H^{1,UV})^{\flat}c_{UV}^{00}\right)|\lesssim|(s/t) ^{2}\mathscr{S}H^{1}_{\underline{L}\underline{L}}|+|(t/s)^{2}\mathscr{S}(H^{1}_ {LL})^{\flat}|+|\mathscr{S}H^{1}_{L\underline{L}}|\lesssim\epsilon s^{-\frac{1}{ 2}+\gamma_{1}}.\] Note that the estimate of the second term in the above right hand side is obtained using also the following decomposition of the scaling vector field \[\mathscr{S}=(t-r)\partial_{t}+(r-t)\partial_{r}+(x^{\boldsymbol{a}}/r)\Omega_{0 \boldsymbol{a}}. \tag{4.98}\] Consequently \[\lambda^{-1}|\left(\mathscr{S}\left((t/s)^{2}(H^{1,UV})^{\flat}c_{UV}^{00} \right)\right)_{\lambda}|\lesssim\epsilon\lambda^{-\frac{3}{2}+\gamma_{1}}.\] Finally, we use that \(\chi(r)=O(1)\) on its support, so that we can bound \(|\chi^{\prime}(r)|\lesssim r^{-2}\) and thus get \[\lambda^{-1}\left|(\chi(t/r)\chi^{\prime}(r)(t/s)^{2}M)_{\lambda}\right| \lesssim\epsilon\lambda^{-2}.\] Bringing all this together gives \[\int_{s_{0}}^{s}A_{tx}(\lambda)d\lambda\lesssim\epsilon. \tag{4.99}\] _2. The \(Y_{tx}(s_{0})\) term:_ We evaluate all the expressions here on the hyperboloid \(\mathscr{H}_{s_{0}}\) for \(s_{0}\) close to \(2\). We observe that \(1+t+r=O(1)\) on \(\mathscr{H}_{s_{0}}\), so by (4.98) and lemma B.5 \[|Y_{tx}(s_{0})|\lesssim\|\mathbf{W}_{s_{0}}\|_{L^{2}_{y}}+\|(\mathscr{S} \mathbf{W})_{s_{0}}\|_{L^{2}_{y}}+\|(\partial_{y}\mathbf{W})_{s_{0}}\|_{L^{2} _{y}}\lesssim\Big{(}\frac{s}{t}\Big{)}^{\frac{3}{2}}E^{\mathrm{i}}(s_{0},Z^{ \leq 2}\mathbf{W})^{\frac{1}{2}}. \tag{4.100}\] _3.a. 
The \(R_{0}\) term in \(B_{tx}(\lambda)\):_ By (4.30), (4.31), and (4.32) we obtain \[\|\lambda^{-1/2}(R_{0}[\mathbf{W}])_{\lambda}\|_{L^{2}_{y}}\lesssim C_{1} \epsilon\lambda^{-\frac{3}{2}+\zeta_{k+4}}\Big{(}\frac{s}{t}\Big{)}^{\frac{3} {2}}. \tag{4.101}\] _3.b. The \(R_{1}^{1}\) term in \(B_{tx}(\lambda)\):_ First note that in the region \(\{r<t\}\) we have \(|c_{UV}^{\boldsymbol{a}\beta}|\lesssim 1\) using (1.17) and straightforward computations. Thus, by (4.6), (4.30) and (4.31), \[\begin{split}\|\lambda^{-1/2}(R_{1}^{1}[\mathbf{W}])_{\lambda}\| _{L^{2}_{y}}&\lesssim\lambda^{\frac{3}{2}}\left|H^{1,\flat} \right|\Big{(}\left\|\partial\underline{\partial}\mathbf{W}\right\|_{L^{2}_{y }}+t^{-1}\|\mathbf{W}\|_{L^{2}_{y}}\Big{)}\\ &\lesssim C_{1}^{2}\epsilon^{2}\lambda^{-\frac{3}{2}+\gamma_{0}+ \zeta_{k+3}}\Big{(}\frac{s}{t}\Big{)}^{\frac{5}{2}}.\end{split} \tag{4.102}\] _3.c The \(R_{2}^{1}\) term in \(B_{tx}(\lambda)\):_ using (4.6), (4.30), (4.31) we get \[\begin{split}\|\lambda^{-1/2}(R_{2}^{1}[\mathbf{W}])_{\lambda}\| _{L^{2}_{y}}&\lesssim\lambda^{\frac{1}{2}}\big{|}\big{(}(t/s)((H^ {1,UV})^{\flat}c_{UV}^{40})_{\lambda}\big{|}\big{(}\left\|\partial_{y} \mathbf{W}\right\|_{L^{2}_{y}}+t\left\|\partial_{y}\underline{\partial} \mathbf{W}\right\|_{L^{2}_{y}}\Big{)}\\ &\lesssim C_{1}^{2}\epsilon^{2}\lambda^{-\frac{3}{2}+\gamma_{0}+ \zeta_{k+3}}\Big{(}\frac{s}{t}\Big{)}^{\frac{3}{2}}.\end{split}\] _3.d.i The \(R_{3}^{1}\) term in \(B_{tx}(\lambda)\):_ we observe that from (4.30), (4.31), (4.32) \[\begin{split}\|Q[\mathbf{W}]\|_{L^{2}_{y}}&\lesssim \big{(}\|\mathbf{W}\|_{L^{2}_{y}}+\left\|x^{2}\underline{\partial}^{2} \mathbf{W}\right\|_{L^{2}_{y}}+\left\|x\underline{\partial}\mathbf{W}\right\| _{L^{2}_{y}}\big{)}+\big{(}\left\|s^{2}\underline{\partial}\partial\mathbf{W} \right\|_{L^{2}_{y}}+\left\|(s^{2}/t)\partial\mathbf{W}\right\|_{L^{2}_{y}} \big{)}\\ &\lesssim C_{1}\epsilon t^{-\frac{3}{2}}s^{\frac{1}{2}+\zeta_{k+4 }}+C_{1}\epsilon t^{-\frac{5}{2}}s^{\frac{5}{2}+\zeta_{k+3}}\end{split}\] which, coupled to (4.6) and the fact that \(s^{-2}\lesssim t^{-1}\) yields again \[\|\lambda^{-1/2}(R_{3}^{1}[\mathbf{W}])_{\lambda}\|_{L^{2}_{y}}\lesssim\big{|} \big{(}s^{-\frac{1}{2}}(t/s)^{2}(H^{1,UV})^{\flat}c_{UV}^{00}\big{)}_{\lambda }\big{\|}\|Q[\mathbf{W}]_{\lambda}\|_{L^{2}_{y}}\lesssim C_{1}^{2}\epsilon^{2} \lambda^{-\frac{3}{2}+\gamma_{0}+\zeta_{k+3}}\Big{(}\frac{s}{t}\Big{)}^{\frac{3 }{2}}.\] _3.e The \(R_{1}^{0}\) and \(R_{2}^{0}\) terms in \(B_{tx}(\lambda)\):_ satisfy the same bounds as \(R_{1}^{1}\) and \(R_{3}^{1}\) respectively. In summary, for \({\bf W}=\partial^{I}\Gamma^{J}h^{1,\natural}_{\alpha\beta}\) with \(|I|+|J|\leq N-4=N_{1}+1\) \[\|\lambda^{-1/2}(R[{\bf W}]+s^{2}F)_{\lambda}\|_{L^{2}(\mathbb{S}^{1})}\lesssim C _{1}\epsilon\lambda^{-\frac{3}{2}+\eta}\Big{(}\frac{s}{t}\Big{)}^{\frac{3}{2}}. \tag{4.103}\] _4.a The source term \(F\):_ we simply distribute derivatives and vector fields across the nonlinearities given in (4.92). 
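Schematically, and suppressing the harmless combinatorial coefficients, this distribution produces sums of the form
\[
\big{|}\partial^{I}\Gamma^{J}(f\cdot g)\big{|}\lesssim\sum_{\begin{subarray}{c}|I_{1}|+|I_{2}|=|I|\\ |J_{1}|+|J_{2}|\leq|J|\end{subarray}}\big{|}\partial^{I_{1}}\Gamma^{J_{1}}f\big{|}\,\big{|}\partial^{I_{2}}\Gamma^{J_{2}}g\big{|},
\]
and in each term at least one of the two factors carries at most \(\lfloor(|I|+|J|)/2\rfloor\) derivatives and vector fields; that factor is the one estimated pointwise (in \(L^{\infty}_{xy}\) or \(L^{\infty}_{x}L^{2}_{y}\)) while the other is placed in \(L^{2}\).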
We begin by analyzing the quadratic interactions of zero-average component with itself, which are of the form \[\sum_{\begin{subarray}{c}|I_{1}|+|I_{2}|=|I|\\ |J_{1}|+|J_{2}|\leq|J|\end{subarray}}\Big{(}\partial(\partial^{I_{1}}\Gamma^{J _{1}}h^{1,\natural})\cdot\partial(\partial^{I_{2}}\Gamma^{J_{2}}h^{1,\natural} )\Big{)}^{\natural}+\Big{(}(\partial^{I_{1}}\Gamma^{J_{1}}h^{1,\natural}) \cdot\partial^{2}(\partial^{I_{2}}\Gamma^{J_{2}}h^{1,\natural})\Big{)}^{ \natural}\] and recall that there must exist an index \(l=1,2\) such that \(|I_{l}|+|J_{l}|\leq\lfloor N_{1}+1/2\rfloor\). To estimate the first sum, we use (4.7) and (4.41) to obtain that \[\sum_{\begin{subarray}{c}|I_{1}|+|I_{2}|=|I|\\ |J_{1}|+|J_{2}|\leq|J|\end{subarray}}\lambda^{\frac{3}{2}}\left\|\Big{(} \partial(\partial^{I_{1}}\Gamma^{J_{1}}h^{1,\natural})\cdot\partial(\partial^ {I_{2}}\Gamma^{J_{2}}h^{1,\natural})\Big{)}^{\natural}\right\|_{L^{2}_{y}} \lesssim\lambda^{\frac{3}{2}}\left\|\partial Z^{\leq N_{1}-1}h^{1, \natural}\right\|_{L^{\infty}_{y}}\left\|\partial Z^{\leq N_{1}}h^{1,\natural }\right\|_{L^{2}_{y}}\] \[\lesssim C_{1}^{2}\epsilon^{2}\lambda^{-\frac{3}{2}+\eta}\Big{(} \frac{s}{t}\Big{)}^{2}.\] All terms in the second sum are estimated in the same way, besides the one corresponding to \(|I_{2}|+|J_{2}|=|I|+|J|=N_{1}+1\). For this one we use (4.30) and (4.41), together with the assumption (4.2) on the parameters (i.e. \(\zeta_{i}\ll\gamma_{j}\) for any \(i,j\)), and derive that \[\lambda^{\frac{3}{2}}\left\|\Big{(}h^{1,\natural}\cdot\partial^{2}(\partial^{ I}\Gamma^{J}h^{1,\natural})\Big{)}^{\natural}\right\|_{L^{2}_{y}}\lesssim \lambda^{\frac{3}{2}}\|h^{1,\natural}\|_{L^{\infty}_{y}}\left\|\partial^{2}( \partial^{I}\Gamma^{J}h^{1,\natural})\right\|_{L^{2}_{y}}\lesssim C_{1}C_{2} \epsilon^{2}\lambda^{-1+\gamma_{k}}\Big{(}\frac{s}{t}\Big{)}^{2}.\] Let us note that this term is highly specific to the Kaluza-Klein problem and is absent in the Einstein-Klein Gordon equations. We also observe that, in the case where \(|I|+|J|=N_{1}+1\) but \(|J|\leq N_{1}-2\) we can use (4.33) instead of (4.41) to obtain \[\lambda^{\frac{3}{2}}\left\|\Big{(}h^{1,\natural}\cdot\partial^{2}(\partial^{ I}\Gamma^{J}h^{1,\natural})\Big{)}^{\natural}\right\|_{L^{2}_{y}}\lesssim C _{1}C_{2}\epsilon^{2}\lambda^{-\frac{3}{2}+\eta}\Big{(}\frac{s}{t}\Big{)}^{2}.\] Next we turn to the mixed interactions between the zero mode and the zero-average component. We begin with the commutator terms \([\partial^{I}\Gamma^{J},(H^{1,\mu\nu})^{\flat}\partial_{\mu}\partial_{\nu}]h^ {1,\natural}_{\alpha\beta}\), which we rewrite using (3.25) with \(\pi=H^{1,\flat}\). We focus on treating the following products \[\partial^{I_{1}}\Gamma^{J_{1}}H^{1,\flat}_{LL}\cdot\partial_{t}^ {2}(\partial^{I_{2}}\Gamma^{J_{2}}h^{1,\natural}_{\alpha\beta}),\quad\partial ^{I_{1}}\Gamma^{J_{1}}H^{1,\flat}_{4L}\cdot\partial_{t}\partial_{y}(\partial^ {I_{2}}\Gamma^{J_{2}}h^{1,\natural}_{\alpha\beta}),\quad\partial^{I_{1}} \Gamma^{J_{1}}H^{1,\flat}_{44}\cdot\partial_{y}^{2}(\partial^{I_{2}}\Gamma^{J_ {2}}h^{1,\natural}_{\alpha\beta})\] \[\text{for }|I_{1}|+|I_{2}|=|I|,\quad|J_{1}|+|J_{2}|\leq|J|,\quad|I_{2} |+|J_{2}|<|I|+|J| \tag{4.104}\] the remaining ones being simpler. The latter term can be rewritten using the equation, i.e. 
\[\partial_{y}^{2}(\partial^{I_{2}}\Gamma^{J_{2}}h^{1,\natural}_{ \alpha\beta})=\Big{(}(s/t)^{2}\partial_{t}^{2}+2(x^{\boldsymbol{a}}/t) \underline{\partial}_{\boldsymbol{a}}\partial_{t}-\underline{\partial}^{ \boldsymbol{a}}\underline{\partial}_{\boldsymbol{a}}+(r^{2}/t^{3})\partial_{t}+ (3/t)\partial_{t}\Big{)}(\partial^{I_{2}}\Gamma^{J_{2}}h^{1,\natural}_{\alpha \beta})\] \[+\Box_{xy}(\partial^{I_{2}}\Gamma^{J_{2}}h^{1,\natural}_{ \alpha\beta}).\] If \(|I_{1}|>0\), we estimate the first two terms in (4.104) using (4.7) and (4.46) and the latter one using (4.38) and (4.7), hence getting that they are all bounded by \(C_{1}C_{2}\epsilon^{2}\lambda^{-\frac{3}{2}+\eta}(s/t)^{2}\) If \(|I_{1}|=0\) and \(|J_{1}|>0\), we use (4.6) and (4.7) for all terms, together with the algebraic relation (4.2), obtaining \[\lambda^{\frac{3}{2}}\left|\Gamma^{J_{1}}h^{1,\flat}\right|\left\|\partial^{2}( \partial^{I}\Gamma^{J_{2}}h^{1,\natural}_{\alpha\beta})\right\|_{L_{y}^{2}} \lesssim C_{2}^{2}\epsilon^{2}\Big{(}\frac{s}{t}\Big{)}^{\frac{3}{2}}\lambda^ {-1+\gamma_{k}}.\] Turning now to the semilinear interactions between the zero-mode and the zero-average component, we immediately obtain from (4.7) and (4.38) that \[\lambda^{\frac{3}{2}}|\partial(\partial^{I_{1}}\Gamma^{J_{1}}h^{1,\flat})| \left\|\partial(\partial^{I_{2}}\Gamma^{J_{2}}h^{1,\natural})\right\|_{L_{y}^ {2}}\lesssim C_{1}C_{2}\epsilon^{2}\lambda^{-\frac{3}{2}+\eta}\Big{(}\frac{s} {t}\Big{)}^{2},\quad|I_{2}|+|J_{2}|\leq N_{1}.\] When \(|I_{2}|+|J_{2}|=|I|+|J|=N_{1}+1\), (4.7) does not provide us with the right power of \((s/t)\), which we instead get using the structure of the semilinear terms. On the one hand, using (4.57) together with (4.7), relation \(\underline{\partial}_{\boldsymbol{a}}=(1/t)\Omega_{0\boldsymbol{a}}\), (4.37) and (4.38), we easily derive that \[\lambda^{\frac{3}{2}}\left\|\mathbf{Q}(\partial h^{1,\flat},\partial(\partial ^{I}\Gamma^{J}h^{1,\natural}))\right\|_{L_{y}^{2}}\lesssim C_{1}C_{2}\epsilon^ {2}\lambda^{-\frac{3}{2}+\eta}\Big{(}\frac{s}{t}\Big{)}^{2}.\] The cubic terms also satisfy the same estimate as above, we leave the details to the reader. 
On the other hand, using lemma 3.13 we see that \[\left\|P_{\alpha\beta}(\partial h^{1,\flat},\partial(\partial^{I} \Gamma^{J}h^{1,\natural}))\right\|_{L_{y}^{2}}\] \[\qquad\lesssim|\partial h^{1,\flat}_{TU}|\|\partial(\partial^{I }\Gamma^{J}h^{1,\natural})_{TU}\|_{L_{y}^{2}}+|\partial h^{1,\flat}_{LL}|\| \partial(\partial^{I}\Gamma^{J}h^{1,\natural})_{\underline{L}\underline{L}} \|_{L_{y}^{2}}+|\partial h^{1,\flat}_{\underline{L}\underline{L}}|\|\partial (\partial^{I}\Gamma^{J}h^{1,\natural})_{LL}\|_{L_{y}^{2}}.\] From (4.7) and (4.71) \[\lambda^{\frac{3}{2}}|\partial h^{1,\flat}_{TU}|\|\partial(\partial^{I} \Gamma^{J}h^{1,\natural})_{TU}\|_{L_{y}^{2}}\lesssim C_{1}C_{2}\lambda^{-1+ \gamma_{k}}\Big{(}\frac{s}{t}\Big{)}^{\frac{3}{2}};\] from (4.7) and (4.46) \[\lambda^{\frac{3}{2}}|\partial h^{1,\flat}_{LL}|\|\partial(\partial^{I} \Gamma^{J}h^{1,\natural})_{\underline{L}\underline{L}}\|_{L_{y}^{2}}\lesssim C _{1}C_{2}\epsilon^{2}\lambda^{-\frac{3}{2}+\eta}\Big{(}\frac{s}{t}\Big{)}^{2};\] finally, since \[|\partial(\partial^{I}\Gamma^{J}h^{1,\natural})_{LL}|\leq|\partial(\partial^{ I}\Gamma^{J}h^{1})_{LL}|+|\partial(\partial^{I}\Gamma^{J}h^{1,\flat})_{LL}|\] from lemma 2.2 and pointwise estimates (4.6), (4.7), (4.38), (4.46) we deduce that \[\|\partial(\partial^{I}\Gamma^{J}h^{1,\natural})_{LL}\|_{L_{y}^{2}}\lesssim C _{2}\epsilon t^{-\frac{3}{2}}s^{\gamma_{k}}\] and consequently that \[|\partial h^{1,\flat}_{\underline{L}\underline{L}}|\|\partial(\partial^{I} \Gamma^{J}h^{1,\natural})_{LL}\|_{L_{y}^{2}}\lesssim C_{1}C_{2}\epsilon^{2} \lambda^{-\frac{3}{2}+\eta}\Big{(}\frac{s}{t}\Big{)}^{2}.\] In summary, \[\lambda^{\frac{3}{2}}\|F_{\lambda}\|_{L_{y}^{2}}\lesssim C_{2}^{2}\epsilon^{2} \Big{(}\frac{s}{t}\Big{)}^{\frac{3}{2}}\begin{cases}\lambda^{-\frac{3}{2}+\eta},& \text{ if }|I|\leq N_{1},\ |J|=0\\ \lambda^{-1+\gamma_{k}},&\text{ if }|I|+|J|\leq N_{1}+1,\ |J|=k\leq N_{1}\end{cases}\] and \[\int_{s_{0}}^{s}B_{tx}(\lambda)d\lambda\lesssim(C_{1}\epsilon+C_{2}^{2} \epsilon^{2})\Big{(}\frac{s}{t}\Big{)}^{\frac{3}{2}}\begin{cases}1&\text{ if }|I|\leq N_{1},\ |J|=0\\ s^{\gamma_{k}},&\text{ if }|I|+|J|\leq N_{1}+1,|J|=k\leq N_{1}\end{cases}\] Finally, from Proposition 4.21 we obtain that there exists a constant \(C>0\) such that \[s^{\frac{3}{2}}\|\partial_{y}^{\leq 1}(\partial^{I}\Gamma^{J}h^{1, \natural})\|_{L^{2}}+s^{\frac{1}{2}}\|\mathscr{S}(\partial^{I}\Gamma^{J}h^{1, \natural})\|_{L^{2}}\leq C\Big{(}\frac{s}{t}\Big{)}^{\frac{3}{2}}E^{\rm i}(s_{0 },Z^{\leq 2}(\partial^{I}\Gamma^{J}h^{1,\natural}))^{\frac{1}{2}}\] \[+C(C_{1}\epsilon+C_{2}^{2}\epsilon^{2})\Big{(}\frac{s}{t}\Big{)}^ {\frac{3}{2}}\begin{cases}1&\text{ if }|I|\leq N_{1},\ |J|=0\\ s^{\gamma_{k}},&\text{ if }|I|+|J|\leq N_{1}+1,|J|=k\leq N_{1}\end{cases}\] The conclusion of the proof then follows from the following relation \[\partial_{t}=\frac{t}{s^{2}}\mathscr{S}-\frac{tx^{\boldsymbol{j}}}{s^{2}} \underline{\partial}_{\boldsymbol{j}},\quad\partial_{\boldsymbol{j}}= \underline{\partial}_{\boldsymbol{j}}-\frac{x_{\boldsymbol{j}}}{s^{2}} \mathscr{S}+\frac{x_{\boldsymbol{j}}x^{\boldsymbol{k}}}{s^{2}}\underline{ \partial}_{\boldsymbol{k}}\] and by choosing \(C_{2}\) sufficiently large so that \(CC_{1}\ll C_{2}\) and \(CE^{\rm i}(s_{0},Z^{\leq 2}(\partial^{I}\Gamma^{J}h^{1,\natural}))^{\frac{1}{2}} \ll(C_{2}\epsilon)\), together with \(\epsilon_{0}\) sufficiently small so that \(CC_{2}\epsilon\ll 1\). 
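For concreteness, one admissible bookkeeping of this final choice (an illustration under the constants appearing in the last estimate, not the only possible choice) is
\[
C_{2}\geq 4CC_{1},\qquad CE^{\mathrm{i}}(s_{0},Z^{\leq 2}(\partial^{I}\Gamma^{J}h^{1,\natural}))^{\frac{1}{2}}\leq\tfrac{1}{4}C_{2}\epsilon,\qquad CC_{2}\epsilon_{0}\leq\tfrac{1}{4},
\]
so that the right hand side above is bounded by \(\tfrac{1}{4}C_{2}\epsilon+\tfrac{1}{4}C_{2}\epsilon+\tfrac{1}{4}C_{2}\epsilon=\tfrac{3}{4}C_{2}\epsilon\) times \((s/t)^{\frac{3}{2}}\) (respectively \((s/t)^{\frac{3}{2}}s^{\gamma_{k}}\)), which recovers the bootstrap bound (4.12) with a strictly improved constant.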
_Remark 4.23_.: It will be useful in view of Proposition 4.32 to observe that the loss \(s^{\gamma_{k}}\) in the estimate of \(\partial^{I}\Gamma^{J}h^{1,\natural}_{\alpha\beta}\) when \(|I|+|J|\leq N_{1}\) and \(|J|=k\) is only due to the following contributions, which arise from the commutator term between zero modes and zero-average components \[\Gamma^{J_{1}}h^{1,\flat}\cdot\partial^{2}(\partial^{I}\Gamma^{J_{2}}h^{1,\natural}_{\alpha\beta}),\quad|J_{1}|>0.\]

### Enhanced energy bounds for the zero modes

The goal of this subsection is to show that the lower order energies of the zero-modes enjoy enhanced energy estimates compared to (4.4).

**Proposition 4.24**.: _Under the assumptions of Proposition 4.1, we have that for any fixed \(s\in[s_{0},S_{0})\) and any multi-index \(K\) of type \((N-1,N_{1})\)_ \[E^{\rm i}(s,Z^{K}h^{1,\flat})^{1/2}\lesssim C_{1}\epsilon s^{3\sigma} \tag{4.105}\] _where \(0<\sigma\ll\gamma_{0}\) is the rate of growth of the exterior energies._

The proof of the above statement is based on energy inequality (4.17). We recall the estimates already obtained in the previous subsections on the quadratic null terms (4.54), on quadratic weak null terms (4.78) and on the cubic terms (4.55) appearing in \(F^{K,\flat}_{\alpha\beta}\), as well as on the commutator terms (4.83) and on the contributions coming from \(F^{0,K}_{\alpha\beta}\) (4.69). We complete the picture with the estimates of the remaining trilinear terms appearing in the right hand side of (4.17).

**Lemma 4.25**.: _For any multi-index \(K\) of type \((N,k)\), the remaining trilinear terms in the right hand side of (4.17) satisfy the estimates (4.106), (4.107) and (4.108)._

Proof.: It is a straightforward consequence of (3.37), (3.38) applied to \(\phi=Z^{K}h^{1}_{\alpha\beta}\) and \(\phi=Z^{K}h^{1,\flat}_{\alpha\beta}\) respectively, coupled with the pointwise bounds (3.11), (4.38), (4.46) and with the energy bounds (4.19) to get (4.106), with (4.20) to get (4.107) and with (4.24) to get (4.108).

Proof of Proposition 4.24.: This follows by plugging the estimates obtained so far in the energy inequality (4.17).
In fact, from Lemma 4.6, estimates (4.54), (4.55), (4.78), (4.83) and the a-priori energy bound (4.20) there exists a constant \(C>0\) such that for any fixed multi-index \(K\) of type \((N_{1}-1,k)\) we have \[\iint_{\mathscr{H}_{[s_{0},s]}} |F^{K,\flat}_{\alpha\beta}(h)(\partial h,\partial h)||\partial_{t}Z^ {K}h^{1,\flat}_{\alpha\beta}|dxdt+\iint_{\mathscr{H}_{[s_{0},s]}}\big{|}(H^{1, \mu\nu})^{\natural}\cdot\partial_{\mu}\partial_{\nu}Z^{K}h^{1,\natural}_{ \alpha\beta}\big{)}^{\flat}\big{|}|\partial_{t}Z^{K}h^{1,\flat}_{\alpha\beta} |dxdt\] \[\leq\int_{s_{0}}^{s}CC_{1}\epsilon\tau^{-1}\sum_{K^{\prime}}E^{ \mathrm{i}}(\tau,Z^{K^{\prime}}h^{1,\flat})^{1/2}E^{\mathrm{i}}(\tau,Z^{K}h^{ 1,\flat}_{\alpha\beta})^{1/2}d\tau\] \[+\int_{s_{0}}^{s}CC_{1}\epsilon\tau^{-1+C\epsilon}\sum_{K^{\prime \prime}}E^{\mathrm{i}}(\tau,Z^{K^{\prime\prime}}h^{1,\flat})^{1/2}E^{\mathrm{ i}}(\tau,Z^{K}h^{1,\flat}_{\alpha\beta})^{1/2}d\tau+(C_{0}^{2}+C_{1}^{2})C_{2}^{2} \epsilon^{3}\] where \(K^{\prime}\) denote multi-indexes of type \((|K|,k)\) and \(K^{\prime\prime}\) multi-indexes of type \((|K|-1,k-1)\). Furthermore, from (3.11) \[\iint_{\mathscr{H}_{[s_{0}s]}}|[Z^{K},H^{0,\boldsymbol{\mu}\nu} \partial_{\boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}]h^{1,\flat}_{\alpha \beta}||\partial_{t}Z^{K}h^{1,\flat}_{\alpha\beta}|dxdt\\ \leq\int_{s_{0}}^{s}C\epsilon\tau^{-1}\sum_{K^{\prime}}E^{\mathrm{ i}}(\tau,Z^{K^{\prime}}h^{1,\flat})^{1/2}E^{\mathrm{i}}(\tau,Z^{K}h^{1,\flat} _{\alpha\beta})^{1/2}d\tau.\] Summing the above estimates up together with (4.18), (4.69) and (4.107) we get that there exists some positive constant \(C>0\) such that \[E^{\mathrm{i}}(s,Z^{\leq K}h^{1,\flat}_{\alpha\beta})\leq CE^{ \mathrm{i}}(s_{0},Z^{\leq K}h^{1,\flat}_{\alpha\beta})+CC_{0}^{2}\epsilon^{2}s ^{2\sigma+C\epsilon}+C(C_{0}^{2}+C_{1}^{2})C_{2}^{2}\epsilon^{3}\\ +\int_{s_{0}}^{s}CC_{1}\epsilon\tau^{-1}\sum_{K^{\prime}}E^{ \mathrm{i}}(\tau,Z^{K^{\prime}}h^{1,\flat})^{1/2}E^{\mathrm{i}}(\tau,Z^{\leq K }h^{1,\flat})^{1/2}d\tau\\ +\int_{s_{0}}^{s}CC_{1}\epsilon\tau^{-1+C\epsilon}\sum_{K^{\prime \prime}}E^{\mathrm{i}}(\tau,Z^{K^{\prime\prime}}h^{1,\flat})^{1/2}E^{\mathrm{ i}}(\tau,Z^{\leq K}h^{1,\flat})^{1/2}d\tau.\] Performing an induction argument on \(k\), it then follows that there exist some positive constants \(c_{1}<c_{2}<\cdots<c_{N}\) such that \[E^{\mathrm{i}}(s,Z^{\leq K}h^{1,\flat}_{\alpha\beta})\leq C\big{(}E^{\mathrm{ i}}(s_{0},Z^{\leq K}h^{1,\flat}_{\alpha\beta})+CC_{0}^{2}\epsilon^{2}+(C_{0}^{2}+C_{ 1}^{2})C_{2}^{2}\epsilon^{3}\big{)}s^{2\sigma+c_{k}\epsilon}\] so the end of the proof follows from the smallness assumptions on the data and after choosing \(0<\epsilon_{0}\ll 1\) sufficiently small so that \(c_{N}\epsilon_{0}\leq\sigma\). The improved lower-order energy estimate (4.105) leads to the following improved sup-norm estimates. These are obtained following the proofs of their analogues with \(s^{\delta_{k}}\) losses and using the energy bound (4.105) in place of (4.20). 
For any multi-index \(K\) of type \((N-3,N_{1})\), bounds (4.38) and (4.43) are enhanced to the following ones \[\left\|st^{\frac{1}{2}}\partial Z^{K}h^{1,\flat}_{\alpha\beta}\right\|_{L^{\infty}(\mathscr{H}_{s})}+\left\|t^{\frac{3}{2}}\underline{\partial}Z^{K}h^{1,\flat}_{\alpha\beta}\right\|_{L^{\infty}(\mathscr{H}_{s})}+\left\|t^{\frac{1}{2}}Z^{K}h^{1,\flat}_{\alpha\beta}\right\|_{L^{\infty}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{3\sigma} \tag{4.109}\] and bounds (4.46), (4.47) are improved to \[\|t^{\frac{3}{2}}\partial Z^{K}(H^{1}_{LT})^{\flat}\|_{L^{\infty}(\mathscr{H}_{s})}+\|t^{\frac{1}{2}}(t/s)^{2}Z^{K}(H^{1}_{LT})^{\flat}\|_{L^{\infty}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{3\sigma}. \tag{4.110}\] Furthermore, for multi-indexes \(K\) of type \((N-4,k)\) we can also improve (4.61) to the following \[\|t^{\frac{3}{2}}(s/t)^{2}\partial_{t}^{2}Z^{K}h^{1,\flat}_{\alpha\beta}\|_{L^{\infty}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{-1+6\sigma}. \tag{4.111}\]

### Propagation of pointwise bound (4.6)

This section is dedicated to the proof of the enhanced pointwise bound (4.6), see Proposition 4.31. We will make use of the following lemmas, which are due respectively to Alinhac [1], Asakura [3] and Katayama-Yokoyama [28]. **Lemma 4.26**.: _Let \(u=u(t,x)\) be the solution to the inhomogeneous wave equation \(\square_{tx}u=F\) on the flat space \(\mathbb{R}^{1+3}\) with zero initial data. Suppose that \(F\) is spatially compactly supported and that there exist some constants \(C_{0}>0\), \(\mu,\nu\geq 0\) such that the following pointwise bound is satisfied_ \[|F(t,x)|\leq C_{0}t^{-\nu}(t-r)^{-\mu}.\] _Defining \(\Phi_{\mu}(s)=1,\log s,s^{1-\mu}/(1-\mu)\) according to \(\mu>1,=1,<1\) respectively, we then have_ 1. _If_ \(\nu>2\)_,_ \[|u(t,x)|\leq CC_{0}\Phi_{\mu}\big{(}\langle t-r\rangle\big{)}\frac{\langle t-r\rangle^{\nu-2}}{\nu-2}(1+t)^{-1}\] 2. _If_ \(\nu=2\)_,_ \[|u(t,x)|\leq CC_{0}\Phi_{\mu}\big{(}\langle t-r\rangle\big{)}(1+t)^{-1}\log(1+t)\] 3. _If_ \(\nu<2\)_,_ \[|u(t,x)|\leq CC_{0}\Phi_{\mu}\big{(}\langle t-r\rangle\big{)}\frac{(1+t)^{1-\nu}}{2-\nu}.\] **Lemma 4.27**.: _Let \(\phi,\psi\) be smooth functions on \(\mathbb{R}^{3}\) such that_ \[|\phi|\leq C(1+|x|)^{-1-\kappa},\qquad|\nabla\phi|+|\psi|\leq C(1+|x|)^{-2-\kappa}\] _for some constant \(C>0\) and some fixed \(0<\kappa<1\). Let \(u\) be the solution to the homogeneous wave equation \(\square u=0\) with initial data \((u,\partial_{t}u)|_{t=0}=(\phi,\psi)\). There exists a constant \(\tilde{C}>0\) such that \(u\) satisfies the following inequality_ \[|u(t,x)|\leq\frac{C\tilde{C}}{(1+t+|x|)(1+|t-r|)^{\kappa}}.\] **Lemma 4.28**.: _Let \(u\) be the solution to the inhomogeneous wave equation \(\square u=F\) with zero data and \(F\) be a smooth function on \(\mathbb{R}^{1+3}\). Let \(\mu,\nu>1\) be fixed constants.
Provided the following right hand side is finite, \(u\) satisfies the following inequality_ \[\langle t+|x|\rangle\langle t-|x|\rangle^{\mu-1}|u(t,x)|\lesssim\sup_{\tau\in[0,t]}\sup_{|x-y|\leq|t-\tau|}|y|\langle\tau+|y|\rangle^{\mu}\langle\tau-|y| \rangle^{\nu}|F(\tau,y)|.\] The idea of the proof of Proposition 4.31 is to look at \(u=\Gamma^{J}h^{1,\flat}_{\alpha\beta}\), for any fixed \(J\) with \(|J|=j\leq N_{1}\), as the solution to a Cauchy problem of the following form \[\begin{cases}\square_{tx}u=F\\ (u,\partial_{t}u)|_{t=2}=(\phi,\psi)\end{cases}\] for some given smooth initial data \(\phi,\psi\) and source term \(F\), and to successively decompose \(u\) as the sum of three waves \(v_{1},v_{2},v_{3}\) such that \[\begin{cases}\square_{tx}v_{1}=\chi((r+1/2)/t)F\\ (v_{1},\partial_{t}v_{1})|_{t=0}=(0,0)\end{cases}\qquad\text{and}\qquad \begin{cases}\square_{tx}v_{2}=(1-\chi((r+1/2)/t))F\\ (v_{2},\partial_{t}v_{2})|_{t=0}=(0,0)\end{cases}\] and \(v_{3}\) is the solution to the homogeneous wave equation with data \((\phi,\psi)\). In the above systems, \(\chi\) is a cut-off function equal to \(1\) on the ball \(B_{1/2}(0)\) and supported in \(\overline{B_{1}(0)}\), so that the source term in the equation of \(v_{1}\) is supported in the interior of the cone \(t=r+1/2\), while the source term in the equation of \(v_{2}\) is supported in the portion of exterior region such that \(t\leq r+1/2\). The solutions \(v_{1},v_{2},v_{3}\) are estimated using lemma 4.26, 4.27 and 4.28 respectively. The combination of such estimates will provide us with the desired estimate on \(u=Z^{K}h^{1,\flat}_{\alpha\beta}\). Let us denote by \(D^{J}_{\alpha\beta}\) the nonlinearity in equation (4.15) satisfied by \(\Gamma^{J}h^{1,\flat}_{\alpha\beta}\), i.e. \[D^{J\flat}_{\alpha\beta}:=-(H^{\boldsymbol{\mu}\boldsymbol{\nu}})^{\flat} \cdot\partial_{\boldsymbol{\mu}}\partial_{\boldsymbol{\nu}}\Gamma^{J}h^{1, \flat}_{\alpha\beta}+F^{J,\flat}_{\alpha\beta}+F^{0,J}_{\alpha\beta}-\big{(}( H^{\mu\nu})^{\natural}\cdot\partial_{\mu}\partial_{\nu}\Gamma^{J}h^{1, \natural}_{\alpha\beta}\big{)}^{\flat}.\] We start by estimating the source term \(D^{J,\flat}_{\alpha\beta}\) in the interior of the cone \(t=r+1/2\). Since the intersection of this cone with the exterior region \(\mathscr{D}^{\rm e}\) is non-empty, we will make use of some estimates obtained in Section 3. **Lemma 4.29**.: _For any multi-index \(J\) with \(|J|=k\leq N_{1}\), any \(s\in[s_{0},S_{0})\) and any \((t,x)\) with \(t^{2}-r^{2}=s^{2}\) and \(t>r+1/2\) we have that_ \[|D^{J\flat}_{\alpha\beta}(t,x)|\lesssim(C_{2}\epsilon)^{2}t^{-1}s^{-2+\gamma _{k}}\] Proof.: From the pointwise bounds (3.11), (3.12) and (3.14) we get that for any \((t,x,y)\in\mathscr{D}^{\rm e}\) such that \(t>r+1/2\) and \(t^{2}-r^{2}=s^{2}\) \[\sum_{|J_{1}|+|J_{2}|\leq k}|\partial\Gamma^{J_{1}}h\cdot\partial \Gamma^{J_{2}}h| \lesssim C_{0}^{2}\epsilon^{2}t^{-2+2\sigma}(2+r-t)^{-2-2\kappa} \lesssim C_{0}^{2}\epsilon^{2}t^{-1}s^{-2+4\sigma}\] \[\sum_{|J_{1}|+|J_{2}|\leq k}|\Gamma^{J_{1}}H\cdot\partial^{2} \Gamma^{J_{2}}h| \lesssim C_{0}^{2}\epsilon^{2}t^{-2+2\sigma}(2+r-t)^{-\frac{3}{2 }-2\kappa}\lesssim C_{0}^{2}\epsilon^{2}t^{-1}s^{-2+4\sigma}.\] In the interior region, we recall the pointwise decay estimates already obtained in lemma 4.6 for the quadratic and cubic terms involving at least one \(h^{0}\). 
Turning next to the pure quadratic zero-mode interactions, we derive from (4.109) that \[\sum_{|J_{1}|+|J_{2}|\leq k}\left|\partial\Gamma^{J_{1}}h^{1,\flat}\cdot \partial\Gamma^{J_{2}}h^{1,\flat}\right|\lesssim C_{1}^{2}\epsilon^{2}t^{-1}s ^{-2+6\sigma}\] and from (3.25) and bounds (4.109), (4.111) that \[\left|\Gamma^{J}\big{(}(H^{1,\mathbf{\mu\nu}})^{\flat}\cdot\partial_{ \mathbf{\mu}}\partial_{\mathbf{\nu}}h^{1,\flat}_{\alpha\beta}\big{)}\right|\lesssim \sum_{|J_{1}|+|J_{2}|\leq k}|\Gamma^{J_{1}}H^{1,\flat}_{LL}|\,|\partial_{t}^{2} \Gamma^{J_{2}}h^{1,\flat}_{\alpha\beta}|+\Big{(}\frac{s}{t}\Big{)}^{2}|\Gamma^ {J_{1}}H^{1,\flat}|\,|\partial_{t}^{2}\Gamma^{J_{2}}h^{1,\flat}_{\alpha\beta}|\] \[+\sum_{|J_{1}|+|J_{2}|\leq j}(1+t+r)^{-1}|\Gamma^{J_{1}}H^{1, \flat}|\,|\partial\Gamma^{J_{2}}h^{1,\flat}_{\alpha\beta}|\lesssim C_{1}^{2} \epsilon^{2}t^{-1}s^{-2+9\sigma}.\] As concerns instead the pure interaction of the zero-average components, we derive from (4.7) and the algebraic relation (4.2) that \[\sum_{|J_{1}|+|J_{2}|=k}|(\partial\Gamma^{J_{1}}h^{1,\natural}\cdot\partial \Gamma^{J_{2}}h^{1,\natural})^{\flat}|+\left|\big{(}(H^{\mu\nu})^{\natural} \cdot\partial_{\mu}\partial_{\nu}\Gamma^{J}h^{1,\natural}_{\alpha\beta}\big{)} ^{\flat}\right|\lesssim C_{2}^{2}\epsilon^{2}t^{-3}s^{\gamma_{k}}.\] The conclusion of the proof follows from the fact that \(9\sigma\ll\gamma_{0}\). _Remark 4.30_.: It is important to observe, in view of Proposition 4.32, that the loss \(s^{\gamma_{k}}\) in the above estimate for \(D^{J}_{\alpha\beta}\) is only caused by the pure interactions of the zero-average components of the metric perturbations. All other interactions cause a smaller loss \(s^{9\sigma}\). We are now able to propagate the a-priori pointwise bound (4.6). **Proposition 4.31**.: _There exist two constants \(1\ll C_{1}\ll C_{2}\) sufficiently large, a finite and increasing sequence of parameters \(0<\zeta_{k},\gamma_{k},\delta_{k}\ll 1\) satisfying (4.2) and \(0<\epsilon_{0}\ll 1\) sufficiently small such that, under the assumptions of Proposition 4.1, the enhanced estimate (4.11) is satisfied._ Proof.: We split \(\Gamma^{J}h^{1,\flat}_{\alpha\beta}\) into the sum of three waves \(v^{J}_{1,\alpha\beta}\), \(v^{J}_{2,\alpha\beta}\) and \(v^{J}_{3,\alpha\beta}\), where \[\begin{cases}\square_{tx}v^{J}_{1,\alpha\beta}=\chi((r+1/2)/t)D^{J,\flat}_{ \alpha\beta}\\ (v^{J}_{1,\alpha\beta},\partial_{t}v^{J}_{1,\alpha\beta})|_{t=2}=(0,0)\end{cases} \begin{cases}\square_{tx}v^{J}_{2,\alpha\beta}=\big{(}1-\chi((r+1/2)/t)\big{)} D^{J,\flat}_{\alpha\beta}\\ (v^{J}_{2,\alpha\beta},\partial_{t}v^{J}_{2,\alpha\beta})|_{t=2}=(0,0)|_{t=2 }\end{cases}\] and \[\begin{cases}\square_{tx}v^{J}_{3,\alpha\beta}=0\\ (v^{J}_{3,\alpha\beta},\partial_{t}v^{J}_{3,\alpha\beta})|_{t=2}=(\Gamma^{J} h^{1,\flat}_{\alpha\beta},\partial_{t}\Gamma^{J}h^{1,\flat}_{\alpha\beta})|_{t=2}. \end{cases}\] From Lemma 4.29, we get that in the interior of the cone \(t=r+1/2\) \[|D^{J,\flat}_{\alpha\beta}(t,x)|\lesssim C_{2}^{2}\epsilon^{2}t^{-1}s^{-2+ \gamma_{k}}\lesssim C_{2}^{2}\epsilon^{2}t^{-2+\frac{\gamma_{k}}{2}}\langle t -r\rangle^{-1+\frac{\gamma_{k}}{2}}. \tag{4.112}\] Then, Lemma 4.26 with \(\nu=2-\frac{\gamma_{k}}{2}\) and \(\mu=1-\frac{\gamma_{k}}{2}\) yields \[|v^{J}_{\alpha y}(t,x)|\lesssim C_{2}^{2}\epsilon^{2}(1+t)^{-1+\frac{\gamma_{k }}{2}}\langle t-r\rangle^{\frac{\gamma_{k}}{2}}. 
\tag{4.113}\] As concerns the region exterior to the cone \(t=r+1/2\), we recall the estimates (3.22) for the quadratic null terms, (3.23) for the cubic interactions, (3.29) for the commutator terms and (3.48) for the weak null interactions. For any \((t,x)\) with \(t<r+1/2\), we at least have \[|D^{J,\flat}_{\alpha\beta}(t,x)|\lesssim C_{0}^{2}\epsilon^{2}t^{-2+2\sigma}( 2+r-t)^{-\frac{3}{2}}\] so that \[\sup_{\tau\in[0,t]}\sup_{|x-z|\leq|t-\tau|}|z|\langle\tau+|z|\rangle^{\mu} \langle\tau-|z|\rangle^{\nu}|D^{J,\flat}_{\alpha\beta}(z,\tau)|\lesssim C_{0} ^{2}\epsilon^{2}\langle t+|x|\rangle^{3\sigma}\] provided that \(\mu,\nu>1\) are chosen so that \(\mu=1+\sigma\) and \(1<\nu<3/2\). From Lemma 4.28 we then have \[|v_{2,\alpha\beta}^{J}(t,x)|\lesssim C_{0}^{2}\epsilon^{2}\langle t+r\rangle^{-1 +3\sigma}\langle t-r\rangle^{-\sigma}.\] Finally, the initial data satisfy \[|\Gamma^{J}h^{1,\flat}_{\alpha\beta}(2,x)|\lesssim C_{0}\epsilon(1+|x|)^{-1- \kappa},\qquad|\nabla_{tx}\Gamma^{J}h^{1,\flat}_{\alpha\beta}(2,x)|\lesssim C _{0}(1+|x|)^{-2-\kappa}\] as a consequence of the assumptions on the initial data (1.10) and the pointwise bounds (3.12) and (3.14), so that Lemma 4.27 implies \[|v_{3,\alpha\beta}^{J}(t,x)|\lesssim\frac{C_{0}\epsilon}{(1+t+|x|)(1+|t-r|)^{ \kappa}}.\] Summing all up, we find that there exists a constant \(C>0\) such that \[|\Gamma^{J}h^{\flat}_{\alpha\beta}(t,x)|\leq C(C_{0}\epsilon+C_{0}^{2} \epsilon^{2})\langle t+r\rangle^{-1+3\sigma}\langle t-r\rangle^{-\sigma}+CC_{ 2}^{2}\epsilon^{2}\langle t+r\rangle^{-1+\frac{\gamma_{k}}{2}}\langle t-r \rangle^{\frac{\gamma_{k}}{2}}\] so the result of the statement follows from the fact that \(\sigma\ll\gamma_{0}\) and by choosing \(C_{2}\gg 1\) sufficiently large so that \(C(C_{0}+C_{0}\epsilon)<(C_{2}\epsilon)/2\) and \(0<\epsilon_{0}\ll 1\) sufficiently small so that \(CC_{2}\epsilon_{0}<1/2\). Following Remarks 4.23 and 4.30, we conclude this section with enhanced pointwise bounds for the metric perturbation. **Proposition 4.32**.: _There exists a constant \(c>9\) such that the metric perturbation satisfies the following_ \[\|t\,\Gamma^{J}h^{1,\flat}_{\alpha\beta}\|_{L^{\infty}_{x}( \mathscr{H}_{s})}\lesssim C_{2}\epsilon s^{c\sigma}\quad\text{if }|J|=k\leq N_{1}-1,\] \[\|t^{\frac{1}{2}}s\,\partial_{tx}(\partial^{I}\Gamma^{J}h^{1, \natural}_{\alpha\beta})\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}+\|t^{ \frac{3}{2}}\partial_{y}^{\leq 1}(\partial^{I}\Gamma^{J}h^{1,\natural}_{ \alpha\beta})\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})} \tag{4.115}\] \[\leq\begin{cases}2C_{2}\epsilon,&\text{if }|I|\leq N_{1},|J|=0 \\ 2C_{2}\epsilon s^{c\sigma},&\text{if }|I|+|J|\leq N_{1},|J|=k\leq N_{1}-1.\end{cases} \tag{4.114}\] Proof.: The proof proceeds by induction. 
We assume that there exists a finite increasing sequence \(c_{k}\) with \(9\leq c_{k}\ll c_{k+1}\) and \(c_{i}+c_{j}<c_{k}\) whenever \(i,j<k\), such that \[\|t\,\Gamma^{J}h^{1,\flat}_{\alpha\beta}\|_{L^{\infty}_{x}( \mathscr{H}_{s})}\lesssim 2C_{2}\epsilon s^{c_{k}\sigma}\quad\text{if }|J|=k\leq N_{1}-1,\] \[\|t^{\frac{1}{2}}s\,\partial_{tx}(\partial^{I}\Gamma^{J}h^{1, \natural}_{\alpha\beta})\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}+\|t^{ \frac{3}{2}}\partial_{y}^{\leq 1}(\partial^{I}\Gamma^{J}h^{1,\natural}_{ \alpha\beta})\|_{L^{\infty}_{x}L^{2}_{y}(\mathscr{H}_{s})}\] \[\leq\begin{cases}2C_{2}\epsilon,&\text{if }|I|\leq N_{1},|J|=0 \\ 2C_{2}\epsilon s^{c_{k}\sigma},&\text{if }|I|+|J|\leq N_{1},|J|=k\leq N_{1}-1 \end{cases}\] We then have that, whenever \(|I|+|J_{1}|+|J_{2}|\leq N_{1}\) with \(|J_{1}|=k_{1}>0\) and \(|J_{2}|=k_{2}\leq N_{1}-2\), \[|\Gamma^{J_{1}}h^{1,\flat}|\|\partial^{2}\partial^{I}\Gamma^{J_{2}}h^{1, \natural}_{\alpha\beta}\|_{L^{2}_{y}}\lesssim C_{2}^{2}\epsilon^{2}t^{-\frac{ 3}{2}}s^{-1+(c_{k_{1}}+c_{k_{2}})\sigma}\lesssim C_{2}^{2}\epsilon^{2}t^{- \frac{3}{2}}s^{-1+c_{k}\sigma}.\] The arguments in the proof of Proposition 4.22 show that \[s^{\frac{3}{2}}\|\partial_{y}^{\leq 1}(\partial^{I}\Gamma^{J}h^{1, \natural})\|_{L^{2}}+s^{\frac{1}{2}}\|\mathscr{S}(\partial^{I}\Gamma^{J}h^{1, \natural})\|_{L^{2}}\leq C\Big{(}\frac{s}{t}\Big{)}^{\frac{3}{2}}E^{\natural}( s_{0},Z^{\leq 2}(\partial^{I}\Gamma^{J}h^{1,\natural}))^{\frac{1}{2}}\] \[+C(C_{1}\epsilon+C_{2}^{2}\epsilon^{2})\Big{(}\frac{s}{t}\Big{)}^{ \frac{3}{2}}\!\!\begin{cases}1&\text{if }|I|\leq N_{1},\ |J|=0\\ s^{c_{k}\sigma},&\text{if }|I|+|J|\leq N_{1}+1,|J|=k\leq N_{1}\end{cases}\] and an appropriate choice of constants allows us to get (4.115). Furthermore, whenever \(|J_{1}|+|J_{2}|\leq|J|\leq N_{1}-1\) \[\sum_{|J_{1}|+|J_{2}|=k}|(\partial\Gamma^{J_{1}}h^{1,\natural}\cdot\partial \Gamma^{J_{2}}h^{1,\natural})^{\flat}|+\big{|}\big{(}(H^{\mu\nu})^{\natural} \cdot\partial_{\mu}\partial_{\nu}\Gamma^{J}h^{1,\natural}_{\alpha\beta}\big{)} ^{\flat}\big{|}\lesssim C_{2}^{2}\epsilon^{2}t^{-3}s^{c_{k}\sigma}\] which implies, following the proof of Lemma 4.29, that \[|D^{J}_{\alpha\beta}|\lesssim(C_{1}^{2}+C_{2}^{2})\epsilon^{2}t^{-1}s^{-2+c_{ k}\sigma}.\] Using lemma 4.26 with \(\nu=2-\frac{c_{k}\sigma}{2}\) and \(\mu=1-\frac{c_{k}\sigma}{2}\), we can then replace (4.113) with \[|v^{J}_{\alpha y}(t,x)|\lesssim C_{2}^{2}\epsilon^{2}(1+t)^{-1+\frac{c_{k}}{2 }}\langle t-r\rangle^{\frac{c_{k}}{2}}\] and the arguments in the proof of Proposition 4.31 yield (4.114). ### Propagation of the energy bounds In this section we propagate the a-priori energy bounds (4.3)-(4.5) and hence conclude the proof of Proposition 4.1. 
Proof of Proposition 4.1.: We note first that, using bound (4.115) instead of (4.7) and the fact that \(\sigma\ll\zeta_{0}\) so that \(c\sigma+\zeta_{j}<\zeta_{k}\) whenever \(j<k\) in the proof of Lemma 4.14 and Proposition 4.15, allows us to replace the loss \(s^{\gamma_{i}+\zeta_{k-i}}\) for \(i=\overline{1,4}\) in (4.78), (4.80) and (4.81) with \(s^{\zeta_{k}}\), hence having \[\big{\|}Z^{K}P^{\flat}_{\alpha\beta}(\partial h,\partial h)\big{\|}_{L ^{2}_{y}(\mathscr{H}_{s})}\lesssim C_{1}\epsilon s^{-1}\sum_{K^{\prime}}E^{ \mathrm{i}}(s,Z^{K^{\prime}}h^{1,\flat})^{1/2}\\ +C_{1}\epsilon s^{-1+C\epsilon}\sum_{K^{\prime\prime}}E^{\mathrm{ i}}(s,Z^{K^{\prime\prime}}h^{1,\flat})^{1/2}+C_{1}^{2}\epsilon^{2}s^{-\frac{3}{2}+2 \delta_{N}}+C_{1}^{2}\epsilon^{2}\delta_{k>N_{1}}s^{-1+\zeta_{k}} \tag{4.116}\] for multi-index \(K\) of type \((N+1,k)\) with \(k\leq N\), and (4.117) \[\iint_{\mathscr{H}_{[s_{0},s]}}\big{|}([Z^{K},H^{1,\mu\nu} \partial_{\mu}\partial_{\nu}]h^{1}_{\alpha\beta}\big{)}^{\flat}\big{|}| \partial_{t}Z^{K}h^{1,\flat}_{\alpha\beta}|dxdt\\ +\iint_{\mathscr{H}_{[s_{0},s]}}\big{|}\big{(}(H^{1,\mu\nu})^{ \natural}\cdot\partial_{\mu}\partial_{\nu}Z^{K}h^{1,\natural}_{\alpha\beta} \big{)}^{\flat}\big{|}|\partial_{t}Z^{K}h^{1,\flat}_{\alpha\beta}|dxdt\lesssim( C_{0}^{2}+C_{1}^{2})C_{2}^{2}\epsilon^{3}s^{2\zeta_{k}}\] (4.118) for multi-indexes \(K\) of type \((N,k)\). For multi-indexes \(K\) of type \((N+1,k)\), we substitute (4.18), (4.52), (4.53), (4.69), (4.76), (4.106), (4.117), together with the energy bound (4.19), into (4.16) and hence deduce the existence of a constant \(\tilde{C}>0\) such that for all \(s\in[s_{0},S_{0})\) \[E^{\mathrm{i}}(s,Z^{K}h^{1})\leq\tilde{C}\big{(}E^{\mathrm{i}}(s_{0},Z^{K}h^{1} )+\epsilon^{2}s^{2\sigma+C\epsilon}+(C_{0}^{2}+C_{1}^{2})C_{2}^{2}\epsilon^{3 }s^{1+2\zeta_{k}}\big{)}.\] The enhanced energy estimate (4.8) is obtained by picking \(C_{1}\gg 1\) sufficiently large so that \(3\tilde{C}E^{\mathrm{i}}(s_{0},Z^{K}h^{1})\leq C_{1}^{2}\epsilon^{2}\) and \(3\tilde{C}\leq C_{1}\), and \(0<\epsilon_{0}\ll 1\) sufficiently small so that \(3(C_{0}^{2}+1)C_{2}^{2}\epsilon_{0}\leq C_{1}\) and \(2\sigma+C\epsilon<2\delta_{k}\). Analogously, if \(K\) is of type \((N,k)\) with \(k\leq N_{1}\), we derive from (4.18), (4.54), (4.55), (4.69), (4.77), (4.82), (4.108), together with the energy bound (4.25) that there exists another constant \(\tilde{C}>0\) such that for all \(s\in[s_{0},S_{0})\) \[E^{\mathrm{i}}(s,Z^{K}h^{1})\leq\tilde{C}\big{(}E^{\mathrm{i}}(s_{0},Z^{K}h^{1} )+\epsilon^{2}s^{2\sigma+C\epsilon}+(C_{0}^{2}+C_{1}^{2})C_{2}^{2}\epsilon^{3} s^{2\delta_{k}}\big{)}.\] Choosing accordingly \(C_{1}\) and \(\epsilon_{0}\) yields (4.10). Finally, for any multi-index \(K\) of type \((N,k)\), we have the following estimate for the energy of \(Z^{K}h^{1,\flat}_{\alpha\beta}\), which is obtained by plugging (4.18), (4.53), (4.56), (4.69), (4.107), (4.116), (4.118) and the energy bound (4.20) into (4.17) \[E^{\mathrm{i}}(s,Z^{K}h^{1,\flat})\leq\tilde{C}\big{(}E^{\mathrm{i}}(s_{0},Z^{ K}h^{1,\flat})+\epsilon^{2}s^{2\sigma+C\epsilon}+(C_{0}^{2}+C_{1}^{2})C_{2}^{2} \epsilon^{3}s^{2\zeta_{k}}\big{)},\] for a new constant \(\tilde{C}\). Again, the enhanced energy bound (4.9) follows by choosing \(C_{1},\epsilon_{0}\) appropriately. ## Appendix A Energy Inequalities In this section we group together different energy inequalities that are useful in the paper. 
We denote by \(\mathbf{W}\) the solution of the following linear inhomogeneous wave equation \[g^{\mu\nu}\partial_{\mu}\partial_{\nu}\mathbf{W}=\mathbf{F}\] which can be also written (A.1) \[(-\partial_{t}^{2}+\Delta_{x}+\partial_{y}^{2})\mathbf{W}+H^{\mu\nu}\partial _{\mu}\partial_{\nu}\mathbf{W}=\mathbf{F},\qquad(t,x,y)\in\mathbb{R}^{1+3} \times\mathbb{S}^{1}\] where the tensor components \(H^{\mu\nu}\) are assumed to be sufficiently small functions and \(\mathbf{F}\) is some source term. In this section, a particular attention will be given to the energy flux on hyperboloids. These are spacelike hypersurfaces in Minkowski spacetime, but have a degeneracy caused by the fact that they are asymptotically null. This is something which could be destroyed by perturbations of the metric. We take advantage of the Schwarzschild component of \(H\), introduced in (1.12), to show that the hyperboloids remain spacelike everywhere. **Proposition A.1** (Exterior energy inequality).: _Let \(\mathbf{W}\) be a solution of equation (A.1) decaying sufficiently fast as \(|x|\to\infty\) and assume that there exist \(\epsilon>0\) small such that tensor \(H\) satisfies the following bounds_ \[|H(t,x,y)|\lesssim\frac{\epsilon}{(1+t+r)^{\frac{3}{4}}},\quad\Big{|}H_{LL}+ \chi\left(\frac{r}{t}\right)\chi(r)\frac{2M}{r}\Big{|}\lesssim\frac{\epsilon} {(1+t+r)^{1+\delta}},\] _where \(\chi\) is a cut-off function such that \(\chi(s)=0\) for \(s\leq 1/2\), \(\chi(s)=1\) for \(s\geq 3/4\) and \(\delta\) is any fixed positive constant. Let \(w(q)\) be a smooth function that only depends on the distance \(q=r-t\) from the light cone and such that \(w(q),w^{\prime}(q)\geq 0\). For any \(2\leq t_{1}<t_{2}\), let \(\tilde{\mathscr{H}}_{t_{1}t_{2}}\) denote the portion of \(\tilde{\mathscr{H}}\) in the time interval \([t_{1},t_{2}]\) and \(d\mu_{\mathscr{H}}\) be its surface element. We _have the following inequality_ (A.2) \[\begin{split}&\int_{\Sigma^{\epsilon}_{t_{2}}}w(q)|\nabla_{txy} \mathbf{W}|^{2}dxdy+\int_{\tilde{\mathscr{H}}_{t_{1}t_{2}}}w(q)\Big{[}\frac{1}{ 2(1+r^{2})}+\chi\left(\frac{r}{t}\right)\chi(r)\frac{M}{2r}\Big{]}|\partial_{t }\mathbf{W}|^{2}+w(q)|\tilde{\nabla}\mathbf{W}|^{2}dxdy\\ &+\iint_{\mathscr{D}^{\epsilon}_{[t_{1},t_{2}]}}w^{\prime}(q) \big{(}|L\mathbf{W}|^{2}+|\tilde{\mathbf{V}}\mathbf{W}|^{2}\big{)}dtdxdy\lesssim \int_{\Sigma^{\epsilon}_{t_{1}}}w(q)|\nabla_{txy}\mathbf{W}|^{2}dxdy\\ &+\iint_{\mathscr{D}^{\epsilon}_{[t_{1},t_{2}]}}w(q)|(\mathbf{F} +\partial_{\mu}H^{\mu\nu}\,\partial_{\nu}\mathbf{W})\partial_{t}\mathbf{W}|+w (q)|\partial_{t}H^{\mu\nu}\,\partial_{\mu}\mathbf{W}\,\partial_{\nu}\mathbf{W}| \,dtdxdy\\ &+\iint_{\mathscr{D}^{\epsilon}_{[t_{1},t_{2}]}}w^{\prime}(q)|H^ {\rho\sigma}\partial_{\rho}\mathbf{W}\partial_{\sigma}\mathbf{W}|\ dtdxdy+\iint_{\mathscr{D}^{\epsilon}_{[t_{1},t_{2}]}}w^{ \prime}(q)|(-H^{0\nu}+\omega_{\boldsymbol{j}}H^{\boldsymbol{j}\nu})\partial_{ \nu}\mathbf{W}\partial_{t}\mathbf{W}|\ dtdxdy\end{split}\] _where \(\tilde{\nabla}=(\tilde{\partial}_{1},\ldots,\tilde{\partial}_{4})\) is the tangent gradient to \(\tilde{\mathscr{H}}\), i.e. 
\(\tilde{\partial}_{\boldsymbol{i}}=\partial_{\boldsymbol{i}}+\frac{x^{ \boldsymbol{i}}}{t-1}\partial_{\boldsymbol{i}}\) for \(\boldsymbol{i}=1,2,3\) and \(\tilde{\partial}_{\boldsymbol{i}}=\partial_{y}\), and \(\omega_{\boldsymbol{j}}=x_{\boldsymbol{j}}/r\)._ Proof.: We start with the following computation (A.3) \[\begin{split}\partial_{\boldsymbol{t}}\mathbf{W}\left(g^{\mu\nu} \partial_{\mu}\partial_{\nu}\mathbf{W}\right)=&\partial_{\mu} \left(g^{\mu\nu}\partial_{\nu}\mathbf{W}\partial_{t}\mathbf{W}\right)-\frac{1 }{2}\partial_{\boldsymbol{i}}\left(g^{\mu\nu}\partial_{\mu}\mathbf{W}\partial _{\nu}\mathbf{W}\right)\\ &-(\partial_{\mu}H^{\mu\nu})\partial_{\mu}\mathbf{W}\partial_{ \boldsymbol{i}}\mathbf{W}+\frac{1}{2}(\partial_{\boldsymbol{i}}H^{\mu\nu}) \partial_{\mu}\mathbf{W}\partial_{\nu}\mathbf{W}.\end{split}\] Multiplying by \(w(q)\) we obtain (A.4) \[\begin{split} w(q)\partial_{t}\mathbf{W}\left(g^{\mu\nu} \partial_{\mu}\partial_{\nu}\mathbf{W}\right)=&\partial_{\mu} \left(w(q)g^{\mu\nu}\partial_{\nu}\mathbf{W}\partial_{t}\mathbf{W}\right)- \frac{1}{2}\partial_{\boldsymbol{i}}\left(w(q)g^{\mu\nu}\partial_{\mu}\mathbf{W }\partial_{\nu}\mathbf{W}\right)\\ &-w(q)(\partial_{\mu}H^{\mu\nu})\partial_{\mu}\mathbf{W}\partial _{t}\mathbf{W}+\frac{1}{2}w(q)(\partial_{\boldsymbol{i}}H^{\mu\nu})\partial_{ \mu}\mathbf{W}\partial_{\nu}\mathbf{W}\\ &-(\partial_{\mu}w(q))\left(g^{\mu\nu}\partial_{\nu}\mathbf{W} \partial_{t}\mathbf{W}\right)+\frac{1}{2}(\partial_{\boldsymbol{i}}w(q))\left( g^{\mu\nu}\partial_{\mu}\mathbf{W}\partial_{\nu}\mathbf{W}\right)\end{split}\] We integrate (A.4) in the spacetime portion of the exterior region included between the two spacelike hypersurfaces \(\Sigma^{\mathrm{e}}_{t_{1}}\) and \(\Sigma^{\mathrm{e}}_{t_{2}}\), denoted by \(\mathscr{D}^{\mathrm{e}}_{[t_{1},t_{2}]}\). 
We treat the divergence term via Stokes' theorem, meaning \[\int_{\mathscr{D}^{\mathrm{e}}_{[t_{1},t_{2}]}}(\partial_{\mu}X^{\mu})dtdxdy= \int_{\Sigma^{\mathrm{e}}_{t_{2}}}X^{0}dxdy-\int_{\Sigma^{\mathrm{e}}_{t_{1}}} X^{0}dxdy+\int_{\tilde{\mathscr{H}}_{t_{1}t_{2}}}\Big{(}X^{0}-\frac{x_{ \boldsymbol{i}}}{t-1}X^{\boldsymbol{i}}\Big{)}dxdy.\] Applying this to (A.4) we obtain \[\begin{split}&\int_{\Sigma^{\mathrm{e}}_{t_{2}}}w(q)e^{curv}_{ \Sigma}dxdy+\int_{\tilde{\mathscr{H}}_{t_{1}t_{2}}}w(q)e^{curv}_{\tilde{ \mathscr{H}}}dxdy+\iint_{\mathscr{D}^{\mathrm{e}}_{[t_{1},t_{2}]}}w^{\prime}(q )B\,dtdxdy\\ &=\int_{\Sigma^{\mathrm{e}}_{t_{1}}}w(q)e^{curv}_{\Sigma}dx+\iint_ {\mathscr{D}^{\mathrm{e}}_{[t_{1},t_{2}]}}w(q)(C-\mathbf{F})dxdt,\end{split}\] where the curved energy densities are defined by (A.5) \[e^{curv}_{\tilde{\mathscr{H}}} =-g^{\mu 0}\partial_{\mu}\mathbf{W}\partial_{t}\mathbf{W}+\frac{1}{2}g^ {\mu\nu}\partial_{\mu}\mathbf{W}\partial_{\nu}\mathbf{W}\] (A.6) \[e^{curv}_{\tilde{\mathscr{H}}} =-g^{\mu 0}\partial_{\mu}\mathbf{W}\partial_{t}\mathbf{W}+\frac{1}{2}g ^{\mu\nu}\partial_{\mu}\mathbf{W}\partial_{\nu}\mathbf{W}+\frac{x_{i}}{t-1}g^{ i\mu}\partial_{\mu}\mathbf{W}\partial_{t}\mathbf{W}\] and the bulk terms are given by (A.7) \[B =-(\partial_{\mu}q)\left(g^{\mu\nu}\partial_{\nu}\mathbf{W} \partial_{t}\mathbf{W}\right)+\frac{1}{2}(\partial_{t}q)\left(g^{\mu\nu} \partial_{\mu}\mathbf{W}\partial_{\nu}\mathbf{W}\right)\] (A.8) \[C =-(\partial_{\mu}H^{\mu\nu})\partial_{\mu}\mathbf{W}\partial_{ \nu}\mathbf{W}+\frac{1}{2}(\partial_{t}H^{\mu\nu})\partial_{\mu}\mathbf{W} \partial_{\nu}\mathbf{W}.\] We have \[e^{curv}_{\Sigma}=\frac{1}{2}\left((\partial_{t}\mathbf{W})^{2}+|\nabla_{x} \mathbf{W}|^{2}+(\partial_{y}\mathbf{W})^{2}\right)+O\left(H|\nabla\mathbf{W}| ^{2}\right),\] so with the hypothesis \(|H|\leq\frac{1}{100}\) we easily obtain \[\frac{1}{4}\left((\partial_{t}\mathbf{W})^{2}+|\nabla_{x}\mathbf{W}|^{2}+( \partial_{y}\mathbf{W})^{2}\right)\leq e^{curv}_{\Sigma}\leq 4\left((\partial_{t} \mathbf{W})^{2}+|\nabla\mathbf{W}_{x}|^{2}+(\partial_{y}\mathbf{W})^{2} \right).\] We have to be a little more careful with \(e^{curv}_{\tilde{\mathscr{H}}}\). We have \[e^{curv}_{\tilde{\mathscr{H}}}= \frac{1}{2}\left((\partial_{t}\mathbf{W})^{2}+|\nabla_{x}\mathbf{ W}|^{2}+(\partial_{y}\mathbf{W})^{2}\right)+\frac{r}{t-1}\partial_{r}\mathbf{W} \partial_{t}\mathbf{W}-\frac{1}{4}H_{LL}(\partial_{t}\mathbf{W})^{2}\] \[+O(H\cdot\partial\mathbf{W}\cdot\underline{\tilde{\partial}} \mathbf{W})+\left(1-\frac{r}{t-1}\right)O(H|\nabla\mathbf{W}|^{2})+O(H| \underline{\tilde{\nabla}}\mathbf{W}|^{2})\] \[= \frac{1}{2}\left(\frac{1}{(t-1)^{2}}(\partial_{t}\mathbf{W})^{2}+ \sum_{i=1}^{3}\left(\partial_{i}\mathbf{W}+\frac{x_{i}}{t-1}\partial_{t} \mathbf{W}\right)^{2}+(\partial_{y}\mathbf{W})^{2}\right)-\frac{1}{4}H_{LL}( \partial_{t}\mathbf{W})^{2}\] \[+O(H\cdot\partial\mathbf{W}\cdot\underline{\tilde{\partial}} \mathbf{W})+\left(1-\frac{r}{t-1}\right)O(H|\nabla\mathbf{W}|^{2})+O(H| \underline{\tilde{\nabla}}\mathbf{W}|^{2})\] We note that on \(\mathscr{H}\) we have \((t-1)^{2}=1+r^{2}\). 
Using the decomposition (1.12) of \(H\) we write \[\left(\frac{1}{2(t-1)^{2}}-\frac{1}{4}H_{LL}\right)(\partial_{t}\mathbf{W})^{2 }=\left(\frac{1}{2(1+r^{2})}+\chi\left(\frac{r}{t}\right)\chi(r)\frac{M}{2r}- \frac{1}{4}H^{1}_{LL}\right)(\partial_{t}\mathbf{W})^{2}\] so that \[\left(\frac{1}{2(t-1)^{2}}-\frac{1}{4}H_{LL}\right)(\partial_{t} \mathbf{W})^{2}+O(H\cdot\partial\mathbf{W}\cdot\underline{\partial}\mathbf{W}) +\left(1-\frac{r}{t-1}\right)O(H|\nabla\mathbf{W}|^{2})\] \[=\left(\frac{1}{2(1+r^{2})}+\chi\left(\frac{r}{t}\right)\chi(r) \frac{M}{2r}+O(H^{1}_{LL})+O(\epsilon^{-\frac{1}{2}}|H|^{2})\right)(\partial_{ t}\mathbf{W})^{2}\] \[+\left(1-\frac{r}{t-1}\right)O(H(\partial\mathbf{W})^{2})+O( \epsilon^{\frac{1}{2}}|\underline{\nabla}\mathbf{W}\nabla\mathbf{W}|^{2}).\] Under the hypothesis \[|H^{1}_{LL}|\lesssim\frac{\epsilon}{(1+t+r)^{1+\delta}}\] we obtain that for not too large values of \(r\) (e.g. \(r\ll 1/(2\epsilon)\)), \(H^{1}_{LL}\) is small in front of \(\frac{1}{2(1+r^{2})}\), while for \(r\gtrsim 1/(2\epsilon)\) it is small compared to the then dominant term \(\frac{M}{2r}\). Under the hypothesis on \(H\) we obtain \[|h^{1}_{LL}+\epsilon^{-\frac{1}{2}}|H|^{2}|\lesssim\frac{\epsilon}{(1+r)^{-1- \delta}}.\] Consequently for not too large values of \(r\) (e.g. \(r\ll 1/(2\epsilon)\)) we have that \[|h^{1}_{LL}+\epsilon^{-\frac{1}{2}}|H|^{2}|\leq\frac{1}{100(1+r^{2})};\] on the other hand, when \(r\gtrsim 1/(2\epsilon)\) the dominant term is \(M/(2r)\) and for \(\epsilon\) sufficiently small \[|h^{1}_{LL}+\epsilon^{-\frac{1}{2}}|H|^{2}|\leq\frac{M}{100r}\] Consequently, we can bound \[\frac{1}{4}\left(\left(\frac{1}{2(1+r^{2})}+\chi\left(\frac{r}{t }\right)\chi(r)\frac{M}{2r}\right)|\partial_{t}\mathbf{W}|^{2}+|\tilde{\nabla }\mathbf{W}|^{2}\right)\\ \leq e^{curv}_{\mathscr{H}}\leq 4\left(\left(\frac{1}{2(1+r^{2})}+ \chi\left(\frac{r}{t}\right)\chi(r)\frac{M}{2r}\right)|\partial_{t}\mathbf{W}| ^{2}+|\tilde{\nabla}\mathbf{W}|^{2}\right).\] Finally, a simple computation shows that \[B=|L\mathbf{W}|^{2}+|\dot{\nabla}\mathbf{W}|^{2}+\frac{1}{2}H^{\mu\nu} \partial_{\mu}\mathbf{W}\cdot\partial_{\nu}\mathbf{W}+\Big{(}-H^{0\nu}+\frac{x ^{\mathbf{i}}}{r}H^{i\nu}\Big{)}\partial_{\nu}\mathbf{W}\partial_{t}\mathbf{W}\] **Proposition A.2** (Energy inequality on hyperboloids).: _Let \(\mathbf{W}\) be a solution of (A.1) and \(E^{i}(s,\mathbf{W})\) be the energy functional defined in (4.1). We assume that \(H\) satisfies the same hypothesis of proposition A.1 For any \(2<s_{1}<s_{2}\), let \(\tilde{\mathscr{H}}_{s_{1}s_{2}}\) denote the portion of \(\tilde{\mathscr{H}}\) bounded by the hyperboloids \(\mathscr{H}_{s_{i}}\) with \(i=1,2\). We have the following inequality_ \[E^{i}(s_{2},\mathbf{W}) \lesssim E^{i}(s_{1},\mathbf{W})+\int_{\tilde{\mathscr{H}}_{s_{1} s_{2}}}\Big{[}\frac{1}{2(1+r^{2})}+\chi\left(\frac{r}{t}\right)\chi(r)\frac{M}{2r }\Big{]}|\partial_{t}\mathbf{W}|^{2}+|\tilde{\nabla}\mathbf{W}|^{2}dxdy\] \[+\iint_{\mathscr{H}_{[s_{1},s_{2}]}}|(\mathbf{F}+\partial_{\mu}H ^{\mu\nu}\,\partial_{\sigma}\mathbf{W})\partial_{t}\mathbf{W}|+|\partial_{t} H^{\mu\nu}\,\partial_{\mu}\mathbf{W}\,\partial_{\nu}\mathbf{W}|\,dtdxdy.\] _The implicit constant in the above inequality is a universal constant. 
An analogue inequality holds true for solutions \(\mathbf{W}\) to (A.1) on \(\mathbb{R}^{1+3}\)._ Proof.: The proof is analogous to that of proposition A.1 except that we integrate (A.4) with \(w\equiv 1\) in the portion of interior region bounded above by \(\mathscr{H}_{s_{2}}\), below by \(\mathscr{H}_{s_{1}}\) and laterally by \(\tilde{\mathscr{H}}_{s_{1}s_{2}}\), which we denote by \(\mathscr{H}_{[s_{1},s_{2}]}\). This yields \[\int_{\mathscr{H}_{s_{2}}}\mathbf{e}^{curv}_{\mathscr{H}}dxdy=\int_{\mathscr{H }_{s_{1}}}\mathbf{e}^{curv}_{\mathscr{H}}dxdy+\int_{\mathscr{H}_{s_{1}s_{2}} }e^{curv}_{\mathscr{H}}dxdy+\iint_{\mathscr{H}_{[s_{1},s_{2}]}}(C-\mathbf{F}) dtdxdy,\] where \(e^{curv}_{\mathscr{H}}\) and \(C\) have been defined in (A.6) and (A.8) respectively and \[\mathbf{e}^{curv}_{\mathscr{H}}=-g^{\mu 0}\partial_{\mu}\mathbf{W}\partial_{t} \mathbf{W}+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\mathbf{W}\partial_{\nu} \mathbf{W}+\frac{x_{i}}{t}g^{\mathbf{i}\mu}\partial_{\mu}\mathbf{W}\partial_{t} \mathbf{W}.\] In the region \(\mathscr{H}_{[s_{1},s_{2}]}\) we have \[\frac{1}{4}\left(\frac{s^{2}}{t^{2}}|\partial_{t}\mathbf{W}|^{2}+|\nabla\mathbf{W }|^{2}\right)\leq\mathbf{e}_{\mathscr{H}}^{curv}\leq 4\left(\frac{s^{2}}{t^{2}}| \partial_{t}\mathbf{W}|^{2}+|\nabla\mathbf{W}|^{2}\right).\] In fact, we have that \(r\leq t-1\) and the mass term can be absorbed in the following way \[\chi\left(\frac{r}{t}\right)\chi(r)\frac{M}{r}\leq\frac{(t-r)(t+r)}{100(t+r)^{ 2}}=\frac{s^{2}}{100t^{2}}.\] ## Appendix B Sobolev and Hardy inequalities We start by listing some weighted inequalities that are used in section 3. Their proofs can be found in Huneau-Stingo [21]. **Lemma B.1** (Weighted Sobolev inequalities).: _Let \(\beta\in\mathbb{R}\). For any sufficiently smooth function \(u\) we have the following inequalities_ (B.1) \[\sup_{\Sigma_{t}^{\varepsilon}}{(2+r-t)^{2\beta}r^{2}}|u(t,x,y)|^ {2}\\ \lesssim\iint_{\Sigma_{t}^{\varepsilon}}(2+r-t)^{1+2\beta}( \partial_{r}Z^{\leq 2}u)^{2}+(2+r-t)^{2\beta-1}(Z^{\leq 2}u)^{2}\,dxdy,\] (B.2) \[\sup_{\Sigma_{t}^{\varepsilon}}{(2+r-t)^{2\beta}r^{2}}|u(t,x,y)|^{2}\lesssim \iint_{\Sigma_{t}^{\varepsilon}}(2+r-t)^{2\beta}\Big{(}(\partial_{r}Z^{\leq 2 }u)^{2}+(Z^{\leq 2}u)^{2}\Big{)}dxdy,\] (B.3) \[\sup_{\Sigma_{t}^{\varepsilon}}{(2+r-t)^{2\beta}r^{2}}\|u(t,r)\|_{L^{2}( \mathbb{S}^{2}\times\mathbb{S}^{1})}^{2}\lesssim\iint_{\Sigma_{t}^{ \varepsilon}}(2+r-t)^{1+2\beta}(\partial_{r}u)^{2}+(2+r-t)^{-1+2\beta}u^{2}\,dxdy.\] **Lemma B.2** (Weighted Hardy inequality).: _Let \(\beta>-1\). For any sufficiently regular function \(u\) for which the left-hand side of the following inequality is finite we have_ (B.4) \[\iint_{\Sigma_{t}^{\varepsilon}}(2+r-t)^{\beta}u^{2}dxdy\lesssim\iint_{\Sigma _{t}^{\varepsilon}}(2+r-t)^{\beta+2}(\partial u)^{2}dxdy.\] **Corollary B.3**.: _Let \(\beta>0\). For any sufficiently regular function \(u\) we have the following inequalities_ (B.5) \[(2+r-t)^{\beta}r|u(t,x,y)|\lesssim\|(2+r-t)^{1/2+\beta}\partial Z ^{\leq 2}u(t)\|_{L^{2}(\Sigma_{t}^{\varepsilon})}\] (B.6) \[(2+r-t)^{\beta}r\|u(t,r)\|_{L^{2}(\mathbb{S}^{2}\times\mathbb{S}^ {1})}\lesssim\|(2+r-t)^{1/2+\beta}\partial u(t)\|_{L^{2}(\Sigma_{t}^{ \varepsilon})}\] Proof.: Inequality (B.5) (resp. (B.6)) is a straight consequence of the combination of (B.1) (resp. (B.3)) and (B.4). Below are some Sobolev and Hardy inequalities that are useful in section 4. Lemma B.4 is standard while lemma B.5 is a simple adaptation of a result in [18]. 
The result of lemma B.7 can be also obtained with small modifications from the one in [40]. **Lemma B.4**.: _For any sufficiently smooth function \(u\) we have the following Sobolev inequality_ \[\|u\|_{L^{p}(\mathscr{H}_{s})}\lesssim\|\underline{\nabla}u\|_{L^{2}(\mathscr{H}_{ s})}^{\frac{3}{2}-\frac{3}{p}}\|u\|_{L^{2}(\mathscr{H}_{s})}^{\frac{3}{2}-\frac{1}{2}}+s ^{-\left(\frac{3}{2}-\frac{3}{p}\right)}\|u\|_{L^{2}(\mathscr{H}_{s})},\quad 2 \leq p\leq 6\] _as well as the trace inequality_ \[\|u\|_{L^{4}(S_{s,r})}\lesssim\|\underline{\nabla}u\|_{L^{2}(\mathscr{H}_{s}) }+s^{-1}\|u\|_{L^{2}(\mathscr{H}_{s})}.\] **Lemma B.5**.: _Let \(B=\{\Omega_{0j}:j=1,2,3\}\). For any sufficiently smooth function \(u=u(t,x)\) we have_ \[\sup_{\mathscr{H}_{s}}|t^{\frac{3}{2}}u|\lesssim\|B^{\leq 2}u\|_{L^{2}_{x}( \mathscr{H}_{s})}.\] **Lemma B.6**.: _Let \(s>0\), \(r_{s}:=\max\{r\,|\,S_{r}\subset\mathscr{H}_{s}\}\) and \(t_{s}=\sqrt{s^{2}+r_{s}}\). For any sufficiently smooth function \(u=u(t,x)\) we have that_ (B.7) \[\|r^{-1}u\|_{L^{2}(\mathscr{H}_{s})}\lesssim\|\underline{\partial}u\|_{L^{2}( \mathscr{H}_{s})}+\|\partial u(t_{s})\|_{L^{2}(\Sigma_{t_{s}})}\] Proof.: It is a straightforward consequence of the classical Hardy inequality applied to \[v(x)=\begin{cases}u(\sqrt{s^{2}+r^{2}},x),&\quad\text{if }|x|<r_{s}\\ u(t_{s},x),&\quad\text{if }|x|>r_{s}.\end{cases}\] **Lemma B.7**.: _Let \(0\leq\alpha\leq 2\), \(1+\mu>0\) and \(\gamma>0\). For any function \(u\in\mathscr{C}_{0}^{\infty}([0,\infty))\), any arbitrary time \(t>0\) and \(s>0\) there exists a constant \(C\), depending on a lower bound for \(\gamma\) and \(1+\mu\), such that_ (B.8) \[\begin{split}&\int_{r(s,t)}^{t}\frac{u^{2}}{(1+t-r)^{2+\mu}}\, \frac{r^{2}dr}{(1+t+r)^{\alpha}}+\int_{t}^{\infty}\frac{u^{2}}{(1+r-t)^{1- \gamma}}\frac{r^{2}dr}{(1+t+r)^{\alpha}}\\ &\leq C\int_{r(s,t)}^{t}\frac{|\partial_{r}u|^{2}}{(1+t-r)^{\mu}} \frac{r^{2}dr}{(1+t+r)^{\alpha}}\,+C\int_{t}^{\infty}|\partial_{r}u|^{2}\frac {(1+r-t)^{1+\gamma}}{(1+t+r)^{\alpha}}\,\,r^{2}dr\end{split}\] _where \(r(s,t)=\sqrt{(t^{2}-s^{2})^{+}}\)._ **Corollary B.8**.: _Under the same assumptions of Lemma B.7, we have that_ \[\begin{split}\int_{s_{0}}^{t_{s}}\!\int_{\mathscr{C}_{t}}& \frac{|u|^{2}}{(1+t-r)^{2+\mu}}\frac{dxdt}{(1+t+r)^{\alpha}}\\ &\lesssim\int_{s_{0}}^{s}\!\int_{\mathscr{H}_{\tau}}\frac{| \partial_{r}u|^{2}}{(1+t(\tau)-r)^{\mu}(1+t(\tau)+r)^{\alpha}}dxd\tau+\int_{s _{0}}^{t_{s}}\!\int_{\Sigma_{t}^{\varepsilon}}\!|\partial_{r}u|^{2}\frac{(1+ |r-t|)^{1+\gamma}}{(1+t+r)^{\alpha}}dxdt\end{split}\] Proof.: The proof is a simple application of inequality (B.8) and of a change of coordinates.
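For the reader's convenience, we recall the statement of the classical Hardy inequality used in the proof of Lemma B.6: for any \(v\in\mathscr{C}_{0}^{\infty}(\mathbb{R}^{3})\),
\[\int_{\mathbb{R}^{3}}\frac{|v|^{2}}{|x|^{2}}\,dx\leq 4\int_{\mathbb{R}^{3}}|\nabla v|^{2}\,dx.\]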
2304.03155
Duality between open systems and closed bilayer systems, and thermofield double states as quantum many-body scars
We establish a duality between open many-body systems governed by the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation and satisfying the detailed balance condition on the one side, and closed bilayer systems with a self-adjoint Hamiltonian on the other side. Under this duality, the identity operator on the open system side maps to the thermofield double state which turns out to be a quantum many-body scar of the dual Hamiltonian $\mathcal H$. A remarkable feature of this thermofield scar is a tunable entanglement entropy controlled by the reservoir temperature on the open system side. Further, we identify broad classes of many-body open systems with nontrivial explicit eigen operators $Q$ of the Lindbladian superoperator. The expectation values of the corresponding observables exhibit a simple exponential decay, $\langle Q\rangle_t=e^{-\Gamma t} \langle Q \rangle_0$, irrespectively of the initial state. Under the above duality, these eigen operators give rise to additional (towers of) scars. Finally, we point out that more general superoperators (not necessarily of the GKSL form) can be mapped to self-adjoint Hamiltonians of bilayer systems harbouring scars, and provide an example thereof.
Alexander Teretenkov, Oleg Lychkovskiy
2023-04-06T15:38:53Z
http://arxiv.org/abs/2304.03155v3
# Exact quantum dynamics of selected observables ###### Abstract We address dynamics of open many-body systems governed by the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation. We attempt to solve this equation in the Heisenberg representation, i.e. for observables, not states. We demonstrate that there are broad classes of models where the GKSL equation can be solved (essentially) _exactly_ for _certain_ observables. In the simplest case, the only effect of dissipation is an exponential decay on top of a coherent dynamics. This is true, in particular, for the total energy, provided the Hamiltonian is an eigenoperator of the dissipation superoperator - no matter whether the model is integrable or not. In more complex cases, dissipation alters the dynamics in a much more profound way. As an example, we solve the GKSL equation for a set of observables in a dissipative one-dimensional \(XX\) model. It turns out that the observables experience the Wannier-Stark localization in the Krylov space of operators. As a result, the expectation values of the observables are linear combinations of a discrete set of decay modes. _Introduction._ An exact solution of a quantum many-body problem is always welcome, since it enriches our understanding of the inherently complex many-body physics. Complementary to their theoretical importance, exact solutions often have direct laboratory applications, thanks to the unceasing progress of experimental techniques and rapid rise of quantum technologies [1; 2]. In the present Letter we address the dynamics of open quantum many-body systems that can be described by the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation [3; 4; 5]. We work in the Heisenberg representation, where the time evolution of an observable is embodied in the corresponding time-dependent Heisenberg operator. The latter obeys the Heisenberg version of the GKSL equation. In a generic many-body system, coupled GKSL equations include an exponentially large hierarchy of operators and are expected to be too complex to be manageable without approximations. In contrast, if the Hamiltonian is quadratic in bosonic or fermionic operators and the Lindblad operators are either linear [6; 7; 8; 9; 10], or quadratic and Hermitian [11; 12; 13; 14; 15; 16], or unitary with linear or quadratic generators [17], the evolution of \(m\)-body observables decouples from the evolution of \(n\)-body observables with \(n>m\), thus rendering the GKSL equations manageable or even exactly solvable (see also [18; 19; 20]). The same property can hold for various open systems with zero or classical-like Hamiltonian and quantum dissipation [13; 17; 21; 22; 23; 24; 25], as well as some systems with interacting Hamiltonians and fine-tuned dissipation [13]. Here we reveal broad classes of open systems extending well beyond the aforementioned ones, where the coupled GKSL equations for certain sets of observables can be solved exactly. A simple idea behind our construction is as follows: we seek for linear spaces of observables, small enough to be manageable, that remain invariant both under the coherent Hamiltonian dynamics and the dissipation. We start from addressing the simplest case where such invariant space contains a single observable that would be an integral of motion in the absence of dissipation. The dissipation leads to a pure exponential decay of such an observable. We report several classes of open systems with nonintegrable Hamiltonians where this happens. 
Then we narrow our focus to a one-dimensional open spin-\(1/2\) system with the nearest-neighbour \(XX\) Hamiltonian. This Hamiltonian can be mapped to a quadratic fermionic Hamiltonian by means of the Jordan-Wigner transformation [26]. As a consequence, a space of operators quadratic in the fermionic representation (known as Pauli strings) remains invariant under Hamiltonian dynamics. We uncover a class of (in general, non-quadratic) Lindblad operators that maintain this invariance. For a particular choice of Lindblad operators worked out in detail, the Heisenberg operator of an observable gets localized in the operator space, with profound consequences for the observable dynamics. We conclude the Letter by discussing our approach and results in a broader context of exactly solvable open quantum systems and outlining possible future developments. _GKSL equation._ The GKSL equation in the Heisenberg representation reads [5, Sec. 3.2.3] \[\partial_{t}O_{t}=i[H,O_{t}]+\mathcal{D}O_{t}, \tag{1}\] \[\mathcal{D}O_{t}\equiv\gamma\sum_{v}\left(L_{v}^{\dagger}O_{t}L_{v}-\frac{1}{2}\{L_{v}^{\dagger}L_{v},O_{t}\}\right), \tag{2}\] with the initial condition \(O_{t=0}=O\). Here \(O_{t}\) and \(O\) are operators of the observable of interest in the Heisenberg and Schrödinger representations, respectively, \(H\) is the Hamiltonian (in the Schrödinger representation),1 \(\mathcal{D}\) is the dissipation superoperator (dissipator), \(L_{v}\) are Lindblad operators2 and \(\gamma\) is a real positive constant. The expectation value \(\langle O\rangle_{t}\) of the observable \(O\) evolves in time according to \(\langle O\rangle_{t}=\operatorname{tr}\rho_{0}\,O_{t}\), where \(\rho_{0}\) is the initial state of the open system. In the limit of vanishing dissipation, \(\gamma=0\), the GKSL equation (1) reduces to the Heisenberg equation. Footnote 1: Throughout the paper the presence (absence) of the subscript \(t\) indicates that the operator is in the Heisenberg (Schrödinger) representation. Footnote 2: The subscript \(v\) in \(L_{v}\) is somewhat schematic; the specific way of enumerating the Lindblad operators will be chosen on a case-by-case basis. _Decay of Hamiltonian integrals of motion._ Consider an observable \(Q\) that is an integral of motion in the absence of dissipation, \[[H,Q]=0, \tag{3}\] referred to as a Hamiltonian integral of motion (HIoM). Assume that \(Q\) is an eigenoperator of the dissipation superoperator \(\mathcal{D}\), \[\mathcal{D}Q=-\Gamma Q, \tag{4}\] where \(\Gamma\) is a real non-negative number. In this case the GKSL equation (1) is immediately solved with the result \[Q_{t}=e^{-\Gamma t}Q. \tag{5}\] Naturally, the expectation value of the HIoM experiences the same exponential decay for an arbitrary initial condition, \(\langle Q\rangle_{t}=e^{-\Gamma t}\langle Q\rangle_{0}\). The prime example of a HIoM is, of course, the Hamiltonian itself. Condition (4) is satisfied for a surprisingly broad variety of dissipative models, integrable or not. As an example, consider an arbitrary-range \(XY\) spin-1/2 model on an arbitrary lattice with \(N\) sites, \[H=\sum_{i<j}\left(w_{ij}^{x}\,\sigma_{i}^{x}\sigma_{j}^{x}+w_{ij}^{y}\,\sigma_{i}^{y}\sigma_{j}^{y}\right), \tag{6}\] with \[L_{j}=\sigma_{j}^{z},\qquad j=1,2,\ldots,N. \tag{7}\] Here and in what follows \(i\) and \(j\) label lattice sites, \(\sigma_{j}^{x,y,z}\) are Pauli matrices acting on the spin at the \(j\)'th site, and \(w_{ij}^{x,y}\) are arbitrary real parameters.
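Condition (4) for the model (6),(7) can also be checked numerically on a small chain. The following minimal sketch (an illustration with dense matrices and arbitrary random couplings, not part of the Letter or its Supplement) verifies that the Hamiltonian is an eigenoperator of the dissipator with the rate \(\Gamma=4\gamma\) stated just below:

```python
import numpy as np
from functools import reduce

# Pauli matrices and a helper placing a single-site operator on site k of an N-site chain.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, k, N):
    ops = [I2] * N
    ops[k] = op
    return reduce(np.kron, ops)

N, gamma = 4, 0.7                      # chain length and dissipation strength (arbitrary)
rng = np.random.default_rng(0)

# Arbitrary-range XY Hamiltonian, eq. (6), with random couplings w^x_ij, w^y_ij.
H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N):
    for j in range(i + 1, N):
        H += rng.normal() * site_op(sx, i, N) @ site_op(sx, j, N)
        H += rng.normal() * site_op(sy, i, N) @ site_op(sy, j, N)

# Heisenberg-picture dissipator, eq. (2), with L_j = sigma_j^z, eq. (7).
# Since sigma_j^z is Hermitian and squares to the identity, it reduces to gamma*(L O L - O).
def dissipator(O):
    out = np.zeros_like(O)
    for j in range(N):
        L = site_op(sz, j, N)
        out += gamma * (L @ O @ L - O)
    return out

# Condition (4): D(H) = -Gamma*H with Gamma = 4*gamma.
print(np.allclose(dissipator(H), -4 * gamma * H))   # -> True
```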
One can easily verify that the condition (4) is satisfied for this model with \(\Gamma=4\gamma\). Note that this model is, in general, clearly nonintegrable.3 Footnote 3: We do not attempt to give a precise definition of integrability of open quantum many-body systems. Rather we regard a system (non-)integrable if the spectrum and/or eigenvectors of the Liouvillian can (not) be found by Bethe Ansatz or other standard techniques [27]. In fact, there exist broad classes of Hamiltonians and dissipators that meet the condition (4). We report some of them in the Supplement [28]. Decay of HloMs other than the Hamiltonian itself will be showcased below in the one-dimensional nearest-neighbour \(XX\) model. Eq. (5), along with the observation that it holds in a broad variety of open systems, constitutes the first main result of the present Letter. A useful generalization of conditions (3),(4).Consider an observable \(O\) with the following property: the solution \(O_{t}|_{\gamma=0}\) of eq. (1) in the absence of dissipation satisfies \[\mathcal{D}\left(O_{t}|_{\gamma=0}\right)=-\Gamma\ O_{t}|_{\gamma=0}. \tag{8}\] One immediately verifies that in this case \[O_{t}=e^{-\Gamma t}\ (O_{t}|_{\gamma=0}) \tag{9}\] is the solution of the GKSL equation (1) with a finite dissipation. This generalization is important when \((O_{t}|_{\gamma=0})\) is known exactly, e.g. as in refs. [29; 30; 31]. We will apply eq. (9) to a dissipative \(XX\) model in what follows. We remark that a similar exponential damping on top of a coherent dynamics have been found theoretically [32; 33] and experimentally [33] in finite \(XXZ\) spin chains with dissipation. \(XX\) model: Hamiltonian dynamics.Consider an integrable Hamiltonian of the translation-invariant one-dimensional nearest-neighbour \(XX\) model: \[H=\frac{1}{2}\sum_{j=1}^{N}\left(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{y} \sigma_{j+1}^{y}\right). \tag{10}\] Here and in what follows subscripts \(j\) and \(j+N\) refer to the same site. Let us introduce translation-invariant operators \(A^{\pm n}\) \(B^{\pm n}\) and their linear combinations: \[A^{n} = \sum_{j=1}^{N}\sigma_{j}^{x}\left(\prod_{m=1}^{n-1}\sigma_{j+m}^{z} \right)\sigma_{j+n}^{x},\] \[A^{-n} = \sum_{j=1}^{N}\sigma_{j}^{y}\left(\prod_{m=1}^{n-1}\sigma_{j+m}^{z} \right)\sigma_{j+n}^{y},\] \[B^{n} = \sum_{j=1}^{N}\sigma_{j}^{x}\left(\prod_{m=1}^{n-1}\sigma_{j+m}^{ z}\right)\sigma_{j+n}^{y},\] \[B^{-n}= -\sum_{j=1}^{N}\sigma_{j}^{y}\left(\prod_{m=1}^{n-1}\sigma_{j+m}^ {z}\right)\sigma_{j+n}^{x},\] \[H^{n} = \frac{1}{2}(A^{n}+A^{-n}),\qquad Q^{n}=\frac{1}{2}(B^{n}+B^{-n}),\] \[R^{\pm n} = \frac{1}{2}(A^{n}-A^{-n})\pm\frac{i}{2}(B^{n}-B^{-n}), \tag{11}\] where \(n\geq 1\), and \[A^{0}=H^{0}=-\sum_{j=1}^{N}\sigma_{j}^{z},\qquad B^{0}=R^{0}=0. \tag{12}\] Note that \(H^{1}=H\) is the \(XX\) Hamiltonian (10) itself. It is easy to verify that operators \(H^{n}\) and \(Q^{n}\) are HIoMs, \[[H,H^{n}]=0,\qquad[H,Q^{n}]=0. \tag{13}\] Products of Pauli matrices entering eqs. (11), (12) are known as Pauli strings; we will use this term also for arbitrary linear combinations thereof. The linear subspace of operators spanned by Pauli strings (Pauli space, for short) has the dimension quadratic in \(N\), in contrast to the dimension \(4^{N}\) of the complete space of operators. Pauli strings are special in several respects. 
Their key property is as follows: the Pauli space is closed with respect to commutation [29; 34].4 Footnote 4: Of course, this property can be traced back to the fact that Pauli strings can be mapped to quadratic fermionic operators by means of the Jordan-Wigner transformation [26]. We, however, will not utilize this mapping. As a consequence, in the absence of dissipation (i.e. for \(\gamma=0\)) one can write down coupled Heisenberg equations in the Pauli space (_cf._ [30; 34; 14; 29]). Heisenberg equations for \(R_{t}^{n}\) acquire a particularly simple form [28]: \[\partial_{t}R_{t}^{n}= -2i\left(R_{t}^{n-1}+R_{t}^{n+1}\right),\quad n\geq 1. \tag{14}\] Recall that \(R_{t}^{0}=0\) and thus, in effect, it does not enter the above equation. Solving a linear system of differential equations essentially reduces to diagonalizing its matrix (if the latter is diagonalizable). The matrix of eq. (14) is very simple: its eigenvectors are plane waves. Standard calculations analogous to those in [30] (see the Supplement [28] for details) lead to \[R_{t}^{n}|_{\gamma=0}=\sum_{m=1}^{\infty}i^{n-m}\Big{(}J_{m-n}(4t)-(-1)^{n}J_{m+n}(4t)\Big{)}\,R^{m}, \tag{15}\] where \(J_{n\pm m}(4t)\) are Bessel functions. For further purposes, we explicitly indicate in the above formula that the dissipation is absent. The Heisenberg operators \(A_{t}^{n}\), \(B_{t}^{n}\) can be obtained from eq. (15) [28]. To illustrate real-time quench dynamics, we consider a translation-invariant out-of-equilibrium initial state \[|\mathrm{in}\rangle=|\mathrm{xxx}\ldots\mathrm{x}\rangle, \tag{16}\] where all spins are polarized along the \(x\) direction. A simple observable \(\sigma_{j}^{x}\sigma_{j+1}^{x}\) then evolves according to [28] \[\langle\sigma_{j}^{x}\sigma_{j+1}^{x}\rangle_{t}|_{\gamma=0}=(1/2)\big{(}1+J_{0}(4t)+J_{2}(4t)\big{)}, \tag{17}\] as shown in Fig. 1(a).

Figure 1: Dynamics of the dissipative \(XX\) model. (a) Expectation value of \(\langle\sigma_{j}^{x}\sigma_{j+1}^{x}\rangle_{t}\) after a quench from the initial state (16) in three cases: no dissipation, with the dissipation (7) and with the dissipation (22). In the latter two cases \(\gamma=0.1\). (b) Real and imaginary parts of the eigenvalue \(\lambda_{1}\) that dominates the dissipative dynamics in the case of localization, as a function of the dissipation strength \(\gamma\). Dashed orange line marks the critical value \(\gamma_{c}\simeq 1.577\). Left inset: Wannier-Stark localization of an eigenvector of the matrix of eq. (25). Shown are real (blue circles) and imaginary (magenta squares) parts of the 14'th eigenvector \(\mathcal{U}_{14}^{n}\). Right inset: the spectrum of the same matrix. The spectrum features a few pairs of complex conjugate eigenvalues and an infinite nearly equidistant sequence of real eigenvalues. Both insets show data for \(\gamma=0.3\).

_Dissipative \(XX\) model._ It turns out that there exists a broad class of dissipators that map Pauli strings to Pauli strings, i.e. leave the Pauli space invariant. As can be directly verified, this property holds for any dissipators built from Lindblad operators of the form \[L^{\alpha_{1},\ldots,\alpha_{m};\;\phi_{1},\ldots,\phi_{n}}_{i_{1},\ldots,\;i_{m};\;j_{1},\ldots,j_{n}}=\prod_{l=1}^{m}\sigma_{i_{l}}^{\alpha_{l}}\;\prod_{l=1}^{n}e^{i\phi_{l}\sigma_{j_{l}}^{z}}, \tag{18}\] where \(m\) and \(n\) are some nonnegative integers, \(\phi_{l}\in(0,\pi)\) and \(\alpha_{l}\in\{x,y\}\). This includes the case where Lindblad operators are Pauli strings themselves [11; 12].
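As a concrete illustration of this invariance, one can verify on a small chain that the dissipator built from a Lindblad operator of the form (18) maps a Pauli string back into the Pauli space. A minimal numerical sketch (the chain length, the particular member of the class (18) and the test string below are arbitrary illustrative choices, not taken from the Letter):

```python
import numpy as np
from functools import reduce
from itertools import combinations
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
N = 3                                   # chain length (open boundary, for simplicity)

def chain(ops):                         # tensor product over the N-site chain
    return reduce(np.kron, ops)

def string(j, k, a, b):                 # sigma_j^a (product of sigma^z) sigma_k^b, j < k
    ops = [I2] * N
    ops[j], ops[k] = a, b
    for m in range(j + 1, k):
        ops[m] = sz
    return chain(ops)

# Basis of the Pauli space: identity, sigma_j^z and all two-point strings.
basis = [np.eye(2**N, dtype=complex)]
basis += [chain([sz if m == j else I2 for m in range(N)]) for j in range(N)]
basis += [string(j, k, a, b) for j, k in combinations(range(N), 2)
          for a in (sx, sy) for b in (sx, sy)]
B = np.column_stack([b.ravel() for b in basis])

# A member of the class (18): L = sigma_1^x * exp(i*phi*sigma_2^z) with an arbitrary phi.
phi = 0.4
L = chain([sx, expm(1j * phi * sz), I2])

def dissipator(O, L, gamma=1.0):        # Heisenberg-picture dissipator, eq. (2)
    LdL = L.conj().T @ L
    return gamma * (L.conj().T @ O @ L - 0.5 * (LdL @ O + O @ LdL))

# The image of a Pauli string under D has zero residual outside the Pauli space.
image = dissipator(string(0, 1, sy, sx), L).ravel()
residual = image - B @ np.linalg.lstsq(B, image, rcond=None)[0]
print(np.allclose(residual, 0))         # -> True
```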
The corresponding coupled Lindblad equations for Pauli strings remain closed (i.e. do not involve operators other than Pauli strings) and can be solved essentially with the same effort as in the absence of dissipation. This general observation is the second main result of the present Letter. Below we provide two specific examples. XX model: \(\sigma^{z}\) dissipation.Consider a dissipative \(XX\) model with Lindblad operators of the form (7) (they belong to the class (18) with \(m=0\), \(n=1\) and \(\phi_{l}=\pi/2\)). This and related models have been extensively studied previously [14; 16; 35; 36; 37]. The model can be mapped to a fermionic model with a quadratic Hamiltonian and quadratic and Hermitian Lindblad operators [35]. A nonequilibrium steady state has been found in the case of a non-translation-invariant chain with biased boundaries [35; 36]. The GKSL equation in the Heisenberg representation has been solved in [14; 16]. The model has been mapped to non-Hermitian Hubbard model in [27] (see also [14; 37]). We reconsider this model in order to illustrate a simple exponential damping (5) and (9). Indeed, one observes that \[\mathcal{D}F^{\pm n}=-4\gamma F^{\pm n},\quad n\geq 1, \tag{19}\] where \(F^{\pm n}\) stands for any of the operators from eq. (11). As a consequence, the dynamics of HIoMs reads \[H_{t}^{n}=e^{-4\gamma t}H^{n},\qquad Q_{t}^{n}=e^{-4\gamma t}Q^{n},\quad n\geq 1, \tag{20}\] in accordance with eq. (5). Note that the polarization along \(z\) axes, \(-H^{0}\), still does not depend on time since \(\mathcal{D}H^{0}=0\). Further, it is clear from eqs. (15) and (19) that \(R_{t}^{n}\) satisfies the condition (8) and, therefore, evolves according to \[R_{t}^{n}=e^{-4\gamma t}\left(R_{t}^{n}|_{\gamma=0}\right). \tag{21}\] \(A_{t}^{n}\) and \(B_{t}^{n}\) can be found analogously. As an example, we show in Fig. 1(a) the damped dynamics of the observable \(\sigma_{j}^{x}\sigma_{j+1}^{x}\). XX model: \(\sigma^{x,y}\) dissipation.Now we turn to a different type of dissipator. The corresponding Lindbladians belong to the class (18) and are given by5 Footnote 5: In the present case the choice of Lindbladians is non-unique: an equivalent choice that leads to the same dissipator reads \(L_{2j-1}=\sigma_{j}^{-}\), \(L_{2j}=\sigma_{j}^{+}\). \[L_{2j-1}=\frac{1}{\sqrt{2}}\,\sigma_{j}^{x},\qquad L_{2j}=\frac{1}{\sqrt{2}}\, \sigma_{j}^{y},\qquad j=1,2,\ldots,N. \tag{22}\] In contrast to the previous case, these operators map onto highly nonlocal operators under the Jordan-Wigner transformation. One can verify that now \[\mathcal{D}A^{0}=-2\,\gamma\,A^{0},\quad\mathcal{D}F^{\pm n}=-2\,\gamma\,n\,F ^{\pm n},\quad n\geq 1, \tag{23}\] and eq. (5) implies \[H_{t}^{n}=e^{-2\gamma nt}H^{n},\quad Q_{t}^{n}=e^{-2n\gamma t}Q^{n},\quad n\geq 1, \tag{24}\] and \(H_{t}^{0}=e^{-2\gamma t}H^{0}\). Observe an additional factor \(n\) in exponents, as compared to eq. (20). This factor is quite remarkable: it signals that operators with larger support decay faster. Due to this factor the condition (8) is not fulfilled for observables in eq. (11) other than HIoMs, and the dissipation alters their dynamics in a more complex way than in eq. (21). To see this, we again focus on \(R_{t}^{n}\). The coupled GKSL equations for these operators read \[\partial_{t}R_{t}^{n}= -2i\left(R_{t}^{n-1}+R_{t}^{n+1}\right)-2\,\gamma\,n\,R_{t}^{n}, \quad n\geq 1. 
\tag{25}\] If \(\gamma\) were imaginary, these equations would describe a quantum particle hopping on a half-line in a constant electric field; it is well-known that such a particle experiences Wannier-Stark localization [38]. Remarkably, it turns out that the localization phenomenon remains when the value of the "electric field" is imaginary. This can be seen by examining the eigenvectors of the matrix of eq. (25). The \(l\)-th eigenvector \(\mathcal{U}_{l}^{n}\) and eigenvalue \(\lambda_{l}\) read [28; 39] \[\mathcal{U}_{l}^{n}=c_{l}J_{\nu_{l}+n}\left(-\frac{2i}{\gamma}\right),\qquad\lambda_{l}=2\,\gamma\,\nu_{l},\qquad l,n=1,2,\ldots, \tag{26}\] where \(c_{l}\) are normalization factors given in the Supplement [28] and \(\nu_{l}\) are solutions of the equation \(J_{\nu_{l}}(-2i/\gamma)=0\) (ordered by the descending real part). The spectrum thus obtained is shown in Fig. 1(b). It has the following features (see [39; 40; 41; 42; 43] and the Supplement [28] for details). There is a phase transition at a critical dissipation strength \(\gamma_{c}\simeq 1.5775\). If \(\gamma\geq\gamma_{c}\), all eigenvalues are real, otherwise there are \(n_{p}\) conjugate pairs of complex roots, where \(n_{p}\) is the integer part of the ratio \(\gamma_{c}/\gamma\). For \(l\gtrsim 2n_{p}+2\), roots are real and well approximated by \(\nu_{l}\simeq-l\). The eigenvectors (26) are exponentially localized in the vicinity of \(n\simeq l\). Following a standard procedure [28], we obtain \[R_{t}^{n}=\sum_{l,m=1}^{\infty}e^{\lambda_{l}t}\,\mathcal{U}_{l}^{n}\,\mathcal{U}_{l}^{m}\,R^{m}. \tag{27}\] In practice, thanks to localization, it suffices to take a few terms in the above sum. The dynamics of a particular observable is illustrated in Fig. 1 (a). Since the spectrum (26) is discrete, in the large time limit a single mode \(\mathrm{Re}\,e^{\lambda_{1}t}\) dominates the dynamics. The oscillatory part of this mode vanishes above the critical dissipation strength, as shown in Fig. 1 (b). Localization in the _Krylov space_ of operators explored by an observable in the Heisenberg representation has been recently discussed in the context of the recursion method and the growth of Krylov complexity in generic open systems [44, 45]. It is therefore plausible that the localization reported here is an exactly solvable example of a fairly generic phenomenon. We also note that a simpler version of the localization in the Krylov space has been found earlier in a spin system with a classical-like Ising Hamiltonian and quantum dissipation [21]. The solution (27),(26) of the dissipative dynamics in the model (10),(22) and the demonstration of the localization in the Krylov space constitute the third main result of the present Letter. _Discussion._ We have reported broad classes of open quantum systems where GKSL equations in the Heisenberg representation have exact solutions for distinguished observables or sets of observables. Our approach complements other approaches to exactly solvable open systems. Let us discuss several interesting interconnections. One popular approach is to map an open system described by a Lindbladian to a formally closed system with doubled degrees of freedom and a non-Hermitian Hamiltonian [46, 47, 48, 49, 37, 48]. An open system is regarded as integrable if the latter Hamiltonian is integrable, usually by means of the Bethe ansatz. Importantly, most of the systems considered here are not integrable in this sense. 
Remarkably, under the above mapping exact solutions of the GKSL equation are translated to exact solutions of the Schrodinger equation for the Hermitian counterpart of the corresponding Hamiltonian [27]. This way the results of the present work can be directly applied to the search for quantum many-body scars and more complex states breaking quantum ergodicity, which will be studied in a separate paper. Note that while the integrability in the above-mentioned sense implies that the spectrum of the Lindbladian can be found, calculating dynamical quantities typically remains a formidable challenge. This remark is also valid for other methods addressing the Lindbladian spectrum [49, 50, 51]. Our approach circumvents this difficulty. Much effort is being invested in the studies of nonequilibrium steady states (NESS) in systems with biased boundary dissipation [52, 53, 54, 11, 12, 36, 55, 56, 11, 57]. It would be interesting to apply our approach in such a setting. _Acknowledgments._ We thank V. Gritsev for useful discussions and I. Ermakov for the careful reading of the manuscript. The work of OL was supported by the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS" under the grant N\({}^{o}\) 22-1-2-55-1.
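As a numerical footnote to the localization discussed above (an illustration added here, not part of the Letter): truncating the half-infinite matrix of eq. (25) at a large cutoff and diagonalizing it reproduces the qualitative features quoted in the text, namely a finite number of complex-conjugate eigenvalue pairs below \(\gamma_c\), an almost equidistant sequence of real eigenvalues, and exponentially localized eigenvectors. The cutoff and the values of \(\gamma\) are arbitrary choices for the illustration.

```python
# Diagonalize the truncated matrix of eq. (25):
#   M[n, n] = -2*gamma*n,   M[n, n+1] = M[n, n-1] = -2i,   n = 1, ..., n_max.
# Thanks to the localization of the eigenvectors, the low-lying part of the
# spectrum is expected to be insensitive to the cutoff n_max.
import numpy as np

def spectrum(gamma, n_max=400):
    n = np.arange(1, n_max + 1)
    M = np.diag(-2.0 * gamma * n).astype(complex)
    off = -2j * np.ones(n_max - 1)
    M += np.diag(off, 1) + np.diag(off, -1)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)               # descending real part, as in the text
    return vals[order], vecs[:, order]

gamma_c = 1.5775                                 # critical strength quoted in the text
for gamma in (0.3, 1.0, 2.0):
    vals, vecs = spectrum(gamma)
    pairs = int(np.sum(vals.imag > 1e-6))        # one count per complex-conjugate pair
    lead = vals[0]
    print(f"gamma = {gamma}: lambda_1 = {lead.real:.4f} {lead.imag:+.4f}i, "
          f"complex pairs = {pairs} (integer part of gamma_c/gamma: {int(gamma_c // gamma)})")
    l = 14                                       # a representative localized eigenvector
    tail = np.linalg.norm(vecs[2 * l:, l - 1])   # weight far from its 'home' site n ~ l
    print(f"   weight of eigenvector #{l} beyond n = {2 * l}: {tail:.2e}")
```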
2302.11807
On the spectrum of the differential operators of even order with periodic matrix coefficients
In this paper, we consider the band functions, Bloch functions and spectrum of the self-adjoint differential operator L with periodic matrix coefficients. Conditions are found for the coefficients under which the number of gaps in the spectrum of the operator L is finite
O. A. Veliev
2023-02-23T06:34:46Z
http://arxiv.org/abs/2302.11807v1
# On the spectrum of the differential operators of even order with periodic matrix coefficients ###### Abstract In this paper, we consider the band functions, Bloch functions and spectrum of the self-adjoint differential operator \(L\) with periodic matrix coefficients. Conditions are found for the coefficients under which the number of gaps in the spectrum of the operator \(L\) is finite. Key Words: Band functions, Bloch functions, Spectrum. AMS Mathematics Subject Classification: 34L05, 34L20. In this paper, we investigate the band functions, Bloch functions and spectrum of the differential operator \(L\), generated in the space \(L_{2}^{m}(\mathbb{R})\) of vector-valued functions by formally self-adjoint differential expression \[(-i)^{2\nu}y^{(2\nu)}(x)+\sum\limits_{k=2}^{2\nu-2}P_{k}(x)y^{(2\nu-k)}(x), \tag{1}\] where \(\nu>1\) and \(P_{k}\left(x\right),\) for each \(k=2,3,...2\nu-2,\) is the \(m\times m\) matrix with the summable entries \(p_{k,i,j}\) satisfying \(p_{k,i,j}\left(x+1\right)=p_{k,i,j}\left(x\right)\) for all \(i=1,2,...m\) and \(j=1,2,...m.\) To explain the results of this paper, let us introduce some notations. It is well-known that (see [1, 2, 4]) the spectrum \(\sigma(L)\) of the operator \(L\) is the union of the spectra \(\sigma(L_{t})\) of the operators \(L_{t},\) for \(t\in(-\pi,\pi],\) generated in \(L_{2}^{m}[0,1]\) by (1) and the quasiperiodic conditions \[U_{p}(y):=y^{(p)}\left(1\right)-e^{it}y^{(p)}\left(0\right)=0,\text{ }p=0,1,...,(2\nu-1). \tag{2}\] For \(t\in(-\pi,\pi]\) the spectra \(\sigma(L_{t})\) of the operators \(L_{t}\) consist of the eigenvalues \[\lambda_{1}(t)\leq\lambda_{2}(t)\leq\cdot\cdot\cdot \tag{3}\] called the Bloch eigenvalues of \(L\). The eigenfunctions \(\Psi_{n,t}\) corresponding to the Bloch eigenvalues \(\lambda_{n}(t)\) are the Bloch functions of \(L\). In [10] the continuity of the band function \(\lambda_{n}:t\rightarrow\lambda_{n}(t)\) and Bloch function of the operator \(L\) was investigated. In Section 2, we improve these results as follows. We only assume that the entries of the coefficients of (1) are summable function, while in [10] it was assumed that they are bounded functions. In [10] the choice of the continuous Bloch functions was made in a non-constructive way. Here we constructively define the continuous Bloch functions. We prove that for each \(f\in L_{2}^{m}[0,1]\) the function \(P_{t}f(x)\) converges to \(P_{a}f\)\((x)\) uniformly with respect to \(x\in[0,1]\) as \(t\to a,\) while in [10] this convergence was done in the \(L_{2}\) norm, where \(P_{t}\) is the projection of \(L_{t}\) corresponding to the eigenvalue \(\lambda_{n}(t).\) Moreover, the methods used in Section 2 and [10] are completely different. Therefore, Section 2 can be considered as a continuation and completion of the paper [10]. In Section 3, we consider the spectrum of the operator \(L.\) Since \(\sigma(L)\) is the union of \(\sigma(L_{t})\) for \(t\in(-\pi,\pi],\) the spectrum of \(L\) consists of the sets \[I_{n}=\left\{\lambda_{n}(t):t\in(-\pi,\pi]\right\}, \tag{4}\] for \(n=1,2,....\) The set \(I_{n}\) is called the \(n\)th band of the spectrum. The band \(I_{n}\) tends to infinity as \(n\rightarrow\infty.\) The spaces between the bands \(I_{k}\) and \(I_{k+1},\) for \(k=1,2,...,\) are called the gaps in the spectrum of \(L.\) In Section 3, we prove that most of the positive real axis is overlapped by \(m\) bands of the spectrum and consider the gaps (see Theorems 5). 
Then we find a condition on the eigenvalues of the matrix \[C=\int_{0}^{1}P_{2}\left(x\right)dx \tag{5}\] for which the number of the gaps in the spectrum is finite (see Theorem 6). Note that in [6], we proved Theorem 6 under the assumption that the matrix \(C\) has three simple eigenvalues \(\mu_{j_{1}},\)\(\mu_{j_{2}}\) and \(\mu_{j_{3}}\) satisfying (30). In this paper, we prove Theorem 6 without any conditions on the multiplicity of these eigenvalues. The case \(\nu=1\) was investigated in [9]. Parts of the proofs of Theorems 5 and 6, similar to the proofs of the case \(\nu=1,\) are omitted and references to [9] are given. ## 1 On the band functions and Bloch functions In this section, first, we study the continuity of the band functions and Bloch functions of \(L\) with respect to the quasimomentum by using the following well-known statements (see for example [5] Chap. 3) formulated here as summary. **Summary 1**: _The eigenvalues of \(L_{t}\) are the roots of the characteristic determinant_ \[\Delta(\lambda,t)=\det(Y_{j}^{(p-1)}(1,\lambda)-e^{it}Y_{j}^{(p-1)}(0,\lambda ))_{j,p=1}^{2\nu}= \tag{6}\] \[e^{i2\nu mt}+f_{1}(\lambda)e^{i(2\nu m-1)t}+f_{2}(\lambda)e^{i(2\nu m-2)t}+... +f_{2\nu m-1}(\lambda)e^{it}+1\] _which is a polynomial of \(e^{it}\) with entire coefficients \(f_{1}(\lambda),f_{2}(\lambda),...,\) where_ \(Y_{1}(x,\lambda),Y_{2}(x,\lambda),\ldots,Y_{2\nu}(x,\lambda)\) _are the solutions of the matrix equation_ \[(-i)^{2\nu}Y^{(2\nu)}+P_{2}Y^{(2\nu-2)}+P_{3}Y^{(2\nu-3)}+...+P_{2\nu}Y= \lambda Y,\] _satisfying \(Y_{k}^{(j)}(0,\lambda)=O\) for \(j\neq k-1\) and \(Y_{k}^{(k-1)}(0,\lambda)=I\). Here, \(O\) and \(I\) are \(m\times m\) zero and identity matrices, respectively. The Green's function of \(L_{t}-\lambda I\) is defined by formula_ \[G(x,\xi,\lambda,t)=g(x,\xi,\lambda)-\frac{1}{\Delta(\lambda,t)}\sum_{j,p=1}^{ 2\nu}Y_{j}(x,\lambda)V_{jp}(x,\lambda)U_{p}(g), \tag{7}\] _where \(g\) does not depend on \(t\) and \(V_{jp}\) is the transpose of that \(m\)th order matrix consisting of the cofactor of the element \(U_{p}(Y_{j})\) in the determinant \(\det(U_{p}(Y_{j}))_{j,p=1}^{2\nu}\). Hence, the entries of the matrices \(V_{jp}(x,\lambda)\) and \(U_{p}(g)\) either do not depend on \(t\) or have the forms \(u^{(p)}(1,\lambda)-e^{it}u^{(p)}(0,\lambda)\) and \(h(1,\xi,\lambda)-e^{it}h(0,\xi,\lambda)\) respectively, where the functions \(u\) and \(h\) do not depend on \(t.\)_ Now, using this summary, we prove that for each \(n\) the function \(\lambda_{n}\) defined in (3) is continuous at each point \(a\in(-\pi,\pi].\) For this we introduce the following notations. **Notation 1**: _Let \(\Lambda_{1}(a)<\Lambda_{2}(a)<\cdots\) be the distinct eigenvalues of \(L_{a}\) with the multiplicities \(k_{1},k_{2},...,\), respectively. For each \(n\) there exists \(p\) such that \(n\leq k_{1}+k_{2}+\cdots+k_{p}.\) These notations with the notation (3) imply that \(\lambda_{s_{j-1}+1}(a)=\lambda_{s_{j-1}+2}(a)=\cdots=\lambda_{s_{j}}(a)= \Lambda_{j}(a),\) where \(s_{0}=0,\)\(s_{j}=k_{1}+k_{2}+\cdots+k_{j}\) for \(j=1,2,...,p\) and \(n\leq s_{p}.\) Since \(L\) is below-bounded operator, there exists \(b\in\mathbb{R}\) such that \(\lambda_{1}(t)>b\) for all \(t\in(-\pi,\pi].\)_ Now, we are ready to prove the following theorem. 
**Theorem 1**: \((a)\) _For every \(r>0\) satisfying the inequality_ \[r<\frac{1}{2}\min_{j=1,2,...,p}\left(\Lambda_{j+1}(a)-\Lambda_{j}(a)\right) \tag{8}\] _there exists \(\delta>0\) such that the operator \(L_{t}\), for \(t\in(a-\delta,a+\delta),\) has \(k_{j}\) eigenvalues in the interval \((\Lambda_{j}(a)-r,\Lambda_{j}(a)+r)\), where \(j=1,2,...,p\) and \(a\in(-\pi,\pi].\)_ \((b)\) _The eigenvalues of \(L_{t}\) for \(t\in(a-\delta,a+\delta)\) lying in \((\Lambda_{j}(a)-r,\Lambda_{j}(a)+r)\) are \(\lambda_{s_{j-1}+1}(t),\lambda_{s_{j-1}+2}(t),...,\lambda_{s_{j}}(t),\) where \(s_{j}\) is defined in Notation 1._ **Proof.**\((a)\) By (8), the circle \(D_{j}(a)=\{z\in\mathbb{C}:|z-\Lambda_{j}(a)|=r\}\) belongs to the resolvent set of the operator \(L_{a}.\) This means that \(\Delta(\lambda,a)\neq 0\) for each \(\lambda\in D_{j}(a).\) Since \(\Delta(\lambda,a)\) is a continuous function on the compact set \(D_{j}(a),\) there exists \(c>0\) such that \(|\Delta(\lambda,a)|>c\) for all \(\lambda\in D_{j}(a).\) Moreover, by (6), \(\Delta(\lambda,t)\) is a polynomial of \(e^{it}\) with entire coefficients. Therefore, there exists \(\delta>0\) such that \[|\Delta(\lambda,t)|>c/2 \tag{9}\] for all \(t\in(a-\delta,a+\delta)\) and \(\lambda\in D_{j}(a).\) This implies that \(D_{j}(a)\) belongs to the resolvent set of \(L_{t}\) for all \(t\in(a-\delta,a+\delta).\) On the other hand, it is well-known that \[\left(L_{t}-\lambda I\right)^{-1}f(x)=\int_{0}^{1}G(x,\xi,\lambda,t)f(\xi)d\xi, \tag{10}\] where \(G(x,\xi,\lambda,t)\) is the Green's function of \(L_{t}\) defined in (7). Moreover, it easily follows from Summary 1 and (9) that there exists \(M\) such that \[|G(x,\xi,\lambda,t)-G(x,\xi,\lambda,a)|\leq M\left|t-a\right| \tag{11}\] for all \(x\in[0,1],\)\(\xi\in[0,1],\)\(\lambda\in D_{j}(a)\) and \(t\in(a-\delta,a+\delta).\) Therefore, using (10) and Summary 1, one can easily verify that \(\left(L_{t}-\lambda I\right)^{-1}\) for \(\lambda\in D_{j}(a)\) and the projection \[P_{t}=\int_{D_{j}(a)}\left(L_{t}-\lambda I\right)^{-1}d\lambda \tag{12}\] continuously depend on \(t\in(a-\delta,a+\delta).\) This implies that the operator \(L_{t}\) for each \(t\in(a-\delta,a+\delta)\) has \(k_{j}\) eigenvalues inside \(D_{j}(a)\) and, therefore, in the interval \(\left(\Lambda_{j}(a)-r,\Lambda_{j}(a)+r\right),\) since \(L_{a}\) has \(k_{j}\) eigenvalues (counting multiplicity) inside \(D_{j}(a).\) \((b)\) Since \(L_{a}\) has no eigenvalues in the intervals \([b,\Lambda_{1}(a)-r]\) and \([\Lambda_{j}(a)+r,\Lambda_{j+1}(a)-r]\) for \(j=1,2,...,p\), arguing as above we obtain that \(L_{t}\) for \(t\in(a-\delta,a+\delta)\) also has no eigenvalues in these closed intervals. Therefore, the eigenvalues of \(L_{t}\) for \(t\in(a-\delta,a+\delta)\) lying in \((\Lambda_{j}(a)-r,\Lambda_{j}(a)+r)\) are \(\lambda_{s_{j-1}+1}(t),\lambda_{s_{j-1}+2}(t),...,\lambda_{s_{j}}(t)\) for \(j=1,2,...,p.\) Now, using these statements, we prove the main results of this section. **Theorem 2**: \((a)\) _For each \(n\) the function \(\lambda_{n}\) defined in (3) is continuous at \(a\in(-\pi,\pi].\)_ \((b)\) _For each \(f\in L_{2}^{m}[0,1]\) we have \(\left\|P_{t}f-P_{a}f\right\|_{\infty}\to 0\) as \(t\to a\), where_ \[\left\|f\right\|_{\infty}=\sup_{x\in[0,1]}\left|f(x)\right|.\] **Proof.**\((a)\) Consider any sequence \(\left\{\left(\lambda_{n}(t_{k}),t_{k}\right):k\in\mathbb{N}\right\}\) such that \(t_{k}\in(a-\delta,a+\delta)\) for all \(k\in\mathbb{N}\) and \(t_{k}\to a\) as \(k\rightarrow\infty,\) where \(\delta\) is defined in Theorem 1. 
Let \(\left(\lambda,a\right)\) be any limit point of the sequence \(\left\{\left(\lambda_{n}(t_{k}),t_{k}\right):k\in\mathbb{N}\right\}.\) Since \(\Delta\) is a continuous function with respect to the pair \(\left(\lambda,t\right)\) and \(\Delta\left(\lambda_{n}(t_{k}),t_{k}\right)=0\) for all \(k,\) we have \(\Delta(\lambda,a)=0.\) This means that \(\lambda\) is an eigenvalue of \(L_{a}\) lying in \(\left(\Lambda_{j}(a)-r,\Lambda_{j}(a)+r\right)\). Hence, by Theorem 1\((b),\) we have \(\lambda=\lambda_{s_{j-1}+1}(a)=\lambda_{s_{j-1}+2}(a)=...=\lambda_{s_{j}}(a),\) where \(n\in[s_{j-1}+1,s_{j}].\) Thus, \(\lambda_{n}(t_{k})\rightarrow\lambda_{n}(a)\) as \(k\rightarrow\infty\) for any sequence \(\left\{t_{k}:k\in\mathbb{N}\right\}\) converging to \(a\) and \(\lambda_{n}\) is continuous at \(a.\) \((b)\) Using (10)-(12) we obtain the following estimate \[\left|P_{t}f(x)-P_{a}f(x)\right|=\left|\int_{D_{j}(a)}\int_{0}^{1}\left(G(x,\xi,\lambda,t)-G(x,\xi,\lambda,a)\right)f(\xi)d\xi d\lambda\right|\leq\] \[2\pi rM\left|t-a\right|\int_{0}^{1}\left|f(\xi)\right|d\xi\] for all \(x\in[0,1].\) This estimate completes the proof of \((b)\). Note that in [9], the continuity of the band function for the case \(\nu=1\) was proved by using the perturbation theory from [3]. In [10], we investigated the differential operator \(T\), generated in the space \(L_{2}^{m}(\mathbb{R}^{d})\) by a formally self-adjoint differential expression of order \(2\nu\) with matrix coefficients, whose entries are periodic with respect to the lattice \(\Omega,\) where \(d\geq 1\). Note that the band functions \(\lambda_{1}(t)\leq\lambda_{2}(t)\leq\cdots\) and Bloch functions \(\Psi_{1,t},\Psi_{2,t},...\) of \(T\) are the eigenvalues and normalized eigenfunctions of the operator \(T_{t}\) generated in \(L_{2}^{m}(F)\) by the same differential expression and the quasiperiodic conditions \[u(x+\omega)=e^{i\left(t,\omega\right)}u(x),\text{ }\forall\omega\in\Omega,\] where \(t\in F^{\star},\)\(\left\langle\cdot,\cdot\right\rangle\) is the inner product in \(\mathbb{R}^{d},\)\(F\) and \(F^{\star}\) are the fundamental domains of the lattice \(\Omega\) and dual lattice \(\Gamma\), respectively. It was proved in [10] that the Bloch eigenvalues and corresponding projections of the differential operator \(T_{t}\) depend continuously on \(t\in F^{\star}.\) Moreover, if \(\lambda_{n}(a)\) is a simple eigenvalue, then the eigenvalues \(\lambda_{n}(t)\) are simple in some neighborhood of \(a\) and the corresponding eigenfunctions \(\Psi_{n,t}\) can be chosen so that \[\left\|\Psi_{n,t}-\Psi_{n,a}\right\|\to 0\] as \(t\to a\). In [10], the Bloch function \(\Psi_{n,t}\) was chosen so that \[\arg(\Psi_{n,t},\Psi_{n,a})=0 \tag{13}\] which is not a constructive choice. Now, instead of (13), we constructively define the normalized eigenfunctions \(\Psi_{n,t}\) that depend continuously on \(t.\) First of all, let us note the following obvious statement. If \(\lambda_{n}(t)\) is a simple eigenvalue, then the set of all normalized eigenfunctions corresponding to \(\lambda_{n}(t)\) is \(\left\{e^{i\alpha}\Psi_{n,t}:\alpha\in[0,2\pi)\right\},\) where \(\Psi_{n,t}\) is a fixed normalized eigenfunction. 
If the eigenvalue \(\lambda_{n}(a)\) is simple, then there exists a neighborhood \(U(a)\) of the point \(a\in F^{\star}\) such that for \(t\in U(a)\) the eigenvalue \(\lambda_{n}(t)\) is also simple and the equality \[\int_{D}(T_{t}-\lambda I)^{-1}e^{i\left(a,x\right)}e_{k}d\lambda=(e^{i\left(a,x\right)}e_{k},e^{i\alpha}\Psi_{n,t})e^{i\alpha}\Psi_{n,t}=(e^{i\left(a,x \right)}e_{k},\Psi_{n,t})\Psi_{n,t} \tag{14}\] is true for any choice of the normalized eigenfunction \(\Psi_{n,t},\) where \(D\) is a closed curve enclosing only the eigenvalue \(\lambda_{n}(t),\) and \(e_{1},e_{2},...,e_{m}\) is the standard basis of \(\mathbb{C}^{m}.\) Since the projection operator onto the subspace corresponding to the eigenvalue \(\lambda_{n}(t)\) depends continuously on \(t\in U(a),\) and the norm is a continuous function, it follows from (14) that \(\left|\left(\Psi_{n,t},e^{i\left(a,x\right)}e_{k}\right)\right|\) is also a continuous function with respect to \(t\) in \(U(a)\) for any normalized eigenfunction \(\Psi_{n,t}.\) This and the inequality \[\left|\left|(\Psi_{n,t},e^{i\langle t,x\rangle}e_{k})\right|-\left|(\Psi_{n,t},e ^{i\langle a,x\rangle}e_{k})\right|\right|\leq\left\|e^{i\langle t,x\rangle}-e ^{i\langle a,x\rangle}\right\| \tag{15}\] give the following obvious statement. **Proposition 1**: _If \(\lambda_{n}(a)\) is a simple eigenvalue, then the function \(\left|(\Psi_{n,t},e^{i\langle t,x\rangle}e_{k})\right|\) does not depend on choice of the normalized eigenfunction \(\Psi_{n,t}\) and is continuous in some neighborhood \(U(a)\) of \(a\), where \(U(a)\subset F^{*}\) and any set \(E\) satisfying the conditions:_ \((a)\)__\(\left\{\gamma+t:t\in E,\text{ }\gamma\in\Gamma\right\}=\mathbb{R}^{d}\) _and_ \((b)\) _if_ \(t\in E,\) _then_ \(\gamma+t\notin E\) _for any_ \(\gamma\in\Gamma\backslash\left\{0\right\},\)__ _can be used as the fundamental domain \(F^{*}\) of \(\Gamma.\)_ Since \(\left\{e^{i\langle\gamma+t,x\rangle}e_{k}:\gamma\in\Gamma,\text{ }k=1,2,...,m\right\}\) is an orthonormal basis of \(L_{2}^{m}(F)\) there exist \(\gamma\in\Gamma,\)\(k\in\left\{1,2,...,m\right\}\) and \(\varepsilon>0\) such that \[\left|(\Psi_{n,a},e^{i\langle\gamma+a,x\rangle}e_{k})\right|>\varepsilon. \tag{16}\] For example if \(\left|(\Psi_{n,a},e^{i\langle\gamma+a,x\rangle}e_{k})\right|=\max_{\beta\in \Gamma}\left|(\Psi_{n,a},e^{i\langle\beta+a,x\rangle}e_{k})\right|,\) then (16) holds. Note that if \(F^{*}\) is a fundamental domain of \(\Gamma,\) then \(\gamma+F^{*}\) for any \(\gamma\in\Gamma\) and even \(b+F^{*}\) for any \(b\in\mathbb{R}^{d}\) is a fundamental domain of the lattice \(\Gamma.\) Therefore, without loss of generality and for the simplicity of the notation, we will use \(a\) instead of \(\gamma+a\) and assume that \(a\) is an interior point of \(F^{*}.\) Therefore, by Proposition 1 and (16) there exists a neighborhood \(U(a)\) of \(a\) such that \[\left|(\Psi_{n,t},e^{i\langle t,x\rangle}e_{k})\right|>\varepsilon \tag{17}\] for all \(t\in U(a)\) and the normalized eigenfunction \(\Psi_{n,t}\) can be chosen so that \[\arg(\Psi_{n,t},e^{i\langle t,x\rangle}e_{k})=0. 
\tag{18}\] Then \((\Psi_{n,t},e^{i\langle t,x\rangle}e_{k})=\left|(\Psi_{n,t},e^{i\langle t,x\rangle}e_{k})\right|,\) and hence by Proposition 1, \((\Psi_{n,t},e^{i\langle t,x\rangle}e_{k})\) depends continuously on \(t\) in some neighborhood of \(a.\) From this, taking into account (15) and (17), it follows that \[\frac{(\Psi_{n,t},e^{i\langle a,x\rangle}e_{k})}{(\Psi_{n,a},e^{i\langle a,x\rangle}e_{k})}=:\alpha(t)\to 1\] as \(t\to a.\) Therefore, using the continuity of the right side of (14) and then (17) we obtain \[\left\|(e^{i\langle a,x\rangle}e_{k},\Psi_{n,t})\Psi_{n,t}-(e^{i\langle a,x\rangle}e_{k},\Psi_{n,a})\Psi_{n,a}\right\|\to 0\] and \(\left\|\alpha(t)\Psi_{n,t}-\Psi_{n,a}\right\|\to 0\) as \(t\to a.\) Thus, we have \[\left\|\Psi_{n,t}-\Psi_{n,a}\right\|\leq\left\|(1-\alpha(t))\Psi_{n,t}\right\|+\left\|\alpha(t)\Psi_{n,t}-\Psi_{n,a}\right\|\to 0 \tag{19}\] as \(t\to a.\) In other words, the following statement is proved. **Proposition 2**: _If \(\lambda_{n}(a)\) is a simple eigenvalue and (17) holds, then the normalized eigenfunction \(\Psi_{n,t}\) satisfying (18) depends continuously on \(t\) in \(U(a).\)_ **Remark 1**: _Note that the constructive choice (18) is also used in [7, 8]. Namely, in [8], for the case when the right side of (1) is equal to \(-\Delta+q\) and \(m=1\) (for the Schrodinger operator), a set \(B\) was constructed in the neighborhood of the sphere \(\left\{t\in\mathbb{R}^{d}:\left|t\right|=\rho\right\},\) where \(\rho\) is a large number, such that if \(t\in B,\) then there exists a unique eigenvalue \(\lambda_{n(t)}(t)\) that is simple and close to \(\left|t\right|^{2}\) and the corresponding normalized eigenfunction \(\Psi_{n,t}\) satisfies the asymptotic formula_ \[\left|(\Psi_{n,t},e^{i(t,x)})\right|^{2}=1+O(\rho^{-\delta})>\tfrac{1}{2} \tag{20}\] _for some \(\delta>0.\) Moreover, the normalized eigenfunction was chosen so that (18) holds (see [8], p. 55). In [8] the choice (18) was made in order to write (20) in the elegant form \(\Psi_{n,t}=e^{i(t,x)}+O(\rho^{-\delta})\). However, in Proposition 2 we show that the choice (18) ensures the continuity of \(\Psi_{n,t}.\) Note that Proposition 2 is also new for the Schrodinger operator. However, Proposition 1 for the Schrodinger operator is obvious, since it follows directly from the continuity of the function \(e^{i(t,x)},\) the projection operator, and the norm. 
Proposition 1 and (20) were used in [8] to prove that \(n(t)=n(a)\) for all \(t\in U(a)\) (see (5.11) of [8]), where \(U(a)\subset B\) and the condition \((b)\) of Proposition 1 holds (see Lemma 5.1\((b)\) of [8]), i.e., \(U(a)\subset(B\cap F^{\star})\) for some fundamental domain \(F^{\ast}.\)_ Now let us return to the study of \(L.\) **Theorem 3**: _If \(\lambda_{n}(a)\) is a simple eigenvalue, then there exists \(\beta>0\) such that the eigenvalues \(\lambda_{n}(t)\) for \(\left|t-a\right|<\beta\) are also simple eigenvalues and the normalized eigenfunctions \(\Psi_{n,t}\) of \(L_{t}\) satisfying (17) and (18) for \(\left\langle t,x\right\rangle=tx\) converges to \(\Psi_{n,a}(x)\) uniformly with respect to \(x\in[0,1]\) as \(t\to a.\)_ **Proof.** If \(\lambda_{n}(a)\) is a simple eigenvalue, then \(\frac{d\Delta(\lambda,a)}{d\lambda}\neq 0\) for \(\lambda=\lambda_{n}(a).\) Then by Summary 1, Theorem 2\((b)\) and (14) there exists \(\beta>0\) such that \(\lambda_{n}(t)\) is also a simple eigenvalue for \(\left|t-a\right|<\beta\) and \[\left\|(e^{iax}e_{k},\Psi_{n,t})\Psi_{n,t}-(e^{iax}e_{k},\Psi_{n,a})\Psi_{n,a} \right\|_{\infty}\to 0 \tag{21}\] as \(t\to a.\) Moreover, from (21) and (17) it follows that there exist \(\varepsilon>0\) and \(M\) such that \(\left|\Psi_{n,t}(x)\right|\leq M\) for all \(\left|t-a\right|<\varepsilon\) and \(x\in[0,1].\) Therefore, replacing \(L_{2}\) norm \(\left\|\cdot\right\|\) everywhere by the norm \(\left\|\cdot\right\|_{\infty}\) and repeating the proof of (19) we obtain the proof of the theorem. ## 2 On the spectrum of \(L\) In this section, we study the spectrum of \(L\). For this we consider the operator \(L_{t}(\varepsilon,C)\) generated by the differential expression \[L_{t}(\varepsilon,C)y=(-i)^{2\nu}y^{(2\nu)}+Cy^{(2\nu-2)}+\varepsilon\left(( P_{2}-C)y^{(2\nu-2)}+\sum\limits_{l=3}^{2\nu}P_{l}(x)y^{(2\nu-l)}\right)\] and boundary conditions (2), where \(\varepsilon\in[0,1]\), and \(C\) is defined in (5). We consider the operator \(L_{t}(\varepsilon,C)\) as perturbation of \(L_{t}(C)\) by \(L_{t}(\varepsilon,C)-L_{t}(C)\), where \(L_{t}(C)\) is the operator generated by the expression \[(-i)^{2\nu}y^{(2\nu)}(x)+Cy^{(2\nu-2)}(x) \tag{22}\] and boundary condition (2). Therefore, first of all, let us analyze the eigenvalues and eigenfunction of the operator \(L_{t}(C)\). We assume that \(C\) is the Hermitian matrix. Then \(L_{t}(C)\) is the self-adjoint operator, since the expression (22) and boundary conditions (2) are self-adjoint. The distinct eigenvalues of \(C\) are denoted by \(\mu_{1}<\mu_{2}<...<\mu_{p}\). If the multiplicity of \(\mu_{j}\) is \(m_{j},\) then \(m_{1}+m_{2}+...+m_{p}=m\). Let \(u_{j,1},\)\(u_{j,2},...,u_{j,m_{j}}\) be the normalized eigenvectors of the matrix \(C\) corresponding to the eigenvalue \(\mu_{j}.\) The functions \(\Phi_{k,j,s,t}(x)=u_{j,s}e^{i(2\pi k+t)x}\) for \(s=1,2,...,m_{j}\) are the eigenfunctions of \(L_{t}(C)\) corresponding to the eigenvalue \[\mu_{k,j}(t)=\left(2\pi k+t\right)^{2\nu}+\mu_{j}\left(2\pi k+t\right)^{2\nu-2}, \tag{23}\] since \[L_{t}(C)\Phi_{k,j,s,t}(x)=\mu_{k,j}(t)\Phi_{k,j,s,t}(x). \tag{24}\] Now we consider the large eigenvalues of \(L_{t}(\varepsilon,C)\). 
In the forthcoming inequalities we denote by \(c_{1},\)\(c_{2},...\) the positive constants that do not depend on \(t\in(-\pi,\pi]\) and \(\varepsilon\in[0,1].\) **Theorem 4**: _There exists a positive number \(N\) such that the eigenvalues of \(L_{t}(\varepsilon,C)\) lying in \((\mu_{N,1}(t)-\varepsilon_{N},\infty)\) lie in the \(\varepsilon_{k}\) neighborhood \(U_{\varepsilon_{k}}(\mu_{k,j}(t)):=(\mu_{k,j}(t)-\varepsilon_{k},\mu_{k,j}(t)+\varepsilon_{k})\) of \(\mu_{k,j}(t)\) for \(|k|\geq N\) and \(j=1,2,...,p,\) where \(\varepsilon_{k}=c_{1}\left(\mid k^{-1}\ln|k|\mid+q_{k}\right)\left(2\pi k\right)^{2\nu-2}\) and_ \[q_{k}=\max\left\{\left|\int_{[0,1]}p_{2,s,r}\left(x\right)e^{-2\pi inx}dx\right|:s,r=1,2,...,m;\ n=\pm 2k,\pm(2k+1)\right\}.\] _Moreover, for each \(|k|\geq N\) and \(j=1,2,...,p\), there exists an eigenvalue of \(L_{t}(\varepsilon,C)\) lying in \(U_{\varepsilon_{k}}(\mu_{k,j}(t)).\)_ **Proof.** Let \(\lambda\) be an eigenvalue of \(L_{t}(\varepsilon,C)\) lying in \((\mu_{N,1}(t)-\varepsilon_{N},\infty)\) and \(\mu_{k,j}(t)\) be an eigenvalue of \(L_{t}(C)\) closest to \(\lambda.\) We prove that \(\lambda\in U_{\varepsilon_{k}}(\mu_{k,j}(t)).\) For this we use the formula \[(\lambda-\mu_{k,j}(t))(\Psi_{k,j,s,t},\Phi)=\varepsilon\left(((P_{2}-C)\Psi_{k,j,s,t}^{(2\nu-2)},\Phi)+\sum\limits_{l=3}^{2\nu}(P_{l}\Psi_{k,j,s,t}^{(2\nu-l)},\Phi)\right) \tag{25}\] which can be obtained from \(L_{t}(\varepsilon,C)\Psi=\lambda\Psi\) by multiplying both sides by \(\Phi_{k,j,s,t}(x)\) and using (24), where \(\Psi\) is a normalized eigenfunction of \(L_{t}(\varepsilon,C)\) corresponding to the eigenvalue \(\lambda.\) It was proved in [6] that there exists \(c_{2}\) such that \[\left|((P_{2}-C)\Psi_{k,j,s,t}^{(2\nu-2)},\Phi)+\sum\limits_{l=3}^{2\nu}(P_{l}\Psi_{k,j,s,t}^{(2\nu-l)},\Phi)\right|\leq c_{2}\left(\mid\frac{\ln|k|}{k}\mid+q_{k}\right)\left(2\pi k\right)^{2\nu-2} \tag{26}\] for \(|k|\geq N\) (see (51) and (54) of [6]). Moreover, by Lemma 4 of [6], for each eigenfunction \(\Psi_{k,j,s,t}\) of \(L_{t}(C)\) such that \(|k|\geq N\) there exists an eigenfunction \(\Phi\) of \(L_{t}(\varepsilon,C)\) satisfying \[\left|(\Psi_{k,j,s,t},\Phi)\right|>c_{3} \tag{27}\] and conversely for each eigenfunction \(\Phi\) corresponding to an eigenvalue of \(L_{t}(\varepsilon,C)\) lying in \((\mu_{N,1}(t)-\varepsilon_{N},\infty)\) there exists \(\Psi_{k,j,s,t}\) satisfying (27). Therefore, using (26) and (27) in (25) we get the proof of the theorem. Now, using Theorem 4 and repeating the proof of Theorem 2.3, Corollary 2.4, and Theorem 2.5 of [9] we obtain the following theorem about the bands and gaps. **Theorem 5**: \((a)\) _There exists a positive integer \(N_{1}\) such that if \(s\geq N_{1}\) then the interval \([a(s),b(s)]\) is contained in each of the bands \(I_{sm+1},I_{sm+2},...,I_{sm+m},\) where_ \[a(s)=(s\pi)^{2\nu}+\mu_{p}\left(\pi s\right)^{2\nu-2}+\varepsilon(s),\ b(s)=(s\pi+\pi)^{2\nu}+\mu_{1}\left(s\pi+\pi\right)^{2\nu-2}-\varepsilon(s),\] \(I_{n}\) _is defined in (4), \(\varepsilon(s)=\varepsilon_{k}\) if \(s\in\{2k,2k+1\}\) and \(\varepsilon_{k}\) is defined in Theorem 4._ \((b)\) _Let \((\alpha,\beta)\) be a spectral gap of \(L\) such that \(\alpha>b(N_{1}).\) Then \((\alpha,\beta)\) is contained in the interval \(U(s):=(b(s),a(s+1))\) for some \(s\geq N_{1}\). 
Moreover, the spectral gap \((\alpha,\beta)\subset U(s)\) lies between the bands \(I_{sm+m}\) and \(I_{sm+m+1}\) and its length does not exceed \(2\max\left\{\varepsilon(s),\varepsilon(s+1)\right\}.\)_ For a detailed study of \(\sigma(L)\), by using the asymptotic formulas, we need to consider the multiplicities of the eigenvalues of \(L_{t}(C)\) and the exceptional points of the spectrum of \(L(C)\). The multiplicity of \(\mu_{k,j}(t)\) is \(m_{j}\) if \(\mu_{k,j}(t)\neq\mu_{n,i}(t)\) for all \((n,i)\neq(k,j).\) The multiplicity of \(\mu_{k,j}(t)\) is changed, that is, \(\mu_{k,j}(t)\) is an exceptional point of \(\sigma(L(C))\) if \(\mu_{k,j}(t)=\mu_{n,i}(t)\) for some \((n,i)\neq(k,j).\) To consider the exceptional points of \(\sigma(L(C))\) and \(\sigma(L)\) we use the notation \(a_{k}\asymp b_{k}\) which means that there exist constants \(c_{4},\)\(c_{5},\)\(c_{6}\) such that \(c_{4}|a_{k}|<|b_{k}|<c_{5}|a_{k}|\) for all \(|k|>c_{6}.\) It follows from (23) that if \(t\in[-\frac{\pi}{2},\frac{3\pi}{2}),\) then \(\mu_{k,j}(t)-\mu_{k,i}(t)\asymp k^{2\nu-2}\) for \(j\neq i\) and \(|\mu_{k,j}(t)-\mu_{n,i}(t)|\geq d_{k}\) for \(n\neq k,-k,-(k+1),\) where \(d_{k}\asymp k^{2\nu-1}.\) Thus, the large eigenvalue \(\mu_{k,j}(t)\) for \(t\in[-\frac{\pi}{2},\frac{3\pi}{2})\) may become an exceptional Bloch eigenvalue of \(L(C)\) if at least one of the following equalities holds \[\mu_{k,j}(t)=\mu_{-k,i}(t),\ \mu_{k,j}(t)=\mu_{-k-1,i}(t). \tag{28}\] Therefore we need to consider the points \(t\in[-\frac{\pi}{2},\frac{3\pi}{2})\) for which the equalities in (28) do not hold. Moreover, to prove that the eigenvalues of \(L_{t}\) lying in the \(\varepsilon_{k}=o(k^{2\nu-2})\) neighborhood of \(\mu_{k,j}(t)\) (see Theorem 4) do not coincide with the eigenvalues lying in the \(\varepsilon_{-k}\) and \(\varepsilon_{-k-1}\) neighborhoods of \(\mu_{-k,i}(t)\) and \(\mu_{-k-1,i}(t)\) we consider the points \(t\in[-\frac{\pi}{2},\frac{3\pi}{2})\) for which \[|f(t)|>\varepsilon_{k}+\varepsilon_{-k},\ |g(t)|>\varepsilon_{k}+\varepsilon_{-k-1}, \tag{29}\] where \(f(t)=\mu_{k,j}(t)-\mu_{-k,i}(t),\)\(g(t)=\mu_{k,j}(t)-\mu_{-k-1,i}(t).\) Using (23) and the binomial expansion of \((a+b)^{n}\) for \(n=2\nu\) and \(n=2\nu-2\) we obtain \[f(t)=(2\pi k)^{2\nu-2}(8\nu k\pi t+\mu_{j}-\mu_{i})+O(k^{2\nu-3}),\ f\left(\frac{\mu_{i}-\mu_{j}}{8\nu k\pi}\right)=O(k^{2\nu-3}).\] On the other hand, one can easily verify that \(f^{{}^{\prime}}(t)\asymp k^{2\nu-1}.\) Therefore, there exists \(\delta_{k}=o(k^{-1})\) such that the first inequality of (29) holds if \(t\) does not belong to the interval \[\left(\frac{\mu_{i}-\mu_{j}}{8\nu k\pi}-\delta_{k},\frac{\mu_{i}-\mu_{j}}{8\nu k\pi}+\delta_{k}\right).\] In the same way we prove that if \(t\) does not belong to the interval \[\left(\pi+\frac{\mu_{i}-\mu_{j}}{4\pi\nu(2k+2\nu-1)}-\delta_{k},\pi+\frac{\mu_{i}-\mu_{j}}{4\pi\nu(2k+2\nu-1)}+\delta_{k}\right),\] then the second inequality of (29) holds. Therefore, using (29) and Theorem 4 and repeating the proof of Corollary 2.8 and Theorem 2.10 of [9] we obtain the following theorem. 
**Theorem 6**: \((a)\) _There exist \(N_{2}>N_{1}\) and \(\gamma_{k}=o(k^{2\nu-2})\) such that the spectral gap \((\alpha,\beta)\) defined in Theorem 5 and lying in \(U(k)\) for \(k>N_{2}\) is contained in the intersection of the sets \(S(1,k),S(2,k),...,S(p,k),\) where_ \[S(j,k)=\bigcup_{i=1,2,...,p}\left((\pi k)^{2\nu}+\frac{\mu_{i}+\mu_{j}}{2}\,(\pi k)^{2\nu-2}-\gamma_{k},(\pi k)^{2\nu}+\frac{\mu_{i}+\mu_{j}}{2}\,(\pi k)^{2\nu-2}+\gamma_{k}\right).\] \((b)\) _If there exists a triple \((j_{1},j_{2},j_{3})\) such that_ \[\min_{i_{1},i_{2},i_{3}}\,(diam(\{\mu_{j_{1}}+\mu_{i_{1}},\mu_{j_{2}}+\mu_{i_{2}},\mu_{j_{3}}+\mu_{i_{3}}\}))\neq 0, \tag{30}\] _where the minimum is taken under the condition \(i_{s}\in\{1,2,...,p\}\) for \(s=1,2,3\) and_ \[diam(E)=\sup_{x,y\in E}\mid x-y\mid,\] _then there exists a number \(H\) such that \((H,\infty)\subset\sigma(L)\) and the number of the gaps in \(\sigma(L)\) is finite._
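The following rough numerical illustration (not part of the proofs above) makes the band picture behind Theorem 5 concrete for the unperturbed operator \(L(C)\): the Bloch eigenvalues are taken directly from formula (23) for a toy choice of \(\nu\) and of the eigenvalues \(\mu_{j}\), the band functions \(\lambda_{n}(t)\) are obtained by sorting, and the inclusion of \([a(s),b(s)]\) in the bands \(I_{sm+1},...,I_{sm+m}\) is checked with the \(\varepsilon(s)\) margins ignored. All numerical values are hypothetical example data.

```python
# Band functions of the unperturbed operator L(C) from eq. (23), for nu = 2 and a
# 2x2 Hermitian C with eigenvalues mu_1 = -3, mu_2 = 5 (toy example only).
import numpy as np

nu, mus = 2, np.array([-3.0, 5.0])
m = len(mus)
t = np.linspace(-np.pi, np.pi, 2001)[1:]                 # quasimomentum t in (-pi, pi]
k = np.arange(-30, 31)                                   # |k| <= 30 is ample here
xi = 2 * np.pi * k[:, None] + t[None, :]

# mu_{k,j}(t) for all k, j; sorting along the first axis gives lambda_n(t)
vals = xi[None] ** (2 * nu) + mus[:, None, None] * xi[None] ** (2 * nu - 2)
lam = np.sort(vals.reshape(-1, t.size), axis=0)
bands = np.stack([lam.min(axis=1), lam.max(axis=1)], axis=1)   # I_n = [min_t, max_t]

for s in range(4, 10):
    a = (s * np.pi) ** (2 * nu) + mus.max() * (s * np.pi) ** (2 * nu - 2)
    b = ((s + 1) * np.pi) ** (2 * nu) + mus.min() * ((s + 1) * np.pi) ** (2 * nu - 2)
    tol = 1e-6 * abs(b)                                  # guard against rounding at band edges
    ok = all(bands[n, 0] <= a + tol and b <= bands[n, 1] + tol
             for n in range(s * m, s * m + m))           # bands I_{sm+1}, ..., I_{sm+m}
    print(f"s = {s}: [a, b] = [{a:.1f}, {b:.1f}] inside bands {s*m+1}..{s*m+m}: {ok}")
```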
2307.12508
Information Geometry of Wasserstein Statistics on Shapes and Affine Deformations
Information geometry and Wasserstein geometry are two main structures introduced in a manifold of probability distributions, and they capture its different characteristics. We study characteristics of Wasserstein geometry in the framework of Li and Zhao (2023) for the affine deformation statistical model, which is a multi-dimensional generalization of the location-scale model. We compare merits and demerits of estimators based on information geometry and Wasserstein geometry. The shape of a probability distribution and its affine deformation are separated in the Wasserstein geometry, showing its robustness against the waveform perturbation in exchange for the loss in Fisher efficiency. We show that the Wasserstein estimator is the moment estimator in the case of the elliptically symmetric affine deformation model. It coincides with the information-geometrical estimator (maximum-likelihood estimator) when the waveform is Gaussian. The role of the Wasserstein efficiency is elucidated in terms of robustness against waveform change.
Shun-ichi Amari, Takeru Matsuda
2023-07-24T03:48:37Z
http://arxiv.org/abs/2307.12508v4
# Information Geometry of Wasserstein Statistics ###### Abstract Information geometry and Wasserstein geometry are two main structures introduced in a manifold of probability distributions, and they capture its different characteristics. We study characteristics of Wasserstein geometry in the framework of Li and Zhao (2023) for the affine deformation statistical model, which is a multi-dimensional generalization of the location-scale model. We compare merits and demerits of estimators based on information geometry and Wasserstein geometry. The shape of a probability distribution and its affine deformation are separated in the Wasserstein geometry, showing its robustness against the waveform perturbation in exchange for the loss in Fisher efficiency. We show that the Wasserstein estimator is the moment estimator in the case of the elliptically symmetric affine deformation model. It coincides with the information-geometrical estimator (maximum-likelihood estimator) when and only when the waveform is Gaussian. The role of the Wasserstein efficiency is elucidated in terms of robustness against waveform change. ## 1 Introduction We study statistics based on probability distribution patterns \(p(\mathbf{x})\) over \(\mathbf{x}\in X=\mathbf{R}^{d}\), by using both information geometry (see Amari, 2016; Ay et al., 2017, etc) and Wasserstein geometry (see Villani, 2003; Peyre and Cuturi, 2019; Santambrogio, 2015, among many others). Here, \(p(\mathbf{x})\) is a probability distribution on \(X=\mathbf{R}^{d}\). When \(d=2\), \(p(\mathbf{x})\) is regarded as a visual pattern on \(\mathbf{R}^{2}\). There are lots of applications of Wasserstein geometry to statistics (see, e.g., Amari and Matsuda, 2022; Bernton et al., 2019; Yatracos, 2022; Bassetti et al., 2006; Matsuda and Strawderman, 2021; Imaizumi et al., 2022; Li and Montufar, 2020; Chen et al., 2021, and others), machine learning (see, e.g., Arjovsky et al., 2017; Fronger et al., 2015; Wang and Li, 2020; Peyre and Cuturi, 2019; Montavon et al., 2015, among many others) and statistical physics (Ito, 2023). We have a good review paper and a book, Panaretos and Zemel (2019, 2022), which include lots of references. However, applications to statistical inference look still premature. For example, the statistical efficiency of the Wasserstein estimator is studied only in the one-dimensional location-scale model (Amari and Matsuda, 2022). We give characterization of the Wasserstein estimator from the point of view of the robustness for changes of the shape of probability distributions. We further focus on the efficiency of the Wasserstein estimator for the affine deformation statistical model (deformation model in short), where deformation parameters and the waveform of a probability distribution are separated. The affine deformation statistical model \(p(\boldsymbol{x},\boldsymbol{\theta})\) is generated from a standard shape distribution \(f(\boldsymbol{z})\) satisfying \[\int f(\boldsymbol{z})d\boldsymbol{z} =1, \tag{1}\] \[\int\boldsymbol{z}f(\boldsymbol{z})d\boldsymbol{z} =0,\] (2) \[\int\boldsymbol{z}\boldsymbol{z}^{\top}f(\boldsymbol{z})d \boldsymbol{z} =I, \tag{3}\] where \(I\) is the identity matrix. The deformation parameter consists of \(\boldsymbol{\theta}=(\boldsymbol{\mu},\Lambda)\in\Theta\) such that \(\boldsymbol{\mu}\) is a vector specifying translation of the location and \(\Lambda\) is a non-singular matrix representing scale changes and rotations of \(\boldsymbol{x}\). 
Then, \(p(\boldsymbol{x},\boldsymbol{\theta})\) is written as \[p(\boldsymbol{x},\boldsymbol{\theta})=|\Lambda|f\left(\Lambda(\boldsymbol{x}-\boldsymbol{\mu})\right).\] Given a standard \(f\), we have a statistical model parameterized by \(\boldsymbol{\theta}\), \[M_{f}=\left\{p(\boldsymbol{x},\boldsymbol{\theta})\right\}.\] Geometrically, it forms a finite-dimensional statistical manifold, where \(\boldsymbol{\theta}\) plays the role of a coordinate system. The deformation model is a generalization of the location-scale model. Note that this model is often called the location-scatter model in several fields such as statistics and signal processing (Tyler, 1987; Ollila and Tyler, 2014). Let \(T_{\boldsymbol{\theta}}\) denote the affine deformation from \(\boldsymbol{x}\) to \(\boldsymbol{z}\) given by \[\boldsymbol{z}=T_{\boldsymbol{\theta}}\boldsymbol{x}=\Lambda(\boldsymbol{x}-\boldsymbol{\mu}).\] This may be regarded as a deformation of the shape \(f\) to \(\tilde{T}_{\boldsymbol{\theta}}f\), \[(\tilde{T}_{\boldsymbol{\theta}}f)(\boldsymbol{x})=f\left(T_{\boldsymbol{\theta}}\boldsymbol{x}\right), \tag{4}\] where \(\tilde{T}_{\boldsymbol{\theta}}\) is an operator to change a standard waveform \(f\) to another waveform \(\tilde{f}=\tilde{T}_{\boldsymbol{\theta}}f\) defined by (4). Let \(\mathcal{F}=\left\{q(\boldsymbol{x})\right\}\) be the space of all smooth positive probability density functions that have mean and covariance. Let \(\mathcal{F}_{S}=\left\{f(\boldsymbol{z})\right\}\) be its subspace consisting of all the standard distributions \(f(\boldsymbol{z})\) satisfying (1), (2) and (3). Then, any \(q(\boldsymbol{x})\in\mathcal{F}\) is uniquely written in the form \[q(\boldsymbol{x})=|\Lambda|f\left(\Lambda(\boldsymbol{x}-\boldsymbol{\mu})\right)\] for \(f\in\mathcal{F}_{S}\) and \(\boldsymbol{\theta}=(\boldsymbol{\mu},\Lambda)\in\Theta\). Hence, \(\mathcal{F}\) is the direct product of \(\mathcal{F}_{S}\) and \(\Theta\): \(\mathcal{F}=\mathcal{F}_{S}\times\Theta\). See Figure 1. It is interesting to see that \(\mathcal{F}_{S}\) is not a manifold. It includes lots of singularities. Let \(O\) be an orthogonal transformation. Then, \(f(\boldsymbol{x})\) and \((Of)(\boldsymbol{x})=f(O\boldsymbol{x})\) give different shape functions in general. However, when \(f\) is rotationally invariant, the shapes of \(f\) and \(Of\) are identical, \(f=Of\). The isotropic Gaussian shape is such an example, and all \(Of\) reduce to one point in such a case. It is also the case when \(f\) is invariant under a subgroup of the orthogonal group. Figure 1: Decomposition of \({\cal F}\) Figure 2: Singular structure of \({\cal F}_{S}\) See Figure 2 for the shape of \(\mathcal{F}_{S}\). It is a convex set, because, for \(f,g\in\mathcal{F}_{S}\), \(\alpha f+(1-\alpha)g\) also belongs to \(\mathcal{F}_{S}\) for \(0\leq\alpha\leq 1\). Geometry of a manifold of probability distributions has so far been studied by information geometry and Wasserstein geometry. The two geometries capture different aspects of a manifold of probability distributions. We use a divergence measure to explain this. Let \(D_{F}[p(\mathbf{x}),q(\mathbf{x})]\) and \(D_{W}[p(\mathbf{x}),q(\mathbf{x})]\) be two divergence measures between distributions \(p(\mathbf{x})\) and \(q(\mathbf{x})\), where subscripts \(F\) and \(W\) represent Fisher-based information geometry and Wasserstein geometry, respectively. Information geometry uses an invariant divergence \(D_{F}\), typically the Kullback-Leibler divergence. 
Wasserstein divergence \(D_{W}\) is defined by the cost of transporting masses distributed in form \(p(\mathbf{x})\) to another \(q(\mathbf{x})\). Roughly speaking, \(D_{F}\) measures the vertical differences of \(p(\mathbf{x})\) and \(q(\mathbf{x})\), for example, represented by their log-ratio \(\log(p(\mathbf{x})/q(\mathbf{x}))\), at \(\mathbf{x}\), whereas \(D_{W}\) measures the horizontal differences of \(p(\mathbf{x})\) and \(q(\mathbf{x})\) which corresponds to the transportation cost from \(p(\mathbf{x})\) to \(q(\mathbf{x})\). See Figure 3. Information geometry is constructed based on the invariance principle of Chentsov (Chentsov, 1982) such that \(D_{F}[p(\mathbf{x}),q(\mathbf{x})]\) is invariant under invertible transformations of the coordinates \(\mathbf{x}\) of the sample space \(X\). This implies that the divergence does not depend on the coordinate system of \(X\). We then have a unique Riemannian metric, which is Fisher-Rao metric, and also a dual pair of affine connections (Amari and Nagaoka, 2007). This is useful not only for analyzing the performances of statistical inference but also for vision analysis, machine learning, statistical physics, and many others (see Amari, 2016). Wasserstein geometry has an old origin, proposed by G. Monge in 1781 as a problem of transporting mass distributed in the form \(p(\mathbf{x})\) to another \(q(\mathbf{x})\) such that the total transportation cost is minimized. It depends on the transportation cost \(c(\mathbf{x},\mathbf{y})\) between two locations \(\mathbf{x},\mathbf{y}\in X\). The cost is usually a function of the Euclidean distance between \(\mathbf{x}\) and \(\mathbf{y}\). We use the square of the distance as a cost function, which gives \(L^{2}\)-Wasserstein geometry. This Wasserstein geometry directly depends on the Euclidean distance of \(X=\mathbf{R}^{d}\). Therefore, it is responsible for the metric structure of \(X\) and is useful for problems that intrinsically depend on the structure of \(X\), such as the transportation problem, non-equilibrium statistical physics, pattern analysis, machine learning and many others. It is natural to search for the relation between the two geometries. There are a number of such trials, including Amari et al. (2018, 2019); Khan et al. (2022); Rankin and Wong (2023); Ito (2023) and others. Among them, Li and Zhao (2023) gave a unified framework for the two geometries. The present article is based on their framework and focuses on the affine deformation model, for which the standard waveform \(f\) and the deformation parameter \(\mathbf{\theta}\) are Figure 3: (a) \(F\)-divergence. (b) \(W\)-divergence. separated. Li and Zhao (2023) introduced the \(W\)-score function in parallel to the Fisher score function, defining two estimators \(\hat{\mathbf{\theta}}_{F}\) and \(\hat{\mathbf{\theta}}_{W}\) thereby. The former is the maximum likelihood estimator that maximizes the log likelihood. This is the one that minimizes an invariant divergence from the empirical distribution \(\hat{p}(\mathbf{x})\) to parametrized model \(M_{f}\), where the empirical distribution is given based on \(n\) independent and identical observations \(\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\) as \[\hat{p}(\mathbf{x})=\frac{1}{n}\sum_{i}\delta\left(\mathbf{x}-\mathbf{x}_{i}\right),\] and \(\delta(\mathbf{x})\) is the delta function. 
The latter \(W\)-estimator \(\hat{\mathbf{\theta}}_{W}\) is asymptotically equivalent to the minimizer of the \(W\)-divergence between the empirical distribution and model \(M_{f}\) (see Section 4). Li and Zhao (2023) further defined the \(F\)-efficiency and \(W\)-efficiency of an estimator \(\hat{\mathbf{\theta}}\) given a statistical model \(M=\{p(\mathbf{x},\mathbf{\theta})\}\), proving Cramer-Rao type inequalities. We apply their theory to analyze the effects of the shape \(f\) on the efficiencies of the two types of estimators. The present paper is organized as follows. In Section 2, we introduce two divergences between distributions, one based on the invariance principle and the other based on the transportation cost. The divergences give two Riemannian structures in the space \(\mathcal{F}\) of probability distributions \(p(\mathbf{x})\) over \(X=\mathbf{R}^{d}\). A regular statistical model \(M=\{p(\mathbf{x},\mathbf{\theta})\}\) parameterized by \(\mathbf{\theta}\) is a finite-dimensional submanifold embedded in \(\mathcal{F}\). In Section 3, we define the \(F\)- and \(W\)-score functions following Li and Zhao (2023). The Riemannian structure of the tangent space of probability distributions is pulled back to the model submanifold, giving both the Riemannian metrics and score functions to \(M\). We define the \(F\)- and \(W\)-estimators \(\hat{\mathbf{\theta}}_{F}\) and \(\hat{\mathbf{\theta}}_{W}\) by using the \(F\)- and \(W\)-score functions, respectively. Section 4 defines the affine deformation statistical model. Section 5 studies the elliptically symmetric affine deformation model \(M_{f}\), where \(f\) is a spherically symmetric standard form. For this model, we show that the \(W\)-score functions are quadratic functions of \(\mathbf{x}\). Hence, it is proved that \(\hat{\mathbf{\theta}}_{W}\) is a moment estimator. We also show that \(M_{f}\) and \(\mathcal{F}_{S}\) are orthogonal in the \(W\)-geometry, implying the separation of the waveform and deformation. In Section 6, we elucidate the role of \(W\)-efficiency from the point of view of robustness to a change in the waveform \(f\) due to observation noise. In Section 7, we prove that the Gaussian shape is the unique model in which the \(F\)-estimator and \(W\)-estimator coincide, with \(\hat{\mathbf{\theta}}_{W}\) satisfying the \(F\)-efficiency and \(\hat{\mathbf{\theta}}_{F}\) satisfying the \(W\)-efficiency. Section 8 briefly summarizes the paper and mentions future work. ## 2 Riemannian structures in the space of probability densities on \(\mathbf{R}^{d}\) We consider the space \(\mathcal{F}=\{p(\mathbf{x})\}\) of all smooth positive probability density functions on \(\mathbf{R}^{d}\), having the mean and covariance. Later, we may relax the conditions of positivity and smoothness, when we discuss a parametric model, in particular the deformation model. We define a divergence function \(D[p(\mathbf{x}),q(\mathbf{x})]\), which represents the degree of difference between \(p(\mathbf{x})\) and \(q(\mathbf{x})\). The square of the distance between \(p(\mathbf{x})\) and \(q(\mathbf{x})\) plays this role, but a divergence does not necessarily need to be symmetric with respect to \(p(\mathbf{x})\) and \(q(\mathbf{x})\). A divergence function satisfies the following conditions: 1. \(D[p(\mathbf{x}),q(\mathbf{x})]\geq 0\) and the equality holds if and only if \(p(\mathbf{x})=q(\mathbf{x})\). 2. Let \(\delta p(\mathbf{x})\) be an infinitesimally small deviation of \(p(\mathbf{x})\). 
Then, \(D[p(\mathbf{x}),p(\mathbf{x})+\delta p(\mathbf{x})]\) is approximated by a positive quadratic functional of \(\delta p(\mathbf{x})\). A divergence is said to be invariant if \[D\left[p(\mathbf{x}),q(\mathbf{x})\right]=D\left[\tilde{p}(\mathbf{y}),\tilde{q}(\mathbf{y})\right]\] holds for every smooth reversible transformation \(\mathbf{k}\) of the coordinates from \(\mathbf{x}\in\mathbf{R}^{d}\) to \(\mathbf{y}=\mathbf{k}(\mathbf{x})\), where \[\tilde{p}(\mathbf{y})=\left|\frac{\partial\mathbf{x}}{\partial\mathbf{y}} \right|p(\mathbf{x}).\] A typical invariant divergence is the \(\alpha\)-divergence (\(\alpha\neq\pm 1\)) defined by \[D_{\alpha}[p,q]=\frac{4}{1-\alpha^{2}}\left(1-\int p(\mathbf{x})^{(1+ \alpha)/2}q(\mathbf{x})^{(1-\alpha)/2}d\mathbf{x}\right)\] for \(\alpha\neq\pm 1\). For \(\alpha=1\), we define \(D_{1}[p,q]\) by the Kullback-Leibler divergence \[D_{1}[p,q]=\int p(\mathbf{x})\log\frac{p(\mathbf{x})}{q(\mathbf{x})}d\mathbf{x}.\] For \(\alpha=-1\), we define \(D_{-1}[p:q]=D_{1}[q:p]\). The case \(\alpha=0\) is equivalent to the Hellinger divergence \[H^{2}[p,q]=\frac{1}{2}\int\left(\sqrt{p(\mathbf{x})}-\sqrt{q(\mathbf{x})} \right)^{2}d\mathbf{x}.\] A characterization of the \(\alpha\)-divergence is given in Amari (2016). The \(\alpha\)-divergence gives information-geometric structure to \(\mathcal{F}\). Another divergence is the Wasserstein divergence. Let us transport masses piled in the form \(p(\mathbf{x})\) to another \(q(\mathbf{x})\). To this end, we need to move some mass at \(\mathbf{x}\) to another position \(\mathbf{y}\). Let \(\pi(\mathbf{x},\mathbf{y})\) be a stochastic matrix, showing the probability of mass at \(\mathbf{x}\) to be transported to \(\mathbf{y}\). We call \(\pi\) a transportation plan when it satisfies the following terminal conditions \[\int\pi(\mathbf{x},\mathbf{y})d\mathbf{y} =p(\mathbf{x}), \tag{5}\] \[\int\pi(\mathbf{x},\mathbf{y})d\mathbf{x} =q(\mathbf{y}). \tag{6}\] Let \(c(\mathbf{x},\mathbf{y})\) be the cost of transporting a unit of mass from \(\mathbf{x}\) to \(\mathbf{y}\). Then, the Wasserstein divergence \(D_{W}[p(\mathbf{x}),q(\mathbf{x})]\) is the minimum transporting cost from \(p(\mathbf{x})\) to \(q(\mathbf{x})\). By using stochastic plan \(\pi(\mathbf{x},\mathbf{y})\), the Wasserstein divergence between \(p(\mathbf{x})\) and \(q(\mathbf{x})\) is given by \[D_{W}\left[p(\mathbf{x}),q(\mathbf{x})\right]=\inf_{\pi}\int c(\mathbf{x}, \mathbf{y})\pi(\mathbf{x},\mathbf{y})d\mathbf{x}d\mathbf{y},\] where infimum is taken over all stochastic plans \(\pi\) satisfying (5) and (6). When the cost is the square of the Euclidean distance \[c(\mathbf{x},\mathbf{y})=\|\mathbf{x}-\mathbf{y}\|^{2},\] we call \(D_{W}\) the \(L^{2}\)-Wasserstein divergence. We focus on this divergence in the following. Note that the \(L^{2}\)-Wasserstein divergence is the square of the \(L^{2}\)-Wasserstein distance. The dynamic formulation of the optimal transport problem proposed by Brenier (1999) and developed further by Benamou and Brenier (2000) is useful. Let \(\rho(\mathbf{x},t)\) be a family of probability distributions parameterized by \(t\). 
It represents the time course \(\rho(\mathbf{x},t)\) of transporting \(p(\mathbf{x})\) to \(q(\mathbf{x})\), satisfying \[\rho(\mathbf{x},0)=p(\mathbf{x}),\quad\rho(\mathbf{x},1)=q(\mathbf{x}).\] We introduce a potential \(\Phi(\mathbf{x},t)\) such that its gradient \(\nabla_{\mathbf{x}}\Phi(\mathbf{x},t)\) represents the velocity \[\mathbf{v}(\mathbf{x},t)=\nabla_{\mathbf{x}}\Phi(\mathbf{x},t)\] of mass flow at \(\mathbf{x}\) and \(t\) in the dynamic plan. Then, \(\Phi(\mathbf{x},t)\) satisfies the following equation of continuity \[\partial_{t}\rho(\mathbf{x},t)+\nabla_{\mathbf{x}}\cdot\{\rho(\mathbf{x},t)\nabla_{\mathbf{x}}\Phi(\mathbf{x},t)\}=0.\] The Wasserstein divergence is written in the dynamic formulation as \[D_{W}\left[p(\mathbf{x}),q(\mathbf{x})\right]=\inf_{\Phi}\int_{0}^{1}\int\|\nabla_{\mathbf{x}}\Phi(\mathbf{x},t)\|^{2}\rho(\mathbf{x},t)d\mathbf{x}dt.\] We introduce a Riemannian structure to \(\mathcal{F}\) by the Taylor expansion of \(D[p,p+\delta p]\). The Riemannian metric \(g\) is an operator that gives the squared magnitude \(ds^{2}\) of an infinitesimal deviation \(\delta p(\mathbf{x})\) in the tangent space of \(\mathcal{F}\), for example, by \[ds^{2}=\int\delta p(\mathbf{x})g(\mathbf{x},\mathbf{y})\delta p(\mathbf{y})d\mathbf{x}d\mathbf{y}.\] In the case of the invariant divergence, we have \[g_{F}(\mathbf{x},\mathbf{y})=\frac{\delta(\mathbf{x}-\mathbf{y})}{p(\mathbf{x})},\] where \(\delta(\mathbf{x})\) is the delta function and \(g_{F}\) is a positive integral operator. In the case of the \(L^{2}\)-Wasserstein divergence, for infinitesimally small \(\delta t\), the change of distribution from \(\rho(\mathbf{x},0)=p(\mathbf{x})\) at \(t=0\) to \(\rho(\mathbf{x},\delta t)=p(\mathbf{x})+\delta p(\mathbf{x})\) at \(t=\delta t\) satisfies \[\delta p(\mathbf{x})=-\Delta_{p}\Phi(\mathbf{x})\delta t,\] where \(\Delta_{p}\) is the \(p\)-Laplacian defined by \[\Delta_{p}\Phi=\nabla_{\mathbf{x}}\cdot\left(p(\mathbf{x})\nabla_{\mathbf{x}}\Phi(\mathbf{x})\right).\] The \(L^{2}\)-Wasserstein divergence is given by the quadratic form of \(\delta p\) as \[D_{W}[p,p+\delta p]=-\int\delta p(\mathbf{x})\Delta_{p}^{-1}\delta p(\mathbf{x})d\mathbf{x},\] giving the Riemannian metric \(g_{W}\) which is an operator represented by \[g_{W}(p)=-\Delta_{p}^{-1}.\] This is Otto's Riemannian metric (Otto, 2001). See Li and Zhao (2023) for details. ## 3 Score functions and estimators in parametric model We consider a regular statistical model \(M=\{p(\mathbf{x},\mathbf{\theta})\}\) parameterized by an \(m\)-dimensional vector \(\mathbf{\theta}\). The tangent space of \(M\) at \(\mathbf{\theta}\) is spanned by \(\partial_{i}p(\mathbf{x},\mathbf{\theta})\) for \(i=1,\cdots,m\), such that \[\delta p(\mathbf{x})=\partial_{i}p(\mathbf{x},\mathbf{\theta})d\theta^{i}, \tag{7}\] where \[\partial_{i}p(\mathbf{x},\mathbf{\theta})=\frac{\partial}{\partial\theta^{i}}p(\mathbf{x},\mathbf{\theta}).\] Hereafter, the summation convention is used, that is, all indices appearing twice, once as upper and the other as lower indices, e.g. \(i\)'s in (7), are summed up. Let us define \(S_{i}(\mathbf{x},\mathbf{\theta})\) from the basis functions \(\partial_{i}p(\mathbf{x},\mathbf{\theta})\) of the tangent space of \(M\) for \(i=1,\cdots,m\) by applying the Riemannian metric operator \(g\): \[S_{i}(\mathbf{x},\mathbf{\theta})=g\circ\partial_{i}p(\mathbf{x},\mathbf{\theta}).\] We call them score functions following the tradition of statistics. 
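Before specializing these score functions analytically, here is a small numerical sketch (an illustration added here, not from the paper) of the definition \(S_{i}=g\circ\partial_{i}p\) for a one-dimensional Gaussian location-scale model: the \(W\)-score is obtained by applying \(-\Delta_{p}^{-1}\), i.e. by solving \((pS^{\prime})^{\prime}=-\partial_{\theta}p\) on a grid and centring the result so that it averages to zero under \(p\) (the normalization adopted below). The closed-form expressions quoted in the comments are the standard Gaussian formulas, used only for comparison.

```python
# Numerical W-scores for p(x; mu, sigma) Gaussian: solve (p S')' = -d_theta p on a grid.
import numpy as np

def cumtrapz(y, dx):                      # cumulative trapezoidal integral from the left end
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1])) * dx
    return out

mu, sigma = 0.3, 1.2                      # arbitrary example values
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 40001)
dx = x[1] - x[0]

def pdf(m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p = pdf(mu, sigma)
eps = 1e-5                                # finite-difference step in the parameters
dp_mu = (pdf(mu + eps, sigma) - pdf(mu - eps, sigma)) / (2 * eps)
dp_sigma = (pdf(mu, sigma + eps) - pdf(mu, sigma - eps)) / (2 * eps)

def w_score(dp):
    # S = -Delta_p^{-1} dp:  p S' -> 0 in the left tail fixes the first constant,
    # centring E_p[S] = 0 fixes the second one.
    S = cumtrapz(-cumtrapz(dp, dx) / p, dx)
    return S - cumtrapz(S * p, dx)[-1]

S_mu, S_sigma = w_score(dp_mu), w_score(dp_sigma)
mask = np.abs(x - mu) < 4 * sigma         # compare away from the far tails
# Expected Gaussian closed forms: S^W_mu = x - mu, S^W_sigma = ((x-mu)^2 - sigma^2)/(2 sigma)
print(np.max(np.abs(S_mu[mask] - (x[mask] - mu))))
print(np.max(np.abs(S_sigma[mask] - ((x[mask] - mu) ** 2 - sigma ** 2) / (2 * sigma))))
# The Fisher scores are (x-mu)/sigma^2 and ((x-mu)^2 - sigma^2)/sigma^3, so here the
# W-scores come out proportional to them, as expected for the Gaussian shape.
```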
In the case of invariant Fisher geometry, the score functions are \[S_{i}^{F}(\mathbf{x},\mathbf{\theta}) =\frac{\partial_{i}p(\mathbf{x},\mathbf{\theta})}{p(\mathbf{x},\mathbf{\theta})} =\partial_{i}l(\mathbf{x},\mathbf{\theta}),\] \[l(\mathbf{x},\mathbf{\theta}) =\log p(\mathbf{x},\mathbf{\theta}),\] which is the derivative of log-likelihood. In Wasserstein geometry, we have \[S_{i}^{W}(\mathbf{x},\mathbf{\theta})=-\Delta_{p}^{-1}\partial_{i}p(\mathbf{x},\mathbf{ \theta}).\] By using the identity \[\int a(\mathbf{x})\Delta_{p}b(\mathbf{x})d\mathbf{x}=-\int\left(\nabla_{\mathbf{x}}a\cdot \nabla_{\mathbf{x}}b\right)p(\mathbf{x})d\mathbf{x}, \tag{8}\] we see that \(S_{i}^{W}(\mathbf{x},\mathbf{\theta})\) are solutions of the following Poisson equations, \[\nabla_{\mathbf{x}}\log p(\mathbf{x},\mathbf{\theta})\cdot\nabla_{\mathbf{x}}S_{i}^{W}(\mathbf{x},\mathbf{\theta})+\Delta_{\mathbf{x}}S_{i}^{W}(\mathbf{x},\mathbf{\theta})+\frac{\partial}{ \partial\theta_{i}}\log p(\mathbf{x},\mathbf{\theta})=0. \tag{9}\] For infinitesimal \(\delta\), the map \(\mathbf{x}\mapsto\mathbf{x}+\delta\nabla_{\mathbf{x}}S_{i}^{W}(\mathbf{x},\mathbf{\theta})\) is the optimal transport map from \(p(\mathbf{x},\mathbf{\theta})\) to \(p(\mathbf{x},\mathbf{\theta}+\delta\mathbf{e}_{i})\) with transportation cost \[D_{W}(p(\mathbf{x},\mathbf{\theta}),p(\mathbf{x},\mathbf{\theta}+\delta\mathbf{e}_{i}))=\int\| \delta\nabla_{\mathbf{x}}S_{i}^{W}(\mathbf{x},\mathbf{\theta})\|^{2}p(\mathbf{x},\mathbf{\theta}) \mathrm{d}x,\] where \(\mathbf{e}_{i}\) is the \(i\)-th standard unit vector. In order to eliminate the indefiniteness due to the integral constant, we pose an additional condition \[\mathrm{E}\left[S_{i}(\mathbf{x},\mathbf{\theta})\right]=0. \tag{10}\] This is automatically satisfied in the Fisherian case. The Riemannian metric tensor \(g_{ij}(\mathbf{\theta})\) is pulled-back from \(g\) in \(\mathcal{F}\) to \(M\), and is derived in terms of the score functions as \[g_{ij}(\mathbf{\theta})=\langle S_{i},g^{-1}S_{j}\rangle,\] where \[\langle a(\mathbf{x}),b(\mathbf{x})\rangle=\int a(\mathbf{x})b(\mathbf{x})d\mathbf{x}.\] In the Fisherian case, \[g_{ij}^{F}(\mathbf{\theta})=\mathrm{E}\left[\partial_{i}l\partial_{j}l\right]=\int p (x,\mathbf{\theta})\partial_{i}l(\mathbf{x},\mathbf{\theta})\partial_{j}l(\mathbf{x},\mathbf{ \theta})d\mathbf{x}.\] In the Wasserstein case, \[g_{ij}^{W}(\mathbf{\theta})=\int p(\mathbf{x},\mathbf{\theta})\nabla_{\mathbf{x}}S_{i}^{W}(\bm {x},\mathbf{\theta})^{\top}\nabla_{\mathbf{x}}S_{j}^{W}(\mathbf{x},\mathbf{\theta})d\mathbf{x}= \mathrm{E}[\nabla_{\mathbf{x}}S_{i}^{W}(\mathbf{x},\mathbf{\theta})^{\top}\nabla_{\mathbf{x}}S _{j}^{W}(\mathbf{x},\mathbf{\theta})], \tag{11}\] where identity (8) is used. The score functions \(S_{i}(\mathbf{x},\mathbf{\theta})\) give a set of estimating functions from (10), which are used to obtain an estimator \(\hat{\mathbf{\theta}}\). Let \(\hat{p}_{\mathrm{emp}}(\mathbf{x})\) be the empirical distribution given by \[\hat{p}_{\mathrm{emp}}(\mathbf{x})=\frac{1}{n}\sum_{j=1}^{n}\delta \left(\mathbf{x}-\mathbf{x}_{j}\right),\] where \(\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\) are \(n\) independent observations. 
Then, replacing the expectation E in (10) by the expectation with respect to the empirical distribution, we obtain the estimating equations \[\mathrm{E}_{\mathrm{emp}}\left[S_{i}(\mathbf{x},\mathbf{\theta})\right]= \frac{1}{n}\sum_{j=1}^{n}S_{i}\left(\mathbf{x}_{j},\mathbf{\theta}\right)=0,\quad i=1, \cdots,m.\] It is known that the solution \(\hat{\mathbf{\theta}}\) gives a consistent estimator for large \(n\). Roughly speaking, \(\hat{\mathbf{\theta}}\) is the projection of \(\hat{p}_{\mathrm{emp}}\) onto the model \(M\) with respect to the metric \(g\) (see Figure 4). It is the solution of \[\langle\hat{p}_{\mathrm{emp}}(\mathbf{x}),S_{i}(\mathbf{x},\mathbf{\theta}) \rangle=0.\] A consistent estimator is Fisher efficient when the projection is orthogonal with respect to the Fisher-Rao metric (Amari and Nagaoka, 2007). The estimator \(\hat{\mathbf{\theta}}_{F}\) (the invariant Fisherian case) is the maximum likelihood estimator, which maximizes the likelihood. The Cramer-Rao theorem gives a matrix inequality for any unbiased estimator \(\hat{\mathbf{\theta}}\), \[\mathrm{Cov}\left[\hat{\mathbf{\theta}}-\mathbf{\theta}\right]\succeq \frac{1}{n}g_{F}^{-1}(\mathbf{\theta}),\] where \(\mathrm{Cov}[\cdot]\) is the covariance matrix and \(\succeq\) denotes the matrix order defined by positive semidefiniteness. The maximum likelihood estimator \(\hat{\mathbf{\theta}}_{F}\) satisfies \[\mathrm{Cov}\left[\hat{\mathbf{\theta}}_{F}-\mathbf{\theta}\right]\approx \frac{1}{n}g_{F}^{-1}(\mathbf{\theta})\] asymptotically. Hence, it minimizes the error covariance matrix, and the minimized error covariance is given asymptotically by the inverse of the Fisher metric tensor \(g_{F}\) divided by \(n\). This property is called Fisher efficiency. In parallel to the Fisherian case, we study the characteristics of the Wasserstein estimator \(\hat{\mathbf{\theta}}_{W}\) in the following. In the case of the one-dimensional location-scale model, the Wasserstein estimator is asymptotically equivalent to the estimator obtained by minimizing the Wasserstein divergence (transportation cost) from the empirical distribution \(\hat{p}_{\text{emp}}\) to the model \(M\), \[\hat{\mathbf{\theta}}_{W}=\operatorname*{arg\,min}_{\mathbf{\theta}}D_{W} \left[\hat{p}_{\text{emp}},p(\mathbf{x},\mathbf{\theta})\right]. \tag{12}\] See the end of Section 5. The properties of \(\hat{\mathbf{\theta}}_{W}\) were studied in detail by Amari and Matsuda (2022) in the case of the one-dimensional location-scale model.

## 4 Affine deformation model

Now, we focus on the affine deformation model. Let \(f(\mathbf{z})\) be a standard probability density function satisfying (1), (2), and (3). To define \(M_{f}\), we use the affine deformation of \(\mathbf{x}\) to \(\mathbf{z}\) given by \[\mathbf{z}=\Lambda(\mathbf{x}-\mathbf{\mu}),\] where \(\mathbf{\mu}\) is a vector representing the shift of location and \(\Lambda\) is a non-singular matrix. Hence, \(\mathbf{\theta}=(\mathbf{\mu},\Lambda)\) is \(m=d^{2}+d\) dimensional.
The model \(M_{f}\) is defined from \[p(\mathbf{x},\mathbf{\theta})d\mathbf{x}=f(\mathbf{z})d\mathbf{z},\] that is, \[p(\mathbf{x},\mathbf{\theta})=|\Lambda|f(\Lambda(\mathbf{x}-\mathbf{\mu})),\] satisfying \[\int\mathbf{x}p(\mathbf{x},\mathbf{\theta})d\mathbf{x} =\mathbf{\mu},\] \[\int\mathbf{x}\mathbf{x}^{\top}p(\mathbf{x},\mathbf{\theta})d\mathbf{x} =\Lambda^{-2}+\mathbf{\mu}\mathbf{\mu}^{\top}.\] This is a generalization of the location-scale model, which is obtained simply by putting \(\Lambda=(1/\sigma)I\), with \(\sigma\) being the scale factor. It should be noted that \(\Lambda\) is decomposed as \(\Lambda=UDO\), where \(U\) and \(O\) are orthogonal matrices and \(D\) is a positive diagonal matrix. In the following, we denote the log probability of the standard shape \(f\) by \[l(\mathbf{z})=\log f(\mathbf{z}).\] As we discussed in the Introduction, the set of all standard shape functions \(\mathcal{F}_{S}=\{f\}\) does not form a manifold but has an interesting topological structure due to the rotational invariance of some \(f\). For each standard shape function \(f\in\mathcal{F}_{S}\), an affine deformation model \(M_{f}\) parameterized by \(\mathbf{\theta}=(\mathbf{\mu},\Lambda)\) is attached. Thus, \(\mathcal{F}\) is decomposed into the direct product of \(\mathcal{F}_{S}\) and \(M_{f}\), \[\mathcal{F}=\mathcal{F}_{S}\times M_{f}.\] For any \(f\), \(M_{f}\) has a cone structure parameterized by \((\mathbf{\mu},D,U,O)\), where \(\Lambda=UDO\) and \(D\) is a diagonal matrix with diagonal elements \(d_{i}>0\). Thus, \(D\) can be identified with a vector in the open positive quadrant \(\mathbf{R}_{+}^{d}\) of \(\mathbf{R}^{d}\), which has the cone structure. Since \(\mathbf{\mu}\in\mathbf{R}^{d}\) and \(U,O\in\mathcal{O}(d)\), we have the decomposition \[M_{f}=\mathbf{R}^{d}\times\mathbf{R}_{+}^{d}\times\mathcal{O}(d)\times\mathcal{O}(d).\] See Takatsu and Yokota (2012) for the cone structure of \(\mathcal{F}\). When \(f\) is Gaussian, its structure is studied in detail by Takatsu (2011).

When \(p(\mathbf{x})\) belongs to \(M_{f}\), the waveform of \(p(\mathbf{x})\) is said to be equivalent to that of \(f\). \(M_{f}\) consists of the distributions of all equivalent waveforms. All ellipsoidal shapes are equivalent to a spherical shape. A family of special parallelepiped shapes is equivalent to a cubic form (see Fig. 5). Therefore, our model is useful for separating the effect of the shape from the location and affine deformation. We may consider subclasses of the transformation model. One simple example is the location model, in which \(\Lambda\) is fixed to the identity matrix \(I\). A stronger theorem is known in such a simple model (Givens and Shortt, 1984). In our context, it can be expressed as follows.

**Proposition 1**.: Wasserstein geometry gives an orthogonal decomposition of the shape and locations, \[D^{W}\left[f_{1}\left(\mathbf{x}-\mathbf{\mu}_{1}\right),f_{2}\left(\mathbf{x}-\mathbf{\mu}_{ 2}\right)\right]=D^{W}\left[f_{1}(\mathbf{x}),f_{2}(\mathbf{x})\right]+\|\mathbf{\mu}_{1} -\mathbf{\mu}_{2}\|^{2}.\]

## 5 Elliptically symmetric deformation model

Here, we focus on deformation models that are elliptically symmetric: \[p(\mathbf{x},\mathbf{\theta})=|\Lambda|g(\|\Lambda(\mathbf{x}-\mathbf{\mu})\|), \tag{13}\] where \(f(\mathbf{z})=g(\|\mathbf{z}\|)\) satisfies the standard density conditions (1), (2), and (3). Note that \(f(\mathbf{z})=g(\|\mathbf{z}\|)=g(\|DO(\mathbf{x}-\mathbf{\mu})\|)\) does not depend on \(U\), and thus the parameter reduces to \((\mathbf{\mu},D,O)\) in this case.
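As a small numerical illustration of the elliptically symmetric model (13) — added here as a sketch, with an arbitrary deformation matrix, location, and radial profiles that are normalized numerically rather than standardized to conditions (1), (2), and (3) — one can evaluate \(p(\mathbf{x},\mathbf{\theta})=|\Lambda|\,g(\|\Lambda(\mathbf{x}-\mathbf{\mu})\|)\) on a grid for two different radial waveforms \(g\) and confirm that both share the same location \(\mathbf{\mu}\): the waveform and the affine parameters enter separately, in line with the decomposition \(\mathcal{F}=\mathcal{F}_{S}\times M_{f}\).

```python
import numpy as np

# Sketch (not from the paper): evaluate the elliptically symmetric model
# p(x, theta) = |det(Lambda)| * g(||Lambda (x - mu)||) of Eq. (13) on a 2-D grid
# for two different radial waveforms g, and check by numerical integration that
# the location mu is the same for both. Parameter values and radial profiles are
# illustrative; each density is normalized numerically.
mu = np.array([1.0, -0.5])
Lam = np.array([[1.5, 0.3],
                [0.3, 0.8]])                       # a non-singular deformation matrix

radial_shapes = {
    "gaussian-radial": lambda r: np.exp(-0.5 * r ** 2),
    "laplace-radial":  lambda r: np.exp(-np.sqrt(2.0) * r),
}

xs = np.linspace(-12.0, 12.0, 481)
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
pts = np.stack([X1, X2], axis=-1)                  # grid of x values, shape (481, 481, 2)
R = np.linalg.norm((pts - mu) @ Lam.T, axis=-1)    # ||Lambda (x - mu)|| at each grid point
dA = (xs[1] - xs[0]) ** 2

for name, g in radial_shapes.items():
    p = np.abs(np.linalg.det(Lam)) * g(R)
    p /= p.sum() * dA                              # numerical normalization
    mean = np.array([(X1 * p).sum() * dA, (X2 * p).sum() * dA])
    print(f"{name:16s} numerical mean = {np.round(mean, 3)}   (mu = {mu})")
```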
First, we consider the \(F\)-estimator \(\hat{\mathbf{\theta}}_{F}\) (maximum likelihood estimator). The log-likelihood is given by \[\log p(\mathbf{x},\mathbf{\theta})=\log|\Lambda|+\log g(\|\Lambda(\mathbf{x}-\mathbf{\mu})\|).\] When there are \(n\) observations \(\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\), summation is taken over them so that we have the likelihood equations \[\sum_{j=1}^{n}\partial_{\mathbf{\theta}}\log p\left(\mathbf{x}_{j},\mathbf{ \theta}\right)=0.\] The solution \(\hat{\mathbf{\theta}}_{F}\) strongly depends on the shape \(g\). Contrary to this, the \(W\)-estimator \(\hat{\mathbf{\theta}}_{W}\) does not depend on the shape \(g\), as follows.

**Lemma 1**.: \[\nabla_{x}\left(\frac{1}{2}x^{\top}Ax+b^{\top}x+c\right)=\frac{A+A ^{\top}}{2}x+b,\quad\Delta_{x}\left(\frac{1}{2}x^{\top}Ax+b^{\top}x+c\right)= \operatorname{tr}(A).\]

Proof.: Straightforward calculation. Note that \((A+A^{\top})/2\neq A\) when \(A\) is not symmetric.

**Lemma 2**.: _Let \(A,B\in\mathbb{R}^{d\times d}\) be symmetric matrices. If \(A\) is positive definite, then the Sylvester equation \(AX+XA=B\) has the unique solution \(X\), which satisfies \(X^{\top}=X\) and \(\operatorname{tr}(X)=\operatorname{tr}(A^{-1}B)/2\)._

Proof.: From the positive definiteness of \(A\), the spectra of \(A\) and \(-A\) are disjoint. Thus, from Theorem VII.2.1 of Bhatia (1997), the Sylvester equation \(AX+XA=B\) has a unique solution. Let \(X\) be the solution of the Sylvester equation. From \(A^{\top}=A\), we have \(AX^{\top}+X^{\top}A=(AX+XA)^{\top}=B^{\top}=B\), which means that \(X^{\top}\) is also a solution of the Sylvester equation. Since the solution is unique, it implies \(X^{\top}=X\). Also, from the positive definiteness of \(A\) and \(AX+XA=B\), we have \(X+A^{-1}XA=A^{-1}B\). Taking the trace and using \(\operatorname{tr}(A^{-1}XA)=\operatorname{tr}(X)\), we obtain \(\operatorname{tr}X=\operatorname{tr}(A^{-1}B)/2\).

Figure 5: Equivalent shapes.

**Theorem 1**.: For the elliptically symmetric deformation model (13), the Wasserstein score functions are quadratic. Specifically, the Wasserstein score function for \(\mu_{i}\) is \[S_{\mu_{i}}^{W}(\boldsymbol{x},\boldsymbol{\theta})=x_{i}-\mu_{i},\] and the Wasserstein score function for \(\Lambda_{ij}\) is \[S_{\Lambda_{ij}}^{W}(\boldsymbol{x},\boldsymbol{\theta})=\frac{1}{2} \boldsymbol{x}^{\top}A\boldsymbol{x}+b^{\top}\boldsymbol{x}-\operatorname{E }_{\boldsymbol{\theta}}\left[\frac{1}{2}\boldsymbol{x}^{\top}A\boldsymbol{x}+ b^{\top}\boldsymbol{x}\right],\] where \(A\) is the unique solution of the Sylvester equation \(\Lambda^{2}A+A\Lambda^{2}=-L\) with \(L=\Lambda\boldsymbol{e}_{i}\boldsymbol{e}_{j}^{\top}+\boldsymbol{e}_{i}\boldsymbol{e}_{j}^{\top}\Lambda\), and \(b=-A\mu\).

Proof.: We show that the above \(S^{W}\)'s satisfy the Poisson equation (9) directly. First, we consider the mean parameter \(\mu_{i}\). Let \(\boldsymbol{e}_{i}\) be the \(i\)-th standard unit vector.
From (13), \[\frac{\partial}{\partial\mu_{i}}\log p(\boldsymbol{x},\boldsymbol{\theta})=- \frac{\partial}{\partial x_{i}}\log p(\boldsymbol{x},\boldsymbol{\theta})=- \nabla_{\boldsymbol{x}}\log p(\boldsymbol{x},\boldsymbol{\theta})^{\top} \boldsymbol{e}_{i}.\] Also, from Lemma 1, \[\nabla_{\boldsymbol{x}}(x_{i}-\mu_{i})=\boldsymbol{e}_{i},\quad\Delta_{ \boldsymbol{x}}(x_{i}-\mu_{i})=0.\] Therefore, \[\nabla_{\boldsymbol{x}}\log p(\boldsymbol{x},\boldsymbol{\theta})^{\top} \nabla_{\boldsymbol{x}}(x_{i}-\mu_{i})+\Delta_{\boldsymbol{x}}(x_{i}-\mu_{i}) +\frac{\partial}{\partial\mu_{i}}\log p(\boldsymbol{x},\boldsymbol{\theta})=0.\] Thus, the Wasserstein score function for the mean parameter \(\mu_{i}\) is \[S_{\mu_{i}}^{W}(\boldsymbol{x},\boldsymbol{\theta})=x_{i}-\mu_{i}.\] Next, we consider the deformation parameter \(\Lambda_{ij}\). Since \[\frac{\partial}{\partial\Lambda_{ij}}\|\Lambda(\boldsymbol{x}- \mu)\| =\frac{1}{2}\|\Lambda(\boldsymbol{x}-\mu)\|^{-1}\frac{\partial}{ \partial\Lambda_{ij}}(\boldsymbol{x}-\mu)^{\top}\Lambda^{2}(\boldsymbol{x}-\mu)\] \[=\frac{1}{2}\|\Lambda(\boldsymbol{x}-\mu)\|^{-1}(\boldsymbol{x}- \mu)^{\top}\frac{\partial\Lambda^{2}}{\partial\Lambda_{ij}}(\boldsymbol{x}-\mu)\] \[=\frac{1}{2}\|\Lambda(\boldsymbol{x}-\mu)\|^{-1}(\boldsymbol{x}- \mu)^{\top}(\Lambda\boldsymbol{e}_{i}\boldsymbol{e}_{j}^{\top}+\boldsymbol{e }_{i}\boldsymbol{e}_{j}^{\top}\Lambda)(\boldsymbol{x}-\mu),\] we have \[\frac{\partial}{\partial\Lambda_{ij}}\log p(\boldsymbol{x},\theta) =\frac{\partial}{\partial\Lambda_{ij}}\log\det\Lambda+\frac{ \partial}{\partial\Lambda_{ij}}\log g(\|\Lambda(\boldsymbol{x}-\mu)\|)\] \[=(\Lambda^{-1})_{ij}+\frac{g^{\prime}(\|\Lambda(\boldsymbol{x}- \mu)\|)}{g(\|\Lambda(\boldsymbol{x}-\mu)\|)}\frac{\partial}{\partial\Lambda_{ ij}}\|\Lambda(\boldsymbol{x}-\mu)\|\] \[=(\Lambda^{-1})_{ij}+\frac{g^{\prime}(\|\Lambda(\boldsymbol{x}- \mu)\|)}{2\|\Lambda(\boldsymbol{x}-\mu)\|g(\|\Lambda(\boldsymbol{x}-\mu)\|)}( \boldsymbol{x}-\mu)^{\top}(\Lambda\boldsymbol{e}_{i}\boldsymbol{e}_{j}^{\top }+\boldsymbol{e}_{i}\boldsymbol{e}_{j}^{\top}\Lambda)(\boldsymbol{x}-\mu)\] \[=-\frac{g^{\prime}(\|\Lambda(\boldsymbol{x}-\mu)\|)}{\|\Lambda( \boldsymbol{x}-\mu)\|g(\|\Lambda(\boldsymbol{x}-\mu)\|)}\left(-\frac{1}{2} \boldsymbol{x}^{\top}L\boldsymbol{x}+\boldsymbol{x}^{\top}L\mu-\frac{1}{2} \mu^{\top}L\mu\right)+(\Lambda^{-1})_{ij},\] where \(L=\Lambda\mathbf{e}_{i}\mathbf{e}_{j}^{\top}+\mathbf{e}_{i}\mathbf{e}_{j}^{\top}\Lambda\). Let \[S(\mathbf{x})=\frac{1}{2}\mathbf{x}^{\top}A\mathbf{x}+b^{\top}\mathbf{x}-\mathrm{E}_{\theta} \left[\frac{1}{2}\mathbf{x}^{\top}A\mathbf{x}+b^{\top}\mathbf{x}\right],\] where \(A\) is the unique solution of the Sylvester equation \(\Lambda^{2}A+A\Lambda^{2}=-L\) and \(b=-A\mu\). 
From Lemma 1 and Lemma 2, \[\Delta_{\mathbf{x}}S(\mathbf{x})=\mathrm{tr}A=-\frac{1}{2}\mathrm{tr}(\Lambda^{-2}L)= -\frac{1}{2}\mathrm{tr}(\Lambda^{-2}(\Lambda\mathbf{e}_{i}\mathbf{e}_{j}^{\top}+\mathbf{e} _{i}\mathbf{e}_{j}^{\top}\Lambda))=-(\Lambda^{-1})_{ij}.\] Also, \[\nabla_{\mathbf{x}}\log p(\mathbf{x},\mathbf{\theta})^{\top}\nabla_{\mathbf{x}}S( \mathbf{x})\] \[= \frac{g^{\prime}(\|\Lambda(\mathbf{x}-\mu)\|)}{g(\|\Lambda(\mathbf{x}- \mu)\|)}\nabla_{\mathbf{x}}(\|\Lambda(\mathbf{x}-\mu)\|)^{\top}(A\mathbf{x}+b)\] \[= \frac{g^{\prime}(\|\Lambda(\mathbf{x}-\mu)\|)}{\|\Lambda(\mathbf{x}-\mu) \|g(\|\Lambda(\mathbf{x}-\mu)\|)}(\Lambda^{2}(\mathbf{x}-\mu))^{\top}(A\mathbf{x}+b)\] \[= \frac{g^{\prime}(\|\Lambda(\mathbf{x}-\mu)\|)}{\|\Lambda(\mathbf{x}-\mu) \|g(\|\Lambda(\mathbf{x}-\mu)\|)}\left(\frac{1}{2}\mathbf{x}^{\top}(\Lambda^{2}A+A \Lambda^{2})\mathbf{x}+\mathbf{x}^{\top}(\Lambda^{2}b-A\Lambda^{2}\mu)-\mu^{\top} \Lambda^{2}b\right)\] \[= \frac{g^{\prime}(\|\Lambda(\mathbf{x}-\mu)\|)}{\|\Lambda(\mathbf{x}-\mu) \|g(\|\Lambda(\mathbf{x}-\mu)\|)}\left(-\frac{1}{2}\mathbf{x}^{\top}L\mathbf{x}+\mathbf{x}^{ \top}L\mu-\frac{1}{2}\mu^{\top}L\mu\right),\] where we used \[\nabla_{\mathbf{x}}(\|\Lambda(\mathbf{x}-\mu)\|) =\frac{1}{2\|\Lambda(\mathbf{x}-\mu)\|}\nabla_{\mathbf{x}}(\|\Lambda(\mathbf{ x}-\mu)\|^{2})\] \[=\frac{1}{2\|\Lambda(\mathbf{x}-\mu)\|}\nabla_{\mathbf{x}}((\mathbf{x}-\mu)^{ \top}\Lambda^{2}(\mathbf{x}-\mu))\] \[=\frac{1}{\|\Lambda(\mathbf{x}-\mu)\|}\Lambda^{2}(\mathbf{x}-\mu).\] Therefore, \[\nabla_{\mathbf{x}}\log p(\mathbf{x},\mathbf{\theta})^{\top}\nabla_{\mathbf{x}}S(\mathbf{x})+ \Delta_{\mathbf{x}}S(\mathbf{x})+\frac{\partial}{\partial\Lambda_{ij}}\log p(\mathbf{x}, \mathbf{\theta})=0,\] which means that \(S(\mathbf{x})\) is the Wasserstein score function for \(\Lambda_{ij}\).

In Theorem 1, we considered \(d^{2}\) elements of \(\Lambda\) as independent parameters. If we impose the symmetric constraint on \(\Lambda\), then the Wasserstein score function for the off-diagonal elements of \(\Lambda\) becomes twice that in Theorem 1.

**Corollary 1**.: For the elliptically symmetric deformation model (13), the \(W\)-estimator \(\hat{\mathbf{\theta}}_{W}\) is the second-order moment estimator irrespective of the waveform \(f(\mathbf{z})=g(\|\mathbf{z}\|)\).

Proof.: From Theorem 1, the Wasserstein score functions are quadratic functions of \(\mathbf{x}\). Thus, the estimating equations for the \(W\)-estimator are linear (and non-singular) with respect to the first and second-order empirical moments of \(\mathbf{x}\). Also, the number of estimated parameters is equal to the number of first and second-order moments. Therefore, the \(W\)-estimator coincides with the second-order moment estimator.

Note that Gelbrich (1990) showed that the \(L^{2}\)-Wasserstein divergence for the elliptically symmetric deformation model (13) does not depend on the waveform and is given by \[D_{W}(p(\mathbf{x},\mathbf{\theta_{1}}),p(\mathbf{x},\mathbf{\theta_{2}}))=\|\mu_{1}-\mu_{2}\|^{ 2}+\operatorname{tr}(\Lambda_{1}^{-2}+\Lambda_{2}^{-2}-2(\Lambda_{1}^{-1} \Lambda_{2}^{-2}\Lambda_{1}^{-1})^{1/2}).\] It is an interesting future problem to derive the Wasserstein score function and \(W\)-estimator for general affine deformation models. Regarding the geometric structure of the elliptically symmetric deformation model (13), we obtain the following. See Figure 1.
**Theorem 2**.: When \(f\) is spherically symmetric, the model \(M_{f}\) is orthogonal to \(\mathcal{F}_{S}\) at the origin \(\mathbf{\mu}=0,\Lambda=I\) of \(M_{f}\) with respect to the Wasserstein metric.

Proof.: Let \(\delta p(\mathbf{x})\) be a tangent vector of \(\mathcal{F}_{S}\) at the origin. Since all \(p(\mathbf{x})\) in \(\mathcal{F}_{S}\) satisfy the standard conditions (1), (2), and (3), \(\delta p(\mathbf{x})\) is orthogonal to any quadratic function of \(\mathbf{x}\). By Theorem 1, the \(W\)-score functions of \(M_{f}\) are quadratic functions of \(\mathbf{x}\), and the Wasserstein inner product of \(\delta p(\mathbf{x})\) with the tangent vectors of \(M_{f}\) is given by the integral of \(\delta p(\mathbf{x})\) against these score functions. Hence, \(\delta p(\mathbf{x})\) is orthogonal to the tangent space of \(M_{f}\) with respect to the Wasserstein metric.

Note that \(M_{f}\) is orthogonal to \(\mathcal{F}_{S}\) only when \(f\) is Gaussian in the case of the Fisher-Rao metric. Here, we discuss the relation between the current \(W\)-estimator and the estimator (12). Amari and Matsuda (2022) studied the estimator \(\hat{\mathbf{\theta}}_{W}\) in (12) for the one-dimensional location-scale model by using the order statistics \(x_{(i)}\). This estimator minimizes the Wasserstein distance between the empirical distribution and the model. Here, we show that the estimator \(\hat{\mathbf{\theta}}_{W}=(\hat{\mathbf{\mu}}_{W},\hat{\sigma}_{W})\) given by (12) is asymptotically equivalent to the \(W\)-estimator, which coincides with the second-order moment estimator by Corollary 1. We assume \(\mu=0\) without loss of generality. The estimator (12) of the location is \[\hat{\mu}_{W}=\frac{1}{n}\sum_{i=1}^{n}x_{(i)}=\frac{1}{n}\sum_{i=1}^{n}x_{i},\] which is the empirical mean and coincides with the moment estimator. Also, the estimator (12) of the scale is \[\hat{\sigma}_{W}=\sum_{i=1}^{n}k_{i}x_{(i)},\] where \[k_{i}=\int_{z_{i-1}}^{z_{i}}zf(z)dz.\] Here, \(z_{i}\) is the \(i\)-th equipartition point of \(f(z)\) defined by \[z_{i}=F^{-1}\left(\frac{i}{n}\right),\] where \(F\) is the cumulative distribution function of \(f(z)\). From \(\mu=0\), we have \(x_{(i)}\approx\sigma z_{i}\) asymptotically. Hence, \[k_{i}\approx\frac{1}{n}z_{i}\approx\frac{1}{n}\frac{x_{(i)}}{\sigma},\] which leads to \[\hat{\sigma}_{W}=\sum_{i=1}^{n}k_{i}x_{(i)}\approx\frac{1}{n\sigma} \sum_{i=1}^{n}x_{(i)}^{2}=\frac{1}{n\sigma}\sum_{i=1}^{n}x_{i}^{2}.\] Since \(\hat{\sigma}_{W}\approx\sigma\) asymptotically, \[\hat{\sigma}_{W}^{2}\approx\frac{1}{n}\sum_{i=1}^{n}x_{i}^{2}.\] This shows that \(\hat{\mathbf{\theta}}_{W}=(\hat{\mathbf{\mu}}_{W},\hat{\sigma}_{W})\) asymptotically coincides with the second-order moment estimator.

## 6 W-efficiency implies robustness to waveform change

Following Li and Zhao (2023), we define the Wasserstein covariance (\(W\)-covariance) matrix \(\mathrm{Var}_{\theta}^{\mathrm{W}}[\hat{\mathbf{\theta}}]\) of an estimator \(\hat{\mathbf{\theta}}\) by the positive semidefinite matrix given by \[\mathrm{Var}_{\theta}^{\mathrm{W}}[\hat{\mathbf{\theta}}]=(\mathrm{ E}_{\theta}[(\nabla_{\mathbf{x}}\hat{\mathbf{\theta}}_{a})^{\top}(\nabla_{\mathbf{x}}\hat{ \mathbf{\theta}}_{b})])_{ab}.
\tag{14}\] Li and Zhao (2023) showed the Wasserstein-Cramer-Rao inequality \[\mathrm{Var}_{\theta}^{\mathrm{W}}(\hat{\mathbf{\theta}})\succeq \left(\frac{\partial}{\partial\theta}\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}] \right)^{\top}G_{\mathrm{W}}(\theta)^{-1}\left(\frac{\partial}{\partial\theta }\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}]\right), \tag{15}\] where \[\frac{\partial}{\partial\theta}\mathrm{E}_{\theta}[\hat{\mathbf{ \theta}}]:=\left(\frac{\partial}{\partial\theta_{j}}\mathrm{E}_{\theta}[\hat{ \mathbf{\theta}}_{i}]\right)_{ij}.\] A consistent estimator \(\hat{\mathbf{\theta}}\) is said to be Wasserstein efficient (\(W\)-efficient) if its Wasserstein covariance asymptotically satisfies (15) with equality. We give a proof of the Wasserstein-Cramer-Rao inequality based on the Cauchy-Schwarz inequality in the Appendix. We show that the \(W\)-covariance can be viewed as a measure of robustness of an estimator to noise. Suppose that \(X\sim p(\mathbf{x},\mathbf{\theta})\) and we estimate \(\mathbf{\theta}\) from noisy observation \(X+Z\) where \(X\) and \(Z\) are independent, \(\mathrm{E}[Z]=0\) and \(\mathrm{Var}[Z]=\sigma^{2}I\) with \(\sigma^{2}\) sufficiently small. **Theorem 3**.: _The Wasserstein covariance satisfies_ \[\mathrm{Var}_{\theta}^{\mathrm{W}}[\hat{\mathbf{\theta}}]_{ab}= \lim_{\sigma^{2}\to 0}\frac{\mathrm{Var}_{\theta}[\hat{\mathbf{ \theta}}(X+Z)]_{ab}-\mathrm{Var}_{\theta}[\hat{\mathbf{\theta}}(X)]_{ab}}{\sigma^{ 2}}\] \[\qquad-\frac{1}{2}\left(\mathrm{Cov}_{\theta}[\hat{\mathbf{\theta}}_{a }(X),\Delta\hat{\mathbf{\theta}}_{b}(X)]+\mathrm{Cov}_{\theta}[\hat{\mathbf{\theta}}_{ b}(X),\Delta\hat{\mathbf{\theta}}_{a}(X)]\right),\] _where \(\Delta\) is the Laplacian. In particular, when \(\hat{\mathbf{\theta}}\) is quadratic in \(\mathbf{x}\),_ \[\mathrm{Var}_{\theta}^{\mathrm{W}}[\hat{\mathbf{\theta}}]=\lim_{ \sigma^{2}\to 0}\frac{\mathrm{Var}_{\theta}[\hat{\mathbf{\theta}}(X+Z)]- \mathrm{Var}_{\theta}[\hat{\mathbf{\theta}}(X)]}{\sigma^{2}}.\] Proof.: By Taylor expansion, for sufficiently small \(z\), \[\hat{\mathbf{\theta}}_{a}(x+z)\approx\hat{\mathbf{\theta}}_{a}(x)+\sum_{i}\frac{\partial \hat{\mathbf{\theta}}_{a}}{\partial x_{i}}(x)z_{i}+\frac{1}{2}\sum_{i,j}\frac{ \partial^{2}\hat{\mathbf{\theta}}_{a}}{\partial x_{i}\partial x_{j}}(x)z_{i}z_{j}.\] From \(\mathrm{E}[Z]=0\), \(\mathrm{Var}[Z]=\sigma^{2}I\) and the independence of \(X\) and \(Z\), \[\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}_{a}(X+Z)] =\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}_{a}(X)]+\sum_{i}\mathrm{E }_{\theta}\left[\frac{\partial\hat{\mathbf{\theta}}_{a}}{\partial x_{i}}(X)\right] \mathrm{E}[z_{i}]+\frac{1}{2}\sum_{i,j}\mathrm{E}_{\theta}\left[\frac{ \partial^{2}\hat{\mathbf{\theta}}_{a}}{\partial x_{i}\partial x_{j}}(X)\right] \mathrm{E}[z_{i}z_{j}]\] \[=\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}_{a}(X)]+\frac{1}{2} \mathrm{E}_{\theta}[\Delta\hat{\mathbf{\theta}}_{a}(X)]\sigma^{2}.\] Also, \[\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}_{a}(X+Z)\hat{\mathbf{\theta}}_{ b}(X+Z)] =\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}_{a}(X)\hat{\mathbf{\theta}}_{ b}(X)]+\frac{1}{2}\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}_{a}(X)\Delta\hat{\mathbf{ \theta}}_{b}(X)+\hat{\mathbf{\theta}}_{b}(X)\Delta\hat{\mathbf{\theta}}_{a}(X)]\sigma^ {2}\] \[\quad+\mathrm{E}_{\theta}[(\nabla\hat{\mathbf{\theta}}_{a}(X))^{\top} (\nabla\hat{\mathbf{\theta}}_{b}(X))]\sigma^{2}+o(\sigma^{2})\] \[=\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}_{a}(X)\hat{\mathbf{\theta}}_{ 
b}(X)]+\frac{1}{2}\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}_{a}(X)\Delta\hat{\mathbf{ \theta}}_{b}(X)+\hat{\mathbf{\theta}}_{b}(X)\Delta\hat{\mathbf{\theta}}_{a}(X)]\sigma ^{2}\] \[\quad+\mathrm{Var}_{\theta}^{\mathrm{W}}[\hat{\mathbf{\theta}}]_{ab}\sigma^{2}+o( \sigma^{2}).\] Then, \[\mathrm{Var}_{\theta}[\hat{\mathbf{\theta}}(X+Z)]_{ab}\] \[= \mathrm{E}_{\theta}[\hat{\mathbf{\theta}}_{a}(X+Z)\hat{\mathbf{\theta}}_ {b}(X+Z)]-\mathrm{E}_{\theta}[\hat{\mathbf{\theta}}_{a}(X+Z)]\mathrm{E}_{\theta}[ \hat{\mathbf{\theta}}_{b}(X+Z)]\] \[= \mathrm{Var}_{\theta}[\hat{\mathbf{\theta}}(X)]_{ab}+\mathrm{Var}_{ \theta}^{\mathrm{W}}[\hat{\mathbf{\theta}}]_{ab}\sigma^{2}+\frac{1}{2}\left( \mathrm{Cov}_{\theta}[\hat{\mathbf{\theta}}_{a}(X),\Delta\hat{\mathbf{\theta}}_{b}(X)] +\mathrm{Cov}_{\theta}[\hat{\mathbf{\theta}}_{b}(X),\Delta\hat{\mathbf{\theta}}_{a}(X) ]\right)\sigma^{2}+o(\sigma^{2}),\] where the covariance term vanishes when \(\hat{\mathbf{\theta}}\) is quadratic in \(\mathbf{x}\).

Thus, the Wasserstein covariance quantifies the robustness of estimators against the waveform change due to noise. In particular, from Corollary 1, the Wasserstein covariance quantifies the robustness of the Wasserstein estimator for elliptically symmetric deformation models. The Wasserstein-Cramer-Rao inequality gives the limit of robustness. It is an interesting future problem to investigate when the Wasserstein estimator attains the Wasserstein efficiency. Note that the Fisher efficiency (in finite samples), which is defined by the conventional Cramer-Rao inequality, is attained by the maximum likelihood estimator if and only if the estimand is the expectation parameter of an exponential family.

## 7 Contribution of waveform \(f\) to \(F\)- and \(W\)-efficiencies

We study how the waveform \(f\) contributes to the \(F\)-efficiency and \(W\)-efficiency of estimators. We first show the following theorem.

**Theorem 4**.: When and only when \(f\) is Gaussian, the \(F\)-estimator \(\hat{\mathbf{\theta}}_{F}\) and the \(W\)-estimator \(\hat{\mathbf{\theta}}_{W}\) are identical, and they are both \(F\)- and \(W\)-efficient.

Proof.: For the standard Gaussian \(f\), \[f(\mathbf{z})=\frac{1}{(\sqrt{2\pi})^{d}}\exp\left\{-\frac{|\mathbf{z}|^{2}}{2}\right\},\] the \(F\)-score functions are \[S_{F}(\mathbf{x},\mathbf{\theta})=\partial_{\mathbf{\theta}}\log p(\mathbf{x},\mathbf{\theta})=- \mathbf{z}\cdot\frac{\partial\mathbf{z}}{\partial\mathbf{\theta}}.\] Hence, the \(F\)-score functions are (at most) quadratic with respect to \(\mathbf{x}\), so they are equivalent to the \(W\)-score functions. Conversely, when the score functions are quadratic with respect to \(\mathbf{x}\), the waveform \(f\) is Gaussian.

When \(f\) is not Gaussian, the \(F\)-efficiency of \(\hat{\mathbf{\theta}}_{W}\) and the \(W\)-efficiency of \(\hat{\mathbf{\theta}}_{F}\) degrade. When \(f\) is close to Gaussian, its cumulants of order larger than two are small. We use the Gram-Charlier expansion of \(f\) to represent how the cumulants of the waveform \(f\) contribute to the \(F\)-efficiency of \(\hat{\mathbf{\theta}}_{W}\). We study how the waveform \(f\) contributes to the amounts of Fisher information \(g_{F}\) and Wasserstein information \(g_{W}\), when \(f\) is close to the Gaussian distribution.
We use the Gram-Charlier expansion (McCullagh, 2018) \[f(\mathbf{x})=\phi(\mathbf{x})\left\{1+\frac{\kappa_{3}}{3!}\circ h_{3}(\mathbf{x})+\frac{ \kappa_{4}}{4!}\circ h_{4}(\mathbf{x})\right\},\] where \(\kappa_{i}\) are the \(i\)th order cumulant tensors and \(h_{i}(\mathbf{x})\) are the \(i\)th order tensorial Hermite polynomials, and \(\circ\) denotes the tensorial inner product such as \[\kappa_{3}\circ h_{3}=\sum\kappa_{3,ijk}h_{3}^{ijk}.\] Note that \(\kappa_{1}=0\) and \(\kappa_{2}=1\) here, since \(f\) is standardized, and the expansion is taken around the standard Gaussian distribution \(\phi\), \[\phi(\mathbf{x})=\frac{1}{(\sqrt{2\pi})^{d}}\exp\left\{-\frac{\mathbf{x}\cdot\mathbf{x}}{2} \right\}.\] The logarithm of \(p(\mathbf{x},\mathbf{\theta})\) is expanded as \[l(\mathbf{x},\mathbf{\theta})=-\log|\Lambda|+\left\{-\frac{1}{2}|\mathbf{z}|^{2}+\frac{ \kappa_{3}}{3!}\circ h_{3}(\mathbf{z})+\frac{\kappa_{4}}{4!}\circ h_{4}(\mathbf{z}) \right\},\] in terms of \(\mathbf{z}=\Lambda(\mathbf{x}-\mathbf{\mu})\), where higher-order terms in \(\kappa_{3}\) and \(\kappa_{4}\) and terms with \(\kappa_{i}\), \(i\geq 5\), are neglected. The Fisher information \(g_{F}\) is given by \[g_{F}=-\mathrm{E}\left[\partial_{\mathbf{\theta}}\partial_{\mathbf{\theta}}l(\mathbf{x}, \mathbf{\theta})\right].\] We have \[\partial_{\mathbf{\theta}}l =\frac{\partial l}{\partial\mathbf{z}}\cdot\frac{\partial\mathbf{z}}{ \partial\mathbf{\theta}},\] \[\partial_{\mathbf{\theta}}\partial_{\mathbf{\theta}}l =\frac{\partial^{2}l}{\partial\mathbf{z}\partial\mathbf{z}}\cdot\left( \frac{\partial\mathbf{z}}{\partial\mathbf{\theta}}\frac{\partial\mathbf{z}}{\partial\mathbf{ \theta}}\right)+\frac{\partial l}{\partial\mathbf{z}}\cdot\frac{\partial^{2} \mathbf{z}}{\partial\mathbf{\theta}\partial\mathbf{\theta}},\] where \(\cdot\) denotes the inner product with respect to the indices of \(\mathbf{z}\). From the derivatives of the Hermite polynomials, we have \[\partial_{\mathbf{z}}l =\Lambda^{-1}\left\{-\mathbf{z}+\frac{\kappa_{3}}{2}\circ h_{2}(\mathbf{z} )+\frac{\kappa_{4}}{6}\circ h_{3}(\mathbf{z})\right\},\] \[\partial_{\mathbf{z}}\partial_{\mathbf{z}}l =\Lambda^{-1}\left\{-I+\kappa_{3}\circ h_{1}(\mathbf{z})+\frac{\kappa _{4}}{2}\circ h_{2}(\mathbf{z})\right\},\] where \(\kappa_{i}\circ h_{j}(\mathbf{z})\) are tensorial polynomials of \(\mathbf{z}\). On the other hand, \(\partial\mathbf{z}/\partial\mathbf{\theta}\) are given by \[\frac{\partial\mathbf{z}}{\partial\mathbf{\mu}} =\Lambda,\qquad\frac{\partial\mathbf{z}}{\partial\mathbf{\Lambda}} =\mathbf{x}-\mathbf{\mu}.\] In order to avoid complicated tensorial calculations, we study only the case of \(d=1\), that is, the location-scale model. We show the results after simple calculations: \[\frac{\partial^{2}l}{\partial\mu\partial\mu} =\lambda^{2}\left\{-1+\kappa_{3}z+\frac{\kappa_{4}}{2}\left(z^{2} -1\right)\right\},\] \[\frac{\partial^{2}l}{\partial\lambda\partial\mu} =\lambda^{2}\left\{-2z+\frac{\kappa_{3}}{2}\left(3z^{2}-1\right) +\frac{\kappa_{4}}{3}\left(2z^{3}-3z\right)\right\},\] \[\frac{\partial^{2}l}{\partial\lambda\partial\lambda} =\lambda^{2}\left\{\left(-3z^{2}+1\right)+\frac{\kappa_{3}}{2} \left(4z^{3}-3z\right)+\frac{\kappa_{4}}{6}\left(4z^{4}-6z^{2}\right)\right\}.\] Here, \(z\) is distributed according to \(f(z)\), so we have \[g_{F}(\mu,\lambda)=\lambda^{2}\left[\begin{array}{cc}1&-\kappa_{3}\\ -\kappa_{3}&2-\kappa_{4}\end{array}\right].\] This shows how \(g_{F}(\mu,\lambda)\) deviates from the Gaussian case depending on \(\kappa_{3}\) and \(\kappa_{4}\).
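To make the expansion above concrete, the following sketch — an added illustration with arbitrary small values \(\kappa_{3}=0.10\) and \(\kappa_{4}=0.15\) — builds the one-dimensional Gram-Charlier density \(f(z)=\phi(z)\{1+\frac{\kappa_{3}}{3!}h_{3}(z)+\frac{\kappa_{4}}{4!}h_{4}(z)\}\), checks numerically that its third and fourth cumulants equal \(\kappa_{3}\) and \(\kappa_{4}\), and evaluates the location Fisher information \(\int f^{\prime}(z)^{2}/f(z)\,dz\), whose deviation from the Gaussian value \(1\) is of second order in the cumulants.

```python
import numpy as np

# Sketch (added illustration): a 1-D Gram-Charlier density with small kappa_3 and
# kappa_4; check its cumulants and its location Fisher information numerically.
kappa3, kappa4 = 0.10, 0.15                       # illustrative small cumulants

z = np.linspace(-12.0, 12.0, 20001)
dz = z[1] - z[0]
phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
h3 = z ** 3 - 3 * z                               # probabilists' Hermite polynomials
h4 = z ** 4 - 6 * z ** 2 + 3
f = phi * (1 + kappa3 / 6 * h3 + kappa4 / 24 * h4)

def moment(k):
    return float(np.sum(z ** k * f) * dz)

m1, m2, m3, m4 = moment(1), moment(2), moment(3), moment(4)
c3 = m3 - 3 * m1 * m2 + 2 * m1 ** 3               # third cumulant
c4 = m4 - 4 * m1 * m3 - 3 * m2 ** 2 + 12 * m1 ** 2 * m2 - 6 * m1 ** 4
print("total mass     :", float(np.sum(f) * dz))  # ~ 1
print("mean, variance :", m1, m2 - m1 ** 2)       # ~ 0, 1
print("kappa3, kappa4 :", c3, c4)                 # ~ 0.10, 0.15

# Location Fisher information of f; its deviation from the Gaussian value 1
# is of second order in the cumulants.
f_prime = np.gradient(f, dz)
print("location Fisher information:", float(np.sum(f_prime ** 2 / f) * dz))
```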
It is also interesting to consider the case when \(f\) has a high-frequency wavy structure. Since the \(F\)-score functions are derivatives of the log probability, they are sensitive to high-frequency components, which contribute to the \(F\)-metric. However, by adding small noise to \(\mathbf{x}\), those components are smoothed out. Hence, the \(W\)-metric is insensitive to the high-frequency components. We observe that, when \(f\) includes a high-frequency component such as \[f(x)=\phi(x)\left\{1+\varepsilon\sin\tau x\right\},\] where \(\varepsilon\) is small and \(\tau\) is the frequency of the small deviation, \(\partial^{2}l/\partial z^{2}\) has a component proportional to \(\varepsilon\tau^{2}\). Hence, the increment due to the high-frequency component is proportional to \(\tau^{2}\), implying that the increment of Fisher information is proportional to \(\tau^{2}\). Hence, high-frequency ripples of the waveform \(f\) increase \(g_{F}\).

On the other hand, the \(W\)-estimator \(\mathbf{\hat{\theta}}_{W}\) is not \(F\)-efficient except for the Gaussian case. The loss of \(F\)-efficiency depends on the waveform \(f\). We again use the Gram-Charlier expansion and see the effect of \(\kappa_{3}\) and \(\kappa_{4}\), assuming they are small. We show this only in the location-scale model with \(d=1\). Let us define the empirical moments of order \(r\) by \[m_{r}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{r},\quad r=1,2,\cdots\] The estimator \(\hat{\mathbf{\theta}}_{W}\) uses only \(m_{1}\) and \(m_{2}\), discarding higher-order moments. We calculate the Fisher information \(g_{F,W}(\mathbf{\theta})\) contained in \(\hat{\mathbf{\theta}}_{W}\). The Fisher information is the covariance matrix of the \(F\)-score \(\nabla l(\mathbf{x},\mathbf{\theta})\). We classify \(\mathbf{x}=(x_{1},\cdots,x_{n})\) into classes specified by the values \(M=(s_{1},s_{2})\) of the moments \((m_{1},m_{2})\), such that class \(C_{M}\) consists of \(\mathbf{x}\) with \[C_{M}=\left\{\mathbf{x}\left|m_{1}(\mathbf{x})=s_{1},m_{2}(\mathbf{x})=s_{2} \right.\right\}.\] Then, the covariance of \(\nabla l\) is decomposed into the sum of the within-class and between-class covariances, \[\text{Cov}[\nabla l]=\text{E}_{M}[\text{Cov}[\nabla l|M]]+\text{Cov}[\text{E} [\nabla l|M]],\] where \(\text{E}[\cdot|M]\) and \(\text{Cov}[\cdot|M]\) denote the conditional expectation and conditional covariance conditioned on \(M\). Since \(\hat{\mathbf{\theta}}_{W}\) does not use moment information beyond \(M\), the Fisher information contained in \(\hat{\mathbf{\theta}}_{W}\) is only the between-class covariance. The loss of information in \(\hat{\mathbf{\theta}}_{W}\) is \[\Delta g_{F,W}=\text{E}_{M}\left[\text{Cov}[\nabla l|M]\right].\] The conditional expectation of \(\nabla l\) is \[E\left[\partial_{\mu}l\left|s_{1},s_{2}\right.\right] =-s_{1}+\frac{\kappa_{3}}{2}\left(s_{2}-1\right)+\frac{\kappa_{4} }{6}\left(\text{E}\left[s_{3}\left|M\right.\right]-3s_{1}\right),\] \[E\left[\partial_{\lambda}l\left|s_{1},s_{2}\right.\right] =-s_{2}+\frac{\kappa_{3}}{2}\left\{\left(\text{E}\left[s_{3} \left|M\right.\right]\right)-s_{1}\right\}+\frac{\kappa_{4}}{6}\left\{\text{E }\left[s_{4}\left|M\right.\right]-3s_{2}\right\}.\] Hence, the conditional covariances are \[\text{Cov}\left[\partial_{\mu}l\left|M\right.\right] =\frac{\kappa_{4}^{2}}{36}\text{Cov}\left[s_{3}\left|M\right. \right],\] \[\text{Cov}\left[\partial_{\lambda}l\left|M\right.\right] =\frac{\kappa_{3}^{2}}{4}\text{Cov}\left[s_{3}\left|M\right.\right]+\frac{\kappa_{4}^{2}}{36}\text{Cov}\left[s_{4}\left|M\right.\right],\] \[\text{Cov}\left[\partial_{\mu}l,\partial_{\lambda}l\left|M\right.\right] =\frac{\kappa_{3}\kappa_{4}}{12}\text{E}\left[s_{3}|M\right]\text{E}\left[s_{ 4}|M\right].\] It should be noted that \(s_{3}\) and \(s_{4}\) are asymptotically independent of \(M\), because \((s_{1},s_{2},s_{3},s_{4})\) are jointly Gaussian and asymptotically independent. We thus have asymptotically \[\text{E}\left[s_{3}\left|M\right.\right] =\text{E}\left[x^{3}\right]=\frac{\kappa_{3}}{\lambda^{3}},\] \[\text{E}\left[s_{4}\left|M\right.\right] =\text{E}\left[x^{4}\right]=\frac{3\left(1+\kappa_{4}\right)}{ \lambda^{4}}+\frac{6}{\lambda^{2}}\mu^{2}+\mu^{4},\] and so on. Summing up all these results, we obtain \(\Delta g_{F,W}\) in terms of \(\kappa_{3}\) and \(\kappa_{4}\).

## 8 Conclusion

Statistical inference has so far been studied mostly based on information geometry from the Fisherian point of view, with remarkable success. It is based on the likelihood principle, and the invariant divergence has played a fundamental role. However, the Wasserstein divergence gives another viewpoint, which is based on the geometric structure of the sample space \(X\). There are many applications of the Wasserstein geometry not only to the transportation problem but also to vision analysis, signal analysis, and AI, in which the geometry of \(X\) plays an essential role. We studied Wasserstein statistics using the framework of Li and Zhao (2023), proving that the Wasserstein covariance quantifies robustness against the convolutional waveform deformation due to observation noise. We further studied \(W\)-statistics of the affine deformation model. We examined the \(F\)-efficiency and \(W\)-efficiency of the estimators \(\hat{\mathbf{\theta}}_{F}\) and \(\hat{\mathbf{\theta}}_{W}\), and elucidated how the waveform \(f\) contributes to the efficiencies. The Gaussian distribution gives the only waveform for which the \(F\)-estimator and the \(W\)-estimator coincide and both efficiencies are attained. The present paper is only a first step toward constructing general Wasserstein statistics. In future work, we need to treat more general statistical models and to extend our approach to hypothesis testing, pattern classification, clustering, and many other statistical problems based on the Wasserstein geometry.

## Acknowledgements

We thank Asuka Takatsu and Tomonari Sei for helpful comments. We thank Emi Namioka for drawing the figures. Takeru Matsuda was supported by JSPS KAKENHI Grant Numbers 19K20220, 21H05205, 22K17865 and JST Moonshot Grant Number JPMJMS2024.